1. Keur-Huizinga L, Kramer SE, de Geus EJC, Zekveld AA. A Multimodal Approach to Measuring Listening Effort: A Systematic Review on the Effects of Auditory Task Demand on Physiological Measures and Their Relationship. Ear Hear 2024:00003446-990000000-00297. [PMID: 38880960 DOI: 10.1097/aud.0000000000001508]
Abstract
OBJECTIVES Listening effort involves the mental effort required to perceive an auditory stimulus, for example in noisy environments. Prolonged increased listening effort, for example due to impaired hearing ability, may increase the risk of health complications. It is therefore important to identify valid and sensitive measures of listening effort. Physiological measures have been shown to be sensitive to auditory task demand manipulations and are considered to reflect changes in listening effort. Such measures include pupil dilation, alpha power, skin conductance level, and heart rate variability. The aim of the current systematic review was to provide an overview of listening effort studies that used multiple physiological measures. The two main questions were: (1) what is the effect of changes in auditory task demand on simultaneously acquired physiological measures from various modalities? and (2) what is the relationship between the responses in these physiological measures? DESIGN Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, relevant articles were sought in PubMed, PsycInfo, and Web of Science and by examining the references of included articles. Search iterations with different combinations of psychophysiological measures were performed in conjunction with listening effort-related search terms. Quality was assessed using the Appraisal Tool for Cross-Sectional Studies. RESULTS A total of 297 articles were identified from three databases, of which 27 were included. One additional article was identified from reference lists. Of the total 28 included articles, 16 included an analysis regarding the relationship between the physiological measures. The overall quality of the included studies was reasonable. CONCLUSIONS The included studies showed that most of the physiological measures either showed no effect of auditory task demand manipulations or a consistent effect in the expected direction. For example, pupil dilation increased, pre-ejection period decreased, and skin conductance level increased with increasing auditory task demand. Most of the relationships between the responses of these physiological measures were nonsignificant or weak. The physiological measures varied in their sensitivity to auditory task demand manipulations. One of the identified knowledge gaps was that the included studies mostly used tasks with high performance levels, resulting in an underrepresentation of the physiological changes at lower performance levels. This makes it difficult to capture how the physiological responses behave across the full psychometric curve. Our results support the Framework for Understanding Effortful Listening and the need for a multimodal approach to listening effort. We furthermore discuss focus points for future studies.
Affiliation(s)
- Laura Keur-Huizinga: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands; Department of Biological Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sophia E Kramer: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
- Eco J C de Geus: Department of Biological Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Adriana A Zekveld: Amsterdam UMC location Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
2. Silcox JW, Bennett K, Copeland A, Ferguson SH, Payne BR. The Costs (and Benefits?) of Effortful Listening for Older Adults: Insights from Simultaneous Electrophysiology, Pupillometry, and Memory. J Cogn Neurosci 2024; 36:997-1020. [PMID: 38579256 DOI: 10.1162/jocn_a_02161]
Abstract
Although the impact of acoustic challenge on speech processing and memory increases as a person ages, older adults may engage in strategies that help them compensate for these demands. In the current preregistered study, older adults (n = 48) listened to sentences (presented in quiet or in noise) that were high constraint with either expected or unexpected endings or were low constraint with unexpected endings. Pupillometry and EEG were simultaneously recorded, and subsequent sentence recognition and word recall were measured. Like young adults in prior work, we found that noise led to increases in pupil size, delayed and reduced ERP responses, and decreased recall for unexpected words. However, in contrast to prior work in young adults where a larger pupillary response predicted a recovery of the N400 at the cost of poorer memory performance in noise, older adults did not show an associated recovery of the N400 despite decreased memory performance. Instead, we found that in quiet, increases in pupil size were associated with delays in N400 onset latencies and increased recognition memory performance. In conclusion, we found that transient variation in pupil-linked arousal predicted trade-offs between real-time lexical processing and memory that emerged at lower levels of task demand in aging. Moreover, with increased acoustic challenge, older adults still exhibited costs associated with transient increases in arousal without the corresponding benefits.
3. Zhang X, Li J, Li Z, Hong B, Diao T, Ma X, Nolte G, Engel AK, Zhang D. Leading and following: Noise differently affects semantic and acoustic processing during naturalistic speech comprehension. Neuroimage 2023; 282:120404. [PMID: 37806465 DOI: 10.1016/j.neuroimage.2023.120404]
Abstract
Despite the distortion of speech signals caused by unavoidable noise in daily life, our ability to comprehend speech in noisy environments is relatively stable. However, the neural mechanisms underlying reliable speech-in-noise comprehension remain to be elucidated. The present study investigated the neural tracking of acoustic and semantic speech information during noisy naturalistic speech comprehension. Participants listened to narrative audio recordings mixed with spectrally matched stationary noise at three signal-to-noise ratio (SNR) levels (no noise, 3 dB, -3 dB), and 60-channel electroencephalography (EEG) signals were recorded. A temporal response function (TRF) method was employed to derive event-related-like responses to the continuous speech stream at both the acoustic and the semantic levels. Whereas the amplitude envelope of the naturalistic speech was taken as the acoustic feature, word entropy and word surprisal were extracted via natural language processing as two semantic features. Theta-band frontocentral TRF responses to the acoustic feature were observed at around 400 ms following speech fluctuation onset over all three SNR levels, and the response latencies were more delayed with increasing noise. Delta-band frontal TRF responses to the semantic feature of word entropy were observed at around 200 to 600 ms preceding speech fluctuation onset over all three SNR levels. The response latencies became more leading with increasing noise and decreasing speech comprehension and intelligibility. While the following responses to speech acoustics were consistent with previous studies, our study revealed the robustness of leading responses to speech semantics, which suggests a possible predictive mechanism at the semantic level for maintaining reliable speech comprehension in noisy environments.
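The TRF analysis mentioned here is, at its core, a lagged regression of EEG on a stimulus feature. The sketch below is a minimal illustration of that general idea, not the authors' pipeline: it estimates a TRF by ridge regression over a lag window, with the lag range, ridge value, and simulated data chosen only for demonstration.

```python
import numpy as np

def estimate_trf(stimulus, eeg, fs, tmin=-0.2, tmax=0.8, ridge=1.0):
    """Estimate a temporal response function (TRF) with ridge regression.

    stimulus : 1-D array (n_samples,), e.g. speech envelope or word surprisal
    eeg      : 2-D array (n_samples, n_channels)
    fs       : sampling rate in Hz
    Returns lag times (s) and TRF weights of shape (n_lags, n_channels).
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stimulus)
    # Build the lagged design matrix: one column per time lag.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: (X'X + aI)^-1 X'Y, one weight series per EEG channel.
    XtX = X.T @ X + ridge * np.eye(len(lags))
    trf = np.linalg.solve(XtX, X.T @ eeg)
    return lags / fs, trf

# Example with simulated data (64 s of a 10-channel recording at 100 Hz).
fs = 100
stim = np.random.randn(64 * fs)
eeg = np.random.randn(64 * fs, 10)
lag_times, trf = estimate_trf(stim, eeg, fs)
print(trf.shape)  # (n_lags, 10)
```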
Affiliation(s)
- Xinmiao Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Jiawei Li: Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Federal Republic of Germany
- Zhuoran Li: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Bo Hong: Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Tongxiang Diao: Department of Otolaryngology, Head and Neck Surgery, Peking University, People's Hospital, Beijing 100044, China
- Xin Ma: Department of Otolaryngology, Head and Neck Surgery, Peking University, People's Hospital, Beijing 100044, China
- Guido Nolte: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Federal Republic of Germany
- Andreas K Engel: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Federal Republic of Germany
- Dan Zhang: Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
4. Kallioinen P, Olofsson JK, von Mentzer CN. Semantic processing in children with Cochlear Implants: A review of current N400 studies and recommendations for future research. Biol Psychol 2023; 182:108655. [PMID: 37541539 DOI: 10.1016/j.biopsycho.2023.108655]
Abstract
Deaf and hard of hearing children with cochlear implants (CI) often display impaired spoken language skills. While a large number of studies investigated brain responses to sounds in this population, relatively few focused on semantic processing. Here we summarize and discuss findings in four studies of the N400, a cortical response that reflects semantic processing, in children with CI. A study with auditory target stimuli found N400 effects at delayed latencies at 12 months after implantation, but at 18 and 24 months after implantation effects had typical latencies. In studies with visual target stimuli N400 effects were larger than or similar to controls in children with CI, despite lower semantic abilities. We propose that in children with CI, the observed large N400 effect reflects a stronger reliance on top-down predictions, relative to bottom-up language processing. Recent behavioral studies of children and adults with CI suggest that top-down processing is a common compensatory strategy, but with distinct limitations such as being effortful. A majority of the studies have small sample sizes (N < 20), and only responses to image targets were studied repeatedly in similar paradigms. This precludes strong conclusions. We give suggestions for future research and ways to overcome the scarcity of participants, including extending research to children with conventional hearing aids, an understudied group.
Affiliation(s)
- Petter Kallioinen: Department of Linguistics, Stockholm University, Stockholm, Sweden; Lund University Cognitive Science, Lund University, Lund, Sweden
- Jonas K Olofsson: Department of Psychology, Stockholm University, Stockholm, Sweden
5. Gillis M, Kries J, Vandermosten M, Francart T. Neural tracking of linguistic and acoustic speech representations decreases with advancing age. Neuroimage 2023; 267:119841. [PMID: 36584758 PMCID: PMC9878439 DOI: 10.1016/j.neuroimage.2022.119841]
Abstract
BACKGROUND Older adults process speech differently, but it is not yet clear how aging affects different levels of processing natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic-based predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. GOALS Our goals are to analyze the unique contribution of linguistic speech processing across the adult lifespan using natural speech, while controlling for the influence of acoustic processing. Moreover, we also studied acoustic processing across age. In particular, we focus on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. METHODS 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and measures of cognition, we investigated whether the observed age effect is mediated by these factors. Furthermore, we investigated whether there is an effect of age on hemisphere lateralization and on spatiotemporal patterns of the neural responses. RESULTS Our EEG results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the neural response latency to certain aspects of linguistic speech processing increased. Acoustic neural tracking (NT) also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during linguistic speech processing. Most of the observed aging effects on acoustic and linguistic processing were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in linguistic neural tracking with advancing age at the word level is partially due to an age-related decline in cognition rather than to a robust effect of age alone. CONCLUSION Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan for both acoustic and linguistic speech processing. These changes may be traces of structural and/or functional change that occurs with advancing age.
Affiliation(s)
- Marlies Gillis: Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Jill Kries: Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
6. Hsin CH, Chao PC, Lee CY. Speech comprehension in noisy environments: Evidence from the predictability effects on the N400 and LPC. Front Psychol 2023; 14:1105346. [PMID: 36874840 PMCID: PMC9974639 DOI: 10.3389/fpsyg.2023.1105346]
Abstract
Introduction Speech comprehension involves context-based lexical predictions for efficient semantic integration. This study investigated how noise affects the predictability effect on event-related potentials (ERPs) such as the N400 and late positive component (LPC) in speech comprehension. Methods Twenty-seven listeners were asked to comprehend sentences in clear and noisy conditions (hereinafter referred to as "clear speech" and "noisy speech," respectively) that ended with a high- or low-predictability word during electroencephalogram (EEG) recordings. Results The results for clear speech showed the predictability effect on the N400, wherein low-predictability words elicited a larger N400 amplitude than did high-predictability words in the centroparietal and frontocentral regions. Noisy speech showed a reduced and delayed predictability effect on the N400 in the centroparietal regions. Additionally, noisy speech showed a predictability effect on the LPC in the centroparietal regions. Discussion These findings suggest that listeners achieve comprehension outcomes through different neural mechanisms according to listening conditions. Noisy speech may be comprehended with a second-pass process that possibly functions to recover the phonological form of degraded speech through phonetic reanalysis or repair, thus compensating for decreased predictive efficiency.
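For readers unfamiliar with how an N400 predictability effect like this is typically quantified, the snippet below computes the mean amplitude of the low- minus high-predictability difference over a 300 to 500 ms window at chosen electrodes. It is a generic sketch under assumed array shapes, window, and channel names, not the analysis pipeline of this study.

```python
import numpy as np

def n400_effect(erp_low, erp_high, times, channels, picks,
                window=(0.300, 0.500)):
    """Mean-amplitude N400 effect (low- minus high-predictability).

    erp_low, erp_high : arrays (n_channels, n_times), condition-average ERPs
    times             : array of sample times in seconds
    channels          : list of channel labels matching the first axis
    picks             : channel labels to average (e.g. centroparietal sites)
    """
    ch_idx = [channels.index(ch) for ch in picks]
    t_idx = (times >= window[0]) & (times <= window[1])
    diff = erp_low[ch_idx][:, t_idx] - erp_high[ch_idx][:, t_idx]
    return diff.mean()  # in the same units as the ERPs (e.g. microvolts)

# Toy example: 32 channels, a 1 s epoch sampled at 500 Hz.
times = np.arange(-0.2, 0.8, 0.002)
chs = [f"ch{i}" for i in range(32)]
low = np.random.randn(32, times.size)
high = np.random.randn(32, times.size)
print(n400_effect(low, high, times, chs, picks=["ch10", "ch11", "ch12"]))
```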
Affiliation(s)
- Cheng-Hung Hsin: Taiwan International Graduate Program in Interdisciplinary Neuroscience, National Yang Ming Chiao Tung University and Academia Sinica, Taipei, Taiwan; Brain and Language Laboratory, Institute of Linguistics, Academia Sinica, Taipei, Taiwan; Biomedical Acoustic Signal Processing Lab, Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Pei-Chun Chao: Brain and Language Laboratory, Institute of Linguistics, Academia Sinica, Taipei, Taiwan
- Chia-Ying Lee: Brain and Language Laboratory, Institute of Linguistics, Academia Sinica, Taipei, Taiwan; Institute of Cognitive Neuroscience, National Central University, Taoyuan, Taiwan; Research Center for Mind, Brain, and Learning, National Chengchi University, Taipei, Taiwan
7. Burkhardt P, Müller V, Meister H, Weglage A, Lang-Roth R, Walger M, Sandmann P. Age effects on cognitive functions and speech-in-noise processing: An event-related potential study with cochlear-implant users and normal-hearing listeners. Front Neurosci 2022; 16:1005859. [PMID: 36620447 PMCID: PMC9815545 DOI: 10.3389/fnins.2022.1005859]
Abstract
A cochlear implant (CI) can partially restore hearing in individuals with profound sensorineural hearing loss. However, electrical hearing with a CI is limited and highly variable. The current study aimed to better understand the different factors contributing to this variability by examining how age affects cognitive functions and cortical speech processing in CI users. Electroencephalography (EEG) was applied while two groups of CI users (young and elderly; N = 13 each) and normal-hearing (NH) listeners (young and elderly; N = 13 each) performed an auditory sentence categorization task, including semantically correct and incorrect sentences presented either with or without background noise. Event-related potentials (ERPs) representing earlier, sensory-driven processes (N1-P2 complex to sentence onset) and later, cognitive-linguistic integration processes (N400 to semantically correct/incorrect sentence-final words) were compared between the different groups and speech conditions. The results revealed reduced amplitudes and prolonged latencies of auditory ERPs in CI users compared to NH listeners, both at earlier (N1, P2) and later processing stages (N400 effect). In addition to this hearing-group effect, CI users and NH listeners showed a comparable background-noise effect, as indicated by reduced hit rates and reduced (P2) and delayed (N1/P2) ERPs in conditions with background noise. Moreover, we observed an age effect in CI users and NH listeners, with young individuals showing improved specific cognitive functions (working memory capacity, cognitive flexibility and verbal learning/retrieval), reduced latencies (N1/P2), decreased N1 amplitudes and an increased N400 effect when compared to the elderly. In sum, our findings extend previous research by showing that the CI users' speech processing is impaired not only at earlier (sensory) but also at later (semantic integration) processing stages, both in conditions with and without background noise. Using objective ERP measures, our study provides further evidence of strong age effects on cortical speech processing, which can be observed in both the NH listeners and the CI users. We conclude that elderly individuals require more effortful processing at sensory stages of speech processing, which however seems to be at the cost of the limited resources available for the later semantic integration processes.
Affiliation(s)
- Pauline Burkhardt: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Verena Müller: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Hartmut Meister: Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Anna Weglage: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ruth Lang-Roth: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Martin Walger: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany; Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Pascale Sandmann: Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
8. Schwarz J, Li KK, Sim JH, Zhang Y, Buchanan-Worster E, Post B, Gibson JL, McDougall K. Semantic Cues Modulate Children’s and Adults’ Processing of Audio-Visual Face Mask Speech. Front Psychol 2022; 13:879156. [PMID: 35928422 PMCID: PMC9343587 DOI: 10.3389/fpsyg.2022.879156]
Abstract
During the COVID-19 pandemic, questions have been raised about the impact of face masks on communication in classroom settings. However, it is unclear to what extent visual obstruction of the speaker’s mouth or changes to the acoustic signal lead to speech processing difficulties, and whether these effects can be mitigated by semantic predictability, i.e., the availability of contextual information. The present study investigated the acoustic and visual effects of face masks on speech intelligibility and processing speed under varying semantic predictability. Twenty-six children (aged 8-12) and twenty-six adults performed an internet-based cued shadowing task, in which they had to repeat aloud the last word of sentences presented in audio-visual format. The results showed that children and adults made more mistakes and responded more slowly when listening to face mask speech compared to speech produced without a face mask. Adults were only significantly affected by face mask speech when both the acoustic and the visual signal were degraded. While acoustic mask effects were similar for children, removal of visual speech cues through the face mask affected children to a lesser degree. However, high semantic predictability reduced audio-visual mask effects, leading to full compensation of the acoustically degraded mask speech in the adult group. Even though children did not fully compensate for face mask speech with high semantic predictability, overall, they still profited from semantic cues in all conditions. Therefore, in classroom settings, strategies that increase contextual information such as building on students’ prior knowledge, using keywords, and providing visual aids, are likely to help overcome any adverse face mask effects.
Affiliation(s)
- Julia Schwarz: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Katrina Kechun Li: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Jasper Hong Sim: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Yixin Zhang: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Elizabeth Buchanan-Worster: Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Brechtje Post: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
- Kirsty McDougall: Faculty of Modern and Medieval Languages and Linguistics, University of Cambridge, Cambridge, United Kingdom
9. Grant AM, Kousaie S, Coulter K, Gilbert AC, Baum SR, Gracco V, Titone D, Klein D, Phillips NA. Age of Acquisition Modulates Alpha Power During Bilingual Speech Comprehension in Noise. Front Psychol 2022; 13:865857. [PMID: 35548507 PMCID: PMC9083356 DOI: 10.3389/fpsyg.2022.865857]
Abstract
Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram (EEG) studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language (L2) proficiency and who varied in terms of age of L2 acquisition (AoA) from 0 (simultaneous bilinguals) to 15 years completed a speech perception in noise task. Participants were required to identify the final word of high and low semantically constrained auditory sentences such as "Stir your coffee with a spoon" vs. "Bob could have known about the spoon" in both of their languages and in both noise (multi-talker babble) and quiet during electrophysiological recording. We examined the effects of language, AoA, semantic constraint, and listening condition on participants' induced alpha power during speech comprehension. Our results show an increase in alpha power when participants were listening in their L2, suggesting that listening in an L2 requires additional attentional control compared to the first language, particularly early in processing during word identification. Additionally, despite similar proficiency across participants, our results suggest that under difficult processing demands, AoA modulates the amount of attention required to process the second language.
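Alpha power of the kind reported here is often quantified as band-limited spectral power per epoch. The following sketch uses a Welch spectrum averaged over 8 to 12 Hz; the band limits, epoch layout, and simulated data are assumptions, and the study's induced-power analysis would involve time-frequency decomposition of single trials rather than this simplified estimate.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Mean alpha-band power for one EEG epoch.

    epoch : array (n_channels, n_samples)
    fs    : sampling rate in Hz
    Returns per-channel power averaged over the alpha band (e.g. 8-12 Hz).
    """
    freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[-1], 2 * int(fs)))
    idx = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, idx].mean(axis=1)

# Toy example: a 2 s, 32-channel epoch at 500 Hz, compared across conditions.
fs = 500
quiet = np.random.randn(32, 2 * fs)
noise = np.random.randn(32, 2 * fs) * 1.1
print(alpha_power(noise, fs).mean() - alpha_power(quiet, fs).mean())
```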
Affiliation(s)
- Angela M Grant: Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Shanna Kousaie: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Psychology, University of Ottawa, Ottawa, ON, Canada; Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Kristina Coulter: Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Annie C Gilbert: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Shari R Baum: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Vincent Gracco: School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, United States
- Debra Titone: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Department of Psychology, McGill University, Montreal, QC, Canada
- Denise Klein: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Natalie A Phillips: Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research and Jewish General Hospital, McGill University Memory Clinic, Jewish General Hospital, Montreal, QC, Canada
10. Bieber RE, Brodbeck C, Anderson S. Examining the context benefit in older adults: A combined behavioral-electrophysiologic word identification study. Neuropsychologia 2022; 170:108224. [PMID: 35346650 DOI: 10.1016/j.neuropsychologia.2022.108224]
Abstract
When listening to degraded speech, listeners can use high-level semantic information to support recognition. The literature contains conflicting findings regarding older listeners' ability to benefit from semantic cues in recognizing speech, relative to younger listeners. Electrophysiologic (EEG) measures of lexical access (N400) often show that semantic context does not facilitate lexical access in older listeners; in contrast, auditory behavioral studies indicate that semantic context improves speech recognition in older listeners as much as or more than in younger listeners. Many behavioral studies of aging and the context benefit have employed signal degradation or alteration, whereas this stimulus manipulation has been absent in the EEG literature, a possible reason for the inconsistencies between studies. Here we compared the context benefit as a function of age and signal type, using EEG combined with behavioral measures. Non-native accent, a common form of signal alteration which many older adults report as a challenge in daily speech recognition, was utilized for testing. The stimuli included English sentences produced by native speakers of English and Spanish, containing target words differing in cloze probability. Listeners performed a word identification task while 32-channel cortical responses were recorded. Results show that older adults' word identification performance was poorer in the low-predictability and non-native talker conditions than that of the younger adults, replicating earlier behavioral findings. However, older adults did not show reductions or delays in the average N400 response as compared to younger listeners, suggesting no age-related reduction in predictive processing capability. Potential sources for discrepancies in the prior literature are discussed.
Affiliation(s)
- Rebecca E Bieber: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park MD, 20740, USA
- Christian Brodbeck: Department of Psychological Sciences, University of Connecticut, Storrs CT, 06269, USA
- Samira Anderson: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park MD, 20740, USA
11. Hidalgo C, Mohamed I, Zielinski C, Schön D. The effect of speech degradation on the ability to track and predict turn structure in conversation. Cortex 2022; 151:105-115. [DOI: 10.1016/j.cortex.2022.01.020]
12. Federmeier KD. Connecting and considering: Electrophysiology provides insights into comprehension. Psychophysiology 2022; 59:e13940. [PMID: 34520568 PMCID: PMC9009268 DOI: 10.1111/psyp.13940]
Abstract
The ability to rapidly and systematically access knowledge stored in long-term memory in response to incoming sensory information (that is, to derive meaning from the world) lies at the core of human cognition. Research using methods that can precisely track brain activity over time has begun to reveal the multiple cognitive and neural mechanisms that make this possible. In this article, I delineate how a process of connecting affords an effortless, continuous infusion of meaning into human perception. In a relatively invariant time window, uncovered through studies using the N400 component of the event-related potential, incoming sensory information naturally induces a graded landscape of activation across long-term semantic memory, creating what might be called "proto-concepts". Connecting can be (but is not always) followed by a process of further considering those activations, wherein a set of more attentionally demanding "active comprehension" mechanisms mediate the selection, augmentation, and transformation of the initial semantic representations. The result is a limited set of more stable bindings that can be arranged in time or space, revised as needed, and brought to awareness. With this research, we are coming closer to understanding how the human brain is able to fluidly link sensation to experience, to appreciate language sequences and event structures, and, sometimes, to even predict what might be coming up next.
Affiliation(s)
- Kara D Federmeier: Department of Psychology, Program in Neuroscience, and the Beckman Institute for Advanced Science and Technology, University of Illinois, Champaign, Illinois, USA
13. Bieber RE, Gordon-Salant S. Semantic context and stimulus variability independently affect rapid adaptation to non-native English speech in young adults. J Acoust Soc Am 2022; 151:242. [PMID: 35104999 PMCID: PMC8769767 DOI: 10.1121/10.0009170]
Abstract
When speech is degraded or challenging to recognize, young adult listeners with normal hearing are able to quickly adapt, improving their recognition of the speech over a short period of time. This rapid adaptation is robust, but the factors influencing rate, magnitude, and generalization of improvement have not been fully described. Two factors of interest are lexico-semantic information and talker and accent variability; lexico-semantic information promotes perceptual learning for acoustically ambiguous speech, while talker and accent variability are beneficial for generalization of learning. In the present study, rate and magnitude of adaptation were measured for speech varying in level of semantic context, and in the type and number of talkers. Generalization of learning to an unfamiliar talker was also assessed. Results indicate that rate of rapid adaptation was slowed for semantically anomalous sentences, as compared to semantically intact or topic-grouped sentences; however, generalization was seen in the anomalous conditions. Magnitude of adaptation was greater for non-native as compared to native talker conditions, with no difference between single and multiple non-native talker conditions. These findings indicate that the previously documented benefit of lexical information in supporting rapid adaptation is not enhanced by the addition of supra-sentence context.
Affiliation(s)
- Rebecca E Bieber: Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
- Sandra Gordon-Salant: Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
14. Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021; 142:296-316. [PMID: 34332197 DOI: 10.1016/j.cortex.2021.06.007]
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
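As an illustration of what a trial-to-trial coupling analysis can look like in its simplest form, the sketch below correlates single-trial pupil dilation with single-trial N400 amplitude within each participant and then tests the coupling across participants. The data structure, field names, and use of plain correlations (rather than the mixed-effects models a full analysis would likely use) are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

def pupil_n400_coupling(trials):
    """Within-participant coupling between pupil dilation and N400 amplitude.

    trials : list of dicts with keys 'subject', 'pupil' (peak dilation)
             and 'n400' (mean amplitude, e.g. over a 300-500 ms window).
    Returns per-subject Pearson r values and a one-sample t-test on them.
    """
    by_subject = {}
    for t in trials:
        by_subject.setdefault(t["subject"], []).append((t["pupil"], t["n400"]))
    r_values = []
    for subj, pairs in by_subject.items():
        pupil, n400 = map(np.array, zip(*pairs))
        r_values.append(stats.pearsonr(pupil, n400)[0])
    # Test whether coupling differs from zero across participants.
    t_stat, p_val = stats.ttest_1samp(r_values, 0.0)
    return r_values, t_stat, p_val

# Toy data: 5 subjects x 40 trials each, with a built-in coupling.
rng = np.random.default_rng(0)
trials = []
for s in range(5):
    for _ in range(40):
        pupil = rng.normal()
        trials.append({"subject": s, "pupil": pupil,
                       "n400": 0.3 * pupil + rng.normal()})
print(pupil_n400_coupling(trials)[1:])
```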
Affiliation(s)
- Brennan R Payne: Department of Psychology, University of Utah, USA; Interdepartmental Neuroscience Program, University of Utah, USA
15. Söderlund GBW, Åsberg Johnels J, Rothén B, Torstensson-Hultberg E, Magnusson A, Fälth L. Sensory white noise improves reading skills and memory recall in children with reading disability. Brain Behav 2021; 11:e02114. [PMID: 34096202 PMCID: PMC8323032 DOI: 10.1002/brb3.2114]
Abstract
BACKGROUND Reading disability (RD) is characterized by slow and inaccurate word reading development, commonly reflecting underlying phonological problems. We have previously shown that exposure to white noise acutely improves cognitive performance in children with ADHD. The question addressed here is whether white noise exposure yields positive outcomes also for RD. There are theoretical reasons to expect such a possibility: a) RD and ADHD are two overlapping neurodevelopmental disorders and b) since prior research on white noise benefits has suggested that a central mechanism might be the phenomenon of stochastic resonance, then adding certain kinds of white noise might strengthen the signal-to-noise ratio during phonological processing and phoneme-grapheme mapping. METHODS The study was conducted with a group of 30 children with RD and phonological decoding difficulties and two comparison groups: one consisting of skilled readers (n = 22) and another of children with mild orthographic reading problems and age adequate phonological decoding (n = 30). White noise was presented experimentally in visual and auditory modalities, while the children performed tests of single word reading, orthographic word recognition, nonword reading, and memory recall. RESULTS For the first time, we show that visual and auditory white noise exposure improves some reading and memory capacities "on the fly" in children with RD and phonological decoding difficulties. By contrast, the comparison groups displayed either no benefit or a gradual decrease in performance with increasing noise. In interviews, we also found that the white noise exposure was tolerable or even preferred by many children. CONCLUSION These novel findings suggest that poor readers with phonological decoding difficulties may be immediately helped by white noise during reading. Future research is needed to determine the robustness, mechanisms, and long-term practical implications of the white noise benefits in children with reading disabilities.
Affiliation(s)
- Göran B W Söderlund: Faculty of Teacher Education Arts and Sports, Western Norway University of Applied Sciences, Sogndal, Norway; Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels: Speech and Language Pathology Unit & the Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Silvia Children's Hospital, University of Gothenburg & The Child Neuropsychiatric Clinic, Gothenburg, Sweden
- Bodil Rothén: Department of Pedagogy and Learning, Linnaeus University, Växjö, Sweden
- Andreas Magnusson: Complex Adaptive Systems, Chalmers University of Technology, Gothenburg, Sweden
- Linda Fälth: Department of Pedagogy and Learning, Linnaeus University, Växjö, Sweden
16. Stringer L, Iverson P. Accent Intelligibility Differences in Noise Across Native and Nonnative Accents: Effects of Talker-Listener Pairing at Acoustic-Phonetic and Lexical Levels. J Speech Lang Hear Res 2019; 62:2213-2226. [PMID: 31251681 DOI: 10.1044/2019_jslhr-s-17-0414]
Abstract
Purpose The intelligibility of an accent strongly depends on the specific talker-listener pairing. To explore the causes of this phenomenon, we investigated the relationship between acoustic-phonetic similarity and accent intelligibility across native (1st language) and nonnative (2nd language) talker-listener pairings. We also used online measures to observe processing differences in quiet. Method English (n = 16) and Spanish (n = 16) listeners heard Standard Southern British English, Glaswegian English, and Spanish-accented English in a speech recognition task (in quiet and noise) and an electroencephalogram task (quiet only) designed to assess phonological and lexical processing. Stimuli were drawn from the nonnative speech recognition sentences (Stringer & Iverson, 2019). The acoustic-phonetic similarity between listeners' accents and the 3 accents was calculated using the ACCDIST metric (Huckvale, 2004, 2007). Results Talker-listener pairing had a clear influence on accent intelligibility. This was linked to the phonetic similarity of the talkers and the listeners, but similarity could not account for all findings. The influence of talker-listener pairing on lexical processing was less clear; the N400 effect was mostly robust to accent mismatches, with some relationship to intelligibility. Conclusion These findings suggest that the influence of talker-listener pairing on intelligibility may be partly attributable to accent similarity in addition to accent familiarity. Online measures also show that differences in talker-listener accents can disrupt processing in quiet even where accents are highly intelligible.
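ACCDIST is defined in Huckvale (2004, 2007); very loosely, it characterizes an accent by the distances among a speaker's own vowel categories and compares those distance tables across speakers. The snippet below sketches that core idea with made-up formant features and a simple correlation; it is not the published metric or the measure used in this study.

```python
import numpy as np
from itertools import combinations

def accent_similarity(speaker_a, speaker_b):
    """ACCDIST-style similarity between two speakers (simplified sketch).

    speaker_a, speaker_b : dicts mapping vowel labels to mean feature
    vectors (e.g. normalized F1/F2). Each accent is summarized by the
    distances among its own vowels, so the comparison is speaker-intrinsic.
    """
    vowels = sorted(set(speaker_a) & set(speaker_b))
    pairs = list(combinations(vowels, 2))
    d_a = [np.linalg.norm(np.asarray(speaker_a[u]) - np.asarray(speaker_a[v]))
           for u, v in pairs]
    d_b = [np.linalg.norm(np.asarray(speaker_b[u]) - np.asarray(speaker_b[v]))
           for u, v in pairs]
    # Correlate the two distance tables: 1.0 = identical vowel-space geometry.
    return np.corrcoef(d_a, d_b)[0, 1]

# Toy example with three shared vowels described by (F1, F2) means in Hz.
talker = {"i": (300, 2300), "a": (750, 1300), "u": (320, 900)}
listener = {"i": (320, 2200), "a": (700, 1250), "u": (350, 950)}
print(round(accent_similarity(talker, listener), 3))
```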
Affiliation(s)
- Louise Stringer: Department of Speech, Hearing and Phonetic Sciences, University College London, United Kingdom; Academic Support Office, University of York, United Kingdom
- Paul Iverson: Department of Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
17. Romero-Rivas C, Thorley C, Skelton K, Costa A. Foreign accents reduce false recognition rates in the DRM paradigm. J Cogn Psychol 2019. [DOI: 10.1080/20445911.2019.1634576]
Affiliation(s)
- Carlos Romero-Rivas: Department of Evolutive and Educational Psychology, Universidad Autónoma de Madrid, Spain; Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Craig Thorley: Department of Psychology, James Cook University, Douglas, Australia
- Katie Skelton: Department of Psychological Sciences, University of Liverpool, Liverpool, UK
- Albert Costa: Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
18. Listening effort during speech perception enhances auditory and lexical processing for non-native listeners and accents. Cognition 2018; 179:163-170. [PMID: 29957515 DOI: 10.1016/j.cognition.2018.06.001]
Abstract
Speech communication in a non-native language (L2) can feel effortful, and the present study suggests that this effort affects both auditory and lexical processing. EEG recordings (electroencephalography) were made from native English (L1) and Korean listeners while they listened to English sentences spoken with two accents (English and Korean) in the presence of a distracting talker. Neural entrainment (i.e., phase locking between the EEG recording and the speech amplitude envelope) was measured for target and distractor talkers. L2 listeners had relatively greater entrainment for target talkers than did L1 listeners, likely because their difficulty with L2 speech recognition caused them to focus more attention on the speech signal. N400 was measured for the final word in each sentence, and L2 listeners had greater lexical processing in high-predictability sentences than did L1 listeners. L1 listeners had greater target-talker entrainment when listening to the more difficult L2 accent than their own L1 accent, and similarly had larger N400 responses for the L2 accent. It thus appears that the increased effort of L2 listeners, as well as L1 listeners understanding L2 speech, modulates their auditory and lexical processing during speech recognition. This may provide a mechanism to compensate for their perceptual challenges under adverse conditions.
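Neural entrainment as described here (phase locking between EEG and the speech amplitude envelope) can be approximated with a phase-locking value. The sketch below extracts the envelope with a Hilbert transform, band-passes both signals, and averages the phase difference; the 1 to 8 Hz band, filter order, and toy signals are assumptions rather than the study's exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def phase_locking_value(eeg, audio, fs, band=(1.0, 8.0)):
    """Phase locking between an EEG channel and the speech envelope.

    eeg, audio : 1-D arrays sampled at the same rate fs (Hz)
    band       : frequency band (Hz) in which phase locking is computed
    Returns a value between 0 (no locking) and 1 (perfect locking).
    """
    # Speech amplitude envelope via the Hilbert transform.
    envelope = np.abs(hilbert(audio))
    # Band-pass both signals to the delta/theta range carrying the envelope.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    eeg_f = filtfilt(b, a, eeg)
    env_f = filtfilt(b, a, envelope)
    # Instantaneous phase difference, averaged as a unit vector.
    phase_diff = np.angle(hilbert(eeg_f)) - np.angle(hilbert(env_f))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

# Toy example: 30 s of signals at 250 Hz, with some shared structure.
fs = 250
t = np.arange(0, 30, 1 / fs)
audio = np.random.randn(t.size)
eeg = 0.5 * audio + np.random.randn(t.size)
print(round(phase_locking_value(eeg, audio, fs), 3))
```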
19. Simeon KM, Bicknell K, Grieco-Calub TM. Belief Shift or Only Facilitation: How Semantic Expectancy Affects Processing of Speech Degraded by Background Noise. Front Psychol 2018; 9:116. [PMID: 29472883 PMCID: PMC5809983 DOI: 10.3389/fpsyg.2018.00116]
Abstract
Individuals use semantic expectancy - applying conceptual and linguistic knowledge to speech input - to improve the accuracy and speed of language comprehension. This study tested how adults use semantic expectancy in quiet and in the presence of speech-shaped broadband noise at -7 and -12 dB signal-to-noise ratio. Twenty-four adults (22.1 ± 3.6 years, mean ±SD) were tested on a four-alternative-forced-choice task whereby they listened to sentences and were instructed to select an image matching the sentence-final word. The semantic expectancy of the sentences was unrelated to (neutral), congruent with, or conflicting with the acoustic target. Congruent expectancy improved accuracy and conflicting expectancy decreased accuracy relative to neutral, consistent with a theory where expectancy shifts beliefs toward likely words and away from unlikely words. Additionally, there were no significant interactions of expectancy and noise level when analyzed in log-odds, supporting the predictions of ideal observer models of speech perception.
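The log-odds analysis referred to above rests on a simple idea: if expectancy and noise do not interact, the expectancy effect expressed in log-odds should be roughly constant across noise levels. The snippet below illustrates that check with hypothetical accuracy counts; the condition labels, trial counts, and continuity correction are illustrative assumptions, not the study's data or model.

```python
import numpy as np

def log_odds(correct, total):
    """Log-odds of a proportion correct (with a small continuity correction)."""
    p = (correct + 0.5) / (total + 1.0)
    return np.log(p / (1.0 - p))

# Hypothetical accuracy counts (correct out of 96 trials) per condition.
counts = {
    ("quiet", "congruent"): 94, ("quiet", "conflicting"): 80,
    ("-7 dB", "congruent"): 86, ("-7 dB", "conflicting"): 60,
    ("-12 dB", "congruent"): 70, ("-12 dB", "conflicting"): 38,
}
for noise in ["quiet", "-7 dB", "-12 dB"]:
    effect = (log_odds(counts[(noise, "congruent")], 96)
              - log_odds(counts[(noise, "conflicting")], 96))
    # Under an additive (no-interaction) pattern, this difference is
    # roughly constant across noise levels when expressed in log-odds.
    print(f"{noise:>6}: expectancy effect = {effect:.2f} log-odds")
```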
Affiliation(s)
- Katherine M. Simeon: The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Klinton Bicknell: Department of Linguistics, Northwestern University, Evanston, IL, United States
- Tina M. Grieco-Calub: The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States; Hugh Knowles Hearing Center, Northwestern University, Evanston, IL, United States
20. Drijvers L, Özyürek A. Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain Lang 2018; 177-178:7-17. [PMID: 29421272 DOI: 10.1016/j.bandl.2018.01.003]
Abstract
Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
Affiliation(s)
- Linda Drijvers: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- Asli Özyürek: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
21. Vavatzanidis NK, Mürbe D, Friederici AD, Hahne A. Establishing a mental lexicon with cochlear implants: an ERP study with young children. Sci Rep 2018; 8:910. [PMID: 29343736 PMCID: PMC5772553 DOI: 10.1038/s41598-017-18852-3]
Abstract
In the present study we explore the implications of acquiring language when relying mainly or exclusively on input from a cochlear implant (CI), a device providing auditory input to otherwise deaf individuals. We focus on the time course of semantic learning in children within the second year of implant use; a period that equals the auditory age of normal hearing children during which vocabulary emerges and extends dramatically. 32 young bilaterally implanted children saw pictures paired with either matching or non-matching auditory words. Their electroencephalographic responses were recorded after 12, 18 and 24 months of implant use, revealing a large dichotomy: Some children failed to show semantic processing throughout their second year of CI use, which fell in line with their poor language outcomes. The majority of children, though, demonstrated semantic processing in form of the so-called N400 effect already after 12 months of implant use, even when their language experience relied exclusively on the implant. This is slightly earlier than observed for normal hearing children of the same auditory age, suggesting that more mature cognitive faculties at the beginning of language acquisition lead to faster semantic learning.
Affiliation(s)
- Niki K Vavatzanidis: Max Planck Institute for Human and Cognitive Brain Sciences, Leipzig, Germany; Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe: Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
- Angela D Friederici: Max Planck Institute for Human and Cognitive Brain Sciences, Leipzig, Germany
- Anja Hahne: Saxonian Cochlear Implant Center, Technische Universität Dresden, Dresden, Germany
22
|
Phonological and semantic processing during comprehension in Wernicke's aphasia: An N400 and Phonological Mapping Negativity Study. Neuropsychologia 2017; 100:144-154. [DOI: 10.1016/j.neuropsychologia.2017.04.012] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2016] [Revised: 03/14/2017] [Accepted: 04/07/2017] [Indexed: 11/18/2022]
23
Jamison C, Aiken SJ, Kiefte M, Newman AJ, Bance M, Sculthorpe-Petley L. Preliminary Investigation of the Passively Evoked N400 as a Tool for Estimating Speech-in-Noise Thresholds. Am J Audiol 2016; 25:344-358. [PMID: 27814664 DOI: 10.1044/2016_aja-15-0080] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Accepted: 05/20/2016] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Speech-in-noise testing relies on a number of factors beyond the auditory system, such as cognitive function, compliance, and motor function. It may be possible to avoid these limitations by using electroencephalography. The present study explored this possibility using the N400. METHOD Eleven adults with typical hearing heard high-constraint sentences with congruent and incongruent terminal words in the presence of speech-shaped noise. Participants ignored all auditory stimulation and watched a video. The signal-to-noise ratio (SNR) was varied around each participant's behavioral threshold during electroencephalography recording. Speech was also heard in quiet. RESULTS The amplitude of the N400 effect exhibited a nonlinear relationship with SNR. In the presence of background noise, amplitude decreased from high (+4 dB) to low (+1 dB) SNR but increased dramatically at threshold before decreasing again at subthreshold SNR (-2 dB). CONCLUSIONS The SNR of speech in noise modulates the amplitude of the N400 effect to semantic anomalies in a nonlinear fashion. These results are the first to demonstrate modulation of the passively evoked N400 by SNR in speech-shaped noise and represent a first step toward the end goal of developing an N400-based physiological metric for speech-in-noise testing.
Affiliation(s)
- Caroline Jamison
- School of Human Communication Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- Steve J. Aiken
- School of Human Communication Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- School of Psychology, Dalhousie University, Halifax, Nova Scotia, Canada
- Division of Otolaryngology, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
- Michael Kiefte
- School of Human Communication Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- Aaron J. Newman
- School of Psychology, Dalhousie University, Halifax, Nova Scotia, Canada
- Manohar Bance
- School of Human Communication Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- Division of Otolaryngology, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
- Lauren Sculthorpe-Petley
- School of Human Communication Disorders, Dalhousie University, Halifax, Nova Scotia, Canada
- School of Psychology, Dalhousie University, Halifax, Nova Scotia, Canada
- Division of Otolaryngology, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
- Biomedical Translational Imaging Centre, IWK Health Centre, Halifax, Nova Scotia, Canada
24
Carey D, Mercure E, Pizzioli F, Aydelott J. Auditory semantic processing in dichotic listening: Effects of competing speech, ear of presentation, and sentential bias on N400s to spoken words in context. Neuropsychologia 2014; 65:102-12. [DOI: 10.1016/j.neuropsychologia.2014.10.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2013] [Revised: 08/29/2014] [Accepted: 10/13/2014] [Indexed: 11/16/2022]
25
Scudder MR, Federmeier KD, Raine LB, Direito A, Boyd JK, Hillman CH. The association between aerobic fitness and language processing in children: implications for academic achievement. Brain Cogn 2014; 87:140-52. [PMID: 24747513 PMCID: PMC4036460 DOI: 10.1016/j.bandc.2014.03.016] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2013] [Revised: 03/21/2014] [Accepted: 03/24/2014] [Indexed: 11/25/2022]
Abstract
Event-related brain potentials (ERPs) have been instrumental for discerning the relationship between children's aerobic fitness and aspects of cognition, yet language processing remains unexplored. ERPs linked to the processing of semantic information (the N400) and the analysis of language structure (the P600) were recorded from higher and lower aerobically fit children as they read normal sentences and those containing semantic or syntactic violations. Results revealed that higher fit children exhibited greater N400 amplitude and shorter latency across all sentence types, and a larger P600 effect for syntactic violations. Such findings suggest that higher fitness may be associated with a richer network of words and their meanings, and a greater ability to detect and/or repair syntactic errors. The current findings extend previous ERP research explicating the cognitive benefits associated with greater aerobic fitness in children and may have important implications for learning and academic performance.
Affiliation(s)
- Mark R Scudder
- University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
- Lauren B Raine
- University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
- Artur Direito
- University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
- Jeremy K Boyd
- University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
26
Strauß A, Kotz SA, Obleser J. Narrowed Expectancies under Degraded Speech: Revisiting the N400. J Cogn Neurosci 2013; 25:1383-95. [DOI: 10.1162/jocn_a_00389] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Under adverse listening conditions, speech comprehension profits from the expectancies that listeners derive from the semantic context. However, the neurocognitive mechanisms of this semantic benefit are unclear: How are expectancies formed from context and adjusted as a sentence unfolds over time under various degrees of acoustic degradation? In an EEG study, we modified auditory signal degradation by applying noise-vocoding (severely degraded: four-band, moderately degraded: eight-band, and clear speech). Orthogonal to that, we manipulated the extent of expectancy: strong or weak semantic context (±con) and context-based typicality of the sentence-last word (high or low: ±typ). This allowed calculation of two distinct effects of expectancy on the N400 component of the evoked potential. The sentence-final N400 effect was taken as an index of the neural effort of automatic word-into-context integration; it varied in peak amplitude and latency with signal degradation and was not reliably observed in response to severely degraded speech. Under clear speech conditions in a strong context, typical and untypical sentence completions seemed to fulfill the neural prediction, as indicated by N400 reductions. In response to moderately degraded signal quality, however, the formed expectancies appeared more specific: Only typical (+con +typ), but not the less typical (+con −typ) context–word combinations led to a decrease in the N400 amplitude. The results show that adverse listening “narrows,” rather than broadens, the expectancies about the perceived speech signal: limiting the perceptual evidence forces the neural system to rely on signal-driven expectancies, rather than more abstract expectancies, while a sentence unfolds over time.
27
Gao X, Levinthal BR, Stine-Morrow EAL. The Effects of Ageing and Visual Noise on Conceptual Integration during Sentence Reading. Q J Exp Psychol (Hove) 2012; 65:1833-47. [DOI: 10.1080/17470218.2012.674146] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
The effortfulness hypothesis implies that difficulty in decoding the surface form, as in the case of age-related sensory limitations or background noise, consumes the attentional resources that are then unavailable for semantic integration in language comprehension. Because ageing is associated with sensory declines, degrading of the surface form by a noisy background can pose an extra challenge for older adults. In two experiments, this hypothesis was tested in a self-paced moving window paradigm in which younger and older readers’ online allocation of attentional resources to surface decoding and semantic integration was measured as they read sentences embedded in varying levels of visual noise. When visual noise was moderate (Experiment 1), resource allocation among young adults was unaffected but older adults allocated more resources to decode the surface form at the cost of resources that would otherwise be available for semantic processing; when visual noise was relatively intense (Experiment 2), both younger and older participants allocated more attention to the surface form and less attention to semantic processing. The decrease in attentional allocation to semantic integration resulted in reduced recall of core ideas in both experiments, suggesting that a less organized semantic representation was constructed in noise. The greater vulnerability of older adults at relatively low levels of noise is consistent with the effortfulness hypothesis.
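For readers unfamiliar with the self-paced moving-window paradigm used in this study, the sketch below illustrates the core idea in Python: only one word is visible at a time, and the latency of each keypress is logged as that word's reading time. It is a minimal, hypothetical console mock-up (the sentence, masking, and timing details are illustrative assumptions, not the authors' implementation, and the visual-noise manipulation is omitted).

```python
import time

def self_paced_reading(sentence):
    """Console mock-up of a self-paced moving-window trial: each Enter
    keypress reveals the next word, and the time elapsed before the press
    is logged as the reading time for the currently visible word."""
    words = sentence.split()
    reading_times = []
    for i, word in enumerate(words):
        # Moving window: only the current word is shown; all others are masked.
        display = " ".join(w if j == i else "-" * len(w) for j, w in enumerate(words))
        t0 = time.perf_counter()
        input(display + "   [Enter]")
        reading_times.append((word, time.perf_counter() - t0))
    return reading_times

if __name__ == "__main__":
    for word, rt in self_paced_reading("The old sailor repaired the torn sail at dawn"):
        print(f"{word:>10s}: {rt * 1000:6.0f} ms")
```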
Affiliation(s)
- Xuefei Gao
- Beckman Institute & Department of Educational Psychology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Elizabeth A. L. Stine-Morrow
- Beckman Institute & Department of Educational Psychology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
28
The N400 and Late Positive Complex (LPC) Effects Reflect Controlled Rather than Automatic Mechanisms of Sentence Processing. Brain Sci 2012; 2:267-97. [PMID: 24961195 PMCID: PMC4061799 DOI: 10.3390/brainsci2030267] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 07/16/2012] [Accepted: 08/01/2012] [Indexed: 11/17/2022] Open
Abstract
This study compared automatic and controlled cognitive processes that underlie event-related potentials (ERPs) effects during speech perception. Sentences were presented to French native speakers, and the final word could be congruent or incongruent, and presented at one of four levels of degradation (using a modulation with pink noise): no degradation, mild degradation (2 levels), or strong degradation. We assumed that degradation impairs controlled more than automatic processes. The N400 and Late Positive Complex (LPC) effects were defined as the differences between the corresponding wave amplitudes to incongruent words minus congruent words. Under mild degradation, where controlled sentence-level processing could still occur (as indicated by behavioral data), both N400 and LPC effects were delayed and the latter effect was reduced. Under strong degradation, where sentence processing was rather automatic (as indicated by behavioral data), no ERP effect remained. These results suggest that ERP effects elicited in complex contexts, such as sentences, reflect controlled rather than automatic mechanisms of speech processing. These results differ from the results of experiments that used word-pair or word-list paradigms.
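Because the N400 and LPC effects here are defined as amplitude differences between incongruent and congruent final words, a minimal sketch of how such difference measures could be computed from epoched EEG data may help; the array shapes, latency windows, and channel indices below are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def erp_effect(incongruent, congruent, times, window, channels):
    """Mean amplitude to incongruent minus congruent words, averaged over
    trials, the selected channels, and the given latency window.

    incongruent, congruent : arrays of shape (n_trials, n_channels, n_samples)
    times                  : sample times in seconds, length n_samples
    window                 : (start, end) of the latency window in seconds
    channels               : indices of the channels to average over
    """
    mask = (times >= window[0]) & (times <= window[1])
    inc = incongruent[:, channels][..., mask].mean()
    con = congruent[:, channels][..., mask].mean()
    return inc - con

# Example with simulated epochs: 30 trials, 32 channels, 1.2 s at 500 Hz.
rng = np.random.default_rng(0)
times = np.arange(-0.2, 1.0, 1 / 500)
inc = rng.normal(0.0, 1.0, (30, 32, times.size))
con = rng.normal(0.0, 1.0, (30, 32, times.size))
centro_parietal = [10, 11, 12]                      # hypothetical channel indices
n400 = erp_effect(inc, con, times, (0.300, 0.500), centro_parietal)
lpc = erp_effect(inc, con, times, (0.600, 0.900), centro_parietal)
print(f"N400 effect: {n400:+.2f} uV, LPC effect: {lpc:+.2f} uV")
```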
29
Goslin J, Duffy H, Floccia C. An ERP investigation of regional and foreign accent processing. BRAIN AND LANGUAGE 2012; 122:92-102. [PMID: 22694999 DOI: 10.1016/j.bandl.2012.04.017] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2011] [Revised: 04/19/2012] [Accepted: 04/30/2012] [Indexed: 05/13/2023]
Abstract
This study used event-related potentials (ERPs) to examine whether we employ the same normalisation mechanisms when processing words spoken with a regional accent or foreign accent. Our results showed that the Phonological Mapping Negativity (PMN) following the onset of the final word of sentences spoken with an unfamiliar regional accent was greater than for those produced in the listener's own accent, whilst PMN for foreign accented speech was reduced. Foreign accents also resulted in a reduction in N400 amplitude when compared to both unfamiliar regional accents and the listener's own accent, with no significant difference found between the N400 of the regional and home accents. These results suggest that regional accent related variations are normalised at the earliest stages of spoken word recognition, requiring less top-down lexical intervention than foreign accents.
30
Effect of speech degradation on top-down repair: phonemic restoration with simulations of cochlear implants and combined electric-acoustic stimulation. J Assoc Res Otolaryngol 2012; 13:683-92. [PMID: 22569838 PMCID: PMC3441953 DOI: 10.1007/s10162-012-0334-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2011] [Accepted: 04/24/2012] [Indexed: 11/11/2022] Open
Abstract
The brain, using expectations, linguistic knowledge, and context, can perceptually restore inaudible portions of speech. Such top-down repair is thought to enhance speech intelligibility in noisy environments. Hearing-impaired listeners with cochlear implants commonly complain about not understanding speech in noise. We hypothesized that the degradations in the bottom-up speech signals due to the implant signal processing may have a negative effect on the top-down repair mechanisms, which could partially be responsible for this complaint. To test the hypothesis, phonemic restoration of interrupted sentences was measured with young normal-hearing listeners using a noise-band vocoder simulation of implant processing. Decreasing the spectral resolution (by reducing the number of vocoder processing channels from 32 to 4) systematically degraded the speech stimuli. Supporting the hypothesis, the size of the restoration benefit varied as a function of spectral resolution. A significant benefit was observed only at the highest spectral resolution of 32 channels. With eight channels, which resembles the resolution available to most implant users, there was no significant restoration effect. Combined electric–acoustic hearing has been previously shown to provide better intelligibility of speech in adverse listening environments. In a second configuration, combined electric–acoustic hearing was simulated by adding low-pass-filtered acoustic speech to the vocoder processing. There was a slight improvement in phonemic restoration compared to the first configuration; the restoration benefit was observed at spectral resolutions of both 16 and 32 channels. However, the restoration was not observed at lower spectral resolutions (four or eight channels). Overall, the findings imply that the degradations in the bottom-up signals alone (such as occurs in cochlear implants) may reduce the top-down restoration of speech.
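Noise-band vocoding, as used in this simulation, preserves the slowly varying envelope in each analysis band while replacing spectral fine structure with noise; fewer bands means coarser spectral resolution. The sketch below is a generic, minimal vocoder written for illustration only (the filter orders, band spacing, and envelope cutoff are assumptions, not the study's exact processing chain).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0, env_cutoff=160.0):
    """Crude noise-band vocoder: split the input into n_channels
    logarithmically spaced bands, extract each band's envelope by
    rectification plus low-pass filtering, and use the envelope to
    modulate noise that is band-limited to the same frequency region."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        envelope = sosfiltfilt(env_sos, np.abs(sosfiltfilt(band_sos, signal)))
        carrier = sosfiltfilt(band_sos, noise)
        out += np.clip(envelope, 0.0, None) * carrier
    # Rescale so the vocoded signal has the same peak level as the input.
    return out * np.max(np.abs(signal)) / (np.max(np.abs(out)) + 1e-12)

# Example: vocode a synthetic amplitude-modulated tone at 4, 8, and 32 channels.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = {n: noise_vocode(speech_like, fs, n) for n in (4, 8, 32)}
```

A combined electric-acoustic configuration of the kind simulated in the second part of the study could be approximated by adding low-pass-filtered original speech to the vocoded output.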
31
Rhythm's gonna get you: Regular meter facilitates semantic sentence processing. Neuropsychologia 2012; 50:232-44. [DOI: 10.1016/j.neuropsychologia.2011.10.025] [Citation(s) in RCA: 81] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2011] [Revised: 10/05/2011] [Accepted: 10/31/2011] [Indexed: 11/21/2022]
32
33
Romei L, Wambacq IJA, Besing J, Koehnke J, Jerger J. Neural indices of spoken word processing in background multi-talker babble. Int J Audiol 2011; 50:321-33. [DOI: 10.3109/14992027.2010.547875] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
34
Obleser J, Kotz SA. Multiple brain signatures of integration in the comprehension of degraded speech. Neuroimage 2011; 55:713-23. [PMID: 21172443 DOI: 10.1016/j.neuroimage.2010.12.020] [Citation(s) in RCA: 101] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2010] [Revised: 11/26/2010] [Accepted: 12/06/2010] [Indexed: 11/20/2022] Open
Affiliation(s)
- Jonas Obleser
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
35
Boulenger V, Hoen M, Jacquier C, Meunier F. Interplay between acoustic/phonetic and semantic processes during spoken sentence comprehension: an ERP study. BRAIN AND LANGUAGE 2011; 116:51-63. [PMID: 20965558 DOI: 10.1016/j.bandl.2010.09.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/08/2010] [Revised: 09/14/2010] [Accepted: 09/18/2010] [Indexed: 05/30/2023]
Abstract
When listening to speech in everyday-life situations, our cognitive system must often cope with signal instabilities such as sudden breaks, mispronunciations, interfering noises or reverberations potentially causing disruptions at the acoustic/phonetic interface and preventing efficient lexical access and semantic integration. The physiological mechanisms allowing listeners to react instantaneously to such fast and unexpected perturbations in order to maintain intelligibility of the delivered message are still partly unknown. The present electroencephalography (EEG) study aimed at investigating the cortical responses to real-time detection of a sudden acoustic/phonetic change occurring in connected speech and how these mechanisms interfere with semantic integration. Participants listened to sentences in which final words could contain signal reversals along the temporal dimension (time-reversed speech) of varying durations and could have either a low- or high-cloze probability within sentence context. Results revealed that early detection of the acoustic/phonetic change elicited a fronto-central negativity shortly after the onset of the manipulation that matched the spatio-temporal features of the Mismatch Negativity (MMN) recorded in the same participants during an oddball paradigm. Time reversal also affected late event-related potentials (ERPs) reflecting semantic expectancies (N400) differently when words were predictable or not from the sentence context. These findings are discussed in the context of brain signatures to transient acoustic/phonetic variations in speech. They contribute to a better understanding of natural speech comprehension as they show that acoustic/phonetic information and semantic knowledge strongly interact under adverse conditions.
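The time-reversed speech manipulation described above amounts to mirroring a stretch of the waveform in time while leaving the rest of the sentence intact. A minimal sketch of such a manipulation is shown below; the segment onset and duration are arbitrary illustration values, not those used in the study.

```python
import numpy as np

def reverse_segment(signal, fs, onset_s, duration_s):
    """Return a copy of `signal` in which the portion starting at
    `onset_s` and lasting `duration_s` seconds is played backwards."""
    out = np.array(signal, dtype=float, copy=True)
    start = int(round(onset_s * fs))
    stop = min(start + int(round(duration_s * fs)), len(out))
    out[start:stop] = out[start:stop][::-1]
    return out

# Example: reverse 200 ms of a 1 s signal, starting at an assumed final-word onset of 0.7 s.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 150 * t)
degraded = reverse_segment(signal, fs, onset_s=0.7, duration_s=0.2)
```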
Affiliation(s)
- Véronique Boulenger
- Laboratoire Dynamique du Langage, CNRS, Université Lyon 2, UMR 5596, Lyon, France.
36
37
Prause N, Heiman J. Reduced Labial Temperature in Response to Sexual Films with Distractors among Women with Lower Sexual Desire. J Sex Med 2010; 7:951-63. [DOI: 10.1111/j.1743-6109.2009.01525.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
38
Cardillo ER, Aydelott J, Matthews PM, Devlin JT. Left inferior prefrontal cortex activity reflects inhibitory rather than facilitatory priming. J Cogn Neurosci 2004; 16:1552-61. [PMID: 15601518 PMCID: PMC2651466 DOI: 10.1162/0898929042568523] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Functional neuroimaging has demonstrated reduced activation correlated with behavioral priming effects, a finding generally interpreted in terms of facilitated retrieval of target items in the context of related primes. Without a neutral prime, however, one cannot separate facilitatory effects of related primes from inhibitory effects of unrelated primes. Here we report an auditory semantic priming paradigm with congruent ("The boy bounced the BALL"), neutral ("The next item is BALL"), and incongruent ("Pasta is my favorite kind of BALL") sentence trials. As previously reported, reduced left inferior prefrontal cortex activation was observed for congruent relative to incongruent trials; however, the neutral condition allowed us to show that the effect arose from increased activation in the incongruent condition rather than reduced activation for congruent trials. Our results suggest that the left inferior prefrontal cortex inhibits interference from prepotent representations in order to select a task-appropriate target, consistent with its broader role in behavioral inhibition.
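The value of the neutral baseline is that it splits the overall congruent-incongruent difference into a facilitatory and an inhibitory component. A small worked example with invented numbers (arbitrary units, purely illustrative, not the study's data):

```python
# Hypothetical mean responses per prime condition (arbitrary units, invented numbers).
congruent, neutral, incongruent = 520.0, 560.0, 640.0

facilitation = neutral - congruent          # benefit of a congruent (related) context
inhibition = incongruent - neutral          # cost of an incongruent (unrelated) context
overall = incongruent - congruent           # all a two-condition design could report

print(f"facilitation {facilitation:+.0f}, inhibition {inhibition:+.0f}, "
      f"decomposition sums to the overall effect: {facilitation + inhibition == overall}")
```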
Affiliation(s)
- Eileen R Cardillo
- Department of Experimental Psychology, University of Oxford, OX1 3UD, UK.