1
Tao X, Croom K, Newman-Tancredi A, Varney M, Razak KA. Acute administration of NLX-101, a Serotonin 1A receptor agonist, improves auditory temporal processing during development in a mouse model of Fragile X Syndrome. J Neurodev Disord 2025; 17:1. [PMID: 39754065] [PMCID: PMC11697955] [DOI: 10.1186/s11689-024-09587-0]
Abstract
BACKGROUND Fragile X syndrome (FXS) is a leading known genetic cause of intellectual disability and autism spectrum disorder (ASD)-associated behaviors. A consistent and debilitating phenotype of FXS is auditory hypersensitivity that may lead to delayed language and high anxiety. Consistent with findings in human FXS studies, the mouse model of FXS, the Fmr1 knockout (KO) mouse, shows auditory hypersensitivity and temporal processing deficits. In electroencephalography (EEG) recordings from humans and mice, these deficits manifest as increased N1 amplitudes in event-related potentials (ERP), increased gamma-band single-trial power (STP) and reduced phase locking to rapid temporal modulations of sound. In our previous study, we found that administration of the selective serotonin 1A (5-HT1A) receptor biased agonist NLX-101 protected Fmr1 KO mice from auditory hypersensitivity-associated seizures. Here we tested the hypothesis that NLX-101 will normalize EEG phenotypes in developing Fmr1 KO mice. METHODS To test this hypothesis, we examined the effect of NLX-101 on EEG phenotypes in male and female wildtype (WT) and Fmr1 KO mice. Using epidural electrodes, we recorded auditory event-related potentials (ERP) and auditory temporal processing with a gap-in-noise auditory steady state response (ASSR) paradigm at two ages, postnatal days (P)21 and P30, from both auditory and frontal cortices of awake, freely moving mice, following NLX-101 (1.8 mg/kg, i.p.) or saline administration. RESULTS Saline-injected Fmr1 KO mice showed increased N1 amplitudes, increased STP and reduced phase locking to auditory gap-in-noise stimuli versus wild-type mice, reproducing previously published EEG phenotypes. An acute injection of NLX-101 did not alter ERP amplitudes at either P21 or P30, but significantly reduced STP at P30. Inter-trial phase clustering was significantly increased in both age groups with NLX-101, indicating improved temporal processing.
The differential effects of serotonin modulation on ERP, background power and temporal processing suggest different developmental mechanisms leading to these phenotypes. CONCLUSIONS These results suggest that NLX-101 could constitute a promising treatment option for targeting post-synaptic 5-HT1A receptors to improve auditory temporal processing, which in turn may improve speech and language function in FXS.
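Several of the EEG phenotypes above hinge on inter-trial phase clustering, i.e., how consistently oscillatory phase aligns to the stimulus across trials. As a minimal sketch of the general measure (illustrative only, not the authors' analysis pipeline; the function name and single-frequency projection are assumptions for exposition):

```python
import numpy as np

def itpc(trials, sfreq, freq):
    """Inter-trial phase clustering at one frequency.

    trials: (n_trials, n_samples) array of single-trial EEG epochs.
    Returns a value in [0, 1]; 1 means perfect phase locking across trials.
    """
    t = np.arange(trials.shape[1]) / sfreq
    # Project each trial onto a complex sinusoid at the target frequency
    kernel = np.exp(-2j * np.pi * freq * t)
    phases = np.angle(trials @ kernel)
    # ITPC is the length of the mean unit phase vector across trials
    return np.abs(np.mean(np.exp(1j * phases)))
```

Phase-locked trials drive the value toward 1 while random phase drives it toward 0, which is why reduced values index degraded temporal processing.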
Affiliation(s)
- Xin Tao
- Graduate Neuroscience Program, University of California, Riverside, CA, USA
- Katilynne Croom
- Graduate Neuroscience Program, University of California, Riverside, CA, USA
- Khaleel A Razak
- Graduate Neuroscience Program, University of California, Riverside, CA, USA.
- Department of Psychology, University of California, 900 University Avenue, Riverside, CA, 92521, USA.
2
Jin J, Zheng Q, Liu H, Feng K, Bai Y, Ni G. Musical experience enhances time discrimination: Evidence from cortical responses. Ann N Y Acad Sci 2024; 1536:167-176. [PMID: 38829709] [DOI: 10.1111/nyas.15153]
Abstract
Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain, thereby enhancing time discrimination. However, this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination, utilizing both behavioral and electroencephalogram methodologies. Additionally, we aim to explore, through brain connectivity analysis, the role of increased connectivity in brain regions associated with auditory perception as a potential contributory factor to time discrimination induced by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction time, and shorter P3 and mismatch response latencies as compared to the control group. Furthermore, the music-experienced group had higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may be responsible for this enhancement.
Affiliation(s)
- Jiaqi Jin
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Qi Zheng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Kunyun Feng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
- State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
3
Zhang X, Li J, Li Z, Hong B, Diao T, Ma X, Nolte G, Engel AK, Zhang D. Leading and following: Noise differently affects semantic and acoustic processing during naturalistic speech comprehension. Neuroimage 2023; 282:120404. [PMID: 37806465] [DOI: 10.1016/j.neuroimage.2023.120404]
Abstract
Despite the distortion of speech signals caused by unavoidable noise in daily life, our ability to comprehend speech in noisy environments is relatively stable. However, the neural mechanisms underlying reliable speech-in-noise comprehension remain to be elucidated. The present study investigated the neural tracking of acoustic and semantic speech information during noisy naturalistic speech comprehension. Participants listened to narrative audio recordings mixed with spectrally matched stationary noise at three signal-to-noise ratio (SNR) levels (no noise, 3 dB, -3 dB), and 60-channel electroencephalography (EEG) signals were recorded. A temporal response function (TRF) method was employed to derive event-related-like responses to the continuous speech stream at both the acoustic and the semantic levels. Whereas the amplitude envelope of the naturalistic speech was taken as the acoustic feature, word entropy and word surprisal were extracted via natural language processing methods as two semantic features. Theta-band frontocentral TRF responses to the acoustic feature were observed at around 400 ms following speech fluctuation onset at all three SNR levels, and the response latencies became more delayed with increasing noise. Delta-band frontal TRF responses to the semantic feature of word entropy were observed around 200 to 600 ms before speech fluctuation onset at all three SNR levels. These response latencies became more leading with increasing noise and decreasing speech comprehension and intelligibility. While the following responses to speech acoustics were consistent with previous studies, our study revealed the robustness of leading responses to speech semantics, which suggests a possible predictive mechanism at the semantic level for maintaining reliable speech comprehension in noisy environments.
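The TRF analysis described above amounts to a regularized regression of the EEG on time-lagged copies of a stimulus feature such as the envelope or word surprisal. A minimal ridge-regression sketch of that idea (illustrative only; the function name, lag convention and regularization strength are assumptions, not the authors' implementation):

```python
import numpy as np

def fit_trf(stimulus, response, lags, lam=1.0):
    """Estimate a temporal response function by ridge regression.

    stimulus, response: equal-length 1-D arrays (e.g. speech envelope
    and one EEG channel). lags: sample lags spanning the TRF window.
    """
    n = len(stimulus)
    # Design matrix: one column per lagged copy of the stimulus
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: w = (X'X + lam*I)^{-1} X'y
    eye = np.eye(len(lags))
    return np.linalg.solve(X.T @ X + lam * eye, X.T @ response)
```

Positive lags model responses that follow the stimulus feature; negative lags capture "leading" responses of the kind reported here for word entropy.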
Affiliation(s)
- Xinmiao Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Jiawei Li
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Federal Republic of Germany
- Zhuoran Li
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Bo Hong
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Tongxiang Diao
- Department of Otolaryngology, Head and Neck Surgery, Peking University, People's Hospital, Beijing 100044, China
- Xin Ma
- Department of Otolaryngology, Head and Neck Surgery, Peking University, People's Hospital, Beijing 100044, China
- Guido Nolte
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Federal Republic of Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg 20246, Federal Republic of Germany
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China.
4
Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022; 12:1706. [PMID: 36552167] [PMCID: PMC9776349] [DOI: 10.3390/brainsci12121706]
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or the semantic channel. They were asked to judge the emotional content (explicit task) and the speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but was reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization in the delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, offering insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yueqi Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Zhang
- School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha 410012, China
- Hui Zhang
- School of International Education, Shandong University, Jinan 250100, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
5
Saloranta A, Heikkola LM, Peltola MS. Listen-and-repeat training in the learning of non-native consonant duration contrasts: influence of consonant type as reflected by MMN and behavioral methods. J Psycholinguist Res 2022; 51:885-901. [PMID: 35312934] [PMCID: PMC9338006] [DOI: 10.1007/s10936-022-09868-6]
Abstract
Phonological duration differences in quantity languages can be problematic for second language learners whose native language does not use duration contrastively. Recent studies have found improvement in the processing of non-native vowel duration contrasts with the use of listen-and-repeat training, and the current study explores the efficacy of similar methodology on consonant duration contrasts. Eighteen adult participants underwent two days of listen-and-repeat training with pseudoword stimuli containing either a sibilant or a stop consonant contrast. The results were examined with psychophysiological event-related potentials (mismatch negativity and P3), behavioral discrimination tests and a production task. The results revealed no training-related effects in the event-related potentials or the production task, but behavioral discrimination performance improved. Furthermore, differences emerged between the processing of the two consonant types. The findings suggest that stop consonants are processed more slowly than sibilants, and are discussed with regard to possible segmentation difficulties.
Affiliation(s)
- Antti Saloranta
- Department of Future Technologies, University of Turku, Turku, Finland.
- Learning, Age & Bilingualism laboratory (LAB-lab), University of Turku, Turku, Finland.
- Maija S Peltola
- Department of Future Technologies, University of Turku, Turku, Finland
- Learning, Age & Bilingualism laboratory (LAB-lab), University of Turku, Turku, Finland
6
Charuthamrong P, Israsena P, Hemrungrojn S, Pan-ngum S. Automatic Speech Discrimination Assessment Methods Based on Event-Related Potentials (ERP). Sensors 2022; 22:2702. [PMID: 35408316] [PMCID: PMC9002564] [DOI: 10.3390/s22072702]
Abstract
Speech discrimination is used by audiologists in diagnosing and determining treatment for hearing loss patients. Assessing speech discrimination usually requires subjective responses. Using electroencephalography (EEG), a method based on event-related potentials (ERPs) could provide an objective measure of speech discrimination. In this work we proposed a visual-ERP-based method to assess speech discrimination using pictures that represent word meaning. The proposed method was implemented with three strategies, each with a different number of pictures and test sequences. Machine learning was adopted to classify between the task conditions based on features extracted from EEG signals. The results of the proposed method were compared with those of a similar visual-ERP-based method using letters and of a method based on the auditory mismatch negativity (MMN) component. The P3 component and the late positive potential (LPP) were observed in the two visual-ERP-based methods, while the MMN was observed during the MMN-based method. Two of the three strategies of the proposed method, along with the MMN-based method, achieved approximately 80% average classification accuracy using a combination of support vector machine (SVM) and common spatial pattern (CSP). Potentially, these methods could serve as a pre-screening tool to make speech discrimination assessment more accessible, particularly in areas with a shortage of audiologists.
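The SVM-plus-CSP combination reported above first learns spatial filters that maximize the variance ratio between the two task conditions, then classifies log-variance features of the filtered signals. A minimal numpy sketch of the CSP step (illustrative; the whitening-based formulation and filter selection here are common conventions, not necessarily the authors' exact implementation):

```python
import numpy as np

def csp_filters(epochs_a, epochs_b, n_filters=2):
    """Common spatial patterns for two classes of EEG epochs.

    epochs_a, epochs_b: (n_epochs, n_channels, n_samples) arrays.
    Returns n_filters spatial filters (rows) whose outputs have extreme
    variance ratios between the two classes.
    """
    def mean_cov(epochs):
        # Average per-epoch channel covariance
        return np.mean([e @ e.T / e.shape[1] for e in epochs], axis=0)

    ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Whiten with the composite covariance ...
    evals, evecs = np.linalg.eigh(ca + cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # ... then diagonalize class A in the whitened space (eigenvalues ascending)
    _, V = np.linalg.eigh(P @ ca @ P)
    W = V.T @ P
    # Keep filters from both ends of the spectrum: extreme variance ratios
    idx = np.r_[np.arange(n_filters // 2), np.arange(-(n_filters // 2), 0)]
    return W[idx]
```

The log-variances of the filtered components would then serve as the feature vector passed to an SVM.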
Affiliation(s)
- Pimwipa Charuthamrong
- Interdisciplinary Program of Biomedical Engineering, Faculty of Engineering, Chulalongkorn University, Pathumwan, Bangkok 10330, Thailand;
- Pasin Israsena
- National Electronics and Computer Technology Center, 112 Thailand Science Park, Klong Luang, Pathumthani 12120, Thailand;
- Solaphat Hemrungrojn
- Department of Psychiatry, Faculty of Medicine, Chulalongkorn University, Pathumwan, Bangkok 10330, Thailand;
- Setha Pan-ngum
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Pathumwan, Bangkok 10330, Thailand
- Correspondence:
7
Informational Masking Effects of Similarity and Uncertainty on Early and Late Stages of Auditory Cortical Processing. Ear Hear 2021; 42:1006-1023. [PMID: 33416259] [DOI: 10.1097/aud.0000000000000997]
Abstract
PURPOSE Understanding speech against a background of other people talking is a difficult listening situation for hearing-impaired individuals, and even for those with normal hearing. Speech-on-speech masking is known to contribute to increased perceptual difficulty over nonspeech background noise because of informational masking provided over and above the effects of energetic masking. While informational masking research has identified factors of similarity and uncertainty between target and masker that contribute to reduced behavioral performance in speech background noise, critical gaps in knowledge, including the underlying neural-perceptual processes, remain. By systematically manipulating aspects of acoustic similarity and uncertainty in the same auditory paradigm, the current study examined the time course of these informational masking effects and objectively quantified them at both early and late stages of auditory processing using auditory evoked potentials (AEPs). METHOD Thirty participants were included in a cross-sectional repeated-measures design. Target-masker similarity was manipulated by varying the linguistic/phonetic similarity (i.e., language) of the talkers in the background. Specifically, four levels representing hypothesized increasing levels of informational masking were implemented: (1) no masker (quiet); (2) Mandarin; (3) Dutch; and (4) English. Stimulus uncertainty was manipulated by task complexity, specifically the presentation of the target-to-target interval (TTI) in the auditory evoked paradigm. Participants had to discriminate between English word stimuli (/bæt/ and /pæt/) presented in an oddball paradigm under each masker condition, pressing buttons in response to either the target or the standard stimulus. Responses were recorded simultaneously for P1-N1-P2 (standard waveform) and P3 (target waveform). This design allowed simultaneous recording of multiple AEP peaks, as well as accuracy, reaction time, and d' behavioral discrimination from button-press responses.
RESULTS Several trends in AEP components were consistent with effects of increasing linguistic/phonetic similarity and stimulus uncertainty. All babble maskers significantly affected outcomes compared to quiet. In addition, the native-language English masker had the largest effect on outcomes in the AEP paradigm, including reduced P3 amplitude and area, as well as decreased accuracy and d' behavioral discrimination for target word responses. AEP outcomes for the Mandarin and Dutch maskers, however, did not differ significantly on any measured component. Latency outcomes for both N1 and P3 also supported an effect of stimulus uncertainty, consistent with increased processing time related to greater task complexity. An unanticipated result was the absence of an interaction between linguistic/phonetic similarity and stimulus uncertainty. CONCLUSIONS Observable effects of both similarity and uncertainty were evidenced at the level of the P3 more than at the earlier N1 level of auditory cortical processing, suggesting that higher-level active auditory processing may be more sensitive to informational masking deficits. The lack of a significant interaction between similarity and uncertainty at either level of processing suggests that these informational masking factors operated independently. Speech babble maskers across languages altered AEP component measures, behavioral detection, and reaction time. Specifically, this occurred when the babble was in the native/same language as the target, while the effects of foreign-language maskers did not differ. The objective results from this study provide a foundation for further investigation of how the linguistic content of target and masker and task difficulty contribute to difficulty understanding speech in noise.
8
Rao A, Koerner TK, Madsen B, Zhang Y. Investigating Influences of Medial Olivocochlear Efferent System on Central Auditory Processing and Listening in Noise: A Behavioral and Event-Related Potential Study. Brain Sci 2020; 10:428. [PMID: 32635442] [PMCID: PMC7408540] [DOI: 10.3390/brainsci10070428]
Abstract
This electrophysiological study investigated the role of the medial olivocochlear (MOC) efferents in listening in noise. Both ears of eleven normal-hearing adult participants were tested. The physiological tests consisted of transient-evoked otoacoustic emission (TEOAE) inhibition and the measurement of cortical event-related potentials (ERPs). The mismatch negativity (MMN) and P300 responses were obtained in passive and active listening tasks, respectively. Behavioral responses for the word recognition in noise test were also analyzed. Consistent with previous findings, the TEOAE data showed significant inhibition in the presence of contralateral acoustic stimulation. However, performance in the word recognition in noise test was comparable for the two conditions (i.e., without contralateral stimulation and with contralateral stimulation). Peak latencies and peak amplitudes of MMN and P300 did not show changes with contralateral stimulation. Behavioral performance was also maintained in the P300 task. Together, the results show that the peripheral auditory efferent effects captured via otoacoustic emission (OAE) inhibition might not necessarily be reflected in measures of central cortical processing and behavioral performance. As the MOC effects may not play a role in all listening situations in adults, the functional significance of the cochlear effects of the medial olivocochlear efferents and the optimal conditions conducive to corresponding effects in behavioral and cortical responses remain to be elucidated.
Affiliation(s)
- Aparna Rao
- Department of Speech and Hearing Science, Arizona State University, Tempe, AZ 85287, USA
- Correspondence: (A.R.); (Y.Z.); Tel.: +1-480-727-2761 (A.R.); +1-612-624-7818 (Y.Z.)
- Tess K. Koerner
- VA RR & D National Center for Rehabilitative Auditory Research, Portland, OR 97239, USA; (T.K.K.); (B.M.)
- Brandon Madsen
- VA RR & D National Center for Rehabilitative Auditory Research, Portland, OR 97239, USA; (T.K.K.); (B.M.)
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (A.R.); (Y.Z.); Tel.: +1-480-727-2761 (A.R.); +1-612-624-7818 (Y.Z.)
9
Papesh MA, Stefl AA, Gallun FJ, Billings CJ. Effects of Signal Type and Noise Background on Auditory Evoked Potential N1, P2, and P3 Measurements in Blast-Exposed Veterans. Ear Hear 2020; 42:106-121. [PMID: 32520849] [DOI: 10.1097/aud.0000000000000906]
Abstract
OBJECTIVES Veterans who have been exposed to high-intensity blast waves frequently report persistent auditory difficulties such as problems with speech-in-noise (SIN) understanding, even when hearing sensitivity remains normal. However, these subjective reports have proven challenging to corroborate objectively. Here, we sought to determine whether use of complex stimuli and challenging signal contrasts in auditory evoked potential (AEP) paradigms, rather than the traditional use of simple stimuli and easy signal contrasts, improved the ability of these measures to (1) distinguish between blast-exposed Veterans with auditory complaints and neurologically normal control participants, and (2) predict behavioral measures of SIN perception. DESIGN A total of 33 adults (aged 19-56 years) took part in this study, including 17 Veterans exposed to high-intensity blast waves within the past 10 years and 16 neurologically normal control participants matched for age and hearing status with the Veteran participants. All participants completed the following test measures: (1) a questionnaire probing perceived hearing abilities; (2) behavioral measures of SIN understanding including the BKB-SIN, the AzBio presented at 0 and +5 dB signal-to-noise ratios (SNRs), and a word-level consonant-vowel-consonant test presented at +5 dB SNR; and (3) electrophysiological tasks involving oddball paradigms in response to simple tones (500 Hz standard, 1000 Hz deviant) and complex speech syllables (/ba/ standard, /da/ deviant) presented in quiet and in four-talker speech babble at an SNR of +5 dB. RESULTS Blast-exposed Veterans reported significantly greater auditory difficulties compared to control participants. Behavioral performance on tests of SIN perception was generally, though not significantly, poorer in the blast-exposed group.
Latencies of P3 responses to tone signals were significantly longer among blast-exposed participants compared to control participants regardless of background condition, though responses to speech signals were similar across groups. For cortical AEPs, no significant interactions were found between group membership and either stimulus type or background. P3 amplitudes measured in response to signals in background babble accounted for 30.9% of the variance in subjective auditory reports. Behavioral SIN performance was best predicted by a combination of N1 and P2 responses to signals in quiet, which accounted for 69.6% and 57.4% of the variance on the AzBio at 0 dB SNR and the BKB-SIN, respectively. CONCLUSIONS Although blast-exposed participants reported far more auditory difficulties compared to controls, use of complex stimuli and challenging signal contrasts in cortical and cognitive AEP measures failed to reveal larger group differences than responses to simple stimuli and easy signal contrasts. Despite this, only P3 responses to signals presented in background babble were predictive of subjective auditory complaints. In contrast, cortical N1 and P2 responses were predictive of behavioral SIN performance but not subjective auditory complaints, and use of challenging background babble generally did not improve performance predictions. These results suggest that challenging stimulus protocols are more likely to tap into perceived auditory deficits, but may not be beneficial for predicting performance on clinical measures of SIN understanding. Finally, these results should be interpreted with caution, since blast-exposed participants did not perform significantly poorer on tests of SIN perception.
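Peak amplitude and latency measures like the N1, P2 and P3 values analyzed in several of these studies are typically extracted from the averaged waveform within a component-specific search window. A minimal sketch of that extraction (illustrative; the function name and the window bounds in the example are common conventions, not taken from this study):

```python
import numpy as np

def peak_measures(erp, times, window, polarity=+1):
    """Peak amplitude and latency of an ERP component in a search window.

    erp: 1-D averaged waveform; times: matching time vector (s);
    window: (t_min, t_max) search range, e.g. (0.25, 0.50) for a P3;
    polarity: +1 for positive peaks (P2, P3), -1 for negative (N1).
    Returns (amplitude, latency).
    """
    mask = (times >= window[0]) & (times <= window[1])
    seg, seg_t = erp[mask] * polarity, times[mask]
    i = np.argmax(seg)                  # most extreme point of chosen polarity
    return seg[i] * polarity, seg_t[i]  # undo the polarity flip for amplitude
```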
Affiliation(s)
- Melissa A Papesh
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA
- Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
- Alyssa A Stefl
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA
- Frederick J Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA
- Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
- Department of Neurology, Oregon Health & Science University, Portland, Oregon, USA
- Curtis J Billings
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon, USA
- Department of Otolaryngology Head and Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
10
Miller SE, Zhang Y. Neural Coding of Syllable-Final Fricatives with and without Hearing Aid Amplification. J Am Acad Audiol 2020; 31:566-577. [PMID: 32340057] [DOI: 10.1055/s-0040-1709448]
Abstract
BACKGROUND Cortical auditory event-related potentials are a potentially useful clinical tool for objectively assessing speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts, and whether these differences result in distinct neural responses with and without hearing aid amplification, remains unclear. PURPOSE To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast. RESEARCH DESIGN A repeated-measures, within-subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification. STUDY SAMPLE Ten adult listeners with normal hearing participated in the study. DATA COLLECTION AND ANALYSIS Cortical auditory event-related potentials were elicited to an /ɑs/-/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64 electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance. RESULTS The P2' component of the acoustic change complex significantly differed for the syllable-final fricative contrast both with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions. CONCLUSIONS Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricatives significantly differed with and without the hearing aid.
Affiliation(s)
- Sharon E Miller
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas
- Yang Zhang
- Department of Speech-Language Hearing Science, University of Minnesota, Minneapolis, Minnesota
- Center for Neurobehavioral Development, University of Minnesota, Minneapolis, Minnesota
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota
|
11
|
de la Salle S, Shah D, Choueiry J, Bowers H, McIntosh J, Ilivitsky V, Knott V. NMDA Receptor Antagonist Effects on Speech-Related Mismatch Negativity and Its Underlying Oscillatory and Source Activity in Healthy Humans. Front Pharmacol 2019; 10:455. [PMID: 31139075 PMCID: PMC6517681 DOI: 10.3389/fphar.2019.00455] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Accepted: 04/11/2019] [Indexed: 11/18/2022] Open
Abstract
Background: Previous studies in schizophrenia have consistently shown that deficits in the generation of the auditory mismatch negativity (MMN) – a pre-attentive, event-related potential (ERP) typically elicited by changes to simple sound features – are linked to N-methyl-D-aspartate (NMDA) receptor hypofunction. Concomitant with extensive language dysfunction in schizophrenia, patients also exhibit MMN deficits to changes in speech, but the relationship of these deficits to NMDA-mediated neurotransmission is not clear. Accordingly, our study aimed to investigate speech MMNs in healthy humans and their underlying electrophysiological mechanisms in response to NMDA antagonist treatment. We also evaluated the relationship between baseline MMN/electrocortical activity and emergent schizophrenia-like symptoms associated with NMDA receptor blockade. Methods: In a sample of 18 healthy volunteers, a multi-feature Finnish-language paradigm incorporating changes in syllable, vowel and consonant stimuli was used to assess the acute effects of the NMDA receptor antagonist ketamine and placebo on the MMN. Further, measures of underlying neural activity, including evoked theta power, theta phase locking and source-localized current density in cortical regions of interest, were assessed. Subjective symptoms were assessed with the Clinician Administered Dissociative States Scale (CADSS). Results: Participants exhibited significant ketamine-induced increases in psychosis-like symptoms, which, depending on the temporal or frontal recording region, co-occurred with reductions in MMN generation in response to syllable frequency/intensity, vowel duration, and across-vowel and consonant deviants. MMN attenuation was associated with decreases in evoked theta power, theta phase locking and diminished current density in auditory and inferior frontal (language-related cortical) regions.
Baseline (placebo) MMN and underlying electrophysiological features associated with the processing of changes in syllable intensity correlated with the degree of psychotomimetic response to ketamine. Conclusion: Ketamine-induced impairments in healthy human speech MMNs and their underlying electrocortical mechanisms closely resemble those observed in schizophrenia and support a model of dysfunctional NMDA receptor-mediated neurotransmission of language processing deficits in schizophrenia.
Affiliation(s)
- Dhrasti Shah
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Joelle Choueiry
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Hayley Bowers
- Department of Psychology, University of Guelph, Guelph, ON, Canada
- Judy McIntosh
- The Royal's Institute of Mental Health Research, Ottawa, ON, Canada
- Verner Knott
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- The Royal's Institute of Mental Health Research, Ottawa, ON, Canada
- Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
|
12
|
Gustafson SJ, Billings CJ, Hornsby BWY, Key AP. Effect of competing noise on cortical auditory evoked potentials elicited by speech sounds in 7- to 25-year-old listeners. Hear Res 2019; 373:103-112. [PMID: 30660965 DOI: 10.1016/j.heares.2019.01.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 01/03/2019] [Accepted: 01/07/2019] [Indexed: 11/27/2022]
Abstract
Child listeners have particular difficulty with speech perception when competing speech noise is present; this challenge is often attributed to their immature top-down processing abilities. The purpose of this study was to determine if the effects of competing speech noise on speech-sound processing vary with age. Cortical auditory evoked potentials (CAEPs) were measured during an active speech-syllable discrimination task in 58 normal-hearing participants (age 7-25 years). Speech syllables were presented in quiet and embedded in competing speech noise (4-talker babble, +15 dB signal-to-noise ratio [SNR]). While noise was expected to similarly reduce amplitudes and delay latencies of N1 and P2 peaks in all listeners, it was hypothesized that effects of noise on the P3b peak would be inversely related to age due to the maturation of top-down processing abilities throughout childhood. Consistent with previous work, results showed that a +15 dB SNR reduced amplitudes and delayed latencies of CAEPs for listeners of all ages, affecting speech-sound processing, delaying stimulus evaluation, and causing a reduction in behavioral speech-sound discrimination. Contrary to expectations, findings suggest that competing speech noise at a +15 dB SNR may have similar effects on various stages of speech-sound processing for listeners of all ages. Future research should examine how more difficult listening conditions (poorer SNRs) might affect results across ages.
Affiliation(s)
- Samantha J Gustafson
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA
- Curtis J Billings
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland, OR, USA
- National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, USA
- Vanderbilt Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN, USA
|
13
|
Koerner TK, Zhang Y. Differential effects of hearing impairment and age on electrophysiological and behavioral measures of speech in noise. Hear Res 2018; 370:130-142. [DOI: 10.1016/j.heares.2018.10.009] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/06/2018] [Revised: 10/06/2018] [Accepted: 10/14/2018] [Indexed: 10/28/2022]
|
14
|
Gustafson SJ, Key AP, Hornsby BWY, Bess FH. Fatigue Related to Speech Processing in Children With Hearing Loss: Behavioral, Subjective, and Electrophysiological Measures. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:1000-1011. [PMID: 29635434 PMCID: PMC6194945 DOI: 10.1044/2018_jslhr-h-17-0314] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2017] [Revised: 12/05/2017] [Accepted: 01/04/2018] [Indexed: 06/01/2023]
Abstract
PURPOSE The purpose of this study was to examine fatigue associated with sustained and effortful speech processing in children with mild to moderately severe hearing loss. METHOD We used auditory P300 responses, subjective reports, and behavioral indices (response time, lapses of attention) to measure fatigue resulting from sustained speech-processing demands in 34 children with mild to moderately severe hearing loss (M = 10.03 years, SD = 1.93). RESULTS Compared to baseline values, children with hearing loss showed increased lapses in attention, longer reaction times, reduced P300 amplitudes, and greater reports of fatigue following the completion of the demanding speech-processing tasks. CONCLUSIONS Similar to children with normal hearing, children with hearing loss demonstrate reductions in attentional processing of speech in noise following sustained speech-processing tasks, a finding consistent with the development of fatigue.
Affiliation(s)
- Samantha J Gustafson
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
- Vanderbilt Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
- Fred H Bess
- Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
|
15
|
Zhang X, Li X, Chen J, Gong Q. Background Suppression and its Relation to Foreground Processing of Speech Versus Non-speech Streams. Neuroscience 2018; 373:60-71. [PMID: 29337239 DOI: 10.1016/j.neuroscience.2018.01.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2017] [Revised: 01/02/2018] [Accepted: 01/03/2018] [Indexed: 10/18/2022]
Abstract
Since sound perception takes place against a background with a certain amount of noise, both speech and non-speech processing involve extraction of target signals and suppression of background noise. Previous work on early processing of speech phonemes has largely neglected how background noise is encoded and suppressed. This study aimed to fill that gap. We adopted an oddball paradigm where speech (vowels) or non-speech stimuli (complex tones) were presented with or without a background of amplitude-modulated noise and analyzed cortical responses related to foreground stimulus processing, including mismatch negativity (MMN), N2b, and P300, as well as neural representations of the background noise, that is, the auditory steady-state response (ASSR). We found that speech deviants elicited later and weaker MMN, later N2b, and later P300 than non-speech ones, but N2b and P300 had similar strength, suggesting more complex processing of certain acoustic features in speech. Only for vowels, background noise enhanced N2b strength relative to silence, suggesting an attention-related speech-specific process to improve perception of foreground targets. In addition, noise suppression in speech contexts, quantified by ASSR amplitude reduction after stimulus onset, was lateralized towards the left hemisphere. The left-lateralized suppression following N2b was associated with the N2b enhancement in noise for speech, indicating that foreground processing may interact with background suppression, particularly during speech processing. Together, our findings indicate that the differences between perception of speech and non-speech sounds involve not only the processing of target information in the foreground but also the suppression of irrelevant aspects in the background.
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiaolin Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jingjing Chen
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Research Center of Biomedical Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong Province, China
|
16
|
Von Holzen K, Nishibayashi LL, Nazzi T. Consonant and Vowel Processing in Word Form Segmentation: An Infant ERP Study. Brain Sci 2018; 8:E24. [PMID: 29385046 PMCID: PMC5836043 DOI: 10.3390/brainsci8020024] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Revised: 01/19/2018] [Accepted: 01/25/2018] [Indexed: 11/16/2022] Open
Abstract
Segmentation skill and the preferential processing of consonants (C-bias) develop during the second half of the first year of life, and it has been proposed that these facilitate language acquisition. We used event-related brain potentials (ERPs) to investigate the neural bases of early word form segmentation, and of the early processing of onset consonants, medial vowels, and coda consonants, exploring how differences in these early skills might be related to later language outcomes. Our results with French-learning eight-month-old infants primarily support previous studies that found that the word familiarity effect in segmentation is developing from a positive to a negative polarity at this age. Although as a group infants exhibited an anterior-localized negative effect, inspection of individual results revealed that a majority of infants showed a negative-going response (Negative Responders), while a minority showed a positive-going response (Positive Responders). Furthermore, all infants demonstrated sensitivity to onset consonant mispronunciations, while Negative Responders demonstrated a lack of sensitivity to vowel mispronunciations, a developmental pattern similar to previous literature. Responses to coda consonant mispronunciations revealed neither sensitivity nor lack of sensitivity. We found that infants showing a more mature, negative response to newly segmented words compared to control words (evaluating segmentation skill) and mispronunciations (evaluating phonological processing) at test also had greater growth in word production over the second year of life than infants showing a more positive response. These results establish a relationship between early segmentation skills and phonological processing (not modulated by the type of mispronunciation) and later lexical skills.
Affiliation(s)
- Katie Von Holzen
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20740, USA
- Leo-Lyuki Nishibayashi
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
- Laboratory for Language Development, Riken Brain Science Institute, Wako-shi, Saitama-ken 351-0198, Japan
- Thierry Nazzi
- Laboratoire Psychologie de la Perception, CNRS-Université Paris Descartes, 75006 Paris, France
|