1
Fatić S, Stanojević N, Stokić M, Nenadović V, Jeličić L, Bilibajkić R, Gavrilović A, Maksimović S, Adamović T, Subotić M. Electroencephalography correlates of word and non-word listening in children with specific language impairment: An observational study. Medicine (Baltimore) 2022; 101:e31840. PMID: 36401430; PMCID: PMC9678566; DOI: 10.1097/md.0000000000031840.
Abstract
Auditory processing in children diagnosed with specific language impairment (SLI) is atypical and characterized by reduced brain activation compared to typically developing (TD) children. In typical speech and language development, frontal, temporal, and posterior regions are engaged during single-word listening, whereas non-words are perceived or spoken too rarely for the associated neuronal activation to form stable network connections. This study aimed to investigate the electrophysiological cortical activity of the alpha rhythm during word and non-word listening in children with SLI compared to TD children. The participants were 50 children with SLI, aged 4 to 6, and 50 age-matched TD children. The groups were divided into 2 subgroups: children aged 4.0 to 5.0 years (E = 25, C = 25) and children aged 5.0 to 6.0 years (E = 25, C = 25). The younger subgroup showed no statistically significant differences in alpha spectral power during word or non-word listening. In contrast, in the older subgroup, differences for both word and non-word listening were present bilaterally in the prefrontal, temporal, and parieto-occipital regions. Children with SLI showed a lack of alpha desynchronization during word and non-word listening compared with TD children. Non-word perception engages more brain regions because the stimuli are unfamiliar. The lack of adequate alpha desynchronization is consistent with established difficulties in lexical and phonological processing at the behavioral level in children with SLI.
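The study's core EEG measure, alpha-band spectral power and its task-related decrease (desynchronization), can be sketched in a few lines. This is a generic illustration, not the authors' pipeline; the function names, Welch parameters, and the 8-12 Hz band are assumptions:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Alpha-band spectral power of a single EEG channel, estimated
    from a Welch periodogram and summed over the band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

def relative_power_change(task, rest, fs):
    """Task-related power change; a negative value indicates
    alpha desynchronization relative to the resting baseline."""
    p_task, p_rest = alpha_power(task, fs), alpha_power(rest, fs)
    return (p_task - p_rest) / p_rest
```

A lack of desynchronization, as reported for the SLI group, would show up in such a measure as a relative change near zero rather than clearly negative.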
Affiliation(s)
- Saška Fatić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- *Correspondence: Saška Fatić, Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Gospodar Jovanova 35, Belgrade 11 000, Serbia
- Nina Stanojević
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Miodrag Stokić
- University of Belgrade, Faculty of Biology, Belgrade, Serbia
- Vanja Nenadović
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Ljiljana Jeličić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Ružica Bilibajkić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Aleksandar Gavrilović
- Faculty of Medical Sciences, Department of Neurology, University of Kragujevac, Kragujevac, Serbia
- Clinic of Neurology, Clinical Center Kragujevac, Kragujevac, Serbia
- Slavica Maksimović
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Tatjana Adamović
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Miško Subotić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
2
Di Dona G, Scaltritti M, Sulpizio S. Formant-invariant voice and pitch representations are pre-attentively formed from constantly varying speech and non-speech stimuli. Eur J Neurosci 2022; 56:4086-4106. PMID: 35673798; PMCID: PMC9545905; DOI: 10.1111/ejn.15730.
Abstract
The present study investigated whether listeners can form abstract voice representations while ignoring constantly changing phonological information, and whether they can use the resulting information to facilitate voice-change detection. Further, the study aimed to understand whether such abstraction is restricted to the speech domain or can also be deployed in non-speech contexts. We ran an electroencephalogram (EEG) experiment including one passive and one active oddball task, each featuring a speech and a rotated-speech condition. In the speech condition, participants heard constantly changing vowels uttered by a male speaker (standard stimuli) which were infrequently replaced by vowels uttered by a female speaker with higher pitch (deviant stimuli). In the rotated-speech condition, participants heard rotated vowels, in which the natural formant structure of speech was disrupted. In the passive task, the mismatch negativity was elicited after the presentation of the deviant voice in both conditions, indicating that listeners could successfully group different stimuli into a formant-invariant voice representation. In the active task, participants showed shorter reaction times (RTs), higher accuracy and a larger P3b in the speech condition than in the rotated-speech condition. Results showed that whereas at a pre-attentive level the cognitive system can track pitch regularities while presumably ignoring constantly changing formant information both in speech and in rotated speech, at an attentive level the use of such information is facilitated for speech. This facilitation was also evidenced by stronger synchronisation in the theta band (4–7 Hz), potentially pointing towards differences in encoding/retrieval processes.
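The rotated-speech control works by flipping the speech spectrum so that formant structure is destroyed while overall spectrotemporal complexity is kept. The paper's exact procedure is not reproduced here; below is a minimal FFT-based sketch (classic implementations, e.g. Blesser's, instead rotate by amplitude modulation after low-pass filtering, and the function name and 4 kHz band edge are illustrative assumptions):

```python
import numpy as np

def spectrally_rotate(signal, fs, f_max=4000.0):
    """Mirror the spectrum of `signal` around f_max / 2 within [0, f_max],
    destroying formant structure while roughly preserving signal energy."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = freqs <= f_max
    spec[band] = spec[band][::-1]  # low and high frequencies swap places
    return np.fft.irfft(spec, n=len(signal))
```

A formant near 500 Hz would reappear near 3500 Hz, so the stimulus is matched in complexity but no longer speech-like.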
Affiliation(s)
- Giuseppe Di Dona
- Dipartimento di Psicologia e Scienze Cognitive, Università degli Studi di Trento, Trento, Italy
- Michele Scaltritti
- Dipartimento di Psicologia e Scienze Cognitive, Università degli Studi di Trento, Trento, Italy
- Simone Sulpizio
- Dipartimento di Psicologia, Università degli Studi di Milano-Bicocca, Milano, Italy
- Milan Center for Neuroscience (NeuroMi), Università degli Studi di Milano-Bicocca, Milano, Italy
3
Speech Perception with Noise Vocoding and Background Noise: An EEG and Behavioral Study. J Assoc Res Otolaryngol 2021; 22:349-363. PMID: 33851289; DOI: 10.1007/s10162-021-00787-2.
Abstract
This study explored the physiological response of the human brain to degraded speech syllables. The degradation was introduced using noise vocoding and/or background noise. The goal was to identify physiological features of auditory-evoked potentials (AEPs) that may explain speech intelligibility. Ten human subjects with normal hearing participated in syllable-detection tasks, while their AEPs were recorded with 32-channel electroencephalography. Subjects were presented with six syllables in the form of consonant-vowel-consonant or vowel-consonant-vowel. Noise vocoding with 22 or 4 frequency channels was applied to the syllables. When examining the peak heights in the AEPs (P1, N1, and P2), vocoding alone showed no consistent effect. P1 was not consistently reduced by background noise, N1 was sometimes reduced by noise, and P2 was almost always highly reduced. Two other physiological metrics were examined: (1) classification accuracy of the syllables based on AEPs, which indicated whether AEPs were distinguishable for different syllables, and (2) cross-condition correlation of AEPs (rcc) between the clean and degraded speech, which indicated the brain's ability to extract speech-related features and suppress response to noise. Both metrics decreased with degraded speech quality. We further tested if the two metrics can explain cross-subject variations in their behavioral performance. A significant correlation existed for rcc, as well as classification based on early AEPs, in the fronto-central areas. Because rcc indicates similarities between clean and degraded speech, our finding suggests that high speech intelligibility may be a result of the brain's ability to ignore noise in the sound carrier and/or background.
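Noise vocoding, used here with 22 or 4 channels, replaces the carrier in each frequency band with noise while keeping the band envelope. Below is a minimal sketch of the general technique (in the spirit of Shannon et al., 1995); the filter order, log-spaced band edges, and Hilbert-envelope extraction are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Split `speech` into log-spaced bands, extract each band's Hilbert
    envelope, and use it to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, speech)))  # band envelope
        out += env * sosfiltfilt(sos, noise)             # noise carrier
    return out
```

Fewer channels preserve less spectral detail, which is why the 4-channel condition degrades intelligibility more than the 22-channel one.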
4
Wollman I, Arias P, Aucouturier JJ, Morillon B. Neural entrainment to music is sensitive to melodic spectral complexity. J Neurophysiol 2020; 123:1063-1071. PMID: 32023136; DOI: 10.1152/jn.00758.2018.
Abstract
During auditory perception, neural oscillations are known to entrain to acoustic dynamics but their role in the processing of auditory information remains unclear. As a complex temporal structure that can be parameterized acoustically, music is particularly suited to address this issue. In a combined behavioral and EEG experiment in human participants, we investigated the relative contribution of temporal (acoustic dynamics) and nontemporal (melodic spectral complexity) dimensions of stimulation on neural entrainment, a stimulus-brain coupling phenomenon operationally defined here as the temporal coherence between acoustical and neural dynamics. We first highlight that low-frequency neural oscillations robustly entrain to complex acoustic temporal modulations, which underscores the fine-grained nature of this coupling mechanism. We also reveal that enhancing melodic spectral complexity, in terms of pitch, harmony, and pitch variation, increases neural entrainment. Importantly, this manipulation enhances activity in the theta (5 Hz) range, a frequency-selective effect independent of the note rate of the melodies, which may reflect internal temporal constraints of the neural processes involved. Moreover, while both emotional arousal ratings and neural entrainment were positively modulated by spectral complexity, no direct relationship between arousal and neural entrainment was observed. Overall, these results indicate that neural entrainment to music is sensitive to the spectral content of auditory information and indexes an auditory level of processing that should be distinguished from higher-order emotional processing stages.

NEW & NOTEWORTHY Low-frequency (<10 Hz) cortical neural oscillations are known to entrain to acoustic dynamics, the so-called neural entrainment phenomenon, but their functional implication in the processing of auditory information remains unclear. In a behavioral and EEG experiment capitalizing on parameterized musical textures, we disentangle the contribution of stimulus dynamics, melodic spectral complexity, and emotional judgments on neural entrainment and highlight their respective spatial and spectral neural signature.
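Neural entrainment is defined here operationally as the temporal coherence between acoustical and neural dynamics. One common way to quantify such stimulus-brain coupling is magnitude-squared coherence between the stimulus envelope and an EEG channel; this is a generic sketch, not the authors' analysis, and the function name, segment length, and the <10 Hz band are assumptions:

```python
import numpy as np
from scipy.signal import coherence

def stimulus_brain_coherence(envelope, eeg, fs, f_max=10.0):
    """Mean magnitude-squared coherence between an acoustic envelope
    and one EEG channel, restricted to low frequencies (<= f_max Hz)."""
    f, cxy = coherence(envelope, eeg, fs=fs, nperseg=int(2 * fs))
    return float(np.mean(cxy[(f > 0) & (f <= f_max)]))
```

A channel whose activity phase-locks to the envelope yields higher coherence than one that does not, which is the contrast such entrainment analyses exploit.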
Affiliation(s)
- Indiana Wollman
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Cité de la musique, Philharmonie de Paris, Paris, France
- Pablo Arias
- Institut de Recherche et Coordination Acoustique/Musique-Centre National de la Recherche Scientifique-Sorbonne Université, Unité Mixte de Recherche 9912 STMS, Paris, France
- Jean-Julien Aucouturier
- Institut de Recherche et Coordination Acoustique/Musique-Centre National de la Recherche Scientifique-Sorbonne Université, Unité Mixte de Recherche 9912 STMS, Paris, France
- Benjamin Morillon
- Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale, Institut de Neurosciences des Systèmes, Marseille, France
5
Riecke L, Snipes S, van Bree S, Kaas A, Hausfeld L. Audio-tactile enhancement of cortical speech-envelope tracking. Neuroimage 2019; 202:116134. DOI: 10.1016/j.neuroimage.2019.116134.
6
Steinmetzger K, Zaar J, Relaño-Iborra H, Rosen S, Dau T. Predicting the effects of periodicity on the intelligibility of masked speech: An evaluation of different modelling approaches and their limitations. J Acoust Soc Am 2019; 146:2562. PMID: 31671986; DOI: 10.1121/1.5129050.
Abstract
Four existing speech intelligibility models with different theoretical assumptions were used to predict previously published behavioural data. Those data showed that complex tones with pitch-related periodicity are far less effective maskers of speech than aperiodic noise. This so-called masker-periodicity benefit (MPB) far exceeded the fluctuating-masker benefit (FMB) obtained from slow masker envelope fluctuations. In contrast, the normal-hearing listeners hardly benefitted from periodicity in the target speech. All tested models consistently underestimated MPB and FMB, while most of them also overestimated the intelligibility of vocoded speech. To understand these shortcomings, the internal signal representations of the models were analysed in detail. The best-performing model, the correlation-based version of the speech-based envelope power spectrum model (sEPSMcorr), combined an auditory processing front end with a modulation filterbank and a correlation-based back end. This model was then modified to further improve the predictions. The resulting second version of the sEPSMcorr outperformed the original model with all tested maskers and accounted for about half the MPB, which can be attributed to reduced modulation masking caused by the periodic maskers. However, as the sEPSMcorr2 failed to account for the other half of the MPB, the results also indicate that future models should consider the contribution of pitch-related effects, such as enhanced stream segregation, to further improve their predictive power.
Affiliation(s)
- Kurt Steinmetzger
- Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Johannes Zaar
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Helia Relaño-Iborra
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Stuart Rosen
- Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
7
Eipert L, Selle A, Klump GM. Uncertainty in location, level and fundamental frequency results in informational masking in a vowel discrimination task for young and elderly subjects. Hear Res 2019; 377:142-152. DOI: 10.1016/j.heares.2019.03.015.
8
Frequency specificity of amplitude envelope patterns in noise-vocoded speech. Hear Res 2018; 367:169-181. DOI: 10.1016/j.heares.2018.06.005.
9
Haegens S, Zion Golumbic E. Rhythmic facilitation of sensory processing: A critical review. Neurosci Biobehav Rev 2017; 86:150-165. PMID: 29223770; DOI: 10.1016/j.neubiorev.2017.12.002.
Abstract
Here we review the role of brain oscillations in sensory processing. We examine the idea that neural entrainment of intrinsic oscillations underlies the processing of rhythmic stimuli in the context of simple isochronous rhythms as well as in music and speech. This has been a topic of growing interest over recent years; however, many issues remain highly controversial: how do fluctuations of intrinsic neural oscillations-both spontaneous and entrained to external stimuli-affect perception, and does this occur automatically or can it be actively controlled by top-down factors? Some of the controversy in the literature stems from confounding use of terminology. Moreover, it is not straightforward how theories and findings regarding isochronous rhythms generalize to more complex, naturalistic stimuli, such as speech and music. Here we aim to clarify terminology, and distinguish between different phenomena that are often lumped together as reflecting "neural entrainment" but may actually vary in their mechanistic underpinnings. Furthermore, we discuss specific caveats and confounds related to making inferences about oscillatory mechanisms from human electrophysiological data.
Affiliation(s)
- Saskia Haegens
- Department of Neurological Surgery, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA
- Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
10
Xu Y, Chen M, LaFaire P, Tan X, Richter CP. Distorting temporal fine structure by phase shifting and its effects on speech intelligibility and neural phase locking. Sci Rep 2017; 7:13387. PMID: 29042580; PMCID: PMC5645416; DOI: 10.1038/s41598-017-12975-3.
Abstract
Envelope (E) and temporal fine structure (TFS) are important features of acoustic signals, and their perceptual roles have been investigated with various listening tasks. To further understand the underlying neural processing of TFS, experiments in humans and animals were conducted to demonstrate the effects of modifying the TFS of natural speech sentences on both speech recognition and neural coding. The TFS was modified by distorting the phase while maintaining the magnitude. Speech intelligibility was then tested in normal-hearing listeners using the intact and reconstructed sentences presented in quiet and against background noise. Sentences with modified TFS were also used to evoke neural activity in auditory neurons of the inferior colliculus in guinea pigs. The study demonstrated that speech intelligibility in humans relies on the periodic cues of speech TFS in both quiet and noisy listening conditions. Furthermore, recordings from the guinea pig inferior colliculus showed that individual auditory neurons phase lock to the periodic cues of speech TFS, and that this phase locking disappears when the reconstructed sounds no longer contain periodic patterns. Thus, the periodic cues of TFS are essential for speech intelligibility and are encoded in auditory neurons by phase locking.
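The core manipulation, distorting phase while maintaining magnitude, can be illustrated with a single FFT round trip. The study applied more elaborate frequency-dependent phase shifts; here a constant shift is applied to the whole spectrum, and the function name is a hypothetical of mine:

```python
import numpy as np

def distort_phase(signal, shift_rad):
    """Shift the phase of every spectral component by `shift_rad` while
    leaving the magnitude spectrum untouched, then resynthesize."""
    spec = np.fft.rfft(signal)
    spec *= np.exp(1j * shift_rad)  # phase distorted, |spec| unchanged
    return np.fft.irfft(spec, n=len(signal))
```

Because only phase changes, the long-term magnitude spectrum of the output is identical to the input, yet the waveform's fine structure differs, which is the dissociation the study exploits.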
Affiliation(s)
- Yingyue Xu
- Northwestern University, Department of Otolaryngology, 320 E. Superior Street, Searle 12-561, Chicago, IL, 60611, USA
- Maxin Chen
- Northwestern University, Department of Biomedical Engineering, 2145 Sheridan Road, Tech E310, Evanston, IL, 60208, USA
- Petrina LaFaire
- Northwestern University, Department of Otolaryngology, 320 E. Superior Street, Searle 12-561, Chicago, IL, 60611, USA
- Xiaodong Tan
- Northwestern University, Department of Otolaryngology, 320 E. Superior Street, Searle 12-561, Chicago, IL, 60611, USA
- Claus-Peter Richter
- Northwestern University, Department of Otolaryngology, 320 E. Superior Street, Searle 12-561, Chicago, IL, 60611, USA
- Northwestern University, The Hugh Knowles Center, Department of Communication Sciences and Disorders, 2240 Campus Drive, Evanston, IL, 60208, USA