1. Van Herck S, Economou M, Vanden Bempt F, Ghesquière P, Vandermosten M, Wouters J. Pulsatile modulation greatly enhances neural synchronization at syllable rate in children. Neuroimage 2023:120223. PMID: 37315772. DOI: 10.1016/j.neuroimage.2023.120223.
Abstract
Neural processing of the speech envelope is of crucial importance for speech perception and comprehension. This envelope processing is often investigated by measuring neural synchronization to sinusoidal amplitude-modulated stimuli at different modulation frequencies. However, it has been argued that these stimuli lack ecological validity. Pulsatile amplitude-modulated stimuli, on the other hand, are suggested to be more ecologically valid and efficient, and have increased potential to uncover the neural mechanisms behind developmental disorders such as dyslexia. Nonetheless, pulsatile stimuli have not yet been investigated in pre-reading and beginning-reading children, a crucial age range for developmental reading research. We performed a longitudinal study to examine the potential of pulsatile stimuli in this age range. Fifty-two typically reading children were tested at three time points, from the middle of their last year of kindergarten (5 years old) to the end of first grade (7 years old). Using electroencephalography, we measured neural synchronization to syllable-rate and phoneme-rate sinusoidal and pulsatile amplitude-modulated stimuli. Our results revealed that the pulsatile stimuli significantly enhanced neural synchronization at syllable rate compared to the sinusoidal stimuli. Additionally, the pulsatile stimuli at syllable rate elicited a different hemispheric specialization, one more closely resembling natural speech envelope tracking. We postulate that pulsatile stimuli greatly increase EEG data acquisition efficiency over the commonly used sinusoidal amplitude-modulated stimuli in research with younger children and in developmental reading research.
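The contrast between the two stimulus types can be sketched in a few lines. This is an illustrative reconstruction, not the authors' stimulus code: the carrier frequency, sampling rate, and pulse width below are assumed values, and the pulse shape is a simple raised cosine.

```python
import numpy as np

def sinusoidal_am(carrier_hz=1000.0, mod_hz=4.0, dur_s=1.0, fs=16000):
    """Tone carrier with a sinusoidal (raised-cosine) amplitude envelope."""
    t = np.arange(int(dur_s * fs)) / fs
    env = 0.5 * (1.0 - np.cos(2 * np.pi * mod_hz * t))  # 0..1, peaks once per cycle
    return env * np.sin(2 * np.pi * carrier_hz * t)

def pulsatile_am(carrier_hz=1000.0, mod_hz=4.0, pulse_ms=25.0, dur_s=1.0, fs=16000):
    """Same repetition rate, but the envelope is a train of short pulses."""
    t = np.arange(int(dur_s * fs)) / fs
    period = 1.0 / mod_hz
    phase = np.mod(t, period)          # time since the start of the current cycle
    width = pulse_ms / 1000.0
    env = np.where(phase < width,
                   0.5 * (1.0 - np.cos(2 * np.pi * phase / width)),  # one pulse per cycle
                   0.0)
    return env * np.sin(2 * np.pi * carrier_hz * t)
```

The pulsatile envelope concentrates its energy in sharp, brief onsets at the same 4 Hz repetition rate, which is the property argued to drive stronger neural synchronization.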
Affiliation(s)
- Shauni Van Herck
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Maria Economou
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Femke Vanden Bempt
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Belgium
- Jan Wouters
- Research Group ExpORL, Department of Neurosciences, KU Leuven, Belgium
2. Cummings AE, Wu YC, Ogiela DA. Phonological Underspecification: An Explanation for How a Rake Can Become Awake. Front Hum Neurosci 2021; 15:585817. PMID: 33679342. PMCID: PMC7925882. DOI: 10.3389/fnhum.2021.585817.
Abstract
Neural markers, such as the mismatch negativity (MMN), have been used to examine the phonological underspecification of English feature contrasts using the Featurally Underspecified Lexicon (FUL) model. However, neural indices have not been examined within the approximant phoneme class, even though there is evidence suggesting processing asymmetries between liquid (e.g., /ɹ/) and glide (e.g., /w/) phonemes. The goal of this study was to determine whether glide phonemes elicit electrophysiological asymmetries related to [consonantal] underspecification when contrasted with liquid phonemes in adult English speakers. Specifically, /ɹɑ/ is categorized as [+consonantal] while /wɑ/ is not specified for [consonantal]. Following the FUL framework, if /w/ is less specified than /ɹ/, the former phoneme should elicit a larger MMN response than the latter. Fifteen English-speaking adults were presented with two syllables, /ɹɑ/ and /wɑ/, in an event-related potential (ERP) oddball paradigm in which both syllables served as the standard and deviant stimulus in opposite stimulus sets. Three types of analyses were used: (1) traditional mean amplitude measurements; (2) cluster-based permutation analyses; and (3) event-related spectral perturbation (ERSP) analyses. The less specified /wɑ/ elicited a large MMN, while a much smaller MMN was elicited by the more specified /ɹɑ/. In the standard and deviant ERP waveforms, /wɑ/ elicited a significantly larger negative response than did /ɹɑ/. Theta activity elicited by /ɹɑ/ was significantly greater than that elicited by /wɑ/ in the 100-300 ms time window. Also, low gamma activation was significantly lower for /ɹɑ/ vs. /wɑ/ deviants over the left hemisphere, as compared to the right, in the 100-150 ms window. These outcomes suggest that the [consonantal] feature follows the underspecification predictions of FUL previously tested with the place of articulation and voicing features. Thus, this study provides new evidence for phonological underspecification. Moreover, as neural oscillation patterns have not previously been discussed in the underspecification literature, the ERSP analyses identified potential new indices of phonological underspecification.
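The core MMN measure described above is a difference wave: the averaged deviant response minus the averaged standard response, summarized by its mean amplitude in an analysis window. A minimal sketch, not the authors' pipeline (the sampling rate and window below are illustrative assumptions):

```python
import numpy as np

def mismatch_negativity(standard_trials, deviant_trials, fs=500, win_ms=(150, 250)):
    """Difference wave from an oddball paradigm: deviant ERP minus standard ERP.

    standard_trials, deviant_trials: arrays of shape (n_trials, n_samples) of
    baseline-corrected single-trial epochs from one electrode (e.g., Fz).
    Returns the difference wave and its mean amplitude in the analysis window.
    """
    std_erp = standard_trials.mean(axis=0)   # average over trials
    dev_erp = deviant_trials.mean(axis=0)
    diff = dev_erp - std_erp                 # the MMN appears as a negativity here
    i0, i1 = (int(ms / 1000 * fs) for ms in win_ms)
    return diff, diff[i0:i1].mean()
```

A more negative mean amplitude for one contrast direction than the other is the asymmetry FUL predicts.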
Affiliation(s)
- Alycia E. Cummings
- Department of Communication Sciences and Disorders, Idaho State University, Meridian, ID, United States
- Ying C. Wu
- Swartz Center for Computational Neuroscience, University of California, San Diego, San Diego, CA, United States
- Diane A. Ogiela
- Department of Communication Sciences and Disorders, Idaho State University, Meridian, ID, United States
3. Daube C, Ince RAA, Gross J. Simple Acoustic Features Can Explain Phoneme-Based Predictions of Cortical Responses to Speech. Curr Biol 2019; 29:1924-1937.e9. PMID: 31130454. PMCID: PMC6584359. DOI: 10.1016/j.cub.2019.04.067.
Abstract
When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
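Encoding models of the kind compared in this study are typically linear mappings from (time-lagged) stimulus features to the recorded response, fit with regularized regression. A generic sketch under that assumption, not the authors' implementation (the feature set and regularization constant are placeholders):

```python
import numpy as np

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D stimulus feature into a design matrix."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

def fit_encoding_model(X, y, lam=1.0):
    """Ridge regression from lagged stimulus features to one neural channel."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)
```

Competing models (annotated phoneme features vs. acoustic-edge features) can then be compared by the cross-validated prediction accuracy of the responses they each achieve.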
Affiliation(s)
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK; Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
4. Atypical neural processing of rise time by adults with dyslexia. Cortex 2019; 113:128-140. DOI: 10.1016/j.cortex.2018.12.006.
5. García-Rosales F, Martin LM, Beetz MJ, Cabral-Calderin Y, Kössl M, Hechavarria JC. Low-Frequency Spike-Field Coherence Is a Fingerprint of Periodicity Coding in the Auditory Cortex. iScience 2018; 9:47-62. PMID: 30384133. PMCID: PMC6214842. DOI: 10.1016/j.isci.2018.10.009.
Abstract
The extraction of temporal information from sensory input streams is of paramount importance in the auditory system. In this study, amplitude-modulated sounds were used as stimuli to drive auditory cortex (AC) neurons of the bat species Carollia perspicillata, to assess the interactions between cortical spikes and local field potentials (LFPs) during the processing of temporal acoustic cues. We observed that neurons in the AC capable of eliciting synchronized spiking to periodic acoustic envelopes were significantly more coherent with theta- and alpha-band LFPs than their non-synchronized counterparts. These differences occurred independently of the modulation rate tested and could not be explained by power or phase modulations of the field potentials. We argue that the coupling between neuronal spiking and the phase of low-frequency LFPs might be important for orchestrating the coding of temporal acoustic structures in the AC.
Highlights:
- Auditory cortical neurons can track periodic sounds via synchronized spiking
- Neuronal synchronization ability is well marked by theta-alpha spike-LFP coherence
- Spike-LFP coherence patterns are independent of the stimulus' periodicity
- Theta-alpha LFPs may orchestrate phase-locked neuronal responses to periodic sounds
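Spike-field phase coupling of the kind reported here is commonly quantified as the resultant vector length of band-limited LFP phases sampled at spike times. A simplified sketch (the filter design and band edges are illustrative; the paper's exact coherence measure may differ):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_field_coherence(lfp, spike_idx, fs, band=(4.0, 8.0)):
    """Phase locking of spikes to a band-limited LFP (resultant vector length).

    lfp: 1-D field potential; spike_idx: sample indices of spikes.
    Returns a value in [0, 1]; 1 means every spike lands at the same LFP phase.
    """
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))   # instantaneous band phase
    return np.abs(np.mean(np.exp(1j * phase[spike_idx])))
```

Comparing this quantity between synchronized and non-synchronized neurons is the kind of contrast the study draws.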
Affiliation(s)
- Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Lisa M. Martin
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- M. Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Yuranny Cabral-Calderin
- MEG Labor, Brain Imaging Center, Goethe-Universität, 60528 Frankfurt am Main, Germany; German Resilience Center, University Medical Center Mainz, Mainz, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Julio C. Hechavarria
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
6. Luke R, De Vos A, Wouters J. Source analysis of auditory steady-state responses in acoustic and electric hearing. Neuroimage 2016; 147:568-576. PMID: 27894891. DOI: 10.1016/j.neuroimage.2016.11.023.
Abstract
Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope rate modulated signals in acoustic and CI listeners. Auditory steady-state responses (ASSRs) are used to study the neural processing of amplitude modulated (AM) signals. A beamforming technique is applied to determine the increase in neural activity relative to a control condition, with particular attention paid to defining the accuracy and precision of this technique relative to other tomographies. In a cohort of 44 acoustic listeners, the location, activity and hemispheric lateralisation of ASSRs is characterised while systematically varying the modulation rate (4, 10, 20, 40 and 80 Hz) and stimulation ear (right, left and bilateral). We demonstrate a complex pattern of laterality depending on both modulation rate and stimulation ear that is consistent with, and extends, existing literature. We present a novel extension to the beamforming method which facilitates source analysis of electrically evoked auditory steady-state responses (EASSRs). In a cohort of 5 right-implanted unilateral CI users, the neural activity is determined for the 40 Hz rate and compared to the acoustic cohort. Results indicate that CI users activate typical thalamic locations for 40 Hz stimuli. However, complementary to studies of transient stimuli, the CI population has atypical hemispheric laterality, preferentially activating the contralateral hemisphere.
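An ASSR is conventionally quantified as the spectral amplitude of the epoch-averaged EEG at the stimulus modulation rate, since averaging preserves phase-locked activity and cancels non-locked background. A minimal sensor-level sketch under that assumption (not the authors' beamforming pipeline):

```python
import numpy as np

def assr_amplitude(eeg_epochs, fs, mod_hz):
    """Steady-state response amplitude at the stimulus modulation rate.

    eeg_epochs: (n_epochs, n_samples) from one channel. Epochs are averaged
    before the FFT so that only phase-locked activity survives.
    """
    avg = eeg_epochs.mean(axis=0)
    spec = np.fft.rfft(avg) / len(avg)
    freqs = np.fft.rfftfreq(len(avg), 1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - mod_hz))
    return 2.0 * np.abs(spec[bin_idx])       # one-sided amplitude
```

Source-level analyses like the beamforming in this paper apply the same logic after projecting sensor data to candidate brain locations.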
Affiliation(s)
- Robert Luke
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium
- Astrid De Vos
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Belgium
7. Lee CM, Osman AF, Volgushev M, Escabí MA, Read HL. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields. J Neurophysiol 2016; 115:1886-1904. PMID: 26843599. DOI: 10.1152/jn.00784.2015.
Abstract
Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.
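Simplified versions of trial-to-trial timing metrics like these can be computed from repeated-trial spike times. The sketch below is illustrative only: it approximates jitter by the standard deviation of the first spike across trials, and reliability by the mean pairwise correlation of smoothed spike trains, which is in the spirit of, but not identical to, the paper's definitions:

```python
import numpy as np

def jitter_and_reliability(trial_spike_times, bin_s=0.001, dur_s=1.0, smooth_bins=5):
    """Two spike-timing precision measures across repeated stimulus trials.

    trial_spike_times: list of 1-D arrays of spike times (s), one per trial.
    """
    # jitter: spread of the first spike time after stimulus onset
    firsts = np.array([t[0] for t in trial_spike_times if len(t)])
    jitter = firsts.std()

    # reliability: mean pairwise correlation of binned, lightly smoothed trains
    edges = np.arange(0.0, dur_s + bin_s, bin_s)
    kernel = np.ones(smooth_bins) / smooth_bins
    rates = [np.convolve(np.histogram(t, edges)[0], kernel, mode="same")
             for t in trial_spike_times]
    n = len(rates)
    corrs = [np.corrcoef(rates[i], rates[j])[0, 1]
             for i in range(n) for j in range(i + 1, n)]
    return jitter, float(np.mean(corrs))
```

Jitter varies with envelope shape and reliability with modulation frequency, which is why the two measures can dissociate as described above.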
Affiliation(s)
- Christopher M. Lee
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Ahmad F. Osman
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut
- Monty A. Escabí
- Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut; Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut
- Heather L. Read
- Department of Psychology, University of Connecticut, Storrs, Connecticut; Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut
8. Ding N, Simon JZ. Cortical entrainment to continuous speech: functional roles and interpretations. Front Hum Neurosci 2014; 8:311. PMID: 24904354. PMCID: PMC4036061. DOI: 10.3389/fnhum.2014.00311.
Abstract
Auditory cortical activity is entrained to the temporal envelope of speech, which corresponds to the syllabic rhythm of speech. Such entrained cortical activity can be measured from subjects naturally listening to sentences or spoken passages, providing a reliable neural marker of online speech processing. A central question remains unanswered: is entrained cortical activity more closely related to speech perception or to non-speech-specific auditory encoding? Here, we review a few hypotheses about the functional roles of cortical entrainment to speech, e.g., encoding acoustic features, parsing syllabic boundaries, and selecting sensory information in complex listening environments. It is likely that speech entrainment is not a homogeneous response, and these hypotheses may apply separately to speech entrainment generated from different neural sources. The relationship between entrained activity and speech intelligibility is also discussed. A tentative conclusion is that theta-band entrainment (4–8 Hz) encodes speech features critical for intelligibility, while delta-band entrainment (1–4 Hz) is related to the perceived, non-speech-specific acoustic rhythm. To further understand the functional properties of speech entrainment, a splitter's approach will be needed to investigate (1) not just the temporal envelope but which specific acoustic features are encoded and (2) not just speech intelligibility but which specific psycholinguistic processes are reflected by entrained cortical activity. Similarly, the anatomical and spectro-temporal details of entrained activity need to be taken into account when investigating its functional properties.
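A crude proxy for band-specific envelope tracking is the correlation between a band-limited stimulus envelope and similarly filtered neural data. The sketch below is purely illustrative: real entrainment analyses use lagged encoding models or phase-based measures, and the band edges and signals here are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope_tracking(audio, neural, fs, band=(4.0, 8.0)):
    """Correlation between a band-limited speech envelope and band-limited neural data."""
    env = np.abs(hilbert(audio))                          # broadband amplitude envelope
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.corrcoef(filtfilt(b, a, env), filtfilt(b, a, neural))[0, 1]
```

Running the same measure with delta (1-4 Hz) versus theta (4-8 Hz) band edges is the kind of comparison the review's functional dissociation implies.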
Affiliation(s)
- Nai Ding
- Department of Psychology, New York University, New York, NY, USA
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Department of Biology, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA
9. Stimulus variability affects the amplitude of the auditory steady-state response. PLoS One 2012; 7:e34668. PMID: 22509343. PMCID: PMC3318001. DOI: 10.1371/journal.pone.0034668.
Abstract
In this study we investigate whether stimulus variability affects the auditory steady-state response (ASSR). We present cosinusoidal AM pulses as stimuli where we are able to manipulate waveform shape independently of the fixed repetition rate of 4 Hz. We either present sounds in which the waveform shape, the pulse-width, is fixed throughout the presentation or where it varies pseudo-randomly. Importantly, the average spectra of all the fixed-width AM stimuli are equal to the spectra of the mixed-width AM. Our null hypothesis is that the average ASSR to the fixed-width AM will not be significantly different from the ASSR to the mixed-width AM. In a region of interest beamformer analysis of MEG data, we compare the 4 Hz component of the ASSR to the mixed-width AM with the 4 Hz component of the ASSR to the pooled fixed-width AM. We find that at the group level, there is a significantly greater response to the variable mixed-width AM at the medial boundary of the Middle and Superior Temporal Gyri. Hence, we find that adding variability into AM stimuli increases the amplitude of the ASSR. This observation is important, as it provides evidence that analysis of the modulation waveform shape is an integral part of AM processing. Therefore, standard steady-state studies in audition, using sinusoidal AM, may not be sensitive to a key feature of acoustic processing.
10. Prendergast G, Green GGR. Cross-channel amplitude sweeps are crucial to speech intelligibility. Brain Lang 2012; 120:406-411. PMID: 22137845. DOI: 10.1016/j.bandl.2011.11.001.
Abstract
Classical views of speech perception argue that the static and dynamic characteristics of spectral energy peaks (formants) are the acoustic features that underpin phoneme recognition. Here we use representations in which the amplitude modulations of sub-band filtered speech are described precisely in terms of co-sinusoidal pulses. These pulses are parameterised in terms of their amplitude, duration and position in time across a large number of spectral channels. Coherent sweeps of energy across this parameter space are identified and the local transitions of pulse features across spectral channels are extracted. Synthesised speech based on manipulations of these local amplitude modulation features was used to explore the basis of intelligibility. The results show that removing changes in amplitude across channels has a much greater impact on intelligibility than differences in sweep transition or duration across channels. This finding has significant implications for future experimental design in the fields of psychophysics, electrophysiology and neuroimaging.
11. Johnson S, Prendergast G, Hymers M, Green G. Examining the effects of one- and three-dimensional spatial filtering analyses in magnetoencephalography. PLoS One 2011; 6:e22251. PMID: 21857916. PMCID: PMC3152290. DOI: 10.1371/journal.pone.0022251.
Abstract
Spatial filtering, or beamforming, is a commonly used data-driven analysis technique in the field of magnetoencephalography (MEG). Although routinely referred to as a single technique, beamforming in fact encompasses several different methods, both with regard to defining the spatial filters used to reconstruct source-space time series and in terms of the analysis of these time series. This paper evaluates two alternative methods of spatial filter construction and application, and demonstrates how encoding different requirements into the design of these filters affects the results obtained. The analyses presented demonstrate the potential value of implementations which examine the time series projections in multiple orientations at a single location, by showing that beamforming can reconstruct predominantly radial sources in the case of a multiple-spheres forward model. The accuracy of source reconstruction appears to be more related to depth than to source orientation. Furthermore, it is shown that using three 1-dimensional spatial filters can result in inaccurate source-space time series reconstruction. The paper concludes with brief recommendations regarding reporting beamforming methodologies, in order to help remove ambiguity about the specifics of the techniques which have been used.
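The scalar (1-D) versus vector (3-D) filters discussed here differ mainly in whether the leadfield is a single column or three orientation columns. A minimal LCMV-style sketch of both (the regularization scheme is an assumption; this is not the paper's implementation):

```python
import numpy as np

def lcmv_weights(cov, leadfield, reg=1e-6):
    """Scalar LCMV weights for one source location and fixed orientation.

    cov: (n_chan, n_chan) sensor covariance; leadfield: (n_chan,) forward
    field of a unit dipole. Unit gain at the source, minimum variance elsewhere.
    """
    C = cov + reg * np.trace(cov) / len(cov) * np.eye(len(cov))
    Ci_l = np.linalg.solve(C, leadfield)
    return Ci_l / (leadfield @ Ci_l)

def lcmv_weights_vector(cov, L, reg=1e-6):
    """Vector LCMV weights: L is (n_chan, 3) for three orthogonal orientations."""
    C = cov + reg * np.trace(cov) / len(cov) * np.eye(len(cov))
    Ci_L = np.linalg.solve(C, L)
    return Ci_L @ np.linalg.inv(L.T @ Ci_L)   # (n_chan, 3) weight matrix
```

The vector filter yields three source-space time series per location; how these are combined (or whether a single best orientation is used instead) is exactly the kind of methodological choice the paper argues should be reported explicitly.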
Affiliation(s)
- Sam Johnson
- York NeuroImaging Centre, University of York, York, United Kingdom
- Garreth Prendergast
- York NeuroImaging Centre, University of York, York, United Kingdom
- Hull York Medical School, University of York, York, United Kingdom
- Mark Hymers
- York NeuroImaging Centre, University of York, York, United Kingdom
- Gary Green
- York NeuroImaging Centre, University of York, York, United Kingdom
- Hull York Medical School, University of York, York, United Kingdom