51. Genuine cross-frequency coupling networks in human resting-state electrophysiological recordings. PLoS Biol 2020; 18:e3000685. PMID: 32374723; PMCID: PMC7233600; DOI: 10.1371/journal.pbio.3000685.
Abstract
Phase synchronization of neuronal oscillations in specific frequency bands coordinates anatomically distributed neuronal processing and communication. Typically, oscillations and synchronization take place concurrently in many distinct frequencies, which serve separate computational roles in cognitive functions. While within-frequency phase synchronization has been studied extensively, less is known about the mechanisms that govern neuronal processing distributed across frequencies and brain regions. Such integration of processing between frequencies could be achieved via cross-frequency coupling (CFC), either by phase–amplitude coupling (PAC) or by n:m-cross–frequency phase synchrony (CFS). So far, studies have mostly focused on local CFC in individual brain regions, whereas the presence and functional organization of CFC between brain areas have remained largely unknown. We posit that interareal CFC may be essential for large-scale coordination of neuronal activity and investigate here whether genuine CFC networks are present in human resting-state (RS) brain activity. To assess the functional organization of CFC networks, we identified brain-wide CFC networks at mesoscale resolution from stereoelectroencephalography (SEEG) and at macroscale resolution from source-reconstructed magnetoencephalography (MEG) data. We developed a novel, to our knowledge, graph-theoretical method to distinguish genuine CFC from spurious CFC that may arise from nonsinusoidal signals ubiquitous in neuronal activity. We show that genuine interareal CFC is present in human RS activity in both SEEG and MEG data. Both CFS and PAC networks coupled theta and alpha oscillations with higher frequencies in large-scale networks connecting anterior and posterior brain regions. CFS and PAC networks had distinct spectral patterns and opposing distribution of low- and high-frequency network hubs, implying that they constitute distinct CFC mechanisms. 
The strength of CFS networks was also predictive of cognitive performance in a separate neuropsychological assessment. In conclusion, these results provide evidence for interareal CFS and PAC being 2 distinct mechanisms for coupling oscillations across frequencies in large-scale brain networks. Genuine interareal cross-frequency coupling (CFC) can be identified from human resting state activity using magnetoencephalography, stereoelectroencephalography, and novel network approaches. CFC couples slow theta and alpha oscillations to faster oscillations across brain regions.
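The n:m cross-frequency phase synchrony (CFS) that this abstract distinguishes from phase–amplitude coupling can be illustrated with a minimal numerical sketch. This is synthetic data, not the authors' SEEG/MEG pipeline; the function name and parameters are illustrative:

```python
import numpy as np

def cfs_plv(phase_low, phase_high, n=1, m=2):
    # n:m phase-locking value: |<exp(i*(m*phi_low - n*phi_high))>|
    return np.abs(np.mean(np.exp(1j * (m * phase_low - n * phase_high))))

fs = 500
t = np.arange(0, 2, 1 / fs)
phi_low = 2 * np.pi * 6 * t                   # 6 Hz "theta" phase
phi_high = 2 * np.pi * 12 * t                 # 12 Hz phase, exactly 1:2-locked
rng = np.random.default_rng(0)
phi_rand = rng.uniform(0, 2 * np.pi, t.size)  # unrelated phase series

locked = cfs_plv(phi_low, phi_high)           # -> 1.0 (perfect 1:2 locking)
unlocked = cfs_plv(phi_low, phi_rand)         # -> near 0
```

A genuine-CFC analysis like the paper's would additionally have to rule out spurious locking from nonsinusoidal waveforms, which this sketch does not address.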
52. Oscillations in the auditory system and their possible role. Neurosci Biobehav Rev 2020; 113:507-528. PMID: 32298712; DOI: 10.1016/j.neubiorev.2020.03.030.
Abstract
GOURÉVITCH, B., C. Martin, O. Postal, J.J. Eggermont. Oscillations in the auditory system, their possible role. NEUROSCI BIOBEHAV REV XXX XXX-XXX, 2020. - Neural oscillations are thought to have various roles in brain processing, such as attention modulation, neuronal communication, motor coordination, memory consolidation, decision-making, and feature binding. The role of oscillations in the auditory system is less clear, especially given the large discrepancy between human and animal studies. Here we describe several methodological issues that confound the results of oscillation studies in the auditory field, and we discuss the relationship between neural entrainment and oscillations, which remains unclear. Finally, we aim to identify which kinds of oscillations could be specific or salient to the auditory areas and their processing. We suggest that the role of oscillations might differ dramatically between the primary auditory cortex and the more associative auditory areas. Despite the moderate presence of intrinsic low-frequency oscillations in the primary auditory cortex, rhythmic components in the input seem crucial for auditory processing. This allows phase entrainment between the oscillatory phase and the rhythmic input, which is an integral part of stimulus selection within the auditory system.
53. Vanheusden FJ, Kegler M, Ireland K, Georga C, Simpson DM, Reichenbach T, Bell SL. Hearing Aids Do Not Alter Cortical Entrainment to Speech at Audible Levels in Mild-to-Moderately Hearing-Impaired Subjects. Front Hum Neurosci 2020; 14:109. PMID: 32317951; PMCID: PMC7147120; DOI: 10.3389/fnhum.2020.00109.
Abstract
BACKGROUND Cortical entrainment to speech correlates with speech intelligibility and attention to a speech stream in noisy environments. However, there is a lack of data on whether cortical entrainment can help in evaluating hearing aid fittings for subjects with mild to moderate hearing loss. One particular problem that may arise is that hearing aids may alter the speech stimulus during (pre-)processing steps, which might alter cortical entrainment to the speech. Here, the effect of hearing aid processing on cortical entrainment to running speech in hearing impaired subjects was investigated. METHODOLOGY Seventeen native English-speaking subjects with mild-to-moderate hearing loss participated in the study. Hearing function and hearing aid fitting were evaluated using standard clinical procedures. Participants then listened to a 25-min audiobook under aided and unaided conditions at 70 dBA sound pressure level (SPL) in quiet conditions. EEG data were collected using a 32-channel system. Cortical entrainment to speech was evaluated using decoders reconstructing the speech envelope from the EEG data. Null decoders, obtained from EEG and the time-reversed speech envelope, were used to assess the chance level reconstructions. Entrainment in the delta- (1-4 Hz) and theta- (4-8 Hz) band, as well as wideband (1-20 Hz) EEG data was investigated. RESULTS Significant cortical responses could be detected for all but one subject in all three frequency bands under both aided and unaided conditions. However, no significant differences could be found between the two conditions in the number of responses detected, nor in the strength of cortical entrainment. The results show that the relatively small change in speech input provided by the hearing aid was not sufficient to elicit a detectable change in cortical entrainment. CONCLUSION For subjects with mild to moderate hearing loss, cortical entrainment to speech in quiet at an audible level is not affected by hearing aids. 
These results clear the pathway for exploring the potential to use cortical entrainment to running speech for evaluating hearing aid fitting at lower speech intensities (which could be inaudible when unaided), or using speech in noise conditions.
Affiliation(s)
- Frederique J. Vanheusden: Department of Engineering, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Mikolaj Kegler: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Katie Ireland: Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Constantina Georga: Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- David M. Simpson: Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Tobias Reichenbach: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Steven L. Bell: Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
54. García-Rosales F, López-Jury L, González-Palomares E, Cabral-Calderín Y, Hechavarría JC. Fronto-Temporal Coupling Dynamics During Spontaneous Activity and Auditory Processing in the Bat Carollia perspicillata. Front Syst Neurosci 2020; 14:14. PMID: 32265670; PMCID: PMC7098971; DOI: 10.3389/fnsys.2020.00014.
Abstract
Most mammals rely on the extraction of acoustic information from the environment in order to survive. However, the mechanisms that support sound representation in auditory neural networks involving sensory and association brain areas remain underexplored. In this study, we address the functional connectivity between an auditory region in frontal cortex (the frontal auditory field, FAF) and the auditory cortex (AC) in the bat Carollia perspicillata. The AC is a classic sensory area central for the processing of acoustic information. On the other hand, the FAF belongs to the frontal lobe, a brain region involved in the integration of sensory inputs, modulation of cognitive states, and in the coordination of behavioral outputs. The FAF-AC network was examined in terms of oscillatory coherence (local-field potentials, LFPs), and within an information theoretical framework linking FAF and AC spiking activity. We show that in the absence of acoustic stimulation, simultaneously recorded LFPs from FAF and AC are coherent in low frequencies (1-12 Hz). This "default" coupling was strongest in deep AC layers and was unaltered by acoustic stimulation. However, presenting auditory stimuli did trigger the emergence of coherent auditory-evoked gamma-band activity (>25 Hz) between the FAF and AC. In terms of spiking, our results suggest that FAF and AC engage in distinct coding strategies for representing artificial and natural sounds. Taken together, our findings shed light onto the neuronal coding strategies and functional coupling mechanisms that enable sound representation at the network level in the mammalian brain.
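The oscillatory-coherence analysis described here (coherent low-frequency LFPs between two areas) can be illustrated with a toy Welch-based magnitude-squared coherence computation on simulated signals that share a slow rhythm. The simulation parameters are illustrative assumptions, not the study's recording settings:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)
fs = 250
t = np.arange(0, 100, 1 / fs)                 # 100 s of simulated LFP
shared = np.sin(2 * np.pi * 5 * t)            # shared low-frequency rhythm
lfp_a = shared + rng.standard_normal(t.size)  # "frontal" channel + own noise
lfp_b = shared + rng.standard_normal(t.size)  # "auditory cortex" channel + own noise

f, coh = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
coh_5hz = coh[np.argmin(np.abs(f - 5))]       # high at the shared 5 Hz rhythm
coh_50hz = coh[np.argmin(np.abs(f - 50))]     # near chance away from it
```

In the paper, the gamma-band coherence appears only under acoustic stimulation; in this sketch that would correspond to the shared component emerging only in the "stimulated" condition.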
Affiliation(s)
- Luciana López-Jury: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt, Germany
- Yuranny Cabral-Calderín: Research Group Neural and Environmental Rhythms, MPI for Empirical Aesthetics, Frankfurt, Germany
- Julio C. Hechavarría: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt, Germany
55. Giroud J, Trébuchon A, Schön D, Marquis P, Liegeois-Chauvel C, Poeppel D, Morillon B. Asymmetric sampling in human auditory cortex reveals spectral processing hierarchy. PLoS Biol 2020; 18:e3000207. PMID: 32119667; PMCID: PMC7067489; DOI: 10.1371/journal.pbio.3000207.
Abstract
Speech perception is mediated by both left and right auditory cortices but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex (AAC). We presented short acoustic transients to noninvasively estimate the dynamical properties of multiple functional regions along the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with evoked activity composed of dynamics in the theta (around 4–8 Hz) and beta–gamma (around 15–40 Hz) ranges. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (6/40 Hz) activity prevailing in the left. This asymmetry is also present during syllable presentation, but the evoked responses in AAC are more heterogeneous, with the co-occurrence of alpha (around 10 Hz) and gamma (>25 Hz) activity bilaterally. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the 2 hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy. Capitalizing on intracranial data from 96 epileptic patients, this study precisely estimates the processing timescales along the cortical auditory hierarchy and reveals that an asymmetric sampling emerges in associative areas.
Affiliation(s)
- Jérémy Giroud: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Agnès Trébuchon: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France; APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille, France
- Daniele Schön: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Patrick Marquis: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Catherine Liegeois-Chauvel: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France; Cleveland Clinic Neurological Institute, Epilepsy Center, Cleveland, Ohio, United States of America
- David Poeppel: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology and Center for Neural Science, New York University, New York, New York, United States of America
- Benjamin Morillon: Aix Marseille University, Inserm, INS, Inst Neurosci Syst, Marseille, France
56. Keshavarzi M, Kegler M, Kadir S, Reichenbach T. Transcranial alternating current stimulation in the theta band but not in the delta band modulates the comprehension of naturalistic speech in noise. Neuroimage 2020; 210:116557. PMID: 31968233; DOI: 10.1016/j.neuroimage.2020.116557.
Abstract
Auditory cortical activity entrains to speech rhythms and has been proposed as a mechanism for online speech processing. In particular, neural activity in the theta frequency band (4-8 Hz) tracks the onset of syllables which may aid the parsing of a speech stream. Similarly, cortical activity in the delta band (1-4 Hz) entrains to the onset of words in natural speech and has been found to encode both syntactic as well as semantic information. Such neural entrainment to speech rhythms is not merely an epiphenomenon of other neural processes, but plays a functional role in speech processing: modulating the neural entrainment through transcranial alternating current stimulation influences the speech-related neural activity and modulates the comprehension of degraded speech. However, the distinct functional contributions of the delta- and of the theta-band entrainment to the modulation of speech comprehension have not yet been investigated. Here we use transcranial alternating current stimulation with waveforms derived from the speech envelope and filtered in the delta and theta frequency bands to alter cortical entrainment in both bands separately. We find that transcranial alternating current stimulation in the theta band but not in the delta band impacts speech comprehension. Moreover, we find that transcranial alternating current stimulation with the theta-band portion of the speech envelope can improve speech-in-noise comprehension beyond sham stimulation. Our results show a distinct contribution of the theta- but not of the delta-band stimulation to the modulation of speech comprehension. In addition, our findings open up a potential avenue of enhancing the comprehension of speech in noise.
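The stimulation waveforms described above, the speech envelope band-pass filtered into the delta and theta ranges, can be derived as in this toy sketch. The zero-phase Butterworth filtering and the synthetic two-component "envelope" are illustrative assumptions, not the authors' exact processing chain:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_waveform(envelope, fs, lo, hi, order=3):
    # Zero-phase band-pass of the broadband envelope -> a stimulation waveform.
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, envelope)

fs = 100
t = np.arange(0, 10, 1 / fs)
# Toy "speech envelope" with a delta-rate (2 Hz) and a theta-rate (6 Hz) component.
env = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 6 * t)

delta_wave = band_waveform(env, fs, 1, 4)   # retains the 2 Hz component
theta_wave = band_waveform(env, fs, 4, 8)   # retains the 6 Hz component
```

Applying the two waveforms as separate stimulation currents is what lets the study dissociate the delta- and theta-band contributions to comprehension.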
Affiliation(s)
- Mahmoud Keshavarzi: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ, London, UK
- Mikolaj Kegler: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ, London, UK
- Shabnam Kadir: School of Engineering and Computer Science, University of Hertfordshire, Hatfield, Hertfordshire, AL10 9AB, UK
- Tobias Reichenbach: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ, London, UK
57. Weissbart H, Kandylaki KD, Reichenbach T. Cortical Tracking of Surprisal during Continuous Speech Comprehension. J Cogn Neurosci 2020; 32:155-166. DOI: 10.1162/jocn_a_01467.
Abstract
Speech comprehension requires rapid online processing of a continuous acoustic signal to extract structure and meaning. Previous studies on sentence comprehension have found neural correlates of the predictability of a word given its context, as well as of the precision of such a prediction. However, they have focused on single sentences and on particular words in those sentences. Moreover, they compared neural responses to words with low and high predictability, as well as with low and high precision. However, in speech comprehension, a listener hears many successive words whose predictability and precision vary over a large range. Here, we show that cortical activity in different frequency bands tracks word surprisal in continuous natural speech and that this tracking is modulated by precision. We obtain these results through quantifying surprisal and precision from naturalistic speech using a deep neural network and through relating these speech features to EEG responses of human volunteers acquired during auditory story comprehension. We find significant cortical tracking of surprisal at low frequencies, including the delta band as well as in the higher frequency beta and gamma bands, and observe that the tracking is modulated by the precision. Our results pave the way to further investigate the neurobiology of natural speech comprehension.
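Word surprisal, the quantity tracked in this study, is simply -log2 P(word | context). A toy smoothed bigram model, standing in for the deep neural network the authors actually used, makes the computation concrete; the corpus and smoothing parameters are purely illustrative:

```python
import math
from collections import Counter

# Toy corpus; a real analysis would use a large LM, not bigram counts.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(prev, word, alpha=0.1, vocab=10):
    # Surprisal = -log2 P(word | prev), with add-alpha smoothing so that
    # unseen continuations get a small but nonzero probability.
    p = (bigrams[(prev, word)] + alpha) / (contexts[prev] + alpha * vocab)
    return -math.log2(p)

low = surprisal("the", "cat")   # seen twice after "the": low surprisal
high = surprisal("the", "on")   # never seen after "the": high surprisal
```

The per-word surprisal series would then be regressed against the EEG, which is how the paper assesses cortical tracking in different frequency bands.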
58.
Abstract
The brain is organized as a network of highly specialized networks of spiking neurons. To exploit such a modular architecture for computation, the brain has to be able to regulate the flow of spiking activity between these specialized networks. In this Opinion article, we review various prominent mechanisms that may underlie communication between neuronal networks. We show that communication between neuronal networks can be understood as trajectories in a two-dimensional state space, spanned by the properties of the input. Thus, we propose a common framework to understand neuronal communication mediated by seemingly different mechanisms. We also suggest that the nesting of slow (for example, alpha-band and theta-band) oscillations and fast (gamma-band) oscillations can serve as an important control mechanism that allows or prevents spiking signals to be routed between specific networks. We argue that slow oscillations can modulate the time required to establish network resonance or entrainment and, thereby, regulate communication between neuronal networks.
59. Morillon B, Arnal LH, Schroeder CE, Keitel A. Prominence of delta oscillatory rhythms in the motor cortex and their relevance for auditory and speech perception. Neurosci Biobehav Rev 2019; 107:136-142. DOI: 10.1016/j.neubiorev.2019.09.012.
60. Kadir S, Kaza C, Weissbart H, Reichenbach T. Modulation of Speech-in-Noise Comprehension Through Transcranial Current Stimulation With the Phase-Shifted Speech Envelope. IEEE Trans Neural Syst Rehabil Eng 2019; 28:23-31. PMID: 31751277; PMCID: PMC7001147; DOI: 10.1109/tnsre.2019.2939671.
Abstract
Neural activity tracks the envelope of a speech signal at latencies from 50 ms to 300 ms. Modulating this neural tracking through transcranial alternating current stimulation influences speech comprehension. Two important variables that can affect this modulation are the latency and the phase of the stimulation with respect to the sound. While previous studies have found an influence of both variables on speech comprehension, the interaction between both has not yet been measured. We presented 17 subjects with speech in noise coupled with simultaneous transcranial alternating current stimulation. The currents were based on the envelope of the target speech but shifted by different phases, as well as by two temporal delays of 100 ms and 250 ms. We also employed various control stimulations, and assessed the signal-to-noise ratio at which the subject understood half of the speech. We found that, at both latencies, speech comprehension is modulated by the phase of the current stimulation. However, the form of the modulation differed between the two latencies. Phase and latency of neurostimulation have accordingly distinct influences on speech comprehension. The different effects at the latencies of 100 ms and 250 ms hint at distinct neural processes for speech processing.
61. Nadalin JK, Martinet LE, Blackwood EB, Lo MC, Widge AS, Cash SS, Eden UT, Kramer MA. A statistical framework to assess cross-frequency coupling while accounting for confounding analysis effects. eLife 2019; 8:e44287. PMID: 31617848; PMCID: PMC6821458; DOI: 10.7554/elife.44287.
Abstract
Cross frequency coupling (CFC) is emerging as a fundamental feature of brain activity, correlated with brain function and dysfunction. Many different types of CFC have been identified through application of numerous data analysis methods, each developed to characterize a specific CFC type. Choosing an inappropriate method weakens statistical power and introduces opportunities for confounding effects. To address this, we propose a statistical modeling framework to estimate high frequency amplitude as a function of both the low frequency amplitude and low frequency phase; the result is a measure of phase-amplitude coupling that accounts for changes in the low frequency amplitude. We show in simulations that the proposed method successfully detects CFC between the low frequency phase or amplitude and the high frequency amplitude, and outperforms an existing method in biologically-motivated examples. Applying the method to in vivo data, we illustrate examples of CFC during a seizure and in response to electrical stimuli.
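The core idea of the proposed framework, modeling high-frequency amplitude as a joint function of low-frequency phase and low-frequency amplitude, can be sketched as a plain least-squares regression on simulated data. This linearized stand-in deliberately simplifies the paper's statistical model; the coupling strengths and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
phi = rng.uniform(0, 2 * np.pi, n)    # low-frequency phase
a_low = rng.gamma(2.0, 1.0, n)        # low-frequency amplitude (positive)
# Simulated high-frequency amplitude: coupled to BOTH phase and amplitude.
a_high = 1.0 + 0.8 * np.cos(phi) + 0.3 * a_low + 0.1 * rng.standard_normal(n)

# Joint model in the spirit of the framework:
# A_high ~ 1 + cos(phi) + sin(phi) + A_low
X = np.column_stack([np.ones(n), np.cos(phi), np.sin(phi), a_low])
beta, *_ = np.linalg.lstsq(X, a_high, rcond=None)
```

Because the low-frequency amplitude enters the model explicitly, a phase-coupling estimate from this regression is not confounded by amplitude fluctuations, which is the paper's central point.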
Affiliation(s)
- Jessica K Nadalin: Department of Mathematics and Statistics, Boston University, Boston, United States
- Ethan B Blackwood: Department of Psychiatry, University of Minnesota, Minneapolis, United States
- Meng-Chen Lo: Department of Psychiatry, University of Minnesota, Minneapolis, United States
- Alik S Widge: Department of Psychiatry, University of Minnesota, Minneapolis, United States
- Sydney S Cash: Department of Neurology, Massachusetts General Hospital, Boston, United States
- Uri T Eden: Department of Mathematics and Statistics, Boston University, Boston, United States
- Mark A Kramer: Department of Mathematics and Statistics, Boston University, Boston, United States
62. García-Rosales F, Röhrig D, Weineck K, Röhm M, Lin YH, Cabral-Calderin Y, Kössl M, Hechavarria JC. Laminar specificity of oscillatory coherence in the auditory cortex. Brain Struct Funct 2019; 224:2907-2924. PMID: 31456067; DOI: 10.1007/s00429-019-01944-3.
Abstract
Empirical evidence suggests that, in the auditory cortex (AC), the phase relationship between spikes and local-field potentials (LFPs) plays an important role in the processing of auditory stimuli. Nevertheless, unlike the case of other sensory systems, it remains largely unexplored in the auditory modality whether the properties of the cortical columnar microcircuit shape the dynamics of spike-LFP coherence in a layer-specific manner. In this study, we directly tackle this issue by addressing whether spike-LFP and LFP-stimulus phase synchronization are spatially distributed in the AC during sensory processing, by performing laminar recordings in the cortex of awake short-tailed bats (Carollia perspicillata) while animals listened to conspecific distress vocalizations. We show that, in the AC, spike-LFP and LFP-stimulus synchrony depend significantly on cortical depth, and that sensory stimulation alters the spatial and spectral patterns of spike-LFP phase-locking. We argue that such laminar distribution of coherence could have functional implications for the representation of naturalistic auditory stimuli at a cortical level.
Affiliation(s)
- Francisco García-Rosales: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Dennis Röhrig: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Kristin Weineck: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Mira Röhm: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Yi-Hsuan Lin: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Yuranny Cabral-Calderin: Research Group Neural and Environmental Rhythms, Max Planck Institute for Empirical Aesthetics, 60322, Frankfurt/Main, Germany
- Manfred Kössl: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
- Julio C Hechavarria: Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438, Frankfurt/Main, Germany
63. Dynamic modulation of theta–gamma coupling during rapid eye movement sleep. Sleep 2019; 42:zsz182. DOI: 10.1093/sleep/zsz182.
Abstract
Theta phase modulates gamma amplitude in hippocampal networks during spatial navigation and rapid eye movement (REM) sleep. This cross-frequency coupling has been linked to working memory and spatial memory consolidation; however, its spatial and temporal dynamics remain unclear. Here, we first investigate the dynamics of theta–gamma interactions using multiple frequency and temporal scales in simultaneous recordings from hippocampal CA3, CA1, subiculum, and parietal cortex in freely moving mice. We found that theta phase dynamically modulates distinct gamma bands during REM sleep. Interestingly, we further show that theta–gamma coupling switches between recorded brain structures during REM sleep and progressively increases over a single REM sleep episode. Finally, we show that optogenetic silencing of septohippocampal GABAergic projections significantly impedes both theta–gamma coupling and theta phase coherence. Collectively, our study shows that phase-space (i.e. cross-frequency coupling) coding of information during REM sleep is orchestrated across time and space, consistent with region-specific processing of information during REM sleep, including learning and memory.
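One standard way to quantify the theta–gamma phase-amplitude coupling studied here is a Tort-style modulation index: the KL divergence of the phase-binned mean-amplitude distribution from uniform, normalised by log(n_bins). A minimal sketch on synthetic phase/amplitude series (not the authors' recording pipeline; bin count and coupling depth are illustrative):

```python
import numpy as np

def modulation_index(phase, amp, n_bins=18):
    # Tort-style MI: KL divergence of the phase-binned mean-amplitude
    # distribution from uniform, normalised by log(n_bins).
    frac = (phase % (2 * np.pi)) / (2 * np.pi) * n_bins
    bins = np.minimum(frac, n_bins - 1e-9).astype(int)  # guard the top edge
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))

t = np.arange(0, 20, 1 / 200)
theta_phase = (2 * np.pi * 7 * t) % (2 * np.pi)
gamma_amp = 1 + 0.9 * np.cos(theta_phase)   # gamma envelope peaks at theta peaks
flat_amp = np.ones_like(theta_phase)        # no coupling

mi_coupled = modulation_index(theta_phase, gamma_amp)
mi_flat = modulation_index(theta_phase, flat_amp)   # -> ~0
```

Computing such an index in sliding windows per brain structure is, in spirit, how the temporal and spatial dynamics reported in this study can be tracked.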
64. Velarde OM, Urdapilleta E, Mato G, Dellavale D. Bifurcation structure determines different phase-amplitude coupling patterns in the activity of biologically plausible neural networks. Neuroimage 2019; 202:116031. PMID: 31330244; DOI: 10.1016/j.neuroimage.2019.116031.
Abstract
Phase-amplitude cross frequency coupling (PAC) is a rather ubiquitous phenomenon that has been observed in a variety of physical domains; however, the mechanisms underlying the emergence of PAC and its functional significance in the context of neural processes are open issues under debate. In this work we analytically demonstrate that PAC phenomenon naturally emerges in mean-field models of biologically plausible networks, as a signature of specific bifurcation structures. The proposed analysis, based on bifurcation theory, allows the identification of the mechanisms underlying oscillatory dynamics that are essentially different in the context of PAC. Specifically, we found that two PAC classes can coexist in the complex dynamics of the analyzed networks: 1) harmonic PAC which is an epiphenomenon of the nonsinusoidal waveform shape characterized by the linear superposition of harmonically related spectral components, and 2) nonharmonic PAC associated with "true" coupled oscillatory dynamics with independent frequencies elicited by a secondary Hopf bifurcation and mechanisms involving periodic excitation/inhibition (PEI) of a network population. Importantly, these two PAC types have been experimentally observed in a variety of neural architectures confounding traditional parametric and nonparametric PAC metrics, like those based on linear filtering or the waveform shape analysis, due to the fact that these methods operate on a single one-dimensional projection of an intrinsically multidimensional system dynamics. We exploit the proposed tools to study the functional significance of the PAC phenomenon in the context of Parkinson's disease (PD). Our results show that pathological slow oscillations (e.g. β band) and nonharmonic PAC patterns emerge from dissimilar underlying mechanisms (bifurcations) and are associated to the competition of different BG-thalamocortical loops. 
This study thus provides theoretical arguments that nonharmonic PAC is not an epiphenomenon of the pathological β band oscillations, supporting the experimental evidence for the relevance of PAC as a potential biomarker of PD.
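As an illustration (not from the paper, which uses bifurcation analysis rather than signal metrics): the mean-vector-length PAC measure that this literature's "traditional metrics" refer to can be sketched in a few lines. The brick-wall FFT filter and the synthetic theta-gamma signal below are deliberate simplifications for demonstration.

```python
import numpy as np

def analytic_signal(x):
    # FFT-based Hilbert transform: zero out negative frequencies, double positive ones
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def bandpass(x, fs, lo, hi):
    # crude brick-wall filter in the frequency domain; adequate for a demo
    freqs = np.abs(np.fft.fftfreq(len(x), 1.0 / fs))
    X = np.fft.fft(x)
    X[(freqs < lo) | (freqs > hi)] = 0.0
    return np.real(np.fft.ifft(X))

def pac_mvl(x, fs, phase_band, amp_band):
    # mean-vector-length PAC: |<A_high(t) * exp(i * phi_low(t))>|
    phase = np.angle(analytic_signal(bandpass(x, fs, *phase_band)))
    amp = np.abs(analytic_signal(bandpass(x, fs, *amp_band)))
    return float(np.abs(np.mean(amp * np.exp(1j * phase))))
```

A 6 Hz rhythm whose phase modulates 60 Hz amplitude yields a clearly larger value than the same two rhythms superimposed without coupling; note this metric alone cannot separate the harmonic and nonharmonic PAC classes the paper distinguishes.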
Affiliation(s)
- Osvaldo Matías Velarde
- Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica (CNEA), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Universidad Nacional de Cuyo (UNCUYO), Av. E. Bustillo 9500, R8402AGP, San Carlos de Bariloche, Río Negro, Argentina
- Eugenio Urdapilleta
- Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica (CNEA), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Universidad Nacional de Cuyo (UNCUYO), Av. E. Bustillo 9500, R8402AGP, San Carlos de Bariloche, Río Negro, Argentina
- Germán Mato
- Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica (CNEA), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Universidad Nacional de Cuyo (UNCUYO), Av. E. Bustillo 9500, R8402AGP, San Carlos de Bariloche, Río Negro, Argentina
- Damián Dellavale
- Centro Atómico Bariloche and Instituto Balseiro, Comisión Nacional de Energía Atómica (CNEA), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Universidad Nacional de Cuyo (UNCUYO), Av. E. Bustillo 9500, R8402AGP, San Carlos de Bariloche, Río Negro, Argentina
65
Vidyasagar TR. Visual attention and neural oscillations in reading and dyslexia: Are they possible targets for remediation? Neuropsychologia 2019; 130:59-65. [DOI: 10.1016/j.neuropsychologia.2019.02.009]
66
Daube C, Ince RAA, Gross J. Simple Acoustic Features Can Explain Phoneme-Based Predictions of Cortical Responses to Speech. Curr Biol 2019; 29:1924-1937.e9. [PMID: 31130454] [PMCID: PMC6584359] [DOI: 10.1016/j.cub.2019.04.067]
Abstract
When we listen to speech, we have to make sense of a waveform of sound pressure. Hierarchical models of speech perception assume that, to extract semantic meaning, the signal is transformed into unknown, intermediate neuronal representations. Traditionally, studies of such intermediate representations are guided by linguistically defined concepts, such as phonemes. Here, we argue that in order to arrive at an unbiased understanding of the neuronal responses to speech, we should focus instead on representations obtained directly from the stimulus. We illustrate our view with a data-driven, information theoretic analysis of a dataset of 24 young, healthy humans who listened to a 1 h narrative while their magnetoencephalogram (MEG) was recorded. We find that two recent results, the improved performance of an encoding model in which annotated linguistic and acoustic features were combined and the decoding of phoneme subgroups from phoneme-locked responses, can be explained by an encoding model that is based entirely on acoustic features. These acoustic features capitalize on acoustic edges and outperform Gabor-filtered spectrograms, which can explicitly describe the spectrotemporal characteristics of individual phonemes. By replicating our results in publicly available electroencephalography (EEG) data, we conclude that models of brain responses based on linguistic features can serve as excellent benchmarks. However, we believe that in order to further our understanding of human cortical responses to speech, we should also explore low-level and parsimonious explanations for apparent high-level phenomena.
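The encoding models compared in this abstract predict brain responses from time-lagged stimulus features. A minimal sketch of that idea, assuming a single envelope-like feature and in-sample scoring (the paper's information-theoretic, cross-validated pipeline is far richer), follows; all names and parameters here are illustrative.

```python
import numpy as np

def lagged_design(stim, n_lags):
    # design matrix of time-lagged copies of a stimulus feature (TRF-style)
    n = len(stim)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:n - k]
    return X

def ridge_fit(X, y, lam):
    # closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def encoding_score(stim, resp, n_lags=10, lam=1.0):
    # Pearson correlation between predicted and measured response
    # (in-sample for brevity; real analyses cross-validate)
    X = lagged_design(stim, n_lags)
    pred = X @ ridge_fit(X, resp, lam)
    return float(np.corrcoef(pred, resp)[0, 1])
```

On a response generated by convolving the stimulus with a short kernel plus noise, the score is high; with a shuffled stimulus it collapses, which is the kind of model comparison the paper performs between acoustic and linguistic feature sets.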
Affiliation(s)
- Christoph Daube
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
67
Márton CD, Fukushima M, Camalier CR, Schultz SR, Averbeck BB. Signature Patterns for Top-Down and Bottom-Up Information Processing via Cross-Frequency Coupling in Macaque Auditory Cortex. eNeuro 2019; 6:ENEURO.0467-18.2019. [PMID: 31088914] [PMCID: PMC6520641] [DOI: 10.1523/eneuro.0467-18.2019]
Abstract
Predictive coding is a theoretical framework that provides a functional interpretation of top-down and bottom-up interactions in sensory processing. The theory suggests there are differences in message passing up versus down the cortical hierarchy. These differences result from the linear feedforward of prediction errors and the nonlinear feedback of predictions, which implies that cross-frequency interactions should predominate top-down. However, it remains unknown whether these differences are expressed in cross-frequency interactions in the brain. Here we examined bidirectional cross-frequency coupling across four sectors of the auditory hierarchy in the macaque. We computed two measures of cross-frequency coupling, phase-amplitude coupling (PAC) and amplitude-amplitude coupling (AAC). Our findings revealed distinct patterns for bottom-up and top-down information processing among cross-frequency interactions. Both top-down and bottom-up interactions made prominent use of low frequencies: low-to-low-frequency (theta, alpha, beta) and low-frequency-to-high-gamma couplings were predominant top-down, while low-frequency-to-low-gamma couplings were predominant bottom-up. These patterns were largely preserved across coupling types (PAC and AAC) and across stimulus types (natural and synthetic auditory stimuli), suggesting that they are a general feature of information processing in auditory cortex. Our findings suggest the modulatory effect of low frequencies on gamma rhythms in distant regions is important for bidirectional information transfer. The finding of low-frequency-to-low-gamma interactions in the bottom-up direction suggests that nonlinearities may also play a role in feedforward message passing. Altogether, the patterns of cross-frequency interaction we observed across the auditory hierarchy are largely consistent with the predictive coding framework.
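Of the two measures named here, AAC is the simpler to sketch: the correlation between the amplitude envelopes of two band-limited signals. The sketch below assumes the inputs are already band-limited (filtering omitted) and is illustrative only, not the authors' pipeline.

```python
import numpy as np

def envelope(x):
    # amplitude envelope |analytic signal| via an FFT-based Hilbert transform
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def aac(x, y):
    # amplitude-amplitude coupling: Pearson correlation of the two envelopes
    return float(np.corrcoef(envelope(x), envelope(y))[0, 1])
```

Two carriers at different frequencies sharing a common slow amplitude modulator show AAC near 1, while independently modulated carriers show AAC near 0, even though the carrier phases are unrelated in both cases.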
Affiliation(s)
- Christian D Márton
- Centre for Neurotechnology, and Department of Bioengineering, Imperial College London, London SW7 2AZ, United Kingdom
- Section on Learning and Decision Making, Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland 20892
- Makoto Fukushima
- Section on Learning and Decision Making, Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland 20892
- RIKEN Center for Brain Science Institute, Saitama 351-0106, Japan
- Consumer Neuroscience, The Nielsen Company, Tokyo 107-0052, Japan
- Corrie R Camalier
- Section on Learning and Decision Making, Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland 20892
- Simon R Schultz
- Centre for Neurotechnology, and Department of Bioengineering, Imperial College London, London SW7 2AZ, United Kingdom
- Bruno B Averbeck
- Section on Learning and Decision Making, Laboratory of Neuropsychology, National Institute of Mental Health/National Institutes of Health, Bethesda, Maryland 20892
68
Lizarazu M, Lallier M, Molinaro N. Phase-amplitude coupling between theta and gamma oscillations adapts to speech rate. Ann N Y Acad Sci 2019; 1453:140-152. [PMID: 31020680] [PMCID: PMC6850406] [DOI: 10.1111/nyas.14099]
Abstract
Low- and high-frequency cortical oscillations play an important role in speech processing. Low-frequency neural oscillations in the delta (<4 Hz) and theta (4-8 Hz) bands entrain to the prosodic and syllabic rates of speech, respectively. Theta band neural oscillations modulate high-frequency neural oscillations in the gamma band (28-40 Hz), which have been hypothesized to be crucial for processing phonemes in natural speech. Since speech rate is known to vary considerably, both between and within talkers, it has yet to be determined whether this nested gamma response reflects an externally induced rhythm sensitive to the rate of the fine-grained structure of the input or a speech rate-independent endogenous response. Here, we recorded magnetoencephalography responses from participants listening to speech delivered at different rates: decelerated, normal, and accelerated. We found that the phase of theta band oscillations in left and right auditory regions adjusts to speech rate variations. Importantly, we showed that the peak of the gamma response, coupled to the phase of theta, follows the speech rate. This indicates that gamma activity in auditory regions synchronizes with the fine-grained properties of speech, possibly reflecting detailed acoustic analysis of the input.
Affiliation(s)
- Mikel Lizarazu
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain
- Laboratoire de Sciences Cognitives et Psycholinguistique, Dept d'Etudes Cognitives, ENS, PSL University, EHESS, CNRS, Paris, France
- Marie Lallier
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, Donostia/San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
69
Watanabe H, Tanaka H, Sakti S, Nakamura S. Synchronization between overt speech envelope and EEG oscillations during imagined speech. Neurosci Res 2019; 153:48-55. [PMID: 31005564] [DOI: 10.1016/j.neures.2019.04.004]
Abstract
Neural oscillations synchronize with the periodicity of external stimuli such as the rhythm of the speech amplitude envelope. This synchronization induces a speech-specific, replicable neural phase pattern across trials and enables perceived speech to be classified. In this study, we hypothesized that neural oscillations during articulatory imagination of speech could also synchronize with the rhythm of speech imagery. To validate the hypothesis, since imagined speech is not physically observable, we replaced it with overt speech and investigated (1) whether the EEG-based regressed speech envelopes correlate with the overt speech envelope and (2) whether EEG during imagined speech can classify speech stimuli with different envelopes. The variability of the duration of the imagined speech across trials was corrected using dynamic time warping. The classification was based on the distance between test data and a template waveform of each class. Results showed a significant correlation between the EEG-based regressed envelope and the overt speech envelope. The average classification accuracy was 38.5%, significantly above chance (33.3%). These results demonstrate synchronization between EEG during imagined speech and the envelope of the overt counterpart.
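The classification step described here (dynamic time warping to absorb duration variability, then nearest-template assignment) can be sketched in a few lines. The toy templates below are hypothetical stand-ins for the per-class envelope templates; this is a sketch of the general technique, not the authors' exact pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    # classic O(len(a) * len(b)) dynamic-time-warping distance
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trial, templates):
    # nearest-template classification under the DTW distance
    dists = {label: dtw_distance(trial, tpl) for label, tpl in templates.items()}
    return min(dists, key=dists.get)
```

Because DTW aligns sequences of unequal length, a stretched or compressed repetition of a template still lands close to it, which is exactly why it suits trial-to-trial duration variability.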
Affiliation(s)
- Hiroki Watanabe
- Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
- Hiroki Tanaka
- Graduate School of Science and Technology, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
- Sakriani Sakti
- Graduate School of Science and Technology, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
- Center for Advanced Intelligence Project AIP, RIKEN, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
- Satoshi Nakamura
- Graduate School of Science and Technology, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
- Center for Advanced Intelligence Project AIP, RIKEN, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
70
Abstract
OBJECTIVE: Speech signals have a remarkable ability to entrain brain activity to the rapid fluctuations of speech sounds. For instance, one can readily measure a correlation of the sound amplitude with the evoked responses of the electroencephalogram (EEG), and the strength of this correlation is indicative of whether the listener is attending to the speech. In this study we asked whether this stimulus-response correlation is also predictive of speech intelligibility.
APPROACH: We hypothesized that when a listener fails to understand the speech in adverse hearing conditions, attention wanes and stimulus-response correlation also drops. To test this, we measure a listener's ability to detect words in noisy speech while recording their brain activity using EEG. We alter intelligibility without changing the acoustic stimulus by pairing it with congruent and incongruent visual speech.
MAIN RESULTS: For almost all subjects we found that an improvement in speech detection coincided with an increase in correlation between the noisy speech and the EEG measured over a period of 30 min.
SIGNIFICANCE: We conclude that simultaneous recordings of the perceived sound and the corresponding EEG response may be a practical tool to assess speech intelligibility in the context of hearing aids.
Affiliation(s)
- Ivan Iotzov
- Biomedical Engineering, City College of New York, New York City, NY, United States of America
71
García-Rosales F, Beetz MJ, Cabral-Calderin Y, Kössl M, Hechavarria JC. Neuronal coding of multiscale temporal features in communication sequences within the bat auditory cortex. Commun Biol 2018; 1:200. [PMID: 30480101] [PMCID: PMC6244232] [DOI: 10.1038/s42003-018-0205-5]
Abstract
Experimental evidence indicates that cortical oscillations represent multiscale temporal modulations present in natural stimuli, yet little is known about the processing of these multiple timescales at the neuronal level. Here, using extracellular recordings from the auditory cortex (AC) of awake bats (Carollia perspicillata), we show the existence of three neuronal types which represent different levels of the temporal structure of conspecific vocalizations, and therefore constitute direct evidence of multiscale temporal processing of naturalistic stimuli by neurons in the AC. These neuronal subpopulations synchronize differently to local-field potentials, particularly in the theta and high-frequency bands, and are informative to different degrees in terms of their spike rate. Interestingly, we also observed that both low- and high-frequency cortical oscillations can be highly informative about the calls heard by the animals. Our results suggest that multiscale neuronal processing allows for the precise and non-redundant representation of natural vocalizations in the AC.
Affiliation(s)
- Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
- Department of Zoology II, University of Würzburg, Am Hubland, 97074, Würzburg, Germany
- Yuranny Cabral-Calderin
- MEG Labor, Brain Imaging Center, Goethe-Universität, 60528, Frankfurt/M., Germany
- German Resilience Center, University Medical Center Mainz, 55131, Mainz, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
- Julio C Hechavarria
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
72
Volk D, Dubinin I, Myasnikova A, Gutkin B, Nikulin VV. Generalized Cross-Frequency Decomposition: A Method for the Extraction of Neuronal Components Coupled at Different Frequencies. Front Neuroinform 2018; 12:72. [PMID: 30405385] [PMCID: PMC6200871] [DOI: 10.3389/fninf.2018.00072]
Abstract
Perceptual, motor and cognitive processes are based on rich interactions between remote regions in the human brain. Such interactions can be carried out through phase synchronization of oscillatory signals. Neuronal synchronization has been primarily studied within the same frequency range, e.g., within alpha or beta frequency bands. Yet, recent research shows that neuronal populations can also demonstrate phase synchronization between different frequency ranges. Extracting such cross-frequency interactions from EEG/MEG recordings remains, however, methodologically challenging. Here we present a new method for the robust extraction of cross-frequency phase-to-phase synchronized components. Generalized Cross-Frequency Decomposition (GCFD) reconstructs the time courses of synchronized neuronal components, their spatial filters and patterns. Our method extends the previous state of the art, Cross-Frequency Decomposition (CFD), to the whole range of frequencies: it works for any f1 and f2 whenever f1:f2 is a rational number. GCFD gives a compact description of non-linearly interacting neuronal sources on the basis of their cross-frequency phase coupling. We successfully validated the new method in simulations and tested it with real EEG recordings, including resting state data and steady-state visually evoked potentials (SSVEPs).
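The quantity GCFD is built around, n:m phase-to-phase synchrony, is commonly measured with an n:m phase-locking value. A minimal sketch, assuming the instantaneous phases have already been extracted (the filtering/Hilbert step and GCFD's spatial-filter optimization are omitted):

```python
import numpy as np

def nm_plv(phi_a, phi_b, n, m):
    # n:m phase-locking value; near 1 when n*f_a = m*f_b and the
    # composite phase n*phi_a - m*phi_b stays (nearly) constant
    return float(np.abs(np.mean(np.exp(1j * (n * phi_a - m * phi_b)))))
```

For a 10 Hz and a jittery 20 Hz phase series, the 2:1 locking value is high while the 1:1 value is near zero, which is the rational-ratio coupling ("any f1:f2") the method targets.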
Affiliation(s)
- Denis Volk
- Interdisciplinary Scientific Center J.-V. Poncelet (CNRS UMI 2615), Moscow, Russia
- Igor Dubinin
- Institute for Cognitive Neuroscience of the National Research University Higher School of Economics, Moscow, Russia
- Moscow Institute of Physics and Technology, Moscow, Russia
- Alexandra Myasnikova
- Institute for Cognitive Neuroscience of the National Research University Higher School of Economics, Moscow, Russia
- Boris Gutkin
- Institute for Cognitive Neuroscience of the National Research University Higher School of Economics, Moscow, Russia
- Group for Neural Theory, Laboratoire des Neurosciences Cognitives et Computationelles INSERM U960, Department of Cognitive Studies, Ecole Normale Superieure PSL University, Paris, France
- Vadim V Nikulin
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Neurophysics Group, Department of Neurology, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Center for Bioelectric Interfaces of the Institute for Cognitive Neuroscience of the National Research University Higher School of Economics, Moscow, Russia
73
García-Rosales F, Martin LM, Beetz MJ, Cabral-Calderin Y, Kössl M, Hechavarria JC. Low-Frequency Spike-Field Coherence Is a Fingerprint of Periodicity Coding in the Auditory Cortex. iScience 2018; 9:47-62. [PMID: 30384133] [PMCID: PMC6214842] [DOI: 10.1016/j.isci.2018.10.009]
Abstract
The extraction of temporal information from sensory input streams is of paramount importance in the auditory system. In this study, amplitude-modulated sounds were used as stimuli to drive auditory cortex (AC) neurons of the bat species Carollia perspicillata, to assess the interactions between cortical spikes and local-field potentials (LFPs) for the processing of temporal acoustic cues. We observed that neurons in the AC capable of eliciting synchronized spiking to periodic acoustic envelopes were significantly more coherent to theta- and alpha-band LFPs than their non-synchronized counterparts. These differences occurred independently of the modulation rate tested and could not be explained by power or phase modulations of the field potentials. We argue that the coupling between neuronal spiking and the phase of low-frequency LFPs might be important for orchestrating the coding of temporal acoustic structures in the AC.
Highlights:
- Auditory cortical neurons can track periodic sounds via synchronized spiking
- Neuronal synchronization ability is well marked by theta-alpha spike-LFP coherence
- Spike-LFP coherence patterns are independent of the stimulus' periodicity
- Theta-alpha LFPs may orchestrate phase-locked neuronal responses to periodic sounds
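Spike-field coherence proper is a spectral measure; a simpler related quantity, the phase-locking (vector strength) of spike times to a band-limited LFP phase, conveys the same intuition. The sketch below assumes the LFP phase series has already been computed and is illustrative only.

```python
import numpy as np

def spike_lfp_plv(spike_times, lfp_phase, fs):
    # vector strength of the LFP phases sampled at the spike times:
    # 1 if every spike lands at the same phase, ~0 for random spiking
    idx = np.round(np.asarray(spike_times) * fs).astype(int)
    return float(np.abs(np.mean(np.exp(1j * lfp_phase[idx]))))
```

A train firing once per theta cycle at a fixed phase scores near 1, while uniformly random spike times score near 0, which is the synchronized-versus-non-synchronized contrast the abstract describes.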
Affiliation(s)
- Francisco García-Rosales
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Lisa M Martin
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Yuranny Cabral-Calderin
- MEG Labor, Brain Imaging Center, Goethe-Universität, 60528 Frankfurt am Main, Germany
- German Resilience Center, University Medical Center Mainz, Mainz, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
- Julio C Hechavarria
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Max-von-Laue-Str. 13, 60438 Frankfurt am Main, Germany
74
Penn LR, Ayasse ND, Wingfield A, Ghitza O. The possible role of brain rhythms in perceiving fast speech: Evidence from adult aging. J Acoust Soc Am 2018; 144:2088. [PMID: 30404494] [PMCID: PMC6181647] [DOI: 10.1121/1.5054905]
Abstract
The rhythms of speech and the time scales of linguistic units (e.g., syllables) correspond remarkably to cortical oscillations. Previous research has demonstrated that in young adults, the intelligibility of time-compressed speech can be rescued by "repackaging" the speech signal through the regular insertion of silent gaps to restore correspondence to the theta oscillator. This experiment tested whether this same phenomenon can be demonstrated in older adults, who show age-related changes in cortical oscillations. The results demonstrated a similar phenomenon for older adults, but that the "rescue point" of repackaging is shifted, consistent with a slowing of theta oscillations.
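The "repackaging" manipulation described here can be sketched directly: chop the time-compressed signal into fixed-duration packets and insert silent gaps between them, lowering the packet rate back toward theta without touching the packets themselves. The packet and gap durations below are illustrative, not the study's parameters.

```python
import numpy as np

def repackage(signal, fs, packet_ms, gap_ms):
    # split into fixed-duration packets and insert silent gaps between them,
    # slowing the packet rate while leaving each packet's content intact
    p = int(fs * packet_ms / 1000)
    g = int(fs * gap_ms / 1000)
    pieces = []
    for i in range(0, len(signal), p):
        pieces.append(signal[i:i + p])
        pieces.append(np.zeros(g))
    return np.concatenate(pieces[:-1])  # drop the trailing gap
```

For example, 100 ms packets separated by 66 ms gaps turn a 10 packets/s stream into roughly 6 packets/s, back inside the theta range; the paper's point is that the optimal gap ("rescue point") shifts with the slower theta of older adults.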
Affiliation(s)
- Lana R Penn
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Nicole D Ayasse
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454, USA
- Oded Ghitza
- Department of Biomedical Engineering, Hearing Research Center, Boston University, Boston, Massachusetts 02215, USA
75
Neural Entrainment Determines the Words We Hear. Curr Biol 2018; 28:2867-2875.e3. [PMID: 30197083] [DOI: 10.1016/j.cub.2018.07.023]
Abstract
Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word's vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
76
Zakharov DG, Krupa M, Gutkin BS, Kuznetsov AS. High-frequency forced oscillations in neuronlike elements. Phys Rev E 2018; 97:062211. [PMID: 30011467] [DOI: 10.1103/physreve.97.062211]
Abstract
We analyzed a generic relaxation oscillator under moderately strong forcing at a frequency much greater than the natural intrinsic frequency of the oscillator. Additionally, the forcing is of the same sign and thus has a nonzero average, matching neuroscience applications. We found that, first, the transition to high-frequency synchronous oscillations occurs mostly through periodic solutions, with virtually no chaotic regimes present. Second, the amplitude of the high-frequency oscillations is large, suggesting an important role for these oscillations in applications. Third, the 1:1 synchronized solution may lose stability, and, contrary to other cases, this occurs at smaller, not higher, frequency differences between the intrinsic and forcing oscillations. We analytically built a map that explains these properties. Thus, we found a way to substantially "overclock" the oscillator with only a moderately strong external force. Interestingly, in application to neuroscience, both excitatory and inhibitory inputs can force the high-frequency oscillations.
Affiliation(s)
- D G Zakharov
- Institute of Applied Physics of RAS, 46 Ulyanov Str., Nizhny Novgorod, Russia
- M Krupa
- Laboratoire Jean-Alexandre Dieudonné, Université de Cote d'Azur, Nice, France
- B S Gutkin
- Group of Neural Theory, LNC INSERM U960, École Normale Supérieure PSL* University, 29 rue d'Ulm, Paris, France
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Myasnitskaya St. 20, Moscow, Russia
- A S Kuznetsov
- Department of Mathematical Sciences and Center for Mathematical Modeling and Computational Sciences, Indiana University-Purdue University Indianapolis, 402 N. Blackford St., Indianapolis, Indiana 46202, USA
77
Kikuchi Y, Sedley W, Griffiths TD, Petkov CI. Evolutionarily conserved neural signatures involved in sequencing predictions and their relevance for language. Curr Opin Behav Sci 2018; 21:145-153. [PMID: 30057937] [PMCID: PMC6058086] [DOI: 10.1016/j.cobeha.2018.05.002]
Abstract
Predicting the occurrence of future events from prior ones is vital for animal perception and cognition. Although how such sequence learning (a form of relational knowledge) relates to particular operations in language remains controversial, recent evidence shows that sequence learning is disrupted in frontal lobe damage associated with aphasia. Also, neural sequencing predictions at different temporal scales resemble those involved in language operations occurring at similar scales. Furthermore, comparative work in humans and monkeys highlights evolutionarily conserved frontal substrates and predictive oscillatory signatures in the temporal lobe processing learned sequences of speech signals. Altogether this evidence supports a relational knowledge hypothesis of language evolution, proposing that language processes in humans are functionally integrated with an ancestral neural system for predictive sequence learning.
Affiliation(s)
- Yukiko Kikuchi
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK
- Centre for Behaviour and Evolution, Newcastle University, Newcastle Upon Tyne, UK
- William Sedley
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK
- Timothy D Griffiths
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK
- Wellcome Trust Centre for Neuroimaging, University College London, UK
- Department of Neurosurgery, University of Iowa, Iowa City, USA
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK
- Centre for Behaviour and Evolution, Newcastle University, Newcastle Upon Tyne, UK
78
Boasen J, Takeshita Y, Kuriki S, Yokosawa K. Spectral-Spatial Differentiation of Brain Activity During Mental Imagery of Improvisational Music Performance Using MEG. Front Hum Neurosci 2018; 12:156. [PMID: 29740300] [PMCID: PMC5928205] [DOI: 10.3389/fnhum.2018.00156]
Abstract
Group musical improvisation is thought to be akin to conversation, and therapeutically has been shown to be effective at improving communicativeness, sociability, creative expression, and overall psychological health. To understand these therapeutic effects, clarifying the nature of brain activity during improvisational cognition is important. Some insight regarding brain activity during improvisational music cognition has been gained via functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). However, we have found no reports based on magnetoencephalography (MEG). With the present study, we aimed to demonstrate the feasibility of improvisational music performance experimentation in MEG. We designed a novel MEG-compatible keyboard, and used it with experienced musicians (N = 13) in a music performance paradigm to spectral-spatially differentiate spontaneous brain activity during mental imagery of improvisational music performance. Analyses of source activity revealed that mental imagery of improvisational music performance induced greater theta (5–7 Hz) activity in left temporal areas associated with rhythm production and communication, greater alpha (8–12 Hz) activity in left premotor and parietal areas associated with sensorimotor integration, and less beta (15–29 Hz) activity in right frontal areas associated with inhibition control. These findings support the notion that musical improvisation is conversational, and suggest that creation of novel auditory content is facilitated by a more internally-directed, disinhibited cognitive state.
Affiliation(s)
- Jared Boasen: Graduate School of Health Sciences, Hokkaido University, Hokkaido, Japan
- Yuya Takeshita: Graduate School of Health Sciences, Hokkaido University, Hokkaido, Japan
- Shinya Kuriki: Faculty of Health Sciences, Hokkaido University, Hokkaido, Japan
- Koichi Yokosawa: Faculty of Health Sciences, Hokkaido University, Hokkaido, Japan
79. Sensorimotor Representation of Speech Perception: Cross-Decoding of Place of Articulation Features during Selective Attention to Syllables in 7T fMRI. eNeuro 2018; 5:eN-NWR-0252-17. PMID: 29610768; PMCID: PMC5880028; DOI: 10.1523/eneuro.0252-17.2018.
Abstract
Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such as place of articulation. Selective attention can shape the processing and performance of speech perception tasks. Whether and where sensorimotor integration takes place during attentive speech perception remains to be explored. Here, we investigate articulatory feature representations of spoken consonant-vowel (CV) syllables during two distinct tasks. Fourteen healthy humans attended to either the vowel or the consonant within a syllable in separate delayed-match-to-sample tasks. Single-trial fMRI blood oxygenation level-dependent (BOLD) responses from perception periods were analyzed using multivariate pattern classification and a searchlight approach to reveal neural activation patterns sensitive to the processing of place of articulation (i.e., bilabial/labiodental vs. alveolar). To isolate place of articulation representation from acoustic covariation, we applied a cross-decoding (generalization) procedure across distinct features of manner of articulation (i.e., stop, fricative, and nasal). We found evidence for the representation of place of articulation across tasks and in both tasks separately: for attention to vowels, generalization maps included bilateral clusters of superior and posterior temporal, insular, and frontal regions; for attention to consonants, generalization maps encompassed clusters in temporoparietal, insular, and frontal regions within the right hemisphere only. Our results specify the cortical representation of place of articulation features generalized across manner of articulation during attentive syllable perception, thus supporting sensorimotor integration during attentive speech perception and demonstrating the value of generalization.
80. Borges AFT, Giraud AL, Mansvelder HD, Linkenkaer-Hansen K. Scale-Free Amplitude Modulation of Neuronal Oscillations Tracks Comprehension of Accelerated Speech. J Neurosci 2018; 38:710-722. PMID: 29217685; PMCID: PMC6596185; DOI: 10.1523/jneurosci.1515-17.2017.
Abstract
Speech comprehension is preserved up to a threefold acceleration, but deteriorates rapidly at higher speeds. Current models posit that perceptual resilience to accelerated speech is limited by the brain's ability to parse speech into syllabic units using δ/θ oscillations. Here, we investigated whether the involvement of neuronal oscillations in processing accelerated speech also relates to their scale-free amplitude modulation as indexed by the strength of long-range temporal correlations (LRTC). We recorded MEG while 24 human subjects (12 females) listened to radio news uttered at different comprehensible rates, at a mostly unintelligible rate and at this same speed interleaved with silence gaps. δ, θ, and low-γ oscillations followed the nonlinear variation of comprehension, with LRTC rising only at the highest speed. In contrast, increasing the rate was associated with a monotonic increase in LRTC in high-γ activity. When intelligibility was restored with the insertion of silence gaps, LRTC in the δ, θ, and low-γ oscillations resumed the low levels observed for intelligible speech. Remarkably, the lower the individual subject scaling exponents of δ/θ oscillations, the greater the comprehension of the fastest speech rate. Moreover, the strength of LRTC of the speech envelope decreased at the maximal rate, suggesting an inverse relationship with the LRTC of brain dynamics when comprehension halts. Our findings show that scale-free amplitude modulation of cortical oscillations and speech signals are tightly coupled to speech uptake capacity.
Significance Statement: One may read this statement in 20-30 s, but reading it in less than five leaves us clueless. Our minds limit how much information we grasp in an instant. Understanding the neural constraints on our capacity for sensory uptake is a fundamental question in neuroscience.
Here, MEG was used to investigate neuronal activity while subjects listened to radio news played faster and faster until becoming unintelligible. We found that speech comprehension is related to the scale-free dynamics of δ and θ bands, whereas this property in high-γ fluctuations mirrors speech rate. We propose that successful speech processing imposes constraints on the self-organization of synchronous cell assemblies and their scale-free dynamics adjusts to the temporal properties of spoken language.
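The LRTC scaling exponents above are conventionally estimated with detrended fluctuation analysis (DFA). The abstract does not spell out the estimator, so the following is only a minimal stand-alone sketch of DFA applied to a synthetic, uncorrelated series; all window sizes and parameters are illustrative assumptions, not the authors' pipeline.

```python
import math
import random

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def dfa_exponent(signal, window_sizes):
    """DFA scaling exponent of a 1-D series: ~0.5 for an uncorrelated
    signal; values above 0.5 indicate long-range temporal correlations
    (LRTC), e.g., in an oscillation's amplitude envelope."""
    mean = sum(signal) / len(signal)
    profile, total = [], 0.0
    for s in signal:                         # cumulative sum of demeaned series
        total += s - mean
        profile.append(total)
    log_n, log_f = [], []
    for n in window_sizes:
        sq_resid, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            window = profile[start:start + n]
            xs = list(range(n))
            slope, icept = linear_fit(xs, window)   # detrend each window
            for x, y in zip(xs, window):
                sq_resid += (y - (slope * x + icept)) ** 2
                count += 1
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq_resid / count)))  # fluctuation F(n)
    alpha, _ = linear_fit(log_n, log_f)      # slope of log F(n) vs log n
    return alpha

random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(8192)]
alpha = dfa_exponent(white, [16, 32, 64, 128, 256, 512])  # expect roughly 0.5
```

In a study like this one, the input would be the amplitude envelope of a band-filtered MEG signal rather than raw noise, and exponents reliably above 0.5 would index LRTC.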
Affiliation(s)
- Ana Filipa Teixeira Borges: Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands; Amsterdam Neuroscience, Amsterdam, Netherlands
- Anne-Lise Giraud: Department of Neuroscience, University of Geneva, Biotech Campus, Geneva 1211, Switzerland
- Huibert D Mansvelder: Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands; Amsterdam Neuroscience, Amsterdam, Netherlands
- Klaus Linkenkaer-Hansen: Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University Amsterdam, Amsterdam, Netherlands; Amsterdam Neuroscience, Amsterdam, Netherlands
81. Riecke L, Formisano E, Sorger B, Başkent D, Gaudrain E. Neural Entrainment to Speech Modulates Speech Intelligibility. Curr Biol 2017; 28:161-169.e5. PMID: 29290557; DOI: 10.1016/j.cub.2017.11.033.
Abstract
Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has so far prevented clarification of whether speech-brain entrainment contributes functionally to (i.e., causes) speech intelligibility or is merely an epiphenomenon of it. To address this long-standing issue, we experimentally manipulated speech-brain entrainment without concomitant acoustic and task-related variations, using a brain stimulation approach that enables modulating listeners' neural activity with transcranial currents carrying speech-envelope information. Results from two experiments involving a cocktail-party-like scenario and a listening situation devoid of aural speech-amplitude envelope input reveal consistent effects on listeners' speech-recognition performance, demonstrating a causal role of speech-brain entrainment in speech intelligibility. Our findings imply that speech-brain entrainment is critical for auditory speech comprehension and suggest that transcranial stimulation with speech-envelope-shaped currents can be utilized to modulate speech comprehension in impaired listening conditions.
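Speech-brain entrainment of the kind manipulated here is commonly quantified as consistent phase alignment between a band-limited neural signal and the speech envelope, for instance with a phase-locking value (PLV). The sketch below illustrates the PLV on synthetic phase series; the sampling rate, lag, and jitter are hypothetical and this is not the authors' analysis pipeline.

```python
import cmath
import math
import random

def phase_locking_value(phases_a, phases_b):
    """Modulus of the mean unit vector of phase differences:
    1 = perfectly consistent phase relation, ~0 = none."""
    total = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(total) / len(phases_a)

random.seed(2)
n = 2000
# Hypothetical 4 Hz speech-envelope phase sampled at 200 Hz
env_phase = [2 * math.pi * 4 * (k / 200.0) for k in range(n)]
# "Entrained" neural phase: tracks the envelope with a fixed lag plus jitter
entrained = [p + 0.8 + random.gauss(0.0, 0.3) for p in env_phase]
# "Non-entrained" neural phase: unrelated, uniformly random
unrelated = [random.uniform(0.0, 2 * math.pi) for _ in range(n)]

plv_entrained = phase_locking_value(entrained, env_phase)  # close to 1
plv_random = phase_locking_value(unrelated, env_phase)     # near 0
```

A fixed lag does not lower the PLV (only the preferred phase shifts), which is why the measure captures alignment rather than zero-lag identity.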
Affiliation(s)
- Lars Riecke: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Elia Formisano: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Bettina Sorger: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, 9700 RB Groningen, the Netherlands
- Etienne Gaudrain: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, 9700 RB Groningen, the Netherlands; CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, 69366 Lyon Cedex 07, France
82. Haegens S, Zion Golumbic E. Rhythmic facilitation of sensory processing: A critical review. Neurosci Biobehav Rev 2017; 86:150-165. PMID: 29223770; DOI: 10.1016/j.neubiorev.2017.12.002.
Abstract
Here we review the role of brain oscillations in sensory processing. We examine the idea that neural entrainment of intrinsic oscillations underlies the processing of rhythmic stimuli in the context of simple isochronous rhythms as well as in music and speech. This has been a topic of growing interest over recent years; however, many issues remain highly controversial: how do fluctuations of intrinsic neural oscillations-both spontaneous and entrained to external stimuli-affect perception, and does this occur automatically or can it be actively controlled by top-down factors? Some of the controversy in the literature stems from confounding use of terminology. Moreover, it is not straightforward how theories and findings regarding isochronous rhythms generalize to more complex, naturalistic stimuli, such as speech and music. Here we aim to clarify terminology, and distinguish between different phenomena that are often lumped together as reflecting "neural entrainment" but may actually vary in their mechanistic underpinnings. Furthermore, we discuss specific caveats and confounds related to making inferences about oscillatory mechanisms from human electrophysiological data.
Affiliation(s)
- Saskia Haegens: Department of Neurological Surgery, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA; Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
83. Räsänen O, Doyle G, Frank MC. Pre-linguistic segmentation of speech into syllable-like units. Cognition 2017; 171:130-150. PMID: 29156241; DOI: 10.1016/j.cognition.2017.11.003.
Abstract
Syllables are often considered to be central to infant and adult speech perception. Many theories and behavioral studies on early language acquisition are also based on syllable-level representations of spoken language. There is little clarity, however, on what sort of pre-linguistic "syllable" would actually be accessible to an infant with no phonological or lexical knowledge. Anchored by the notion that syllables are organized around particularly sonorous (audible) speech sounds, the present study investigates the feasibility of speech segmentation into syllable-like chunks without any a priori linguistic knowledge. We first operationalize sonority as a measurable property of the acoustic input, and then use sonority variation across time, or speech rhythm, as the basis for segmentation. The entire process from acoustic input to chunks of syllable-like acoustic segments is implemented as a computational model inspired by the oscillatory entrainment of the brain to speech rhythm. We analyze the output of the segmentation process in three different languages, showing that the sonority fluctuation in speech is highly informative of syllable and word boundaries in all three cases without any language-specific tuning of the model. These findings support the widely held assumption that syllable-like structure is accessible to infants even when they are only beginning to learn the properties of their native language.
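A heavily simplified stand-in for this rhythm-based chunking (not the authors' oscillatory-entrainment model) is to rectify and smooth the waveform into a sonority-like envelope and count upward threshold crossings as candidate syllable onsets. All rates, window lengths, and thresholds below are illustrative assumptions.

```python
import math
import random

def moving_average(xs, width):
    """Crude low-pass: boxcar smoothing of a rectified signal."""
    out, half = [], width // 2
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def count_onsets(envelope, threshold):
    """Upward threshold crossings = candidate syllable-like onsets."""
    onsets, above = 0, envelope[0] > threshold
    for value in envelope[1:]:
        if value > threshold and not above:
            onsets += 1
        above = value > threshold
    return onsets

random.seed(3)
fs = 1000                                    # Hz, hypothetical sampling rate
# Noise bursts amplitude-modulated at a 5 Hz "syllable rate" for 2 s
signal = []
for k in range(2 * fs):
    gate = max(0.0, math.sin(2 * math.pi * 5 * k / fs))
    signal.append(gate * random.uniform(-1.0, 1.0))

envelope = moving_average([abs(s) for s in signal], 50)   # ~50 ms smoothing
threshold = 0.5 * max(envelope)
n_onsets = count_onsets(envelope, threshold)              # ~10 bursts in 2 s
```

The paper's point is precisely that such sonority fluctuation, tracked without any linguistic knowledge, already carries much of the syllable-boundary information.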
Affiliation(s)
- Okko Räsänen: Department of Signal Processing and Acoustics, Aalto University, P.O. Box 12000, Aalto, Finland
- Gabriel Doyle: Department of Psychology, Stanford University, Stanford, CA 94305, United States
- Michael C Frank: Department of Psychology, Stanford University, Stanford, CA 94305, United States
84. θ-Band and β-Band Neural Activity Reflects Independent Syllable Tracking and Comprehension of Time-Compressed Speech. J Neurosci 2017; 37:7930-7938. PMID: 28729443; DOI: 10.1523/jneurosci.2882-16.2017.
Abstract
Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm.
Significance Statement: Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. Comprehension could hence drop when speech is accelerated because θ oscillations can no longer follow the syllabic rate.
Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process.
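Consistent phase patterns across trials of this kind are typically assessed with inter-trial phase coherence (ITC) at the syllabic frequency. The abstract does not name the estimator, so the sketch below is only a hedged illustration on synthetic single-trial phases.

```python
import cmath
import math
import random

def inter_trial_coherence(trial_phases):
    """ITC: length of the mean unit phase vector across trials.
    1 = identical phase on every trial, ~0 = random phase."""
    total = sum(cmath.exp(1j * p) for p in trial_phases)
    return abs(total) / len(trial_phases)

random.seed(4)
n_trials = 100
# Tracking condition: phase at the syllabic rate clusters across trials
tracking = [1.2 + random.gauss(0.0, 0.4) for _ in range(n_trials)]
# No-tracking condition: phase is arbitrary from trial to trial
no_tracking = [random.uniform(0.0, 2 * math.pi) for _ in range(n_trials)]

itc_tracking = inter_trial_coherence(tracking)    # high
itc_none = inter_trial_coherence(no_tracking)     # low
```

Because ITC is computed on phase alone, it can stay high even when θ power is insensitive to comprehension, matching the dissociation the study reports.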
85. Hancock R, Pugh KR, Hoeft F. Neural Noise Hypothesis of Developmental Dyslexia. Trends Cogn Sci 2017; 21:434-448. PMID: 28400089; PMCID: PMC5489551; DOI: 10.1016/j.tics.2017.03.008.
Abstract
Developmental dyslexia (decoding-based reading disorder; RD) is a complex trait with multifactorial origins at the genetic, neural, and cognitive levels. There is evidence that low-level sensory-processing deficits precede and underlie phonological problems, which are one of the best-documented aspects of RD. RD is also associated with impairments in integrating visual symbols with their corresponding speech sounds. Although causal relationships between sensory processing, print-speech integration, and fluent reading, and their neural bases are debated, these processes all require precise timing mechanisms across distributed brain networks. Neural excitability and neural noise are fundamental to these timing mechanisms. Here, we propose that neural noise stemming from increased neural excitability in cortical networks implicated in reading is one key distal contributor to RD.
Affiliation(s)
- Roeland Hancock: Department of Psychiatry and Weill Institute for Neurosciences, University of California, San Francisco (UCSF), 401 Parnassus Ave. Box-0984, San Francisco, CA 94143, USA; Science-based Innovation in Learning Center (SILC), 401 Parnassus Ave. Box-0984, San Francisco, CA 94143, USA
- Kenneth R Pugh: Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA; Department of Linguistics, Yale University, 370 Temple Street, New Haven, CT 06520, USA; Department of Radiology and Biomedical Imaging, Yale University, 330 Cedar Street, New Haven, CT 06520, USA; Department of Psychological Sciences, University of Connecticut, 406 Babbidge Road, Storrs, CT 06269, USA
- Fumiko Hoeft: Department of Psychiatry and Weill Institute for Neurosciences, University of California, San Francisco (UCSF), 401 Parnassus Ave. Box-0984, San Francisco, CA 94143, USA; Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA; Department of Neuropsychiatry, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo 160, Japan; Science-based Innovation in Learning Center (SILC), 401 Parnassus Ave. Box-0984, San Francisco, CA 94143, USA; Dyslexia Center, UCSF, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
86. An oscillopathic approach to developmental dyslexia: From genes to speech processing. Behav Brain Res 2017; 329:84-95. DOI: 10.1016/j.bbr.2017.03.048.
87. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations. Dev Cogn Neurosci 2017; 26:9-19. PMID: 28436834; PMCID: PMC6987829; DOI: 10.1016/j.dcn.2017.04.004.
Abstract
Highlights:
- Active acoustic experience (AEx) in infancy impacts cortical oscillations.
- AEx infants show left Theta- and Gamma-band activity to complex tone pairs.
- Passive and naïve infants yield less distinct, more bilateral responses.
Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4- to 7-months-of-age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
88. Xia Z, Hancock R, Hoeft F. Neurobiological bases of reading disorder Part I: Etiological investigations. Language and Linguistics Compass 2017; 11:e12239. PMID: 28785303; PMCID: PMC5543813; DOI: 10.1111/lnc3.12239.
Abstract
While many studies have focused on identifying the neural and behavioral characteristics of decoding-based reading disorder (RD, aka developmental dyslexia), the etiology of RD remains largely unknown and understudied. Because the brain plays an intermediate role between genetic factors and behavioral outcomes, it is promising to address causality from a neural perspective. In the current, Part I of the two-part review, we discuss neuroimaging approaches to addressing the causality issue and review the results of studies that have employed these approaches. We assume that if a neural signature were associated with RD etiology, it would (a) manifest across comparisons in different languages, (b) be experience independent and appear in comparisons between RD and reading-matched controls, (c) be present both pre- and post-intervention, (d) be found in at-risk, pre-reading children and (e) be associated with genetic risk. We discuss each of these five characteristics in turn and summarize the studies that have examined each of them. The available literature provides evidence that anomalies in left temporo-parietal cortex, and possibly occipito-temporal cortex, may be closely related to the etiology of RD. Improved understanding of the etiology of RD can help improve the accuracy of early detection and enable targeted intervention of cognitive processes that are amenable to change, leading to improved outcomes in at-risk or affected populations.
Affiliation(s)
- Zhichao Xia: Department of Psychiatry and Weill Institute for Neurosciences, University of California San Francisco, USA; State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, China
- Roeland Hancock: Department of Psychiatry and Weill Institute for Neurosciences, University of California San Francisco, USA
- Fumiko Hoeft: Department of Psychiatry and Weill Institute for Neurosciences, University of California San Francisco, USA; Haskins Laboratories, USA; Department of Neuropsychiatry, Keio University School of Medicine, Japan; Dyslexia Center, University of California San Francisco, USA
89. Kikuchi Y, Attaheri A, Wilson B, Rhone AE, Nourski KV, Gander PE, Kovach CK, Kawasaki H, Griffiths TD, Howard MA, Petkov CI. Sequence learning modulates neural responses and oscillatory coupling in human and monkey auditory cortex. PLoS Biol 2017; 15:e2000219. PMID: 28441393; PMCID: PMC5404755; DOI: 10.1371/journal.pbio.2000219.
Abstract
Learning complex ordering relationships between sensory events in a sequence is fundamental for animal perception and human communication. While it is known that rhythmic sensory events can entrain brain oscillations at different frequencies, how learning and prior experience with sequencing relationships affect neocortical oscillations and neuronal responses is poorly understood. We used an implicit sequence learning paradigm (an "artificial grammar") in which humans and monkeys were exposed to sequences of nonsense words with regularities in the ordering relationships between the words. We then recorded neural responses directly from the auditory cortex in both species in response to novel legal sequences or ones violating specific ordering relationships. Neural oscillations in both monkeys and humans in response to the nonsense word sequences show strikingly similar hierarchically nested low-frequency phase and high-gamma amplitude coupling, establishing this form of oscillatory coupling-previously associated with speech processing in the human auditory cortex-as an evolutionarily conserved biological process. Moreover, learned ordering relationships modulate the observed form of neural oscillatory coupling in both species, with temporally distinct neural oscillatory effects that appear to coordinate neuronal responses in the monkeys. This study identifies the conserved auditory cortical neural signatures involved in monitoring learned sequencing operations, evident as modulations of transient coupling and neuronal responses to temporally structured sensory input.
Affiliation(s)
- Yukiko Kikuchi: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, United Kingdom
- Adam Attaheri: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, United Kingdom
- Benjamin Wilson: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, United Kingdom
- Ariane E. Rhone: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Kirill V. Nourski: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Phillip E. Gander: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher K. Kovach: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Hiroto Kawasaki: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Timothy D. Griffiths: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
- Matthew A. Howard: Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher I. Petkov: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, United Kingdom
90. Zoefel B, Costa-Faidella J, Lakatos P, Schroeder CE, VanRullen R. Characterization of neural entrainment to speech with and without slow spectral energy fluctuations in laminar recordings in monkey A1. Neuroimage 2017; 150:344-357. PMID: 28188912; DOI: 10.1016/j.neuroimage.2017.02.014.
Abstract
Neural entrainment, the alignment between neural oscillations and rhythmic stimulation, is omnipresent in current theories of speech processing - nevertheless, the underlying neural mechanisms are still largely unknown. Here, we hypothesized that laminar recordings in non-human primates provide us with important insight into these mechanisms, in particular with respect to processing in cortical layers. We presented one monkey with human everyday speech sounds and recorded neural (as current-source density, CSD) oscillations in primary auditory cortex (A1). We observed that the high-excitability phase of neural oscillations was only aligned with those spectral components of speech the recording site was tuned to; the opposite, low-excitability phase was aligned with other spectral components. As low- and high-frequency components in speech alternate, this finding might reflect a particularly efficient way of stimulus processing that includes the preparation of the relevant neuronal populations to the upcoming input. Moreover, presenting speech/noise sounds without systematic fluctuations in amplitude and spectral content and their time-reversed versions, we found significant entrainment in all conditions and cortical layers. When compared with everyday speech, the entrainment in the speech/noise conditions was characterized by a change in the phase relation between neural signal and stimulus and the low-frequency neural phase was dominantly coupled to activity in a lower gamma-band. These results show that neural entrainment in response to speech without slow fluctuations in spectral energy includes a process with specific characteristics that is presumably preserved across species.
Affiliation(s)
- Benedikt Zoefel: Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France; Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Jordi Costa-Faidella: Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Institute of Neurosciences, University of Barcelona, Barcelona, Catalonia 08035, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Catalonia 08035, Spain
- Peter Lakatos: Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Department of Psychiatry, New York University School of Medicine, New York, NY, United States
- Charles E Schroeder: Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Departments of Neurosurgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY, United States
- Rufin VanRullen: Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France
91
Discriminating Valid from Spurious Indices of Phase-Amplitude Coupling. eNeuro 2017; 3:eN-OPN-0334-16. PMID: 28101528; PMCID: PMC5237829; DOI: 10.1523/eneuro.0334-16.2016
Abstract
Recently, there has been strong interest in cross-frequency coupling, the interaction between neuronal oscillations in different frequency bands. In particular, measures quantifying the coupling between the phase of slow oscillations and the amplitude of fast oscillations have been applied to a wide range of data recorded from animals and humans. Some of the measures applied to detect phase-amplitude coupling have been criticized for being sensitive to nonsinusoidal properties of the oscillations and can thus spuriously indicate the presence of coupling. While such instances of spurious identification of coupling have been observed, in this commentary we give concrete examples illustrating cases in which the identification of cross-frequency coupling can be trusted. These examples are based on control analyses and empirical observations rather than signal-processing tools. Finally, we provide concrete advice on how to determine when measures of phase-amplitude coupling can be considered trustworthy.
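For orientation, the kind of phase-amplitude coupling measure discussed here can be sketched in a few lines. The following is a minimal illustration using the amplitude-normalized mean-vector-length estimator on synthetic signals; the function names, frequency bands, and simulated data are our own choices for the sketch, not the commentary's analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Amplitude-normalized mean vector length:
    |mean(A_fast * exp(i * phi_slow))| / mean(A_fast)."""
    phi = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phi))) / np.mean(amp)

# Synthetic data: in `coupled`, the 6 Hz phase modulates 60 Hz amplitude;
# in `uncoupled`, both rhythms are present but independent.
fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
slow = np.sin(2 * np.pi * 6 * t)
fast = np.sin(2 * np.pi * 60 * t)
noise = 0.1 * rng.standard_normal(t.size)
coupled = slow + (1 + 0.8 * slow) * fast + noise
uncoupled = slow + fast + noise

print(pac_mvl(coupled, fs, (4, 8), (50, 70)))    # clearly above zero
print(pac_mvl(uncoupled, fs, (4, 8), (50, 70)))  # near zero
```

A surrogate control of the kind the commentary recommends would repeat the second computation on time-shifted or trial-shuffled data to build a null distribution, rather than relying on the raw coupling value.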
92
De Vos A, Vanvooren S, Vanderauwera J, Ghesquière P, Wouters J. Atypical neural synchronization to speech envelope modulations in dyslexia. Brain Lang 2017; 164:106-117. PMID: 27833037; DOI: 10.1016/j.bandl.2016.10.002
Abstract
A fundamental deficit in the synchronization of neural oscillations to temporal information in speech could underlie phonological processing problems in dyslexia. In this study, the hypothesis of a neural synchronization impairment is investigated more specifically as a function of different neural oscillatory bands and temporal information rates in speech. Auditory steady-state responses to 4, 10, 20, and 40 Hz modulations were recorded in normal-reading and dyslexic adolescents to measure neural synchronization of theta, alpha, beta, and low-gamma oscillations to syllabic- and phonemic-rate information. In comparison to normal readers, dyslexic readers showed reduced non-synchronized theta activity, reduced synchronized alpha activity, and enhanced synchronized beta activity. Positive correlations between alpha synchronization and phonological skills were found in normal readers but were absent in dyslexic readers. In contrast, dyslexic readers exhibited positive correlations between beta synchronization and phonological skills. Together, these results suggest that auditory neural synchronization of alpha and beta oscillations is atypical in dyslexia, indicating deviant neural processing of both syllabic- and phonemic-rate information. Impaired synchronization of alpha oscillations in particular emerged as the most prominent neural anomaly, possibly hampering speech and phonological processing in dyslexic readers.
Affiliation(s)
- Astrid De Vos
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Herestraat 49 Box 721, 3000 Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leopold Vanderkelenstraat 32 Box 3765, 3000 Leuven, Belgium
- Sophie Vanvooren
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Herestraat 49 Box 721, 3000 Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leopold Vanderkelenstraat 32 Box 3765, 3000 Leuven, Belgium
- Jolijn Vanderauwera
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Herestraat 49 Box 721, 3000 Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leopold Vanderkelenstraat 32 Box 3765, 3000 Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leopold Vanderkelenstraat 32 Box 3765, 3000 Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, KU Leuven - University of Leuven, Herestraat 49 Box 721, 3000 Leuven, Belgium
93
Dipoppa M, Szwed M, Gutkin BS. Controlling Working Memory Operations by Selective Gating: The Roles of Oscillations and Synchrony. Adv Cogn Psychol 2016; 12:209-232. PMID: 28154616; PMCID: PMC5280056; DOI: 10.5709/acp-0199-x
Abstract
Working memory (WM) is a primary cognitive function that corresponds to the ability to update, stably maintain, and manipulate short-term memory (STM) rapidly to perform ongoing cognitive tasks. A prevalent neural substrate of WM coding is persistent neural activity, the property of neurons to remain active after having been activated by a transient sensory stimulus. This persistent activity allows for online maintenance of memory as well as its active manipulation necessary for task performance. WM is tightly capacity-limited. Therefore, selective gating of sensory and internally generated information is crucial for WM function. While the exact neural substrate of selective gating remains unclear, increasing evidence suggests that it might be controlled by modulating ongoing oscillatory brain activity. Here, we review experiments and models that have linked selective gating, persistent activity, and brain oscillations, putting them in the more general mechanistic context of WM. We do so by defining several operations necessary for successful WM function and then discussing how such operations may be carried out by mechanisms suggested by computational models. We specifically show how oscillatory mechanisms may provide a rapid and flexible active gating mechanism for WM operations.
Affiliation(s)
- Mario Dipoppa
- Institute of Neurology, Faculty of Brain Sciences, University College London, UK
- Marcin Szwed
- Department of Psychology, Jagiellonian University, Kraków, Poland
- Boris S. Gutkin
- Center for Cognition and Decision Making, NRU HSE, Moscow, Russia
94
Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks. Neuroimage 2016; 147:32-42. PMID: 27903440; PMCID: PMC5315055; DOI: 10.1016/j.neuroimage.2016.11.062
Abstract
The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta-band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.
Highlights:
- We study auditory cortical speech entrainment from a network perspective.
- We found three distinct networks interacting with delta-entrainment in auditory cortex.
- Entrainment is modulated by frontal beta power, possibly indexing predictions.
- Central alpha power interacts with entrainment, suggesting motor involvement.
- Parietal theta is modulated by entrainment, suggesting working memory compensation.
95
Lee B, Cho KH. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference. Sci Rep 2016; 6:37647. PMID: 27876875; PMCID: PMC5120313; DOI: 10.1038/srep37647
Abstract
Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle the quasi-regular structure of speech and maintain high recognition performance under such circumstances? Recent neurophysiological studies have suggested that the phase of neuronal oscillations in the auditory cortex contributes to accurate speech recognition by guiding speech segmentation into smaller units at different timescales. A phase-locked relationship between neuronal oscillations and the speech envelope has recently been reported, which suggests that the speech envelope provides a foundation for multi-timescale speech segmental information. In this study, we quantitatively investigated the role of the speech envelope as a potential temporal reference for segmenting speech using its instantaneous phase information. We evaluated the proposed approach by the achieved information gain and recognition performance in various noisy environments. The results indicate that the proposed segmentation scheme not only extracts more information from speech but also provides greater robustness in a recognition test.
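The core idea, cutting frames at cycle boundaries of the envelope's instantaneous phase rather than at fixed intervals, can be sketched roughly as follows. This toy version is our own reconstruction under stated assumptions (Hilbert-transform envelope, a 10 Hz low-pass, and phase-wrap segment boundaries), not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_phase_segments(x, fs, env_cut=10.0):
    """Segment a signal at cycle boundaries of its low-pass envelope phase.
    Returns (start, end) sample indices of variable-length frames."""
    env = np.abs(hilbert(x))                  # amplitude envelope
    b, a = butter(2, env_cut / (fs / 2))      # keep only slow envelope fluctuations
    env_slow = filtfilt(b, a, env)
    phi = np.angle(hilbert(env_slow - env_slow.mean()))  # instantaneous phase
    # A new segment starts wherever the phase wraps from +pi back to -pi.
    bounds = np.flatnonzero(np.diff(phi) < -np.pi)
    edges = np.concatenate(([0], bounds + 1, [len(x)]))
    return list(zip(edges[:-1], edges[1:]))

# Toy "speech-like" signal: a 200 Hz carrier whose amplitude pulses at ~4 Hz.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
segs = envelope_phase_segments(x, fs)
print(len(segs))  # roughly one frame per 4 Hz envelope cycle
```

In a recognition pipeline, each variable-length `(start, end)` frame would then feed the usual feature extraction, in place of fixed-size windows.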
Affiliation(s)
- Byeongwook Lee
- Laboratory for Systems Biology and Bio-inspired Engineering, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
- Kwang-Hyun Cho
- Laboratory for Systems Biology and Bio-inspired Engineering, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
96
Kösem A, Basirat A, Azizi L, van Wassenhove V. High-frequency neural activity predicts word parsing in ambiguous speech streams. J Neurophysiol 2016; 116:2497-2512. PMID: 27605528; DOI: 10.1152/jn.00074.2016
Abstract
During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity was specifically informative about an individual's conscious speech percept.
Affiliation(s)
- Anne Kösem
- Cognitive Neuroimaging Unit, CEA DRF/I2BM, Institut National de la Santé et de la Recherche Médicale, Université Paris-Sud, Université Paris-Saclay, Gif/Yvette, France; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Anahita Basirat
- Cognitive Neuroimaging Unit, CEA DRF/I2BM, Institut National de la Santé et de la Recherche Médicale, Université Paris-Sud, Université Paris-Saclay, Gif/Yvette, France; SCALab, Centre National de la Recherche Scientifique UMR 9193, Université Lille, Lille, France
- Leila Azizi
- Cognitive Neuroimaging Unit, CEA DRF/I2BM, Institut National de la Santé et de la Recherche Médicale, Université Paris-Sud, Université Paris-Saclay, Gif/Yvette, France
- Virginie van Wassenhove
- Cognitive Neuroimaging Unit, CEA DRF/I2BM, Institut National de la Santé et de la Recherche Médicale, Université Paris-Sud, Université Paris-Saclay, Gif/Yvette, France
97
Benito N, Martín-Vázquez G, Makarova J, Makarov VA, Herreras O. The right hippocampus leads the bilateral integration of gamma-parsed lateralized information. eLife 2016; 5. PMID: 27599221; PMCID: PMC5050016; DOI: 10.7554/elife.16658
Abstract
It is unclear whether the two hippocampal lobes convey similar or different activities and how they cooperate. Spatial discrimination of electric fields in anesthetized rats allowed us to compare the pathway-specific field potentials corresponding to the gamma-paced CA3 output (CA1 Schaffer potentials) and CA3 somatic inhibition within and between sides. Bilateral excitatory Schaffer gamma waves are generally larger and lead from the right hemisphere with only moderate covariation of amplitude, and drive CA1 pyramidal units more strongly than unilateral waves. CA3 waves lock to the ipsilateral Schaffer potentials, although bilateral coherence was weak. Notably, Schaffer activity may run laterally, as seen after the disruption of the connecting pathways. Thus, asymmetric operations promote the entrainment of CA3-autonomous gamma oscillators bilaterally, synchronizing lateralized gamma strings to converge optimally on CA1 targets. The findings support the view that interhippocampal connections integrate different aspects of information that flow through the left and right lobes. DOI: http://dx.doi.org/10.7554/eLife.16658.001
In humans and other backboned animals, the brain is divided into the left and right hemispheres, which are connected by several large bundles of nerve fibers. Thanks to these fiber tracts, sensory information from each side of the body can reach both sides of the brain. However, although many areas of the brain work with a counterpart on the opposite hemisphere to process this sensory information, they do not necessarily perform the same tasks, or perform them at the same time as their partner. The hippocampus is a brain region that helps to support navigation, to detect novelty, and to produce memories. In fact, our brains contain two hippocampi, one in each hemisphere. Previous studies of the hippocampus have tended to record from only one side of the brain. Benito, Martín-Vázquez, Makarova et al. now compare the activity of the left and right hippocampi, and consider how the two structures might work together. Recordings of the electrical activity of the hippocampi of anesthetized rats show that different groups of neurons fire in rhythmic sequence, forming waves called gamma waves. Successive waves have different amplitudes, and can be thought of as forming ‘strings’. The recordings made by Benito et al. show that the two hippocampi produce parallel strings of waves, although the waves that originate in the right hemisphere are generally larger than those that originate in the left. Right-hemisphere waves also tend to begin slightly earlier than their left-hemisphere counterparts. Further experiments revealed that disrupting the fiber tracts between the hemispheres uncouples the waves, which then no longer occur at the same time, and the strings of waves may remain confined to one side of the brain. In healthy animals, however, the right-hemisphere dominance acts as a master-slave arrangement, making the waves from the two hemispheres pair up and merge in the neurons that receive them both. Thus the information running in both hippocampi can be integrated or compared before being sent to the cortex for task execution or storage. Overall, the findings reported by Benito et al. suggest that different types of information flow through the left and right hemispheres, and that the brain integrates these two streams using asymmetric connections. The next challenge is to identify how the information in the two streams differs: whether each stream reflects different sensory stimuli, different features of a scene, or the difference between recalled and perceived information. DOI: http://dx.doi.org/10.7554/eLife.16658.002
Affiliation(s)
- Nuria Benito
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain
- Julia Makarova
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain; N.I. Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Valeri A Makarov
- N.I. Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Department of Applied Mathematics, Faculty of Mathematics, Universidad Complutense de Madrid, Madrid, Spain
- Oscar Herreras
- Department of Translational Neuroscience, Cajal Institute - CSIC, Madrid, Spain
98
Hyafil A, Giraud AL, Fontolan L, Gutkin B. Neural Cross-Frequency Coupling: Connecting Architectures, Mechanisms, and Functions. Trends Neurosci 2016; 38:725-740. PMID: 26549886; DOI: 10.1016/j.tins.2015.09.001
Abstract
Neural oscillations are ubiquitously observed in the mammalian brain, but it has proven difficult to tie oscillatory patterns to specific cognitive operations. Notably, the coupling between neural oscillations at different timescales has recently received much attention, both from experimentalists and theoreticians. We review the mechanisms underlying various forms of this cross-frequency coupling. We show that different types of neural oscillators and cross-frequency interactions yield distinct signatures in neural dynamics. Finally, we associate these mechanisms with several putative functions of cross-frequency coupling, including neural representations of multiple environmental items, communication over distant areas, internal clocking of neural processes, and modulation of neural processing based on temporal predictions.
Affiliation(s)
- Alexandre Hyafil
- Universitat Pompeu Fabra, Theoretical and Computational Neuroscience, Roc Boronat 138, 08018 Barcelona, Spain; Research Unit, Parc Sanitari Sant Joan de Déu and Universitat de Barcelona, Esplugues de Llobregat, Barcelona, Spain
- Anne-Lise Giraud
- Department of Neuroscience, University of Geneva, Campus Biotech, 9 chemin des Mines, 1211 Geneva, Switzerland
- Lorenzo Fontolan
- Department of Neuroscience, University of Geneva, Campus Biotech, 9 chemin des Mines, 1211 Geneva, Switzerland
- Boris Gutkin
- Group for Neural Theory, Institut National de la Santé et de la Recherche Médicale (INSERM) Unité 960, Département d'Etudes Cognitives, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France; Centre for Cognition and Decision Making, National Research University Higher School of Economics, Myasnitskaya Street 20, Moscow 101000, Russia
99
Gips B, van der Eerden JPJM, Jensen O. A biologically plausible mechanism for neuronal coding organized by the phase of alpha oscillations. Eur J Neurosci 2016; 44:2147-2161. PMID: 27320148; PMCID: PMC5129495; DOI: 10.1111/ejn.13318
Abstract
The visual system receives a wealth of sensory information of which only little is relevant for behaviour. We present a mechanism in which alpha oscillations serve to prioritize different components of visual information. By way of simulated neuronal networks, we show that inhibitory modulation in the alpha range (~ 10 Hz) can serve to temporally segment the visual information to prevent information overload. Coupled excitatory and inhibitory neurons generate a gamma rhythm in which information is segmented and sorted according to excitability in each alpha cycle. Further details are coded by distributed neuronal firing patterns within each gamma cycle. The network model produces coupling between alpha phase and gamma (40–100 Hz) amplitude in the simulated local field potential similar to that observed experimentally in human and animal recordings.
Affiliation(s)
- Bart Gips
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
- Jan P J M van der Eerden
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
- Ole Jensen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
100
Voloh B, Womelsdorf T. A Role of Phase-Resetting in Coordinating Large Scale Neural Networks During Attention and Goal-Directed Behavior. Front Syst Neurosci 2016; 10:18. PMID: 27013986; PMCID: PMC4782140; DOI: 10.3389/fnsys.2016.00018
Abstract
Short periods of oscillatory activation are ubiquitous signatures of neural circuits. A broad range of studies documents not only their circuit origins, but also a fundamental role for oscillatory activity in coordinating information transfer during goal directed behavior. Recent studies suggest that resetting the phase of ongoing oscillatory activity to endogenous or exogenous cues facilitates coordinated information transfer within circuits and between distributed brain areas. Here, we review evidence that pinpoints phase resetting as a critical marker of dynamic state changes of functional networks. Phase resets: (1) set a "neural context" in terms of narrow band frequencies that uniquely characterizes the activated circuits; (2) impose coherent low frequency phases to which high frequency activations can synchronize, identifiable as cross-frequency correlations across large anatomical distances; (3) are critical for neural coding models that depend on phase, increasing the informational content of neural representations; and (4) likely originate from the dynamics of canonical E-I circuits that are anatomically ubiquitous. These multiple signatures of phase resets are directly linked to enhanced information transfer and behavioral success. We survey how phase resets re-organize oscillations in diverse task contexts, including sensory perception, attentional stimulus selection, cross-modal integration, Pavlovian conditioning, and spatial navigation. The evidence we consider suggests that phase-resets can drive changes in neural excitability, ensemble organization, functional networks, and ultimately, overt behavior.
Affiliation(s)
- Benjamin Voloh
- Department of Biology, Centre for Vision Research, York University, Toronto, ON, Canada
- Thilo Womelsdorf
- Department of Biology, Centre for Vision Research, York University, Toronto, ON, Canada