601
Calderone DJ, Lakatos P, Butler PD, Castellanos FX. Entrainment of neural oscillations as a modifiable substrate of attention. Trends Cogn Sci 2014; 18:300-9. [PMID: 24630166 DOI: 10.1016/j.tics.2014.02.005]
Abstract
Brain operation is profoundly rhythmic. Oscillations of neural excitability shape sensory, motor, and cognitive processes. Intrinsic oscillations also entrain to external rhythms, allowing the brain to optimize the processing of predictable events such as speech. Moreover, selective attention to a particular rhythm in a complex environment entails entrainment of neural oscillations to its temporal structure. Entrainment appears to form one of the core mechanisms of selective attention, which is likely to be relevant to certain psychiatric disorders. Deficient entrainment has been found in schizophrenia and dyslexia and mounting evidence also suggests that it may be abnormal in attention-deficit/hyperactivity disorder (ADHD). Accordingly, we suggest that studying entrainment in selective-attention paradigms is likely to reveal mechanisms underlying deficits across multiple disorders.
Affiliation(s)
- Daniel J Calderone
- Department of Child and Adolescent Psychiatry, NYU Langone School of Medicine, New York, NY, USA.
- Peter Lakatos
- Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Pamela D Butler
- Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, NYU Langone School of Medicine, New York, NY, USA; Department of Psychology, City University of New York, New York, NY, USA
- F Xavier Castellanos
- Department of Child and Adolescent Psychiatry, NYU Langone School of Medicine, New York, NY, USA; Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
602
Engel AK, Gerloff C, Hilgetag CC, Nolte G. Intrinsic coupling modes: multiscale interactions in ongoing brain activity. Neuron 2013; 80:867-86. [PMID: 24267648 DOI: 10.1016/j.neuron.2013.09.038]
Abstract
Intrinsic coupling constitutes a key feature of ongoing brain activity, which exhibits rich spatiotemporal patterning and contains information that influences cognitive processing. We discuss evidence for two distinct types of intrinsic coupling modes which seem to reflect the operation of different coupling mechanisms. One type arises from phase coupling of band-limited oscillatory signals, whereas the other results from coupled aperiodic fluctuations of signal envelopes. The two coupling modes differ in their dynamics, their origins, and their putative functions and with respect to their alteration in neuropsychiatric disorders. We propose that the concept of intrinsic coupling modes can provide a unifying framework for capturing the dynamics of intrinsically generated neuronal interactions at multiple spatial and temporal scales.
Affiliation(s)
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany.
603
Bendixen A, Scharinger M, Strauß A, Obleser J. Prediction in the service of comprehension: modulated early brain responses to omitted speech segments. Cortex 2014; 53:9-26. [PMID: 24561233 DOI: 10.1016/j.cortex.2014.01.001]
Abstract
Speech signals are often compromised by disruptions originating from external (e.g., masking noise) or internal (e.g., inaccurate articulation) sources. Speech comprehension thus entails detecting and replacing missing information based on predictive and restorative neural mechanisms. The present study targets predictive mechanisms by investigating the influence of a speech segment's predictability on early, modality-specific electrophysiological responses to this segment's omission. Predictability was manipulated in simple physical terms in a single-word framework (Experiment 1) or in more complex semantic terms in a sentence framework (Experiment 2). In both experiments, final consonants of the German words Lachs ([laks], salmon) or Latz ([lats], bib) were occasionally omitted, resulting in the syllable La ([la], no semantic meaning), while brain responses were measured with multi-channel electroencephalography (EEG). In both experiments, the occasional presentation of the fragment La elicited a larger omission response when the final speech segment had been predictable. The omission response occurred ∼125-165 msec after the expected onset of the final segment and showed characteristics of the omission mismatch negativity (MMN), with generators in auditory cortical areas. Suggestive of a general auditory predictive mechanism at work, this main observation was robust against varying source of predictive information or attentional allocation, differing between the two experiments. Source localization further suggested the omission response enhancement by predictability to emerge from left superior temporal gyrus and left angular gyrus in both experiments, with additional experiment-specific contributions. These results are consistent with the existence of predictive coding mechanisms in the central auditory system, and suggestive of the general predictive properties of the auditory system to support spoken word recognition.
Affiliation(s)
- Alexandra Bendixen
- Institute of Psychology, University of Leipzig, Leipzig, Germany; Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany.
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Antje Strauß
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
604
Tillmann B, Poulin-Charronnat B, Bigand E. The role of expectation in music: from the score to emotions and the brain. Wiley Interdiscip Rev Cogn Sci 2014; 5:105-113. [PMID: 26304299 DOI: 10.1002/wcs.1262]
Abstract
Like discourse, music is a dynamic process that occurs over time. Listeners usually expect some events or structures of events to occur in the prolongation of a given context. Part of the musical emotional experience would depend upon how composers (improvisers) fulfill these expectancies. Musical expectations are a core phenomenon of music cognition, and the present article provides an overview of their foundation in the score as well as in listeners' behavior and brain, and how they can be simulated by artificial neural networks. We highlight parallels to language processing and include the attentional and emotional dimensions of musical expectations. Studying musical expectations is thus valuable not only for our understanding of music perception and production but also for more general brain functioning. Some open and challenging issues are summarized in this article.
Affiliation(s)
- B Tillmann
- Lyon Neuroscience Research Center, CNRS-UMR 5292, INSERM U1028, University Lyon 1, Lyon, France
- E Bigand
- Université de Bourgogne, LEAD-CNRS 5022, Dijon, France; Institut Universitaire de France, France
605
Schwartze M, Kotz SA. A dual-pathway neural architecture for specific temporal prediction. Neurosci Biobehav Rev 2013; 37:2587-96. [DOI: 10.1016/j.neubiorev.2013.08.005]
606
The Mechanisms and Meaning of the Mismatch Negativity. Brain Topogr 2013; 27:500-26. [DOI: 10.1007/s10548-013-0337-3]
607
Tavano A, Widmann A, Bendixen A, Trujillo-Barreto N, Schröger E. Temporal regularity facilitates higher-order sensory predictions in fast auditory sequences. Eur J Neurosci 2013; 39:308-18. [DOI: 10.1111/ejn.12404]
Affiliation(s)
- Alessandro Tavano
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
- Andreas Widmann
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
- Alexandra Bendixen
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
- Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Erich Schröger
- Institute of Psychology, University of Leipzig, 04109 Leipzig, Germany
608
Temporal expectation and spectral expectation operate in distinct fashion on neuronal populations. Neuropsychologia 2013; 51:2548-55. [DOI: 10.1016/j.neuropsychologia.2013.09.018]
609
Grimsley CA, Sanchez JT, Sivaramakrishnan S. Midbrain local circuits shape sound intensity codes. Front Neural Circuits 2013; 7:174. [PMID: 24198763 PMCID: PMC3812908 DOI: 10.3389/fncir.2013.00174]
Abstract
Hierarchical processing of sensory information requires interaction at multiple levels along the peripheral to central pathway. Recent evidence suggests that interaction between driving and modulating components can shape both top down and bottom up processing of sensory information. Here we show that a component inherited from extrinsic sources combines with local components to code sound intensity. By applying high concentrations of divalent cations to neurons in the nucleus of the inferior colliculus in the auditory midbrain, we show that as sound intensity increases, the source of synaptic efficacy changes from inherited inputs to local circuits. In neurons with a wide dynamic range response to intensity, inherited inputs increase firing rates at low sound intensities but saturate at mid-to-high intensities. Local circuits activate at high sound intensities and widen dynamic range by continuously increasing their output gain with intensity. Inherited inputs are necessary and sufficient to evoke tuned responses, however local circuits change peak output. Push–pull driving inhibition and excitation create net excitatory drive to intensity-variant neurons and tune neurons to intensity. Our results reveal that dynamic range and tuning re-emerge in the auditory midbrain through local circuits that are themselves variable or tuned.
Affiliation(s)
- Calum Alex Grimsley
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, OH, USA
610
Tan HRM, Lana L, Uhlhaas PJ. High-frequency neural oscillations and visual processing deficits in schizophrenia. Front Psychol 2013; 4:621. [PMID: 24130535 PMCID: PMC3793130 DOI: 10.3389/fpsyg.2013.00621]
Abstract
Visual information is fundamental to how we understand our environment, make predictions, and interact with others. Recent research has underscored the importance of visuo-perceptual dysfunctions for cognitive deficits and pathophysiological processes in schizophrenia. In the current paper, we review evidence for the relevance of high frequency (beta/gamma) oscillations towards visuo-perceptual dysfunctions in schizophrenia. In the first part of the paper, we examine the relationship between beta/gamma band oscillations and visual processing during normal brain functioning. We then summarize EEG/MEG-studies which demonstrate reduced amplitude and synchrony of high-frequency activity during visual stimulation in schizophrenia. In the final part of the paper, we identify neurobiological correlates as well as offer perspectives for future research to stimulate further inquiry into the role of high-frequency oscillations in visual processing impairments in the disorder.
Affiliation(s)
- Heng-Ru May Tan
- Institute of Neuroscience and Psychology, College of Science and Engineering and College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
611
Baluška F, Mancuso S. Root apex transition zone as oscillatory zone. Front Plant Sci 2013; 4:354. [PMID: 24106493 PMCID: PMC3788588 DOI: 10.3389/fpls.2013.00354]
Abstract
The root apex of higher plants shows very high sensitivity to environmental stimuli. The root cap acts as the most prominent plant sensory organ, sensing diverse physical parameters such as gravity, light, humidity, oxygen, and critical inorganic nutrients. However, the motoric responses to these stimuli are accomplished in the elongation region. This spatial discrepancy was resolved when we discovered and characterized the transition zone, which is interpolated between the apical meristem and the subapical elongation zone. Cells of this zone are very active in cytoskeletal rearrangements, endocytosis and endocytic vesicle recycling, as well as in electric activities. Here we discuss the oscillatory nature of the transition zone which, together with several other features of this zone, suggests that it acts as some kind of command center. In accordance with the early proposal of Charles and Francis Darwin, cells of this root zone receive sensory information from the root cap and instruct the motoric responses of cells in the elongation zone.
Affiliation(s)
- František Baluška
- Institute of Cellular and Molecular Botany, Department of Plant Cell Biology, University of Bonn, Bonn, Germany
- Stefano Mancuso
- LINV – DiSPAA, Department of Agri-Food and Environmental Science, University of Florence, Sesto Fiorentino, Italy
612
Sedley W, Cunningham MO. Do cortical gamma oscillations promote or suppress perception? An under-asked question with an over-assumed answer. Front Hum Neurosci 2013; 7:595. [PMID: 24065913 PMCID: PMC3778316 DOI: 10.3389/fnhum.2013.00595]
Abstract
Cortical gamma oscillations occur alongside perceptual processes, and in proportion to perceptual salience. They have a number of properties that make them ideal candidates to explain perception, including incorporating synchronized discharges of neural assemblies, and their emergence over a fast timescale consistent with that of perception. These observations have led to widespread assumptions that gamma oscillations' role is to cause or facilitate conscious perception (i.e., a "positive" role). While the majority of the human literature on gamma oscillations is consistent with this interpretation, many or most of these studies could equally be interpreted as showing a suppressive or inhibitory (i.e., "negative") role. For example, presenting a stimulus and recording a response of increased gamma oscillations would only suggest a role for gamma oscillations in the representation of that stimulus, and would not specify what that role was; if gamma oscillations were inhibitory, then they would become selectively activated in response to the stimulus they acted to inhibit. In this review, we consider two classes of gamma oscillations: "broadband" and "narrowband," which have very different properties (and likely roles). We first discuss studies on gamma oscillations that are non-discriminatory with respect to the role of gamma oscillations, followed by studies that specifically support a positive or negative role. These include work on perception in healthy individuals, and in the pathological contexts of phantom perception and epilepsy. Reference is based as much as possible on magnetoencephalography (MEG) and electroencephalography (EEG) studies, but we also consider evidence from invasive recordings in humans and other animals. Attempts are made to reconcile findings within a common framework. We conclude with a summary of the pertinent questions that remain unanswered, and suggest how future studies might address these.
Affiliation(s)
- William Sedley
- Institute of Neuroscience, Faculty of Medical Sciences, Newcastle University Medical School, Newcastle upon Tyne, UK
613
Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing. PLoS One 2013; 8:e72024. [PMID: 23991030 PMCID: PMC3750026 DOI: 10.1371/journal.pone.0072024]
Abstract
Background: Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor μ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials).
Methods: Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-forced-choice discrimination task while the electroencephalograph (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) at which discrimination accuracy was high (i.e., 80–100%) and at low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB.
Results: ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, that were identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR<.05) suppression in the traditional beta frequency range (13–30 Hz) prior to, during, and following syllable discrimination trials. No significant differences from baseline were found for passive tasks. Tone conditions produced right µ beta suppression following stimulus onset only. For the left µ, significant differences in the magnitude of beta suppression were found for correct speech discrimination trials relative to chance trials following stimulus offset.
Conclusions: Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active as opposed to passive processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
614
Morillon B, Barbot A. Attention in the temporal domain: a phase-coding mechanism controls the gain of sensory processing. Front Hum Neurosci 2013; 7:480. [PMID: 23966934 PMCID: PMC3746179 DOI: 10.3389/fnhum.2013.00480]
Affiliation(s)
- Benjamin Morillon
- Department of Psychiatry, Columbia University Medical Center, New York, NY, USA
615
Weigmann K. Our sense of self. Phenomenology is a philosophical discipline that gives a detailed description of selfhood; it can contribute to understanding psychiatric diseases such as schizophrenia and its neurological causes. EMBO Rep 2013; 14:765-8. [PMID: 23938331 DOI: 10.1038/embor.2013.124]
616
Hearing silences: human auditory processing relies on preactivation of sound-specific brain activity patterns. J Neurosci 2013; 33:8633-9. [PMID: 23678108 DOI: 10.1523/jneurosci.5821-12.2013]
Abstract
The remarkable capabilities displayed by humans in making sense of an overwhelming amount of sensory information cannot be explained easily if perception is viewed as a passive process. Current theoretical and computational models assume that to achieve meaningful and coherent perception, the human brain must anticipate upcoming stimulation. But how are upcoming stimuli predicted in the brain? We unmasked the neural representation of a prediction by omitting the predicted sensory input. Electrophysiological brain signals showed that when a clear prediction can be formulated, the brain activates a template of its response to the predicted stimulus before it arrives at our senses.
617
SanMiguel I, Saupe K, Schröger E. I know what is missing here: electrophysiological prediction error signals elicited by omissions of predicted "what" but not "when". Front Hum Neurosci 2013; 7:407. [PMID: 23908618 PMCID: PMC3725431 DOI: 10.3389/fnhum.2013.00407]
Abstract
In the present study we investigated the neural code of sensory predictions. Grounded on a variety of empirical findings, we set out from the proposal that sensory predictions are coded via the top-down modulation of the sensory units whose response properties match the specific characteristics of the predicted stimulus (Albright, 2012; Arnal and Giraud, 2012). From this proposal, we derive the hypothesis that when the specific physical characteristics of the predicted stimulus cannot be advanced, the sensory system should not be able to formulate such predictions, as it would lack the means to represent them. In different conditions, participants' self-paced button presses predicted either only the precise time when a random sound would be presented (random sound condition) or both the timing and the identity of the sound (single sound condition). To isolate prediction-related activity, we inspected the event-related potential (ERP) elicited by rare omissions of the sounds following the button press (see SanMiguel et al., 2013). As expected, in the single sound condition, omissions elicited a complex response in the ERP, reflecting the presence of sound prediction and the violation of this prediction. In contrast, in the random sound condition, sound omissions were not followed by any significant responses in the ERP. These results confirmed our hypothesis, and provide support to current proposals advocating that sensory systems rely on the top-down modulation of stimulus-specific sensory representations as the neural code for prediction. In light of these findings, we discuss the significance of the omission ERP as an electrophysiological marker of predictive processing and we address the paradox that no indicators of violations of temporal prediction alone were found in the present paradigm.
Affiliation(s)
- Iria Sanmiguel
- BioCog, Institute for Psychology, University of Leipzig, Leipzig, Germany
618
Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström O, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31. [PMID: 23874273 PMCID: PMC3710434 DOI: 10.3389/fnsys.2013.00031]
Abstract
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
620
van Wassenhove V. Speech through ears and eyes: interfacing the senses with the supramodal brain. Front Psychol 2013; 4:388. [PMID: 23874309 PMCID: PMC3709159 DOI: 10.3389/fpsyg.2013.00388]
Abstract
The comprehension of auditory-visual (AV) speech integration has greatly benefited from recent advances in neurosciences and multisensory research. AV speech integration raises numerous questions relevant to the computational rules needed for binding information (within and across sensory modalities), the representational format in which speech information is encoded in the brain (e.g., auditory vs. articulatory), or how AV speech ultimately interfaces with the linguistic system. The following non-exhaustive review provides a set of empirical findings and theoretical questions that have fed the original proposal for predictive coding in AV speech processing. More recently, predictive coding has pervaded many fields of inquiry and positively reinforced the need to refine the notion of internal models in the brain together with their implications for the interpretation of neural activity recorded with various neuroimaging techniques. However, it is argued here that the strength of predictive coding frameworks resides in the specificity of the generative internal models, not in their generality; specifically, internal models come with a set of rules applied on particular representational formats themselves depending on the levels and the network structure at which predictive operations occur. As such, predictive coding in AV speech needs to specify the level(s) and the kinds of internal predictions that are necessary to account for the perceptual benefits or illusions observed in the field. Among those specifications, the actual content of a prediction comes first and foremost, followed by the representational granularity of that prediction in time. This review specifically presents a focused discussion on these issues.
Affiliation(s)
- Virginie van Wassenhove
- Cognitive Neuroimaging Unit, Brain Dynamics, INSERM U992, Gif-sur-Yvette, France; NeuroSpin Center, CEA, DSV/I2BM, Gif-sur-Yvette, France; Cognitive Neuroimaging Unit, University Paris-Sud, Gif-sur-Yvette, France
621
Look now and hear what's coming: on the functional role of cross-modal phase reset. Hear Res 2013; 307:144-52. [PMID: 23856236 DOI: 10.1016/j.heares.2013.07.002] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Received: 05/06/2013] [Revised: 06/27/2013] [Accepted: 07/01/2013] [Indexed: 11/23/2022]
Abstract
In our multisensory environment our sensory systems are continuously receiving information that is often interrelated and must be integrated. Recent work in animals and humans has demonstrated that input to one sensory modality can reset the phase of ambient cortical oscillatory activity in another. The periodic fluctuations in neuronal excitability reflected in these oscillations can thereby be aligned to forthcoming anticipated sensory input. In the auditory domain, the example par excellence is speech, because of its inherently rhythmic structure. In contrast, fluctuations of oscillatory phase in the visual system are argued to reflect periodic sampling of the environment. Thus rhythmic structure is imposed on, rather than extracted from, the visual sensory input. Given this distinction, we suggest that cross-modal phase reset subserves separate functions in the auditory and visual systems. We propose a modality-dependent role for cross-modal input in temporal prediction whereby an auditory event signals the visual system to look now, but a visual event signals the auditory system that it needs to hear what is coming. This article is part of a Special Issue entitled "Human Auditory Neuroimaging".
622
Cheyne DO. MEG studies of sensorimotor rhythms: A review. Exp Neurol 2013; 245:27-39. [PMID: 22981841 DOI: 10.1016/j.expneurol.2012.08.030] [Citation(s) in RCA: 193] [Impact Index Per Article: 17.5] [Received: 05/31/2012] [Revised: 08/24/2012] [Accepted: 08/30/2012] [Indexed: 11/15/2022]
Affiliation(s)
- Douglas Owen Cheyne
- Program in Neurosciences and Mental Health, Hospital for Sick Children Research Institute, 555 University Avenue, Toronto, Ontario, Canada, M5G 1X8.
623
Doelling KB, Arnal LH, Ghitza O, Poeppel D. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing. Neuroimage 2013; 85 Pt 2:761-8. [PMID: 23791839 DOI: 10.1016/j.neuroimage.2013.06.035] [Citation(s) in RCA: 324] [Impact Index Per Article: 29.5] [Received: 03/13/2013] [Revised: 06/06/2013] [Accepted: 06/07/2013] [Indexed: 11/19/2022] Open
Abstract
A growing body of research suggests that intrinsic slow (<10 Hz) neuronal oscillations in auditory cortex track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the 'sharpness' of temporal fluctuations in the critical-band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. Using magnetoencephalographic recordings, we show that removing the temporal fluctuations that occur at the syllabic rate reduces envelope-tracking activity, and that artificially reinstating these fluctuations restores it. These changes in tracking correlate with the intelligibility of the stimulus. Together, the results suggest that sharp events in the stimulus, as reflected in the cochlear output, drive cortical rhythms to re-align and entrain to the stimulus at its syllabic rate. This process likely parses the stimulus into syllable-sized chunks appropriate for subsequent decoding, enhancing perception and intelligibility.
624
Sohoglu E, Peelle JE, Carlyon RP, Davis MH. Top-down influences of written text on perceived clarity of degraded speech. J Exp Psychol Hum Percept Perform 2013; 40:186-99. [PMID: 23750966 PMCID: PMC3906796 DOI: 10.1037/a0033206] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Indexed: 12/02/2022]
Abstract
An unresolved question is how the reported clarity of degraded speech is enhanced when listeners have prior knowledge of speech content. One account of this phenomenon proposes top-down modulation of early acoustic processing by higher-level linguistic knowledge. Alternative, strictly bottom-up accounts argue that acoustic information and higher-level knowledge are combined at a late decision stage without modulating early acoustic processing. Here we tested top-down and bottom-up accounts using written text to manipulate listeners’ knowledge of speech content. The effect of written text on the reported clarity of noise-vocoded speech was most pronounced when text was presented before (rather than after) speech (Experiment 1). Fine-grained manipulation of the onset asynchrony between text and speech revealed that this effect declined when text was presented more than 120 ms after speech onset (Experiment 2). Finally, the influence of written text was found to arise from phonological (rather than lexical) correspondence between text and speech (Experiment 3). These results suggest that prior knowledge effects are time-limited by the duration of auditory echoic memory for degraded speech, consistent with top-down modulation of early acoustic processing by linguistic knowledge.
625
Vanneste S, Song JJ, De Ridder D. Tinnitus and musical hallucinosis: the same but more. Neuroimage 2013; 82:373-83. [PMID: 23732881 DOI: 10.1016/j.neuroimage.2013.05.107] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Received: 12/13/2012] [Revised: 05/21/2013] [Accepted: 05/22/2013] [Indexed: 11/24/2022] Open
Abstract
While tinnitus can be interpreted as a simple or elementary form of auditory phantom perception, musical hallucinosis is a more complex auditory phantom phenomenon, not limited to sound perception but also containing semantic and musical content. It most often occurs in association with hearing loss. To elucidate the relation between simple and complex auditory phantom percepts, a source-localized electroencephalography (EEG) study was performed. The analyses showed, in both simple and complex auditory phantoms, an increase in theta-gamma activity and coupling within the auditory cortex that could be associated with the thalamocortical dysrhythmia model. Furthermore, increased beta activity within the dorsal anterior cingulate cortex and anterior insula was demonstrated, which might be related to auditory awareness, salience, and its attribution to an external sound source. The difference between simple and complex auditory phantoms relies on differential alpha-band activity within the auditory cortex and on beta activity in the dorsal anterior cingulate cortex and (para)hippocampal area. This could be related to memory-based load dependency, while suppression within the primary visual cortex might be due to continuous auditory cortex activation inducing an inhibitory signal to the visual system. Complex auditory phantoms further activate the right inferior frontal area (the right-sided Broca homolog) and the right superior temporal pole, which might be associated with the musical content. In summary, this study showed for the first time that simple and complex auditory phantoms might share a common neural substrate but differ in that complex auditory phantoms are associated with activation in brain areas related to music and language processing.
Affiliation(s)
- Sven Vanneste
- Department of Translational Neuroscience, Faculty of Medicine, University of Antwerp, Belgium.
626
Phillips JM, Vinck M, Everling S, Womelsdorf T. A long-range fronto-parietal 5- to 10-Hz network predicts "top-down" controlled guidance in a task-switch paradigm. Cereb Cortex 2013; 24:1996-2008. [PMID: 23448872 DOI: 10.1093/cercor/bht050] [Citation(s) in RCA: 83] [Impact Index Per Article: 7.5] [Indexed: 12/23/2022] Open
Abstract
The capacity to rapidly adjust behavioral strategies according to changing task demands is closely associated with coordinated activity in lateral and medial prefrontal cortices. Subdivisions within prefrontal cortex are implicated in encoding attentional task sets and in updating changing task rules, particularly when changing task demands require top-down control. Here, we tested whether these top-down processes precede stimulus processing and constitute a preparatory attentional state that functionally couples with parietal cortex. We examined this functional coupling by recording from intracranial EEG electrodes in macaques performing a task-switching paradigm that separates task performance based on controlled top-down guidance from automatic, stimulus-triggered processing modes. We identify a prefrontal-parietal network that phase-synchronizes at 5-10 Hz, particularly during preparatory states that indicate top-down controlled task-processing modes. Phase relations in the network suggest that medial and lateral prefrontal cortices synchronize bidirectionally, with medial prefrontal cortex showing a phase lead relative to 5- to 10-Hz preparatory signals recorded over left parietal cortex. These findings reveal a coordinated, long-range 5- to 10-Hz fronto-parietal network operating prior to task-relevant stimulus processing, particularly when subjects engage in controlled task-processing modes.
Affiliation(s)
- Martin Vinck
- Cognitive and Systems Neuroscience Group, Center for Neuroscience, University of Amsterdam, Amsterdam, the Netherlands
- Stefan Everling
- Department of Physiology and Pharmacology, University of Western Ontario, London, ON, Canada N6A 5K8; Robarts Research Institute, London, ON, Canada N6A 5K8
- Thilo Womelsdorf
- Department of Biology, Centre for Vision Research, York University, Toronto, Canada
627
Sanabria D, Correa Á. Electrophysiological evidence of temporal preparation driven by rhythms in audition. Biol Psychol 2013. [DOI: 10.1016/j.biopsycho.2012.11.012] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Indexed: 10/27/2022]
628
Andics A, Gál V, Vicsi K, Rudas G, Vidnyánszky Z. FMRI repetition suppression for voices is modulated by stimulus expectations. Neuroimage 2012; 69:277-83. [PMID: 23268783 DOI: 10.1016/j.neuroimage.2012.12.033] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Received: 10/18/2012] [Revised: 12/12/2012] [Accepted: 12/17/2012] [Indexed: 11/25/2022] Open
Abstract
According to predictive coding models of sensory processing, stimulus expectations have a profound effect on sensory cortical responses. This is supported by experimental results showing that fMRI repetition suppression (fMRI RS) for face stimuli is strongly modulated by the probability of stimulus repetitions throughout the visual cortical processing hierarchy. To test whether the processing of voices is also affected by stimulus expectations, here we investigated the effect of repetition probability on fMRI RS in voice-selective cortical areas. Changing ('alt') and identical ('rep') voice stimulus pairs were presented to listeners in blocks, with the probability of alt and rep trials varying across blocks. We found auditory fMRI RS in non-primary voice-selective cortical regions, including the bilateral posterior STS, the right anterior STG, and the right IFC, as well as in the IPL. Importantly, fMRI RS effects in all of these areas were strongly modulated by the probability of stimulus repetition: auditory fMRI RS was reduced or absent in blocks with low repetition probability. Our results reveal that auditory fMRI RS in higher-level voice-selective cortical regions is modulated by repetition probability, and thus suggest that in audition, as in vision, the processing of sensory information is shaped by stimulus expectations.
Affiliation(s)
- Attila Andics
- MR Research Center, Szentágothai János Knowledge Center - Semmelweis University, Budapest, Balassa u. 6., 1083, Hungary.
629
den Ouden HEM, Kok P, de Lange FP. How prediction errors shape perception, attention, and motivation. Front Psychol 2012; 3:548. [PMID: 23248610 PMCID: PMC3518876 DOI: 10.3389/fpsyg.2012.00548] [Citation(s) in RCA: 210] [Impact Index Per Article: 17.5] [Received: 09/16/2012] [Accepted: 11/22/2012] [Indexed: 01/03/2023] Open
Abstract
Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise.
Affiliation(s)
- Hanneke E M den Ouden
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Center for Neural Science, New York University, New York, NY, USA
630
Uhlhaas PJ. Dysconnectivity, large-scale networks and neuronal dynamics in schizophrenia. Curr Opin Neurobiol 2012; 23:283-90. [PMID: 23228430 DOI: 10.1016/j.conb.2012.11.004] [Citation(s) in RCA: 131] [Impact Index Per Article: 10.9] [Received: 10/15/2012] [Revised: 11/02/2012] [Accepted: 11/11/2012] [Indexed: 01/15/2023]
Abstract
Schizophrenia remains a daunting challenge for efforts aimed at identifying fundamental pathophysiological processes and developing evidence-based, effective treatments and interventions. One reason for the lack of progress lies in the fact that the pathophysiology of schizophrenia has been conceived predominantly in terms of circumscribed alterations in cellular and anatomical variables. In the current review, it is proposed that this approach needs to be complemented by a focus on neuronal dynamics in large-scale networks, which is compatible with the notion of dysconnectivity, highlighting the involvement of both reduced and increased interactions in extended cortical circuits in schizophrenia. Neural synchrony is one candidate mechanism for achieving functional connectivity in large-scale networks and has been found to be impaired in schizophrenia. Importantly, alterations in the synchronization of neural oscillations can be related to dysfunctions in the excitation-inhibition (E/I) balance and to developmental modifications, with important implications for translational research.
Affiliation(s)
- Peter J Uhlhaas
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.
631
Klimesch W. α-band oscillations, attention, and controlled access to stored information. Trends Cogn Sci 2012; 16:606-17. [PMID: 23141428 PMCID: PMC3507158 DOI: 10.1016/j.tics.2012.10.007] [Citation(s) in RCA: 1741] [Impact Index Per Article: 145.1] [Received: 09/04/2012] [Revised: 10/15/2012] [Accepted: 10/15/2012] [Indexed: 11/28/2022]
Abstract
Alpha-band oscillations are the dominant oscillations in the human brain and recent evidence suggests that they have an inhibitory function. Nonetheless, there is little doubt that alpha-band oscillations also play an active role in information processing. In this article, I suggest that alpha-band oscillations have two roles (inhibition and timing) that are closely linked to two fundamental functions of attention (suppression and selection), which enable controlled knowledge access and semantic orientation (the ability to be consciously oriented in time, space, and context). As such, alpha-band oscillations reflect one of the most basic cognitive processes and can also be shown to play a key role in the coalescence of brain activity in different frequencies.
Affiliation(s)
- Wolfgang Klimesch
- Department of Physiological Psychology, University of Salzburg, A-5020 Salzburg, Austria.
632
Peelle JE, Davis MH. Neural Oscillations Carry Speech Rhythm through to Comprehension. Front Psychol 2012; 3:320. [PMID: 22973251 PMCID: PMC3434440 DOI: 10.3389/fpsyg.2012.00320] [Citation(s) in RCA: 304] [Impact Index Per Article: 25.3] [Received: 04/05/2012] [Accepted: 08/11/2012] [Indexed: 11/17/2022] Open
Abstract
A key feature of speech is the quasi-regular rhythmic information contained in its slow amplitude modulations. In this article we review the information conveyed by speech rhythm, and the role of ongoing brain oscillations in listeners' processing of this content. Our starting point is the fact that speech is inherently temporal, and that rhythmic information conveyed by the amplitude envelope contains important markers for place and manner of articulation, segmental information, and speech rate. Behavioral studies demonstrate that amplitude envelope information is relied upon by listeners and plays a key role in speech intelligibility. Extending behavioral findings, data from neuroimaging - particularly electroencephalography (EEG) and magnetoencephalography (MEG) - point to phase locking by ongoing cortical oscillations to low-frequency information (~4-8 Hz) in the speech envelope. This phase modulation effectively encodes a prediction of when important events (such as stressed syllables) are likely to occur, and acts to increase sensitivity to these relevant acoustic cues. We suggest a framework through which such neural entrainment to speech rhythm can explain effects of speech rate on word and segment perception (i.e., that the perception of phonemes and words in connected speech is influenced by preceding speech rate). Neuroanatomically, acoustic amplitude modulations are processed largely bilaterally in auditory cortex, with intelligible speech resulting in differential recruitment of left-hemisphere regions. Notable among these is lateral anterior temporal cortex, which we propose functions in a domain-general fashion to support ongoing memory and integration of meaningful input. Together, the reviewed evidence suggests that low-frequency oscillations in the acoustic speech signal form the foundation of a rhythmic hierarchy supporting spoken language, mirrored by phase-locked oscillations in the human brain.
Affiliation(s)
- Jonathan E. Peelle
- Center for Cognitive Neuroscience and Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
| | - Matthew H. Davis
- Medical Research Council Cognition and Brain Sciences Unit, Cambridge, UK
633
Arnal LH. Predicting "When" Using the Motor System's Beta-Band Oscillations. Front Hum Neurosci 2012; 6:225. [PMID: 22876228 PMCID: PMC3410664 DOI: 10.3389/fnhum.2012.00225] [Citation(s) in RCA: 76] [Impact Index Per Article: 6.3] [Received: 05/11/2012] [Accepted: 07/13/2012] [Indexed: 01/22/2023] Open
Affiliation(s)
- Luc H Arnal
- Department of Psychology, New York University, New York, NY, USA