51. Rajendran VG, Teki S, Schnupp JWH. Temporal Processing in Audition: Insights from Music. Neuroscience 2018; 389:4-18. [PMID: 29108832] [PMCID: PMC6371985] [DOI: 10.1016/j.neuroscience.2017.10.041]
Abstract
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research.
Affiliation(s)
- Vani G Rajendran
- Auditory Neuroscience Group, University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, UK
- Sundeep Teki
- Auditory Neuroscience Group, University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, UK
- Jan W H Schnupp
- City University of Hong Kong, Department of Biomedical Sciences, 31 To Yuen Street, Kowloon Tong, Hong Kong.
52. Proactive Sensing of Periodic and Aperiodic Auditory Patterns. Trends Cogn Sci 2018; 22:870-882. [DOI: 10.1016/j.tics.2018.08.003]
53. Temporal expectancies and rhythmic cueing in touch: The influence of spatial attention. Cognition 2018; 182:140-150. [PMID: 30248473] [DOI: 10.1016/j.cognition.2018.09.011]
Abstract
Attention resources can be allocated in both space and time. Exogenous temporal attention can be driven by rhythmic events in our environment, which automatically entrain periods of attention. Temporal expectancies can also be generated by the elapse of time, leading to foreperiod effects (the longer the interval between a cue and an imperative target, the faster the response). This study investigates temporal attention in touch and the influence of spatial orienting. In experiment 1, participants used bilateral tactile cues to orient endogenous spatial attention to the left or right hand, where a unilateral tactile target was presented. This facilitated response times for attended over unattended targets. In experiment 2, the cue was unilateral and non-predictive of the target location, resulting in inhibition of return. Importantly, the cue was rhythmic, and targets were presented early, in synchrony, or late in relation to the rhythmic cue. A foreperiod effect was observed in experiment 1 that was independent of any spatial attention effects. In experiment 2, responses to in-synchrony targets were slower than to out-of-synchrony targets, but only for cued and not uncued targets, suggesting that the rhythm generates periods of exogenous inhibition. Taken together, temporal and spatial attention interact in touch, but only when both types of attention are exogenous. If the task requires endogenous spatial orienting, space and time are independent.
54. Holt LL, Tierney AT, Guerra G, Laffere A, Dick F. Dimension-selective attention as a possible driver of dynamic, context-dependent re-weighting in speech processing. Hear Res 2018; 366:50-64. [PMID: 30131109] [PMCID: PMC6107307] [DOI: 10.1016/j.heares.2018.06.014]
Abstract
The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention, although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.
Affiliation(s)
- Lori L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Giada Guerra
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK
- Frederic Dick
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK; Department of Experimental Psychology, University College London, London, WC1H 0AP, UK
55. Not All Predictions Are Equal: "What" and "When" Predictions Modulate Activity in Auditory Cortex through Different Mechanisms. J Neurosci 2018; 38:8680-8693. [PMID: 30143578] [DOI: 10.1523/jneurosci.0369-18.2018]
Abstract
Using predictions based on environmental regularities is fundamental for adaptive behavior. While it is widely accepted that predictions across different stimulus attributes (e.g., time and content) facilitate sensory processing, it is unknown whether predictions across these attributes rely on the same neural mechanism. Here, to elucidate the neural mechanisms of predictions, we combine invasive electrophysiological recordings (human electrocorticography in 4 females and 2 males) with computational modeling while manipulating predictions about content ("what") and time ("when"). We found that "when" predictions increased evoked activity over motor and prefrontal regions both at early (∼180 ms) and late (430-450 ms) latencies. "What" predictability, however, increased evoked activity only over prefrontal areas late in time (420-460 ms). Beyond these dissociable influences, we found that "what" and "when" predictability interactively modulated the amplitude of early (165 ms) evoked responses in the superior temporal gyrus. We modeled the observed neural responses using biophysically realistic neural mass models, to better understand whether "what" and "when" predictions tap into similar or different neurophysiological mechanisms. Our modeling results suggest that "what" and "when" predictability rely on complementary neural processes: "what" predictions increased short-term plasticity in auditory areas, whereas "when" predictability increased synaptic gain in motor areas. Thus, content and temporal predictions engage complementary neural mechanisms in different regions, suggesting domain-specific prediction signaling along the cortical hierarchy. 
Encoding predictions through different mechanisms may endow the brain with the flexibility to efficiently signal different sources of predictions, weight them by their reliability, and allow for their encoding without mutual interference.
SIGNIFICANCE STATEMENT Predictions of different stimulus features facilitate sensory processing. However, it is unclear whether predictions of different attributes rely on similar or different neural mechanisms. By combining invasive electrophysiological recordings of cortical activity with experimental manipulations of participants' predictions about content and time of acoustic events, we found that the two types of predictions had dissociable influences on cortical activity, both in terms of the regions involved and the timing of the observed effects. Further, our biophysical modeling analysis suggests that predictability of content and time rely on complementary neural processes: short-term plasticity in auditory areas and synaptic gain in motor areas, respectively. This suggests that predictions of different features are encoded with complementary neural mechanisms in different brain regions.
56. Intracortical Microstimulation Modulates Cortical Induced Responses. J Neurosci 2018; 38:7774-7786. [PMID: 30054394] [DOI: 10.1523/jneurosci.0928-18.2018]
Abstract
Recent advances in cortical prosthetics have relied on intracortical microstimulation (ICMS) to activate the cortical neural network and convey information to the brain. Here we show that activity elicited by low-current ICMS modulates induced cortical responses to a sensory stimulus in the primary auditory cortex (A1). A1 processes sensory stimuli in a stereotyped manner, encompassing two types of activity: evoked activity (phase-locked to the stimulus) and induced activity (non-phase-locked to the stimulus). Time-frequency analyses of extracellular potentials recorded from all layers and the surface of the auditory cortex of anesthetized guinea pigs of both sexes showed that ICMS during the processing of a transient acoustic stimulus differentially affected the evoked and induced response. Specifically, ICMS enhanced the long-latency induced component, mimicking physiological gain-increasing top-down feedback processes. Furthermore, the phase of the local field potential at the time of stimulation was predictive of the response amplitude for acoustic stimulation, ICMS, as well as combined acoustic and electric stimulation. Together, this was interpreted as a sign that the response to electrical stimulation was integrated into the ongoing cortical processes rather than substituting for them. Consequently, ICMS modulated the cortical response to a sensory stimulus. We propose such targeted modulation of cortical activity (as opposed to stimulation that substitutes for the ongoing processes) as an alternative approach for cortical prostheses.
SIGNIFICANCE STATEMENT Intracortical microstimulation (ICMS) is commonly used to activate a specific subset of cortical neurons, without taking into account the ongoing activity at the time of stimulation. Here, we found that a low-current ICMS pulse modulated the way the auditory cortex processed a peripheral stimulus, by supra-additively combining the response to the ICMS with the cortical processing of the peripheral stimulus.
This artificial modulation mimicked natural modulations of response magnitude such as attention or expectation. In contrast to what was implied in earlier studies, this shows that the response to electrical stimulation is not substituting ongoing cortical activity but is integrated into the natural processes.
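The evoked/induced split used in this abstract is a standard time-frequency decomposition: evoked power is the power of the trial-averaged signal (only phase-locked activity survives averaging), while induced power is the trial-averaged power minus that evoked part. A minimal numerical sketch (illustrative only; the function name, wavelet parameters, and data layout are assumptions, not the authors' analysis pipeline):

```python
import numpy as np

def evoked_induced_power(trials, fs, freq, n_cycles=5):
    """Split per-trial power at `freq` into evoked (phase-locked)
    and induced (non-phase-locked) components.

    trials : array, shape (n_trials, n_samples)
    fs     : sampling rate in Hz
    freq   : analysis frequency in Hz
    """
    n = trials.shape[1]
    t = (np.arange(n) - n // 2) / fs
    sigma = n_cycles / (2 * np.pi * freq)
    # complex Morlet wavelet, L1-normalized
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()

    def power(x):
        analytic = np.convolve(x, wavelet, mode="same")
        return np.abs(analytic) ** 2

    total = np.mean([power(tr) for tr in trials], axis=0)  # mean of per-trial power
    evoked = power(trials.mean(axis=0))                    # power of the trial average
    induced = total - evoked                               # non-phase-locked remainder
    return evoked, induced
```

With perfectly phase-locked trials the induced term vanishes; with random-phase trials the evoked term does, which is exactly the contrast the study exploits.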
57. Delis I, Dmochowski JP, Sajda P, Wang Q. Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing. Neuroimage 2018; 175:12-21. [PMID: 29580968] [PMCID: PMC5960621] [DOI: 10.1016/j.neuroimage.2018.03.035]
Abstract
Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though active sensing is ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object/surface by actively exploring its shape/texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. A novel brain-behavior correlation analysis revealed that three distinct EEG components, localizing to right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), respectively, were coupled with active sensing, as their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial couplings to decision-making performance using a hierarchical drift-diffusion model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG component was absent in control subjects who performed active sensing but were not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineated, for the first time, the functional role of cortical areas in active tactile decision-making.
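The drift-diffusion framework invoked above models a decision as noisy evidence accumulating toward one of two bounds; the drift rate captures the rate of information integration (the quantity the MFG component predicted here). A toy forward simulator, not the authors' HDDM fit (names and parameter values are illustrative):

```python
import numpy as np

def simulate_ddm_trial(drift, bound=1.0, noise=1.0, dt=1e-3,
                       non_decision=0.3, rng=None):
    """Accumulate noisy evidence from 0 until it crosses +bound or -bound.

    Returns (choice, reaction_time): choice is 1 for the upper bound,
    0 for the lower; the RT adds a fixed non-decision time.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step of the diffusion: deterministic drift + Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t + non_decision
```

Higher drift rates yield faster and more accurate choices; hierarchical variants such as HDDM estimate drift and bound from observed choices and RTs (per subject, with trial-level regressors such as the EEG couplings here) rather than simulating forward.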
Affiliation(s)
- Ioannis Delis
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Jacek P Dmochowski
- Department of Biomedical Engineering, City College of New York, New York, NY, 10031, USA
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA; Data Science Institute, Columbia University, New York, NY, 10027, USA.
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA.
58. Schroeder KE, Irwin ZT, Bullard AJ, Thompson DE, Bentley JN, Stacey WC, Patil PG, Chestek CA. Robust tactile sensory responses in finger area of primate motor cortex relevant to prosthetic control. J Neural Eng 2017; 14:046016. [PMID: 28504971] [DOI: 10.1088/1741-2552/aa7329]
Abstract
OBJECTIVE Challenges in improving the performance of dexterous upper-limb brain-machine interfaces (BMIs) have prompted renewed interest in quantifying the amount and type of sensory information naturally encoded in the primary motor cortex (M1). Previous single unit studies in monkeys showed M1 is responsive to tactile stimulation, as well as passive and active movement of the limbs. However, recent work in this area has focused primarily on proprioception. Here we examined instead how tactile somatosensation of the hand and fingers is represented in M1. APPROACH We recorded multi- and single units and thresholded neural activity from macaque M1 while gently brushing individual finger pads at 2 Hz. We also recorded broadband neural activity from electrocorticogram (ECoG) grids placed on human motor cortex, while applying the same tactile stimulus. MAIN RESULTS Units displaying significant differences in firing rates between individual fingers (p < 0.05) represented up to 76.7% of sorted multiunits across four monkeys. After normalizing by the number of channels with significant motor finger responses, the percentage of electrodes with significant tactile responses was 74.9% ± 24.7%. No somatotopic organization of finger preference was obvious across cortex, but many units exhibited cosine-like tuning across multiple digits. Sufficient sensory information was present in M1 to correctly decode stimulus position from multiunit activity above chance levels in all monkeys, and also from ECoG gamma power in two human subjects. SIGNIFICANCE These results provide some explanation for difficulties experienced by motor decoders in clinical trials of cortically controlled prosthetic hands, as well as the general problem of disentangling motor and sensory signals in primate motor cortex during dexterous tasks.
Additionally, examination of unit tuning during tactile and proprioceptive inputs indicates cells are often tuned differently in different contexts, reinforcing the need for continued refinement of BMI training and decoding approaches to closed-loop BMI systems for dexterous grasping.
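Decoding stimulus position from multiunit activity, as described above, amounts to classifying a vector of per-channel firing rates by finger. A minimal sketch using a nearest-centroid classifier (illustrative only; the study's actual decoder and feature set are not specified here and may differ):

```python
import numpy as np

def nearest_centroid_decode(train_rates, train_fingers, test_rates):
    """Classify firing-rate vectors by the closest per-finger mean pattern.

    train_rates   : (n_trials, n_channels) firing rates
    train_fingers : (n_trials,) finger labels
    test_rates    : (m_trials, n_channels) firing rates to decode
    """
    fingers = np.unique(train_fingers)
    # one mean firing-rate pattern (centroid) per finger
    centroids = np.array([train_rates[train_fingers == f].mean(axis=0)
                          for f in fingers])
    # Euclidean distance from every test trial to every centroid
    dists = np.linalg.norm(test_rates[:, None, :] - centroids[None, :, :], axis=2)
    return fingers[np.argmin(dists, axis=1)]
```

Comparing such decoder accuracy against the chance level (one over the number of fingers) is the standard way to establish that sensory information is present in the recorded population.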
Collapse
Affiliation(s)
- Karen E Schroeder
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, United States of America
59. Theta oscillations mediate pre-activation of highly expected word initial phonemes. Sci Rep 2018; 8:9503. [PMID: 29934613] [PMCID: PMC6015046] [DOI: 10.1038/s41598-018-27898-w]
Abstract
Prediction has been proposed to be a fundamental neurocognitive mechanism. However, its role in language comprehension is currently under debate. In this magnetoencephalography study we aimed to find evidence of word-form phonological pre-activation and to characterize the oscillatory mechanisms supporting it. Participants were first presented with a picture of an object, and then, after a delay (fixed or variable), they heard the corresponding word. Target words could contain a phoneme substitution, and participants’ task was to detect mispronunciations. Word-initial phonemes were either fricatives or plosives, generating two experimental conditions (expect-fricative and expect-plosive). In the pre-word interval, significant differences (α = 0.05) emerged between conditions for both fixed and variable delays. Source reconstruction of this effect showed a brain-wide network involving several frequency bands, including bilateral superior temporal areas commonly associated with phonological processing, in the theta range. These results show that phonological representations supported by the theta band may be active before word onset, even under temporal uncertainty. However, in the evoked response just prior to the word, differences between conditions were apparent under variable but not fixed delays. This suggests that additional top-down mechanisms sensitive to phonological form may be recruited when there is uncertainty in the signal.
60. Corcoran AW, Pezzulo G, Hohwy J. Commentary: Respiration-Entrained Brain Rhythms Are Global but Often Overlooked. Front Syst Neurosci 2018; 12:25. [PMID: 29937718] [PMCID: PMC6003246] [DOI: 10.3389/fnsys.2018.00025]
Affiliation(s)
- Andrew W. Corcoran
- Cognition and Philosophy Laboratory, School of Philosophical, Historical and International Studies, Monash University, Melbourne, VIC, Australia
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Jakob Hohwy
- Cognition and Philosophy Laboratory, School of Philosophical, Historical and International Studies, Monash University, Melbourne, VIC, Australia
61. Kaiser J, Schütz-Bosbach S. Sensory attenuation of self-produced signals does not rely on self-specific motor predictions. Eur J Neurosci 2018; 47:1303-1310. [DOI: 10.1111/ejn.13931]
Affiliation(s)
- Jakob Kaiser
- General and Experimental Psychology, Ludwig-Maximilian-University Munich, Germany
62. Keitel A, Gross J, Kayser C. Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features. PLoS Biol 2018. [PMID: 29529019] [PMCID: PMC5864086] [DOI: 10.1371/journal.pbio.2004473]
Abstract
During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6–1.3 Hz), words (1.8–3 Hz), syllables (2.8–4.8 Hz), and phonemes (8–12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13–30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory–motor pathway.
How we comprehend speech—and how the brain encodes information from a continuous speech stream—is of interest for neuroscience, linguistics, and research on language disorders. Previous work that examined dynamic brain activity has addressed the issue of comprehension only indirectly, by contrasting intelligible speech with unintelligible speech or baseline activity. Recent work, however, suggests that brain areas can show similar stimulus-driven activity but contribute differently to perception or comprehension. To directly address the perceptual relevance of dynamic brain activity for speech encoding, we used a straightforward, single-trial comprehension measure. Furthermore, previous work has been vague regarding the analysed timescales. We therefore base our analysis directly on the timescales of phrases, words, syllables, and phonemes of our speech stimuli. By incorporating these two conceptual innovations, we demonstrate that different areas of the brain track acoustic information at the timescales of words and phrases. Moreover, our results suggest that the motor cortex uses a cross-frequency coupling mechanism to predict the timing of phrases in ongoing speech. Our findings suggest spatially and temporally distinct brain mechanisms that directly shape our comprehension.
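The phase-power coupling described above (slow phrasal-scale phase modulating beta power) is commonly quantified with a mean-vector-length metric: extract the phase of the slow band and the amplitude envelope of the fast band, then measure how strongly amplitude depends on phase. A rough sketch (illustrative; function names and band edges are assumptions, and published analyses typically add surrogate-based normalization and significance testing):

```python
import numpy as np

def band_analytic(x, fs, lo, hi):
    """FFT band-pass followed by the analytic signal (negative freqs zeroed)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < lo) | (np.abs(freqs) > hi)] = 0.0  # band-pass
    X[freqs < 0] = 0.0          # analytic signal: keep positive freqs...
    X[freqs > 0] *= 2.0         # ...and double them to preserve amplitude
    return np.fft.ifft(X)

def pac_mean_vector_length(x, fs, phase_band, amp_band):
    """Canolty-style modulation index: |mean(amp * exp(i*phase))|."""
    phase = np.angle(band_analytic(x, fs, *phase_band))
    amp = np.abs(band_analytic(x, fs, *amp_band))
    return np.abs(np.mean(amp * np.exp(1j * phase)))
```

If the fast-band envelope is constant across slow-band phases the index is near zero; systematic envelope modulation by phase yields a large index, which is the signature the study reports in motor areas.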
Affiliation(s)
- Anne Keitel
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
63. te Woerd ES, Oostenveld R, de Lange FP, Praamstra P. Entrainment for attentional selection in Parkinson's disease. Cortex 2018; 99:166-178. [DOI: 10.1016/j.cortex.2017.11.011]
64. Periodicity versus Prediction in Sensory Perception. J Neurosci 2016; 36:7343-7345. [PMID: 27413145] [DOI: 10.1523/jneurosci.1335-16.2016]
65. Yusuf PA, Hubka P, Tillein J, Kral A. Induced cortical responses require developmental sensory experience. Brain 2017; 140:3153-3165. [PMID: 29155975] [PMCID: PMC5841147] [DOI: 10.1093/brain/awx286]
Abstract
Sensory areas of the cerebral cortex integrate the sensory inputs with the ongoing activity. We studied how complete absence of auditory experience affects this process in a higher mammal model of complete sensory deprivation, the congenitally deaf cat. Cortical responses were elicited by intracochlear electric stimulation using cochlear implants in adult hearing controls and deaf cats. Additionally, in hearing controls, acoustic stimuli were used to assess the effect of stimulus mode (electric versus acoustic) on the cortical responses. We evaluated time-frequency representations of local field potential recorded simultaneously in the primary auditory cortex and a higher-order area, the posterior auditory field, known to be differentially involved in cross-modal (visual) reorganization in deaf cats. The results showed the appearance of evoked (phase-locked) responses at early latencies (<100 ms post-stimulus) and more abundant induced (non-phase-locked) responses at later latencies (>150 ms post-stimulus). In deaf cats, substantially reduced induced responses were observed in overall power as well as duration in both investigated fields. Additionally, a reduction of ongoing alpha band activity was found in the posterior auditory field (but not in primary auditory cortex) of deaf cats. The present study demonstrates that induced activity requires developmental experience and suggests that higher-order areas involved in the cross-modal reorganization show more auditory deficits than primary areas.
Affiliation(s)
- Prasandhya Astagiri Yusuf
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany
- Peter Hubka
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany
- Jochen Tillein
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany; ENT Clinics, J. W. Goethe University, Frankfurt am Main, Germany
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical School, Germany; School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
66. Rajendran VG, Harper NS, Garcia-Lazaro JA, Lesica NA, Schnupp JWH. Midbrain adaptation may set the stage for the perception of musical beat. Proc Biol Sci 2017; 284:20171455. [PMID: 29118141] [PMCID: PMC5698641] [DOI: 10.1098/rspb.2017.1455]
Abstract
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
Affiliation(s)
- Vani G Rajendran
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicol S Harper
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicholas A Lesica
- UCL Ear Institute, 332 Grays Inn Rd, Kings Cross, London WC1X 8EE, UK
- Jan W H Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, 1/F, Block 1, To Yuen Building, 31 To Yuen Street, Hong Kong
67
Fujioka T, Ross B. Beta-band oscillations during passive listening to metronome sounds reflect improved timing representation after short-term musical training in healthy older adults. Eur J Neurosci 2017; 46:2339-2354. [DOI: 10.1111/ejn.13693]
Affiliation(s)
- Takako Fujioka
- Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, 660 Lomita Ct., Stanford, CA 94305, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
68
Abstract
In behavior, action and perception are inherently interdependent. However, the actual mechanistic contributions of the motor system to sensory processing are unknown. We present neurophysiological evidence that the motor system is involved in predictive timing, a brain function that aligns temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection and optimizing behavior. In a magnetoencephalography experiment involving auditory temporal attention, participants had to disentangle two streams of sound on the unique basis of endogenous temporal cues. We show that temporal predictions are encoded by interdependent delta and beta neural oscillations originating from the left sensorimotor cortex, and directed toward auditory regions. We also found that overt rhythmic movements improved the quality of temporal predictions and sharpened the temporal selection of relevant auditory information. This latter behavioral and functional benefit was associated with increased signaling of temporal predictions in right-lateralized frontoparietal associative regions. In sum, this study points at a covert form of auditory active sensing. Our results emphasize the key role of motor brain areas in providing contextual temporal information to sensory regions, driving perceptual and behavioral selection.
69
Pezzulo G, Kemere C, van der Meer MAA. Internally generated hippocampal sequences as a vantage point to probe future-oriented cognition. Ann N Y Acad Sci 2017; 1396:144-165. [PMID: 28548460] [DOI: 10.1111/nyas.13329]
Abstract
Information processing in the rodent hippocampus is fundamentally shaped by internally generated sequences (IGSs), expressed during two different network states: theta sequences, which repeat and reset at the ∼8 Hz theta rhythm associated with active behavior, and punctate sharp wave-ripple (SWR) sequences associated with wakeful rest or slow-wave sleep. A potpourri of diverse functional roles has been proposed for these IGSs, resulting in a fragmented conceptual landscape. Here, we advance a unitary view of IGSs, proposing that they reflect an inferential process that samples a policy from the animal's generative model, supported by hippocampus-specific priors. The same inference affords different cognitive functions when the animal is in distinct dynamical modes, associated with specific functional networks. Theta sequences arise when inference is coupled to the animal's action-perception cycle, supporting online spatial decisions, predictive processing, and episode encoding. SWR sequences arise when the animal is decoupled from the action-perception cycle and may support offline cognitive processing, such as memory consolidation, the prospective simulation of spatial trajectories, and imagination. We discuss the empirical bases of this proposal in relation to rodent studies and highlight how the proposed computational principles can shed light on the mechanisms of future-oriented cognition in humans.
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Caleb Kemere
- Electrical and Computer Engineering, Rice University, Houston, Texas
70
Costa-Faidella J, Sussman ES, Escera C. Selective entrainment of brain oscillations drives auditory perceptual organization. Neuroimage 2017; 159:195-206. [PMID: 28757195] [DOI: 10.1016/j.neuroimage.2017.07.056]
Abstract
Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept.
Affiliation(s)
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, 08035, Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, 08035, Barcelona, Catalonia, Spain
- Elyse S Sussman
- Departments of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, 10461, USA; Otorhinolaryngology-HNS, Albert Einstein College of Medicine, Bronx, NY, 10461, USA
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, 08035, Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, 08035, Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, 08950, Esplugues de Llobregat, Catalonia, Spain.
71
Tomassini A, Ambrogioni L, Medendorp WP, Maris E. Theta oscillations locked to intended actions rhythmically modulate perception. eLife 2017; 6:e25618. [PMID: 28686161] [PMCID: PMC5553936] [DOI: 10.7554/elife.25618]
Abstract
Ongoing brain oscillations are known to influence perception, and to be reset by exogenous stimulations. Voluntary action is also accompanied by prominent rhythmic activity, and recent behavioral evidence suggests that this might be coupled with perception. Here, we reveal the neurophysiological underpinnings of this sensorimotor coupling in humans. We link the trial-by-trial dynamics of EEG oscillatory activity during movement preparation to the corresponding dynamics in perception, for two unrelated visual and motor tasks. The phase of theta oscillations (~4 Hz) predicts perceptual performance, even >1 s before movement. Moreover, theta oscillations are phase-locked to the onset of the movement. Remarkably, the alignment of theta phase and its perceptual relevance unfold with similar non-monotonic profiles, suggesting their relatedness. The present work shows that perception and movement initiation are automatically synchronized since the early stages of motor planning through neuronal oscillatory activity in the theta range.
Affiliation(s)
- Alice Tomassini
- Donders Institute for Brain, Cognition and Behavior, Centre for Cognition, Radboud University, Nijmegen, Netherlands
- Luca Ambrogioni
- Donders Institute for Brain, Cognition and Behavior, Centre for Cognition, Radboud University, Nijmegen, Netherlands
- W Pieter Medendorp
- Donders Institute for Brain, Cognition and Behavior, Centre for Cognition, Radboud University, Nijmegen, Netherlands
- Eric Maris
- Donders Institute for Brain, Cognition and Behavior, Centre for Cognition, Radboud University, Nijmegen, Netherlands
72
Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, Kayser C. Contributions of local speech encoding and functional connectivity to audio-visual speech perception. eLife 2017; 6:e24763. [PMID: 28590903] [PMCID: PMC5462535] [DOI: 10.7554/elife.24763]
Abstract
Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
When listening to someone in a noisy environment, such as a cocktail party, we can understand the speaker more easily if we can also see his or her face. Movements of the lips and tongue convey additional information that helps the listener’s brain separate out syllables, words and sentences. However, exactly where in the brain this effect occurs and how it works remain unclear. To find out, Giordano et al. scanned the brains of healthy volunteers as they watched clips of people speaking. The clarity of the speech varied between clips. Furthermore, in some of the clips the lip movements of the speaker corresponded to the speech in question, whereas in others the lip movements were nonsense babble. As expected, the volunteers performed better on a word recognition task when the speech was clear and when the lip movements agreed with the spoken dialogue. Watching the video clips stimulated rhythmic activity in multiple regions of the volunteers’ brains, including areas that process sound and areas that plan movements. Speech is itself rhythmic, and the volunteers’ brain activity synchronized with the rhythms of the speech they were listening to. Seeing the speaker’s face increased this degree of synchrony. However, it also made it easier for sound-processing regions within the listeners’ brains to transfer information to one another. Notably, only the latter effect predicted improved performance on the word recognition task. This suggests that seeing a person’s face makes it easier to understand his or her speech by boosting communication between brain regions, rather than through effects on individual areas. Further work is required to determine where and how the brain encodes lip movements and speech sounds. The next challenge will be to identify where these two sets of information interact, and how the brain merges them together to generate the impression of specific words.
Affiliation(s)
- Bruno L Giordano
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université - Centre National de la Recherche Scientifique, Marseille, France; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
73
Temporal expectancies driven by self- and externally generated rhythms. Neuroimage 2017; 156:352-362. [PMID: 28528848] [DOI: 10.1016/j.neuroimage.2017.05.042]
Abstract
The dynamic attending theory proposes that rhythms entrain periodic fluctuations of attention which modulate the gain of sensory input. However, temporal expectancies can also be driven by the mere passage of time (foreperiod effect). It is currently unknown how these two types of temporal expectancy relate to each other, i.e. whether they work in parallel and have distinguishable neural signatures. The current research addresses this issue. Participants either tapped a 1Hz rhythm (active task) or were passively presented with the same rhythm using tactile stimulators (passive task). Based on this rhythm an auditory target was then presented early, in synchrony, or late. Behavioural results were in line with the dynamic attending theory as RTs were faster for in- compared to out-of-synchrony targets. Electrophysiological results suggested self-generated and externally induced rhythms to entrain neural oscillations in the delta frequency band. Auditory ERPs showed evidence of two distinct temporal expectancy processes. Both tasks demonstrated a pattern which followed a linear foreperiod effect. In the active task, however, we also observed an ERP effect consistent with the dynamic attending theory. This study shows that temporal expectancies generated by a rhythm and expectancy generated by the mere passage of time can work in parallel and sheds light on how these mechanisms are implemented in the brain.
74
Falk S, Volpi-Moncorger C, Dalla Bella S. Auditory-Motor Rhythms and Speech Processing in French and German Listeners. Front Psychol 2017; 8:395. [PMID: 28443036] [PMCID: PMC5387104] [DOI: 10.3389/fpsyg.2017.00395]
Abstract
Moving to a speech rhythm can enhance verbal processing in the listener by increasing temporal expectancies (Falk and Dalla Bella, 2016). Here we tested whether this hypothesis holds for prosodically diverse languages such as German (a lexical stress-language) and French (a non-stress language). Moreover, we examined the relation between motor performance and the benefits for verbal processing as a function of language. Sixty-four participants, 32 German and 32 French native speakers detected subtle word changes in accented positions in metrically structured sentences to which they previously tapped with their index finger. Before each sentence, they were cued by a metronome to tap either congruently (i.e., to accented syllables) or incongruently (i.e., to non-accented parts) to the following speech stimulus. Both French and German speakers detected words better when cued to tap congruently compared to incongruent tapping. Detection performance was predicted by participants' motor performance in the non-verbal cueing phase. Moreover, tapping rate while participants tapped to speech predicted detection differently for the two language groups, in particular in the incongruent tapping condition. We discuss our findings in light of the rhythmic differences of both languages and with respect to recent theories of expectancy-driven and multisensory speech processing.
Affiliation(s)
- Simone Falk
- Institut für Deutsche Philologie, Ludwig-Maximilians-University, Munich, Germany; Laboratoire Parole et Langage, UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille University, Aix-en-Provence, France; Laboratoire Phonétique et Phonologie, UMR 7018, CNRS, Université Sorbonne Nouvelle Paris-3, Paris, France
- Chloé Volpi-Moncorger
- Laboratoire Parole et Langage, UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille University, Aix-en-Provence, France
- Simone Dalla Bella
- EuroMov, University of Montpellier, Montpellier, France; Institut Universitaire de France, Paris, France; International Laboratory for Brain, Music, and Sound Research, Montreal, QC, Canada; Department of Cognitive Psychology, Wyższa Szkoła Finansów i Zarządzania w Warszawie (WSFiZ), Warsaw, Poland
75
Abstract
Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
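Intertrial coherence, one of the EEG measures named in this abstract, quantifies how consistently the phase at a given frequency repeats across trials. A minimal sketch of the statistic itself (not the study's actual pipeline, which would typically use wavelet or filter-Hilbert phase estimates; the signal parameters below are invented):

```python
import numpy as np

def intertrial_coherence(trials, fs, freq):
    """ITC at one frequency: take each trial's phase from its Fourier
    coefficient at `freq`, then the length of the mean unit phasor.
    1.0 = identical phase on every trial; near 0 = random phases."""
    trials = np.asarray(trials, dtype=float)    # shape (n_trials, n_samples)
    n = trials.shape[1]
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq * t)      # complex sinusoid at target frequency
    coeffs = trials @ basis                     # one complex coefficient per trial
    phasors = coeffs / np.abs(coeffs)           # keep phase, discard amplitude
    return np.abs(phasors.mean())

fs, f = 250.0, 2.0
t = np.arange(int(fs)) / fs
# Phase-locked trials (identical phase) vs. trials with random phase.
locked = np.array([np.sin(2 * np.pi * f * t) for _ in range(20)])
rng = np.random.default_rng(0)
jittered = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(20)])
```

Identical phase on every trial yields a value of 1, while random phases drive the value toward 0 as the unit phasors cancel.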
Affiliation(s)
- Simone Falk
- Aix-Marseille Univ, LPL, UMR 7309, CNRS, Aix-en-Provence, France; Université Sorbonne Nouvelle Paris-3, LPP, UMR 7018, CNRS, Paris, France; Ludwig-Maximilians-University, Munich, Germany
76
Falk S, Kello CT. Hierarchical organization in the temporal structure of infant-direct speech and song. Cognition 2017; 163:80-86. [PMID: 28292666] [DOI: 10.1016/j.cognition.2017.02.017]
Abstract
Caregivers alter the temporal structure of their utterances when talking and singing to infants compared with adult communication. The present study tested whether temporal variability in infant-directed registers serves to emphasize the hierarchical temporal structure of speech. Fifteen German-speaking mothers sang a play song and told a story to their 6-months-old infants, or to an adult. Recordings were analyzed using a recently developed method that determines the degree of nested clustering of temporal events in speech. Events were defined as peaks in the amplitude envelope, and clusters of various sizes related to periods of acoustic speech energy at varying timescales. Infant-directed speech and song clearly showed greater event clustering compared with adult-directed registers, at multiple timescales of hundreds of milliseconds to tens of seconds. We discuss the relation of this newly discovered acoustic property to temporal variability in linguistic units and its potential implications for parent-infant communication and infants learning the hierarchical structures of speech and language.
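The event-extraction step described above, peaks in the amplitude envelope, can be sketched in a few lines. The smoothing choice and the window-count helper are simplifications for illustration: the published analysis quantifies nested clustering of these events across timescales, which is only hinted at here.

```python
import numpy as np

def envelope_peaks(signal, fs, smooth_ms=20.0):
    """Event times = local maxima of a smoothed amplitude envelope
    (rectification + moving average; a crude stand-in for the Hilbert
    envelope often used in speech analyses)."""
    env = np.abs(np.asarray(signal, dtype=float))
    win = max(1, int(fs * smooth_ms / 1000.0))
    env = np.convolve(env, np.ones(win) / win, mode="same")
    idx = np.where((env[1:-1] > env[:-2]) & (env[1:-1] >= env[2:]))[0] + 1
    return idx / fs  # peak times in seconds

def event_counts(times, duration, window):
    """Event counts in consecutive windows at one timescale; comparing
    count variability across window sizes underlies nested-clustering
    statistics."""
    edges = np.arange(0.0, duration + window, window)
    counts, _ = np.histogram(times, bins=edges)
    return counts

# Toy "utterance": two bursts of acoustic energy at 0.25 s and 0.75 s.
fs = 1000.0
t = np.arange(int(fs)) / fs
sig = np.exp(-((t - 0.25) / 0.05) ** 2) + np.exp(-((t - 0.75) / 0.05) ** 2)
peaks = envelope_peaks(sig, fs)
counts = event_counts(peaks, 1.0, 0.5)
```

Repeating the count step at windows from hundreds of milliseconds to tens of seconds gives the multi-timescale picture the abstract refers to.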
Affiliation(s)
- Simone Falk
- Ludwig-Maximilians-University, Munich, Germany; Laboratoire Parole et Langage, UMR 7309, CNRS / Aix-Marseille University, Aix-en-Provence, France; Laboratoire Phonétique et Phonologie, UMR 7018, CNRS / Université Sorbonne Nouvelle Paris-3, Paris, France.
77
Te Woerd ES, Oostenveld R, de Lange FP, Praamstra P. Impaired auditory-to-motor entrainment in Parkinson's disease. J Neurophysiol 2017; 117:1853-1864. [PMID: 28179479] [DOI: 10.1152/jn.00547.2016]
Abstract
Several electrophysiological studies suggest that Parkinson's disease (PD) patients have a reduced tendency to entrain to regular environmental patterns. Here we investigate whether this reduced entrainment concerns a generalized deficit or is confined to movement-related activity, leaving sensory entrainment intact. Magnetoencephalography was recorded during a rhythmic auditory target detection task in 14 PD patients and 14 control subjects. Participants were instructed to press a button when hearing a target tone amid an isochronous sequence of standard tones. The variable pitch of standard tones indicated the probability of the next tone to be a target. In addition, targets were occasionally omitted to evaluate entrainment uncontaminated by stimulus effects. Response times were not significantly different between groups and both groups benefited equally from the predictive value of standard tones. Analyses of oscillatory beta power over auditory cortices showed equal entrainment to the tones in both groups. By contrast, oscillatory beta power and event-related fields demonstrated a reduced engagement of motor cortical areas in PD patients, expressed in the modulation depth of beta power, in the response to omitted stimuli, and in an absent motor area P300 effect. Together, these results show equally strong entrainment of neural activity over sensory areas in controls and patients, but, in patients, a deficient translation of the adjustment to the task rhythm to motor circuits. We suggest that the reduced activation reflects not merely altered resonance to rhythmic external events, but a compromised recruitment of an endogenous response reflecting internal rhythm generation.

NEW & NOTEWORTHY Previous studies suggest that motor cortical activity in PD patients has a reduced tendency to entrain to regular environmental patterns. This study demonstrates that the deficient entrainment in PD concerns the motor system only, by showing equally strong entrainment of neural activity over sensory areas in controls and patients but, in patients, a deficient translation of this adjustment to the task rhythm to motor circuits.
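The "modulation depth of beta power" referenced in this abstract is, in one common form, simply the relative swing of band power across the stimulus cycle. A hypothetical sketch using the Michelson-contrast definition (the paper's exact index may differ):

```python
import numpy as np

def modulation_depth(power):
    """Depth of rhythmic power modulation across one stimulus cycle:
    (max - min) / (max + min), so 0 = flat power and 1 = power falling
    to zero at the trough."""
    power = np.asarray(power, dtype=float)
    return (power.max() - power.min()) / (power.max() + power.min())

# Beta power sampled across one inter-stimulus cycle (toy values):
phase = np.linspace(0, 2 * np.pi, 60)
deeply_modulated = 1.0 + 0.8 * np.cos(phase)   # strong cyclic suppression/rebound
weakly_modulated = 1.0 + 0.1 * np.cos(phase)   # nearly flat power
```

Under this definition, a reduced engagement of motor areas would show up as a smaller depth value for the patient group.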
Affiliation(s)
- Erik S Te Woerd
- Radboud University Medical Centre, Dept. of Neurology, Radboud University, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Peter Praamstra
- Radboud University Medical Centre, Dept. of Neurology, Radboud University, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
78
Breska A, Deouell LY. Neural mechanisms of rhythm-based temporal prediction: Delta phase-locking reflects temporal predictability but not rhythmic entrainment. PLoS Biol 2017; 15:e2001665. [PMID: 28187128] [PMCID: PMC5302287] [DOI: 10.1371/journal.pbio.2001665]
Abstract
Predicting the timing of upcoming events enables efficient resource allocation and action preparation. Rhythmic streams, such as music, speech, and biological motion, constitute a pervasive source for temporal predictions. Widely accepted entrainment theories postulate that rhythm-based predictions are mediated by synchronizing low-frequency neural oscillations to the rhythm, as indicated by increased phase concentration (PC) of low-frequency neural activity for rhythmic compared to random streams. However, we show here that PC enhancement in scalp recordings is not specific to rhythms but is observed to the same extent in less periodic streams if they enable memory-based prediction. This is inconsistent with the predictions of a computational entrainment model of stronger PC for rhythmic streams. Anticipatory change in alpha activity and facilitation of electroencephalogram (EEG) manifestations of response selection are also comparable between rhythm- and memory-based predictions. However, rhythmic sequences uniquely result in obligatory depression of preparation-related premotor brain activity when an on-beat event is omitted, even when it is strategically beneficial to maintain preparation, leading to larger behavioral costs for violation of prediction. Thus, while our findings undermine the validity of PC as a sign of rhythmic entrainment, they constitute the first electrophysiological dissociation, to our knowledge, between mechanisms of rhythmic predictions and of memory-based predictions: the former obligatorily lead to resonance-like preparation patterns (that are in line with entrainment), while the latter allow flexible resource allocation in time regardless of periodicity in the input. Taken together, they delineate the neural mechanisms of three distinct modes of preparation: continuous vigilance, interval-timing-based prediction and rhythm-based prediction.
Affiliation(s)
- Assaf Breska
- Department of Psychology, Hebrew University, Jerusalem, Israel
- Leon Y. Deouell
- Department of Psychology, Hebrew University, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
79
Manning FC, Harris J, Schutz M. Temporal prediction abilities are mediated by motor effector and rhythmic expertise. Exp Brain Res 2016; 235:861-871. [DOI: 10.1007/s00221-016-4845-8]
80
Auditory cortical delta-entrainment interacts with oscillatory power in multiple fronto-parietal networks. Neuroimage 2016; 147:32-42. [PMID: 27903440] [PMCID: PMC5315055] [DOI: 10.1016/j.neuroimage.2016.11.062]
Abstract
The timing of slow auditory cortical activity aligns to the rhythmic fluctuations in speech. This entrainment is considered to be a marker of the prosodic and syllabic encoding of speech, and has been shown to correlate with intelligibility. Yet, whether and how auditory cortical entrainment is influenced by the activity in other speech-relevant areas remains unknown. Using source-localized MEG data, we quantified the dependency of auditory entrainment on the state of oscillatory activity in fronto-parietal regions. We found that delta band entrainment interacted with the oscillatory activity in three distinct networks. First, entrainment in the left anterior superior temporal gyrus (STG) was modulated by beta power in orbitofrontal areas, possibly reflecting predictive top-down modulations of auditory encoding. Second, entrainment in the left Heschl's Gyrus and anterior STG was dependent on alpha power in central areas, in line with the importance of motor structures for phonological analysis. And third, entrainment in the right posterior STG modulated theta power in parietal areas, consistent with the engagement of semantic memory. These results illustrate the topographical network interactions of auditory delta entrainment and reveal distinct cross-frequency mechanisms by which entrainment can interact with different cognitive processes underlying speech perception.

Highlights:
- We study auditory cortical speech entrainment from a network perspective.
- Found three distinct networks interacting with delta-entrainment in auditory cortex.
- Entrainment is modulated by frontal beta power, possibly indexing predictions.
- Central alpha power interacts with entrainment, suggesting motor involvement.
- Parietal theta is modulated by entrainment, suggesting working memory compensation.
81
Su YH. Visual Enhancement of Illusory Phenomenal Accents in Non-Isochronous Auditory Rhythms. PLoS One 2016; 11:e0166880. [PMID: 27880850] [PMCID: PMC5120798] [DOI: 10.1371/journal.pone.0166880]
Abstract
Musical rhythms encompass temporal patterns that often yield regular metrical accents (e.g., a beat). There have been mixed results regarding perception as a function of metrical saliency, namely, whether sensitivity to a deviant was greater in metrically stronger or weaker positions. Besides, effects of metrical position have not been examined in non-isochronous rhythms, or with respect to multisensory influences. This study was concerned with two main issues: (1) In non-isochronous auditory rhythms with clear metrical accents, how would sensitivity to a deviant be modulated by metrical positions? (2) Would the effects be enhanced by multisensory information? Participants listened to strongly metrical rhythms with or without watching a point-light figure dance to the rhythm in the same meter, and detected a slight loudness increment. Both conditions were presented with or without an auditory interference that served to impair auditory metrical perception. Sensitivity to a deviant was found greater in weak beat than in strong beat positions, consistent with the Predictive Coding hypothesis and the idea of metrically induced illusory phenomenal accents. The visual rhythm of dance hindered auditory detection, but more so when the latter was itself less impaired. This pattern suggested that the visual and auditory rhythms were perceptually integrated to reinforce metrical accentuation, yielding more illusory phenomenal accents and thus lower sensitivity to deviants, in a manner consistent with the principle of inverse effectiveness. Results were discussed in the predictive framework for multisensory rhythms involving observed movements and possible mediation of the motor system.
Collapse
Affiliation(s)
- Yi-Huang Su
- Department of Movement Science, Faculty of Sport and Health Sciences, Technical University of Munich, Munich, Germany
| |
Collapse
|
82
|
Wohlgemuth MJ, Kothari NB, Moss CF. Action Enhances Acoustic Cues for 3-D Target Localization by Echolocating Bats. PLoS Biol 2016; 14:e1002544. [PMID: 27608186 PMCID: PMC5015854 DOI: 10.1371/journal.pbio.1002544] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2016] [Accepted: 08/04/2016] [Indexed: 11/19/2022] Open
Abstract
Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat’s adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision. As an echolocating bat tracks a moving target, it produces head waggles and adjusts the separation of the tips of its ears to enhance cues for target detection and localization. These findings suggest parallels in active sensing between echolocation and vision. As animals operate in the natural environment, they must detect and process relevant sensory information embedded in complex and noisy signals. One strategy to overcome this challenge is to use active sensing or behavioral adjustments to extract sensory information from a selected region of the environment. 
We studied one of nature’s champions in auditory active sensing—the echolocating bat—to understand how this animal extracts task-relevant acoustic cues to detect and track a moving target. The bat produces high-frequency vocalizations and processes information carried by returning echoes to navigate and catch prey. This animal serves as an excellent model of active sensing because both sonar signal transmission and echo reception are under the animal’s active control. We used high-speed stereo video images of the bat’s head and ear movements, along with synchronized audio recordings, to study how the bat coordinates adaptive motor behaviors when detecting and tracking moving prey. We found that the bat synchronizes changes in sonar vocal production with changes in the movements of the head and ears to enhance acoustic cues for target detection and localization.
Collapse
Affiliation(s)
- Melville J. Wohlgemuth
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Ninad B. Kothari
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
| | - Cynthia F. Moss
- Department of Psychology and Institute for Systems Research, Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland, United States of America
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
| |
Collapse
|
83
|
Hyafil A, Giraud AL, Fontolan L, Gutkin B. Neural Cross-Frequency Coupling: Connecting Architectures, Mechanisms, and Functions. Trends Neurosci 2015; 38:725-740. [PMID: 26549886 DOI: 10.1016/j.tins.2015.09.001] [Citation(s) in RCA: 265] [Impact Index Per Article: 29.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2015] [Revised: 08/14/2015] [Accepted: 09/01/2015] [Indexed: 10/22/2022]
Abstract
Neural oscillations are ubiquitously observed in the mammalian brain, but it has proven difficult to tie oscillatory patterns to specific cognitive operations. Notably, the coupling between neural oscillations at different timescales has recently received much attention, both from experimentalists and theoreticians. We review the mechanisms underlying various forms of this cross-frequency coupling. We show that different types of neural oscillators and cross-frequency interactions yield distinct signatures in neural dynamics. Finally, we associate these mechanisms with several putative functions of cross-frequency coupling, including neural representations of multiple environmental items, communication over distant areas, internal clocking of neural processes, and modulation of neural processing based on temporal predictions.
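One of the cross-frequency interactions the authors review, phase-amplitude coupling, is commonly quantified with a Tort-style modulation index. A minimal sketch, assuming illustrative bands, filter settings, and synthetic signals that are not taken from the paper:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(signal, fs, phase_band=(4, 8), amp_band=(30, 50), n_bins=18):
    """Tort-style phase-amplitude coupling index: how non-uniformly the
    fast-band amplitude is distributed over the slow-band phase
    (0 = no coupling, larger = stronger coupling)."""
    phase = np.angle(hilbert(bandpass(signal, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(signal, *amp_band, fs)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= bins[i]) & (phase < bins[i + 1])].mean()
                         for i in range(n_bins)])
    p = mean_amp / mean_amp.sum()  # normalized amplitude-by-phase distribution
    # KL divergence from the uniform distribution, scaled to [0, 1]
    return (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)

# Synthetic test case: gamma amplitude locked to theta phase vs. constant gamma
fs = 500
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 40 * t)
```

On the synthetic data, the coupled signal, whose gamma amplitude waxes and wanes with theta phase, yields a markedly higher index than the uncoupled control.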
Collapse
Affiliation(s)
- Alexandre Hyafil
- Universitat Pompeu Fabra, Theoretical and Computational Neuroscience, Roc Boronat 138, 08018 Barcelona, Spain; Research Unit, Parc Sanitari Sant Joan de Déu and Universitat de Barcelona, Esplugues de Llobregat, Barcelona, Spain.
| | - Anne-Lise Giraud
- Department of Neuroscience, University of Geneva, Campus Biotech, 9 chemin des Mines, 1211 Geneva, Switzerland
| | - Lorenzo Fontolan
- Department of Neuroscience, University of Geneva, Campus Biotech, 9 chemin des Mines, 1211 Geneva, Switzerland
| | - Boris Gutkin
- Group for Neural Theory, Institut National de la Santé et de la Recherche Médicale (INSERM) Unité 960, Département d'Etudes Cognitives, Ecole Normale Supérieure, 29 rue d'Ulm, 75005 Paris, France; Centre for Cognition and Decision Making, National Research University Higher School of Economics, Myasnitskaya Street 20, Moscow 101000, Russia
| |
Collapse
|
84
|
Lima CF, Krishnan S, Scott SK. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery. Trends Neurosci 2016; 39:527-542. [PMID: 27381836 PMCID: PMC5441995 DOI: 10.1016/j.tins.2016.06.003] [Citation(s) in RCA: 156] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Revised: 05/26/2016] [Accepted: 06/09/2016] [Indexed: 11/28/2022]
Abstract
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception. Hearing and imagining sounds, including speech, vocalizations, and music, can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions. Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery tasks. Responses during auditory processing primarily peak in pre-SMA and in the boundary area between pre-SMA and SMA. This boundary area is crucially involved in the control of speech and vocal production, suggesting that sounds engage this region in an effector-specific manner. Activating sound-related motor representations in SMA and pre-SMA might facilitate behavioral responses to sounds. This might also support a flexible generation of sensory predictions based on previous experience to enable imagery and guide perception.
Collapse
Affiliation(s)
- César F Lima
- Institute of Cognitive Neuroscience, University College London, London, UK
| | - Saloni Krishnan
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK.
| |
Collapse
|
85
|
Abstract
Predicting not only what will happen, but also when it will happen is extremely helpful for optimizing perception and action. Temporal predictions driven by periodic stimulation increase perceptual sensitivity and reduce response latencies. At the neurophysiological level, a single mechanism has been proposed to mediate this twofold behavioral improvement: the rhythmic entrainment of slow cortical oscillations to the stimulation rate. However, temporal regularities can occur in aperiodic contexts, suggesting that temporal predictions per se may be dissociable from entrainment to periodic sensory streams. We investigated this possibility in two behavioral experiments, asking human participants to detect near-threshold auditory tones embedded in streams whose temporal and spectral properties were manipulated. While our findings confirm that periodic stimulation reduces response latencies, in agreement with the hypothesis of a stimulus-driven entrainment of neural excitability, they further reveal that this motor facilitation can be dissociated from the enhancement of auditory sensitivity. Perceptual sensitivity improvement is unaffected by the nature of temporal regularities (periodic vs aperiodic), but contingent on the co-occurrence of a fulfilled spectral prediction. Altogether, the dissociation between predictability and periodicity demonstrates that distinct mechanisms flexibly and synergistically operate to facilitate perception and action.
Collapse
|
86
|
Park H, Kayser C, Thut G, Gross J. Lip movements entrain the observers' low-frequency brain oscillations to facilitate speech intelligibility. eLife 2016; 5:e14521. [PMID: 27146891 PMCID: PMC4900800 DOI: 10.7554/elife.14521] [Citation(s) in RCA: 78] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2016] [Accepted: 05/03/2016] [Indexed: 12/02/2022] Open
Abstract
During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and the speaker's lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending to visual speech enhances the coherence between activity in visual cortex and the speaker's lips. Further, we identified a significant partial coherence between left motor cortex and lip movements, and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing. DOI:http://dx.doi.org/10.7554/eLife.14521.001 People are able to communicate effectively with each other even in very noisy places where it is difficult to actually hear what others are saying. In a face-to-face conversation, people detect and respond to many physical cues, including body posture, facial expressions, head and eye movements and gestures, alongside the sound cues. Lip movements are particularly important and contain enough information to allow trained observers to understand speech even if they cannot hear the speech itself. It is known that brain waves in listeners are synchronized with the rhythms in speech, especially the syllables. This is thought to establish a channel for communication, similar to tuning a radio to a certain frequency to listen to a certain radio station. Park et al. studied whether listeners' brain waves also align to the speaker's lip movements during continuous speech and whether this is important for understanding the speech. The experiments reveal that a part of the brain that processes visual information, called the visual cortex, produces brain waves that are synchronized to the rhythm of syllables in continuous speech. This synchronization was more precise in a complex situation where lip movements would be more important to understand speech. Park et al. also found that the area of the observer's brain that controls the lips (the motor cortex) produced brain waves that were synchronized to lip movements. Volunteers whose motor cortex was more synchronized to the lip movements understood speech better. This supports the idea that brain areas that are used for producing speech are also important for understanding speech. Future challenges include understanding how synchronization of brain waves with the rhythms of speech helps us to understand speech, and how the brain waves produced by the visual and motor areas interact. DOI:http://dx.doi.org/10.7554/eLife.14521.002
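The partialling step described here, removing the auditory speech signal from the lip-brain coherence, can be sketched with cross-spectral densities. A rough illustration, assuming toy signals and segment lengths; the study's actual MEG pipeline is not reproduced:

```python
import numpy as np
from scipy.signal import csd

def coherence(x, y, fs, nperseg=512):
    """Ordinary magnitude-squared coherence, for comparison."""
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, Syy = csd(y, y, fs=fs, nperseg=nperseg)
    return f, np.abs(Sxy) ** 2 / (np.real(Sxx) * np.real(Syy))

def partial_coherence(x, y, z, fs, nperseg=512):
    """Coherence between x and y after linearly removing z, using
    partial spectra S_ab|z = S_ab - S_az * S_zb / S_zz."""
    sigs = [x, y, z]
    S = {}
    for i in range(3):
        for j in range(3):
            f, S[i, j] = csd(sigs[i], sigs[j], fs=fs, nperseg=nperseg)
    Sxy = S[0, 1] - S[0, 2] * S[2, 1] / S[2, 2]
    Sxx = np.real(S[0, 0] - S[0, 2] * S[2, 0] / S[2, 2])
    Syy = np.real(S[1, 1] - S[1, 2] * S[2, 1] / S[2, 2])
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

# Toy example: x and y are related only through a common 4-Hz driver z,
# so partialling out z should abolish their coherence at 4 Hz.
rng = np.random.default_rng(0)
fs = 100
t = np.arange(0, 60, 1 / fs)
z = np.sin(2 * np.pi * 4 * t) + 0.2 * rng.standard_normal(len(t))
x = z + 0.5 * rng.standard_normal(len(t))
y = z + 0.5 * rng.standard_normal(len(t))
f, c_full = coherence(x, y, fs)
_, c_part = partial_coherence(x, y, z, fs)
k = int(np.argmin(np.abs(f - 4)))
```

In the toy example, ordinary coherence at 4 Hz is high while the partial coherence collapses, mirroring how the lip-brain coherence can be tested over and above the shared auditory signal.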
Collapse
Affiliation(s)
- Hyojin Park
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| | - Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| |
Collapse
|
87
|
|
88
|
Moberget T, Ivry RB. Cerebellar contributions to motor control and language comprehension: searching for common computational principles. Ann N Y Acad Sci 2016; 1369:154-71. [PMID: 27206249 PMCID: PMC5260470 DOI: 10.1111/nyas.13094] [Citation(s) in RCA: 67] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
The past 25 years have seen the functional domain of the cerebellum extend beyond the realm of motor control, with considerable discussion of how this subcortical structure contributes to cognitive domains including attention, memory, and language. Drawing on evidence from neuroanatomy, physiology, neuropsychology, and computational work, sophisticated models have been developed to describe cerebellar function in sensorimotor control and learning. In contrast, mechanistic accounts of how the cerebellum contributes to cognition have remained elusive. Inspired by the homogeneous cerebellar microanatomy and a desire for parsimony, many researchers have sought to extend mechanistic ideas from motor control to cognition. One influential hypothesis centers on the idea that the cerebellum implements internal models, representations of the context-specific dynamics of an agent's interactions with the environment, enabling predictive control. We briefly review cerebellar anatomy and physiology and the internal model hypothesis as applied in the motor domain, before turning to extensions of these ideas in the linguistic domain, focusing on speech perception and semantic processing. While recent findings are consistent with this computational generalization, they also raise challenging questions regarding the nature of cerebellar learning, and may thus inspire revisions of our views on the role of the cerebellum in sensorimotor control.
Collapse
Affiliation(s)
- Torgeir Moberget
- Norwegian Centre for Mental Disorders Research (NORMENT), KG Jebsen Centre for Psychosis Research, Division of Mental Health and Addiction, Oslo University Hospital, Norway
| | - Richard B. Ivry
- Department of Psychology, and the Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California
| |
Collapse
|
89
|
Chang A, Bosnyak DJ, Trainor LJ. Unpredicted Pitch Modulates Beta Oscillatory Power during Rhythmic Entrainment to a Tone Sequence. Front Psychol 2016; 7:327. [PMID: 27014138 PMCID: PMC4782565 DOI: 10.3389/fpsyg.2016.00327] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Accepted: 02/21/2016] [Indexed: 11/13/2022] Open
Abstract
Extracting temporal regularities in external stimuli in order to predict upcoming events is an essential aspect of perception. Fluctuations in induced power of beta band (15–25 Hz) oscillations in auditory cortex are involved in predictive timing during rhythmic entrainment, but whether such fluctuations are affected by prediction in the spectral (frequency/pitch) domain remains unclear. We tested whether unpredicted (i.e., unexpected) pitches in a rhythmic tone sequence modulate beta band activity by recording EEG while participants passively listened to isochronous auditory oddball sequences with occasional unpredicted deviant pitches at two different presentation rates. The results showed that power in the low-beta band (15–20 Hz) was larger around 200–300 ms following deviant tones compared to standard tones, and this effect was larger when the deviant tones were less predicted. Our results suggest that induced beta activity in auditory cortex is consistent with a role in predicting both "when" upcoming sounds will occur (timing) and the precision error of predictions about "what" they will be (here, spectral content). We suggest, further, that both timing and content predictions may co-modulate beta oscillations via attention. These findings extend earlier work on neural oscillations and help elucidate the functional significance of beta oscillations for sensory prediction in perception.
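The induced-power measure behind this result, band-limited power in a fixed post-stimulus window, can be approximated with a band-pass filter and Hilbert envelope. A toy sketch, in which the filter order, band edges, and synthetic "deviant" bursts are assumptions for illustration only:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def post_event_beta_power(eeg, fs, onsets, band=(15, 20), window=(0.2, 0.3)):
    """Mean low-beta power in a fixed window after each event onset
    (here 200-300 ms, matching the deviant-related effect window)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2  # squared envelope
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return float(np.mean([power[s + lo:s + hi].mean() for s in onsets]))

# Toy data: beta bursts follow "deviant" onsets but not "standard" ones
rng = np.random.default_rng(2)
fs = 500
eeg = 0.1 * rng.standard_normal(fs * 20)
t_burst = np.arange(int(0.1 * fs)) / fs              # 100-ms burst
burst = np.sin(2 * np.pi * 17.5 * t_burst)
standard_onsets = [int(s * fs) for s in (1, 3, 5, 7)]
deviant_onsets = [int(s * fs) for s in (9, 11, 13, 15)]
for s in deviant_onsets:
    eeg[s + int(0.2 * fs):s + int(0.3 * fs)] += burst
```

With bursts injected 200-300 ms after the deviant onsets, the measure recovers the expected deviant-greater-than-standard pattern on the synthetic data.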
Collapse
Affiliation(s)
- Andrew Chang
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
| | - Dan J Bosnyak
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada
| | - Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| |
Collapse
|
90
|
Nozaradan S, Peretz I, Keller PE. Individual Differences in Rhythmic Cortical Entrainment Correlate with Predictive Behavior in Sensorimotor Synchronization. Sci Rep 2016; 6:20612. [PMID: 26847160 PMCID: PMC4742877 DOI: 10.1038/srep20612] [Citation(s) in RCA: 94] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2015] [Accepted: 01/08/2016] [Indexed: 11/21/2022] Open
Abstract
The current study aims at characterizing the mechanisms that allow humans to entrain the mind and body to incoming rhythmic sensory inputs in real time. We addressed this unresolved issue by examining the relationship between covert neural processes and overt behavior in the context of musical rhythm. We measured temporal prediction abilities, sensorimotor synchronization accuracy and neural entrainment to auditory rhythms as captured using an EEG frequency-tagging approach. Importantly, movement synchronization accuracy with a rhythmic beat could be explained by the amplitude of neural activity selectively locked with the beat period when listening to the rhythmic inputs. Furthermore, stronger endogenous neural entrainment at the beat frequency was associated with superior temporal prediction abilities. Together, these results reveal a direct link between cortical and behavioral measures of rhythmic entrainment, thus providing evidence that frequency-tagged brain activity has functional relevance for beat perception and synchronization.
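The frequency-tagging measure used here amounts to reading out spectral amplitude at the beat frequency, typically after subtracting the noise estimated from neighboring frequency bins. A minimal sketch, assuming illustrative bin choices and a synthetic signal rather than the study's parameters:

```python
import numpy as np

def tagged_amplitude(signal, fs, target_hz, n_neighbors=5):
    """Amplitude at the FFT bin closest to target_hz, minus the mean
    amplitude of surrounding bins -- the noise-subtraction step commonly
    used with frequency-tagged steady-state responses."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    # Neighboring bins, skipping the two immediately adjacent on each side
    neighbors = np.concatenate([spec[k - n_neighbors - 2:k - 2],
                                spec[k + 3:k + n_neighbors + 3]])
    return spec[k] - neighbors.mean()

# Toy "EEG": a 2.4-Hz beat-locked response buried in broadband noise
rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 50, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 2.4 * t) + rng.standard_normal(len(t))
```

On the synthetic signal, the noise-corrected amplitude at the tagged 2.4-Hz frequency stands out clearly, while a control frequency with no locked response stays near zero.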
Collapse
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Canada
| | - Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Canada
| | - Peter E. Keller
- The MARCS Institute, Western Sydney University, Sydney, Australia
- Music Cognition & Action Group, Max Planck Institute for Human Cognitive & Brain Sciences, Leipzig, Germany
| |
Collapse
|
91
|
Zoefel B, VanRullen R. The Role of High-Level Processes for Oscillatory Phase Entrainment to Speech Sound. Front Hum Neurosci 2015; 9:651. [PMID: 26696863 PMCID: PMC4667100 DOI: 10.3389/fnhum.2015.00651] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 11/16/2015] [Indexed: 11/13/2022] Open
Abstract
Constantly bombarded with input, the brain needs to select relevant information while ignoring the irrelevant rest. Neural oscillations may provide a powerful tool for this: their high-excitability phase entrains to important input while their low-excitability phase attenuates irrelevant information. Indeed, the alignment between brain oscillations and speech improves intelligibility and helps dissociate speakers during a "cocktail party". Although phase entrainment to speech sound is well investigated, the contributions of low- and high-level processes have only recently begun to be understood. Here, we review those findings and concentrate on three main results: (1) Phase entrainment to speech sound is modulated by attention or predictions, likely supported by top-down signals, indicating that higher-level processes are involved in the brain's adjustment to speech. (2) As phase entrainment to speech can be observed without systematic fluctuations in sound amplitude or spectral content, it does not merely reflect a passive steady-state "ringing" of the cochlea, but entails a higher-level process. (3) The role of intelligibility for phase entrainment is debated. Recent results suggest that intelligibility modulates the behavioral consequences of entrainment, rather than directly affecting the strength of entrainment in auditory regions. We conclude that phase entrainment to speech reflects a sophisticated mechanism: several high-level processes interact to optimally align neural oscillations with predicted events of high relevance, even when they are hidden in a continuous stream of background noise.
Collapse
Affiliation(s)
- Benedikt Zoefel
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, Toulouse, France
| | - Rufin VanRullen
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, Toulouse, France
| |
Collapse
|
92
|
Large EW, Herrera JA, Velasco MJ. Neural Networks for Beat Perception in Musical Rhythm. Front Syst Neurosci 2015; 9:159. [PMID: 26635549 PMCID: PMC4658578 DOI: 10.3389/fnsys.2015.00159] [Citation(s) in RCA: 145] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2015] [Accepted: 11/02/2015] [Indexed: 11/30/2022] Open
Abstract
Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.
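The canonical-model idea can be illustrated with a single damped Hopf-type oscillator driven by a rhythmic input. This is a simplified one-unit stand-in, not the authors' network model, and all parameters are illustrative:

```python
import numpy as np

def canonical_oscillator(stimulus, fs, f_nat=2.0, alpha=-1.0, beta=-1.0,
                         forcing=1.0):
    """Euler integration of a damped canonical (Hopf-type) oscillator,
    dz/dt = z * (alpha + i*2*pi*f_nat + beta*|z|^2) + forcing*s(t).
    With alpha < 0 the unit is damped and only sustains oscillation
    when rhythmically driven near its natural frequency."""
    z = np.zeros(len(stimulus), dtype=complex)
    z[0] = 0.01 + 0j
    dt = 1.0 / fs
    for n in range(len(stimulus) - 1):
        dz = z[n] * (alpha + 2j * np.pi * f_nat + beta * abs(z[n]) ** 2) \
             + forcing * stimulus[n]
        z[n + 1] = z[n] + dt * dz
    return z

# A ~2-Hz click train sustains the damped oscillator; silence does not
fs = 200
t = np.arange(0, 10, 1 / fs)
clicks = (np.sin(2 * np.pi * 2.0 * t) > 0.99).astype(float)
driven = canonical_oscillator(clicks, fs)
silent = canonical_oscillator(np.zeros_like(t), fs)
```

Because the clicks arrive at the oscillator's natural period, each kick lands at the same phase and the oscillation is sustained, a toy version of how endogenous oscillation can carry a pulse percept between stimulus events.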
Collapse
Affiliation(s)
- Edward W Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA; Department of Physics, University of Connecticut, Storrs, CT, USA
| | - Jorge A Herrera
- Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, USA
| | - Marc J Velasco
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, FL, USA
| |
Collapse
|