1
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
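The point-process GLM framework this review builds on can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' code; all parameters invented): spike counts per time bin are Poisson with a log-linear dependence on stimulus covariates, and the stimulus filter is recovered by (nearly unpenalized) maximum likelihood. A time-varying GLM would additionally let the weights change over the course of the trial.

```python
# Hypothetical sketch of a point-process GLM (all parameters invented).
# Spike counts per time bin are Poisson with rate exp(w . x_t); a time-varying
# GLM would additionally allow w to change over the course of the trial.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
T = 5000                                  # number of time bins
X = rng.normal(size=(T, 3))               # stimulus covariates per bin
w_true = np.array([0.8, -0.5, 0.3])       # "ground-truth" stimulus filter
rate = np.exp(X @ w_true - 1.0)           # exponential link, baseline exp(-1)
y = rng.poisson(rate)                     # simulated spike counts

# Nearly unpenalized Poisson maximum likelihood recovers the filter.
glm = PoissonRegressor(alpha=1e-6).fit(X, y)
w_hat = glm.coef_
```

The exponential inverse link matches the canonical Poisson GLM; decoding would then invert this fitted encoding model on single trials.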
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
2
Köhler MHA, Weisz N. Cochlear Theta Activity Oscillates in Phase Opposition during Interaural Attention. J Cogn Neurosci 2023; 35:588-602. PMID: 36626349; DOI: 10.1162/jocn_a_01959.
Abstract
It is widely established that sensory perception is a rhythmic process as opposed to a continuous one. In the context of auditory perception, this effect has only been established at the cortical and behavioral levels. Yet, the unique architecture of the auditory sensory system allows its primary sensory cortex to modulate the processes of its sensory receptors at the cochlear level. Previously, we demonstrated the existence of a genuine cochlear theta (∼6-Hz) rhythm that is modulated in amplitude by intermodal selective attention. As that study's paradigm was not suited to assess attentional effects on the oscillatory phase of cochlear activity, the question of whether attention can also affect the temporal organization of the cochlea's ongoing activity remained open. The present study utilizes an interaural attention paradigm to investigate ongoing otoacoustic activity during a stimulus-free cue-target interval and an omission period of the auditory target in humans. We replicated the existence of the cochlear theta rhythm. Importantly, we found significant phase opposition between the two ears and attention conditions of anticipatory as well as cochlear oscillatory activity during target presentation. Yet, the amplitude was unaffected by interaural attention. These results are the first to demonstrate that intermodal and interaural attention deploy different aspects of excitation and inhibition at the first level of auditory processing. Whereas intermodal attention modulates the level of cochlear activity, interaural attention modulates the timing.
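Phase-opposition analyses of this kind are commonly built from inter-trial coherence (ITC). A toy sketch under our own assumptions (simulated von Mises phases; concentration and trial counts invented, not taken from the paper):

```python
# Toy phase-opposition analysis (our construction; concentration and trial
# counts are invented). ITC = length of the mean resultant phase vector.
import numpy as np

def itc(phases):
    """Inter-trial coherence of phase angles (0 = uniform, 1 = aligned)."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

rng = np.random.default_rng(1)
n_trials = 200
# Attend-left trials cluster near 0 rad, attend-right trials near pi.
ph_left = rng.vonmises(0.0, 5.0, n_trials)
ph_right = rng.vonmises(np.pi, 5.0, n_trials)

# Phase opposition sum: high when each condition is phase-locked on its own
# but the pooled trials are not.
pos = itc(ph_left) + itc(ph_right) - 2 * itc(np.concatenate([ph_left, ph_right]))
```

With opposite preferred phases, each condition's ITC is high while the pooled ITC collapses, which is exactly the signature reported between the two ears.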
Affiliation(s)
- Nathan Weisz
- University of Salzburg; Paracelsus Medical University, Salzburg, Austria
3
Elmer S, Besson M, Rodriguez-Fornells A, Giroud N. Foreign speech sound discrimination and associative word learning lead to a fast reconfiguration of resting-state networks. Neuroimage 2023; 271:120026. PMID: 36921678; DOI: 10.1016/j.neuroimage.2023.120026.
Abstract
Learning new words in an unfamiliar language is a complex endeavor that requires the orchestration of multiple perceptual and cognitive functions. Although the neural mechanisms governing word learning are becoming better understood, little is known about the predictive value of resting-state (RS) metrics for foreign word discrimination and word learning attainment. In addition, it is still unknown which of the multistep processes involved in word learning have the potential to rapidly reconfigure RS networks. To address these research questions, we recorded electroencephalography (EEG) from forty participants and examined scalp-based power spectra, source-based spectral density maps and functional connectivity metrics before (RS1), in between (RS2) and after (RS3) a series of tasks that are known to facilitate the acquisition of new words in a foreign language, namely word discrimination, word-referent mapping and semantic generalization. Power spectra at the scalp level consistently revealed a reconfiguration of RS networks as a function of foreign word discrimination (RS1 vs. RS2) and word learning (RS1 vs. RS3) tasks in the delta, lower and upper alpha, and upper beta frequency ranges. In contrast, functional reconfigurations at the source level were restricted to the theta (spectral density maps) and to the lower and upper alpha frequency bands (spectral density maps and functional connectivity). Notably, scalp RS changes related to the word discrimination tasks (difference between RS2 and RS1) correlated with word discrimination abilities (upper alpha band) and semantic generalization performance (theta and upper alpha bands), whereas functional changes related to the word learning tasks (difference between RS3 and RS1) correlated with word discrimination scores (lower alpha band).
Taken together, these results highlight that foreign speech sound discrimination and word learning have the potential to rapidly reconfigure RS networks at multiple functional scales.
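The scalp power-spectral comparisons between resting-state blocks reduce, in essence, to band-limited power estimates. A generic sketch (single simulated channel; sampling rate, duration and amplitudes are our own, not the study's):

```python
# Generic sketch of a band-power comparison between two resting-state blocks
# (single simulated channel; all parameters invented).
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 60, 1 / fs)                        # one minute per block
rng = np.random.default_rng(2)
rs1 = rng.normal(size=t.size)                       # baseline: noise only
rs2 = rs1 + 2.0 * np.sin(2 * np.pi * 10 * t)        # reconfigured: added alpha

def band_power(x, lo, hi):
    f, pxx = welch(x, fs=fs, nperseg=1024)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum() * (f[1] - f[0])          # integrate PSD over band

alpha_rs1 = band_power(rs1, 8, 12)
alpha_rs2 = band_power(rs2, 8, 12)
```

The same `band_power` helper applied per band (delta, theta, alpha, beta) yields the feature vectors such studies carry into statistics.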
Affiliation(s)
- Stefan Elmer
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Bellvitge Biomedical Research Institute, Barcelona, Spain; Competence center Language & Medicine, University of Zurich, Switzerland.
- Mireille Besson
- Laboratoire de Neurosciences Cognitives, Université Publique de France, CNRS & Aix-Marseille University, Marseille, France
- Antoni Rodriguez-Fornells
- Bellvitge Biomedical Research Institute, Barcelona, Spain; University of Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Nathalie Giroud
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
4
Chalas N, Daube C, Kluger DS, Abbasi O, Nitsch R, Gross J. Speech onsets and sustained speech contribute differentially to delta and theta speech tracking in auditory cortex. Cereb Cortex 2023; 33:6273-6281. PMID: 36627246; DOI: 10.1093/cercor/bhac502.
Abstract
When we attentively listen to an individual's speech, our brain activity dynamically aligns to the incoming acoustic input at multiple timescales. Although this systematic alignment between ongoing brain activity and speech in auditory brain areas is well established, the acoustic events that drive this phase-locking are not fully understood. Here, we use magnetoencephalographic recordings of 24 human participants (12 females) while they were listening to a 1 h story. We show that whereas speech-brain coupling is associated with sustained acoustic fluctuations in the speech envelope in the theta-frequency range (4-7 Hz), speech tracking in the low-frequency delta band (below 1 Hz) was strongest around onsets of speech, such as the beginning of a sentence. Crucially, delta tracking in bilateral auditory areas was not sustained after onsets, suggesting that delta tracking during continuous speech perception is driven by speech onsets. We conclude that both onsets and sustained components of speech contribute differentially to speech tracking in delta- and theta-frequency bands, orchestrating sampling of continuous speech. Thus, our results suggest a temporal dissociation of acoustically driven oscillatory activity in auditory areas during speech tracking, providing valuable implications for the orchestration of speech tracking at multiple timescales.
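Speech-brain coupling of the kind measured here is often quantified as magnitude-squared coherence between the speech envelope and the neural signal in a given band. A self-contained toy version (simulated envelope and lagged "brain" signal; lag, noise level and bands are our assumptions):

```python
# Toy band-limited speech tracking measure (all signals simulated; lag and
# noise level invented): coherence between an "envelope" and a "brain" signal.
import numpy as np
from scipy.signal import coherence

fs = 100.0
rng = np.random.default_rng(3)
env = rng.normal(size=int(60 * fs))
env = np.convolve(env, np.ones(25) / 25, mode="same")   # smooth, slow envelope
# The brain signal tracks the envelope at a 100 ms lag, plus noise.
brain = np.roll(env, 10) + 0.5 * rng.normal(size=env.size)

f, coh = coherence(env, brain, fs=fs, nperseg=512)
delta_coh = coh[(f >= 0.5) & (f <= 2.0)].mean()     # strong tracking band
high_coh = coh[(f >= 30) & (f <= 40)].mean()        # no envelope energy here
```

Coherence is insensitive to a fixed tracking delay, which is why the lag leaves the delta-band value high while bands without envelope energy stay near the noise floor.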
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany; Institute for Translational Neuroscience, University of Münster, Albert-Schweitzer-Campus 1, Geb. A9a, Münster, Germany
- Christoph Daube
- Centre for Cognitive Neuroimaging, University of Glasgow, 56-64 Hillhead Street, G12 8QB, Glasgow, United Kingdom
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Albert-Schweitzer-Campus 1, Geb. A9a, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Malmedyweg 15, 48149 Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Fliednerstr. 21, 48149 Münster, Germany
5
David W, Gransier R, Wouters J. Evaluation of phase-locking to parameterized speech envelopes. Front Neurol 2022; 13:852030. PMID: 35989900; PMCID: PMC9382131; DOI: 10.3389/fneur.2022.852030.
Abstract
Humans rely on the temporal processing ability of the auditory system to perceive speech during everyday communication. The temporal envelope of speech is essential for speech perception, particularly envelope modulations below 20 Hz. In the literature, the neural representation of this speech envelope is usually investigated by recording neural phase-locked responses to speech stimuli. However, these phase-locked responses are not only associated with envelope modulation processing, but also with processing of linguistic information at a higher-order level when speech is comprehended. It is thus difficult to disentangle the responses into components from the acoustic envelope itself and the linguistic structures in speech (such as words, phrases and sentences). Another way to investigate neural modulation processing is to use sinusoidal amplitude-modulated stimuli at different modulation frequencies to obtain the temporal modulation transfer function. However, these transfer functions are considerably variable across modulation frequencies and individual listeners. To tackle the issues of both speech and sinusoidal amplitude-modulated stimuli, the recently introduced Temporal Speech Envelope Tracking (TEMPEST) framework proposed the use of stimuli with a distribution of envelope modulations. The framework aims to assess the brain's capability to process temporal envelopes in different frequency bands using stimuli with speech-like envelope modulations. In this study, we provide a proof-of-concept of the framework using stimuli with modulation frequency bands around the syllable and phoneme rate in natural speech. We evaluated whether the evoked phase-locked neural activity correlates with the speech-weighted modulation transfer function measured using sinusoidal amplitude-modulated stimuli in normal-hearing listeners. 
Since many studies on modulation processing employ different metrics and comparing their results is difficult, we included different power- and phase-based metrics and investigated how these metrics relate to each other. Results reveal a strong correspondence across listeners between the neural activity evoked by the speech-like stimuli and the activity evoked by the sinusoidal amplitude-modulated stimuli. Furthermore, strong correspondence was also apparent among the metrics, facilitating comparisons between studies using different metrics. These findings indicate the potential of the TEMPEST framework to efficiently assess the neural capability to process temporal envelope modulations within a frequency band that is important for speech perception.
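Two of the metric families compared in such studies, a phase-based one (inter-trial phase coherence at the modulation frequency) and a power-based one (power of the across-trial average, i.e., evoked power), can be sketched as follows (simulated trials; all parameters invented, not the paper's pipeline):

```python
# Sketch of one phase-based and one power-based phase-locking metric at a
# 4 Hz modulation frequency (simulated trials; all parameters invented).
import numpy as np

fs, dur, f_mod = 128.0, 2.0, 4.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
trials = np.array([np.sin(2 * np.pi * f_mod * t)         # phase-locked response
                   + rng.normal(size=t.size)             # trial-specific noise
                   for _ in range(50)])

spectra = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_mod))                     # FFT bin at 4 Hz

# Phase metric: inter-trial phase coherence (unit phasors averaged over trials).
plv = np.abs(np.mean(spectra[:, k] / np.abs(spectra[:, k])))
# Power metric: power of the across-trial average ("evoked" power) at 4 Hz.
evoked_power = np.abs(np.mean(spectra[:, k])) ** 2
# Control: the same phase metric at a frequency with no locked response.
k_ctrl = np.argmin(np.abs(freqs - 30.0))
plv_ctrl = np.abs(np.mean(spectra[:, k_ctrl] / np.abs(spectra[:, k_ctrl])))
```

Both metrics are driven by the same phase-locked component, which is why they tend to correspond across listeners, as the abstract reports.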
Affiliation(s)
- Wouter David
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
6
Chalas N, Daube C, Kluger DS, Abbasi O, Nitsch R, Gross J. Multivariate analysis of speech envelope tracking reveals coupling beyond auditory cortex. Neuroimage 2022; 258:119395. PMID: 35718023; DOI: 10.1016/j.neuroimage.2022.119395.
Abstract
The systematic alignment of low-frequency brain oscillations with the acoustic speech envelope signal is well established and has been proposed to be crucial for actively perceiving speech. Previous studies investigating speech-brain coupling in source space are restricted to univariate pairwise approaches between brain and speech signals, and therefore speech tracking information in frequency-specific communication channels might be lacking. To address this, we propose a novel multivariate framework for estimating speech-brain coupling in which neural variability from source-derived activity is taken into account along with the rate of change of the envelope's amplitude (its derivative). We applied it in magnetoencephalographic (MEG) recordings while human participants (male and female) listened to one hour of continuous naturalistic speech, showing that a multivariate approach outperforms the corresponding univariate method at low and high frequencies across frontal, motor, and temporal areas. Systematic comparisons revealed that the gain in low frequencies (0.6 - 0.8 Hz) was related to the envelope's rate of change whereas in higher frequencies (from 0.8 to 10 Hz) it was mostly related to the increased neural variability from source-derived cortical areas. Furthermore, following a non-negative matrix factorization approach we found distinct speech-brain components across time and cortical space related to speech processing. We confirm that speech envelope tracking operates mainly in two timescales (δ and θ frequency bands) and we extend those findings showing shorter coupling delays in auditory-related components and longer delays in higher-association frontal and motor components, indicating temporal differences of speech tracking and providing implications for hierarchical stimulus-driven speech processing.
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany.
- Christoph Daube
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
7
The influence of the respiratory cycle on reaction times in sensory-cognitive paradigms. Sci Rep 2022; 12:2586. PMID: 35173204; PMCID: PMC8850565; DOI: 10.1038/s41598-022-06364-8.
Abstract
Behavioural and electrophysiological studies point to an apparent influence of the state of respiration, i.e., whether we inhale or exhale, on brain activity and cognitive performance. Still, the prevalence and relevance of such respiratory-behavioural relations in typical sensory-cognitive tasks remain unclear. We here used a battery of six tasks probing sensory detection, discrimination and short-term memory to address the questions of whether and by how much behaviour covaries with the respiratory cycle. Our results show that participants tend to align their respiratory cycle to the experimental paradigm, in that they tend to inhale around stimulus presentation and exhale when submitting their responses. Furthermore, their reaction times, but not so much their response accuracy, consistently and significantly covary with the respiratory cycle, differing between inhalation and exhalation. This effect is strongest when analysed contingent on the respiratory state around participants' responses. The respective effect sizes of these respiration-behaviour relations are comparable to those seen in other typical experimental manipulations in sensory-cognitive tasks, highlighting the relevance of these effects. Overall, our results support a prominent relation between respiration and sensory-cognitive function and show that sensation is intricately linked to rhythmic bodily or interoceptive functions.
8
Benedetto A, Binda P, Costagli M, Tosetti M, Morrone MC. Predictive visuo-motor communication through neural oscillations. Curr Biol 2021; 31:3401-3408.e4. PMID: 34111403; PMCID: PMC8360767; DOI: 10.1016/j.cub.2021.05.026.
Abstract
The mechanisms coordinating action and perception over time are poorly understood. The sensory cortex needs to prepare for upcoming changes contingent on action, and this requires temporally precise communication that takes into account the variable delays between sensory and motor processing. Several theorists [1, 2] have proposed synchronization of the endogenous oscillatory activity observed in most regions of the brain [3] as the basis for an efficient and flexible communication protocol between distal brain areas [2, 4], a concept known as "communication through coherence." Synchronization of endogenous oscillations [5, 6] occurs after a salient sensory stimulus, such as a flash or a sound [7-11], and after a voluntary action [12-18], and this directly impacts perception, causing performance to oscillate rhythmically over time. Here we introduce a novel fMRI paradigm to probe the neural sources of oscillations, based on the concept of perturbative signals, which overcomes the low temporal resolution of BOLD signals. The assumption is that a synchronized endogenous rhythm will modulate cortical excitability rhythmically, which should be reflected in the BOLD responses to brief stimuli presented at different phases of the oscillation cycle. We record rhythmic oscillations of V1 BOLD synchronized by a simple voluntary action, in phase with behaviorally measured oscillations in visual sensitivity in the theta range. The functional connectivity between V1 and M1 also oscillates at the same rhythm. By demonstrating oscillatory temporal coupling between primary motor and sensory cortices, our results strongly implicate communication through coherence to achieve precise coordination and to encode sensory-motor timing.
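Behavioral oscillations of this kind are typically detected by fitting a sinusoid at the hypothesized frequency to performance as a function of probe delay after the action. A hedged sketch with fabricated accuracy data (frequency, delays and amplitudes are ours, not the study's):

```python
# Hedged sketch of detecting a behavioral oscillation: regress accuracy at
# each probe delay onto sine/cosine regressors at the hypothesized frequency
# (accuracy data fabricated for illustration).
import numpy as np

f_osc = 5.0                                    # hypothesized rhythm (Hz)
delays = np.arange(0.05, 0.85, 0.05)           # probe times after action (s)
rng = np.random.default_rng(5)
acc = (0.75 + 0.1 * np.sin(2 * np.pi * f_osc * delays + 1.0)
       + 0.01 * rng.normal(size=delays.size))

X = np.column_stack([np.sin(2 * np.pi * f_osc * delays),
                     np.cos(2 * np.pi * f_osc * delays),
                     np.ones_like(delays)])
beta, *_ = np.linalg.lstsq(X, acc, rcond=None)
amp = np.hypot(beta[0], beta[1])               # fitted oscillation amplitude
```

The sine/cosine pair absorbs the unknown phase, so a single linear fit yields both amplitude and phase of the behavioral rhythm; the same regressors can be applied to phase-sorted BOLD responses.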
Affiliation(s)
- Alessandro Benedetto
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Paola Binda
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Mauro Costagli
- Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Sciences (DINOGMI), University of Genova, Genova, Italy; Laboratory of Medical Physics and Magnetic Resonance, IRCCS Stella Maris, Pisa, Italy
- Michela Tosetti
- Laboratory of Medical Physics and Magnetic Resonance, IRCCS Stella Maris, Pisa, Italy; Imago 7 Research Foundation, Calambrone, Pisa, Italy
- Maria Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Laboratory of Medical Physics and Magnetic Resonance, IRCCS Stella Maris, Pisa, Italy
9
Differential contributions of synaptic and intrinsic inhibitory currents to speech segmentation via flexible phase-locking in neural oscillators. PLoS Comput Biol 2021; 17:e1008783. PMID: 33852573; PMCID: PMC8104450; DOI: 10.1371/journal.pcbi.1008783.
Abstract
Current hypotheses suggest that speech segmentation—the initial division and grouping of the speech stream into candidate phrases, syllables, and phonemes for further linguistic processing—is executed by a hierarchy of oscillators in auditory cortex. Theta (∼3-12 Hz) rhythms play a key role by phase-locking to recurring acoustic features marking syllable boundaries. Reliable synchronization to quasi-rhythmic inputs, whose variable frequency can dip below cortical theta frequencies (down to ∼1 Hz), requires “flexible” theta oscillators whose underlying neuronal mechanisms remain unknown. Using biophysical computational models, we found that the flexibility of phase-locking in neural oscillators depended on the types of hyperpolarizing currents that paced them. Simulated cortical theta oscillators flexibly phase-locked to slow inputs when these inputs caused both (i) spiking and (ii) the subsequent buildup of outward current sufficient to delay further spiking until the next input. The greatest flexibility in phase-locking arose from a synergistic interaction between intrinsic currents that was not replicated by synaptic currents at similar timescales. Flexibility in phase-locking enabled improved entrainment to speech input, optimal at mid-vocalic channels, which in turn supported syllabic-timescale segmentation through identification of vocalic nuclei. Our results suggest that synaptic and intrinsic inhibition contribute to frequency-restricted and -flexible phase-locking in neural oscillators, respectively. Their differential deployment may enable neural oscillators to play diverse roles, from reliable internal clocking to adaptive segmentation of quasi-regular sensory inputs like speech.

Oscillatory activity in auditory cortex is believed to play an important role in auditory and speech processing.
One suggested function of these rhythms is to divide the speech stream into candidate phonemes, syllables, words, and phrases, to be matched with learned linguistic templates. This requires brain rhythms to flexibly synchronize with regular acoustic features of the speech stream. How neuronal circuits implement this task remains unknown. In this study, we explored the contribution of inhibitory currents to flexible phase-locking in neuronal theta oscillators, believed to perform initial syllabic segmentation. We found that a combination of specific intrinsic inhibitory currents at multiple timescales, present in a large class of cortical neurons, enabled exceptionally flexible phase-locking, which could be used to precisely segment speech by identifying vowels at mid-syllable. This suggests that the cells exhibiting these currents are a key component in the brain’s auditory and speech processing architecture.
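The biophysical models in this study are far richer, but the basic phenomenon of phase-locking to an input slower than the oscillator's preferred rate can be illustrated with a single forced phase oscillator (our drastic simplification, not the paper's model; frequencies and coupling strengths are invented):

```python
# Our drastic simplification of flexible phase-locking (not the paper's
# biophysical model): a forced phase oscillator locks to a slower input when
# the coupling K exceeds the frequency detuning.
import numpy as np

def phase_diffs(f_osc, f_in, K, dur=20.0, dt=1e-3):
    """Integrate dtheta/dt = 2*pi*f_osc + K*sin(phi_in - theta); return the
    wrapped input-oscillator phase difference over time."""
    theta, diffs = 0.0, []
    for i in range(int(dur / dt)):
        phi_in = 2 * np.pi * f_in * i * dt
        theta += dt * (2 * np.pi * f_osc + K * np.sin(phi_in - theta))
        diffs.append(np.angle(np.exp(1j * (phi_in - theta))))
    return np.array(diffs)

# A 6 Hz "theta" oscillator driven by a slower 4 Hz quasi-syllabic input:
locked = np.std(phase_diffs(6.0, 4.0, K=30.0)[-5000:])   # locks: spread ~ 0
drifting = np.std(phase_diffs(6.0, 4.0, K=5.0)[-5000:])  # too weak: drifts
```

When the coupling cannot overcome the 2 Hz detuning, the phase difference drifts continuously; the paper's point is that specific intrinsic currents widen the frequency range over which the locked regime holds.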
10
Chicharro D, Panzeri S, Haefner RM. Stimulus-dependent relationships between behavioral choice and sensory neural responses. eLife 2021; 10:e54858. PMID: 33825683; PMCID: PMC8184215; DOI: 10.7554/elife.54858.
Abstract
Understanding perceptual decision-making requires linking sensory neural responses to behavioral choices. In two-choice tasks, activity-choice covariations are commonly quantified with a single measure of choice probability (CP), without characterizing their changes across stimulus levels. We provide theoretical conditions for stimulus dependencies of activity-choice covariations. Assuming a general decision-threshold model, which comprises both feedforward and feedback processing and allows for a stimulus-modulated neural population covariance, we analytically predict a very general and previously unreported stimulus dependence of CPs. We develop new tools, including refined analyses of CPs and generalized linear models with stimulus-choice interactions, which accurately assess the stimulus- or choice-driven signals of each neuron, characterizing stimulus-dependent patterns of choice-related signals. With these tools, we analyze CPs of macaque MT neurons during a motion discrimination task. Our analysis provides preliminary empirical evidence for the promise of studying stimulus dependencies of choice-related signals, encouraging further assessment in wider data sets.
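Choice probability, the central measure here, is the area under the ROC curve separating choice-conditioned response distributions; equivalently, the probability that a randomly drawn "choice 1" rate exceeds a randomly drawn "choice 2" rate. A sketch with made-up firing rates (means, spread and trial counts are ours):

```python
# Choice probability as an ROC area, computed directly as the probability
# that a random "choice 1" rate exceeds a random "choice 2" rate (firing
# rates fabricated for illustration).
import numpy as np

def choice_probability(r1, r2):
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    greater = (r1[:, None] > r2[None, :]).mean()
    ties = (r1[:, None] == r2[None, :]).mean()
    return greater + 0.5 * ties                # ties counted as half

rng = np.random.default_rng(6)
rates_choice1 = rng.normal(22.0, 4.0, 300)     # slightly elevated rates
rates_choice2 = rng.normal(20.0, 4.0, 300)
cp = choice_probability(rates_choice1, rates_choice2)
```

CP = 0.5 means no activity-choice covariation; the paper's contribution is characterizing how this quantity varies with stimulus level rather than collapsing it to a single number.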
Affiliation(s)
- Daniel Chicharro
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Department of Neurobiology, Harvard Medical School, Boston, United States
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Ralf M Haefner
- Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, United States
11
Delta/Theta band EEG activity shapes the rhythmic perceptual sampling of auditory scenes. Sci Rep 2021; 11:2370. PMID: 33504860; PMCID: PMC7840678; DOI: 10.1038/s41598-021-82008-7.
Abstract
Many studies speak in favor of a rhythmic mode of listening, by which the encoding of acoustic information is structured by rhythmic neural processes at the time scale of about 1 to 4 Hz. Indeed, psychophysical data suggest that humans sample acoustic information in extended soundscapes not uniformly, but weigh the evidence at different moments for their perceptual decision at the time scale of about 2 Hz. Here we test the critical prediction that such rhythmic perceptual sampling is directly related to the state of ongoing brain activity prior to the stimulus. Human participants judged the direction of frequency sweeps in 1.2 s long soundscapes while their EEG was recorded. We computed the perceptual weights attributed to different epochs within these soundscapes contingent on the phase or power of pre-stimulus EEG activity. This revealed a direct link between 4 Hz EEG phase and power prior to the stimulus and the phase of the rhythmic component of these perceptual weights. Hence, the temporal pattern by which the acoustic information is sampled over time for behavior is directly related to pre-stimulus brain activity in the delta/theta band. These results close a gap in the mechanistic picture linking ongoing delta band activity with its role in shaping the segmentation and perceptual influence of subsequent acoustic information.
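Perceptual weights of the kind computed here are often estimated by regressing single-trial choices onto the evidence carried by each epoch of the soundscape (psychophysical reverse correlation). A toy reconstruction with simulated data (the rhythmic weight profile, epoch count and trial count are invented):

```python
# Toy reconstruction of epoch-wise perceptual weights via logistic regression
# (simulated observer; the rhythmic weight profile and trial count are ours).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_trials, n_epochs = 2000, 6
evidence = rng.normal(size=(n_trials, n_epochs))     # per-epoch evidence
w_true = np.array([0.2, 1.0, 0.2, 1.0, 0.2, 1.0])    # alternating weighting
p_up = 1 / (1 + np.exp(-(evidence @ w_true)))        # P("upward" judgment)
choice = rng.random(n_trials) < p_up

# The fitted coefficients recover the temporal weighting profile.
model = LogisticRegression(C=10.0).fit(evidence, choice)
weights = model.coef_.ravel()
```

Sorting trials by pre-stimulus EEG phase and refitting per bin would then reveal whether the phase of this weight profile shifts with the brain state, as the study reports.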
12
Eqlimi E, Bockstael A, De Coensel B, Schönwiesner M, Talsma D, Botteldooren D. EEG Correlates of Learning From Speech Presented in Environmental Noise. Front Psychol 2020; 11:1850. PMID: 33250798; PMCID: PMC7676901; DOI: 10.3389/fpsyg.2020.01850.
Abstract
How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the underlying mechanisms of this ability can be used to identify whether a person is distracted while listening to a target speech, especially in a learning context. This paper investigates the neural correlates of learning from speech presented in a noisy environment, using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length, embedded in different types of realistic background noise, were presented to participants who were asked to focus on the lectures. As background noise, multi-talker babble, continuous highway, and fluctuating traffic sounds were used. After the second task, a written exam was taken to quantify the amount of information that participants had acquired and retained from the lectures. In addition to various power-spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce the dimensionality of these features, a principal component analysis (PCA) was applied across the different listening conditions, yielding the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of the extracted principal components, showing their dependence on listening condition and type of background sound. Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores, using both linear fixed- and mixed-effect modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several EEG components over specific brain regions than by knowing the background noise type. These components were linked to deterioration of attention, speech-envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
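The dimensionality-reduction step described above can be sketched in a few lines. The sketch below is a simulated, numpy-only illustration (the feature matrix, its size, and the condition effect are invented for the demo); it is not the authors' pipeline and it omits the mixed-effect modeling stage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrix: rows are recordings (participant x listening
# condition), columns are EEG features (e.g., band powers, alpha peak
# frequency, LRTC exponent). All values here are simulated, not real data.
n_obs, n_feat = 60, 8
X = rng.normal(size=(n_obs, n_feat))
X[:, 0] += np.repeat([0.0, 1.0, 2.0], n_obs // 3)  # a condition-dependent feature

def pca_scores(X, n_components):
    """Project mean-centred features onto their leading principal axes."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: right singular vectors are the PC loadings,
    # and the squared singular values give the variance along each axis.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    explained = s**2 / np.sum(s**2)
    return scores, explained[:n_components]

scores, explained = pca_scores(X, n_components=2)
print(scores.shape)  # (60, 2)
```

The resulting per-recording component scores are what a mixed-effect model would then relate to listening condition, background-sound type, and exam performance.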
Affiliation(s)
- Ehsan Eqlimi
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Annelies Bockstael
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium; École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada; Erasmushogeschool Brussel, Brussels, Belgium
- Bert De Coensel
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium; ASAsense, Bruges, Belgium
- Marc Schönwiesner
- Faculty of Biosciences, Pharmacy and Psychology, Institute of Biology, University of Leipzig, Leipzig, Germany; International Laboratory for Brain, Music and Sound Research (BRAMS), Université de Montréal, Montreal, QC, Canada
- Durk Talsma
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Dick Botteldooren
- WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
13
Zoefel B, Davis MH, Valente G, Riecke L. How to test for phasic modulation of neural and behavioural responses. Neuroimage 2019; 202:116175. [PMID: 31499178 PMCID: PMC6773602 DOI: 10.1016/j.neuroimage.2019.116175] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Revised: 07/31/2019] [Accepted: 09/05/2019] [Indexed: 12/30/2022] Open
Abstract
Research on whether perception or other processes depend on the phase of neural oscillations is rapidly gaining popularity. However, it is unknown which methods are optimally suited to evaluate the hypothesized phase effect. Using a simulation approach, we here test the ability of different methods to detect such an effect on dichotomous (e.g., "hit" vs "miss") and continuous (e.g., scalp potentials) response variables. We manipulated parameters that characterise the phase effect or define the experimental approach to test for this effect. For each parameter combination and response variable, we identified an optimal method. We found that methods regressing single-trial responses on circular (sine and cosine) predictors perform best for all of the simulated parameters, regardless of the nature of the response variable (dichotomous or continuous). In sum, our study lays a foundation for optimized experimental designs and analyses in future studies investigating the role of phase for neural and behavioural responses. We provide MATLAB code for the statistical methods tested.
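The best-performing approach identified above, regressing single-trial responses on the sine and cosine of phase and testing the amplitude of the fitted circular weights against a permutation null, can be illustrated on simulated data. The effect size, preferred phase, noise level, and trial count below are arbitrary assumptions for the demo; the authors provide their tested methods in MATLAB:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single-trial data: a pre-stimulus phase (radians) and a
# continuous response that is phasically modulated (modulation depth 0.8,
# preferred phase 1.0 rad; both invented for this demo).
n_trials = 500
phase = rng.uniform(-np.pi, np.pi, n_trials)
response = 0.8 * np.cos(phase - 1.0) + rng.normal(scale=0.5, size=n_trials)

# Regress responses on sine and cosine of phase (plus an intercept);
# the amplitude of the two circular weights quantifies the phase effect.
X = np.column_stack([np.ones(n_trials), np.sin(phase), np.cos(phase)])
beta = np.linalg.lstsq(X, response, rcond=None)[0]
amplitude = np.hypot(beta[1], beta[2])

# Permutation null: shuffling phases across trials destroys the effect.
null = np.empty(200)
for i in range(200):
    Xp = X[rng.permutation(n_trials)]
    b = np.linalg.lstsq(Xp, response, rcond=None)[0]
    null[i] = np.hypot(b[1], b[2])
p = np.mean(null >= amplitude)
print(amplitude, p)
```

The same regression works for dichotomous responses (e.g., hit vs miss) by swapping the least-squares fit for a logistic one.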
Affiliation(s)
- Benedikt Zoefel
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
- Matthew H Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
14
Obleser J, Kayser C. Neural Entrainment and Attentional Selection in the Listening Brain. Trends Cogn Sci 2019; 23:913-926. [PMID: 31606386 DOI: 10.1016/j.tics.2019.08.004] [Citation(s) in RCA: 193] [Impact Index Per Article: 38.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Revised: 08/16/2019] [Accepted: 08/20/2019] [Indexed: 01/07/2023]
Abstract
The streams of sounds we typically attend to abound in acoustic regularities. Neural entrainment is seen as an important mechanism that the listening brain exploits to attune to these regularities and to enhance the representation of attended sounds. We delineate the neurophysiology underlying this mechanism and review entrainment alongside its more pragmatic signature, often called 'speech tracking'. The latter has become a popular analytical approach to trace the reflection of acoustic and linguistic information at different levels of granularity, from neurophysiology to neuroimaging. As we discuss, the concept of entrainment offers both a putative neurophysiological mechanism for selective listening and a versatile window onto the neural basis of hearing and speech comprehension.
Affiliation(s)
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience and Cognitive Interaction Technology, Center of Excellence, Bielefeld University, 33615 Bielefeld, Germany
15
Herbst SK, Obleser J. Implicit temporal predictability enhances pitch discrimination sensitivity and biases the phase of delta oscillations in auditory cortex. Neuroimage 2019; 203:116198. [PMID: 31539590 DOI: 10.1016/j.neuroimage.2019.116198] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Revised: 08/23/2019] [Accepted: 09/14/2019] [Indexed: 10/26/2022] Open
Abstract
Can human listeners use implicit temporal contingencies in auditory input to form temporal predictions, and if so, how are these predictions represented endogenously? To address this question, we implicitly manipulated temporal predictability in an auditory pitch discrimination task: unbeknownst to participants, the pitch of the standard tone could either be deterministically predictive of the temporal onset of the target tone, or convey no predictive information. Predictive and non-predictive conditions were presented interleaved in one stream, and separated by variable inter-stimulus intervals such that there was no dominant stimulus rhythm throughout. Even though participants were unaware of the implicit temporal contingencies, pitch discrimination sensitivity (the slope of the psychometric function) increased when the onset of the target tone was predictable in time (N = 49, 28 female, 21 male). Concurrently recorded EEG data (N = 24) revealed that standard tones that conveyed temporal predictions evoked a more negative N1 component than non-predictive standards. We observed no significant differences in oscillatory power or phase coherence between conditions during the foreperiod. Importantly, the phase angle of delta oscillations (1-3 Hz) in auditory areas in the post-standard and pre-target time windows predicted behavioral pitch discrimination sensitivity. This suggests that temporal predictions are encoded in delta oscillatory phase during the foreperiod interval. In sum, we show that auditory perception benefits from implicit temporal contingencies, and provide evidence for a role of slow neural oscillations in the endogenous representation of temporal predictions, in the absence of exogenously driven entrainment to rhythmic input.
Affiliation(s)
- Sophie K Herbst
- Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23552 Lübeck, Germany; NeuroSpin, CEA, DRF/Joliot; INSERM Cognitive Neuroimaging Unit; Université Paris-Sud, Université Paris-Saclay; Bât 145, Gif s/ Yvette, 91190, France
- Jonas Obleser
- Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23552 Lübeck, Germany
16
Active High-Density Electrode Arrays: Technology and Applications in Neuronal Cell Cultures. ADVANCES IN NEUROBIOLOGY 2019. [PMID: 31073940 DOI: 10.1007/978-3-030-11135-9_11] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/02/2023]
Abstract
Active high-density electrode arrays realized with complementary metal-oxide-semiconductor (CMOS) technology provide electrophysiological recordings from several thousands of closely spaced microelectrodes. This has drastically advanced the spatiotemporal recording resolution of conventional multielectrode arrays (MEAs). Thus, today's electrophysiology in neuronal cultures can exploit label-free electrical readouts from a large number of single neurons within the same network. This provides advanced capabilities to investigate the properties of self-assembling neuronal networks, to advance studies on neurotoxicity and neurodevelopmental alterations associated with human brain diseases, and to develop cell culture models for testing drug- or cell-based strategies for therapies. Here, after introducing the reader to this neurotechnology, we summarize the results of different recent studies demonstrating the potential of active high-density electrode arrays for experimental applications. We also discuss ongoing and possible future research directions that might allow for moving these platforms forward for screening applications.
17
Kayser C. Evidence for the Rhythmic Perceptual Sampling of Auditory Scenes. Front Hum Neurosci 2019; 13:249. [PMID: 31396064 PMCID: PMC6663999 DOI: 10.3389/fnhum.2019.00249] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Accepted: 07/04/2019] [Indexed: 12/15/2022] Open
Abstract
Converging results suggest that perception is controlled by rhythmic processes in the brain. In the auditory domain, neuroimaging studies show that the perception of sounds is shaped by rhythmic activity prior to the stimulus, and electrophysiological recordings have linked delta and theta band activity to the functioning of individual neurons. These results have promoted theories of rhythmic modes of listening and generally suggest that the perceptually relevant encoding of acoustic information is structured by rhythmic processes along auditory pathways. A prediction from this perspective-which so far has not been tested-is that such rhythmic processes also shape how acoustic information is combined over time to judge extended soundscapes. The present study was designed to directly test this prediction. Human participants judged the overall change in perceived frequency content in temporally extended (1.2-1.8 s) soundscapes, while the perceptual use of the available sensory evidence was quantified using psychophysical reverse correlation. Model-based analysis of individual participant's perceptual weights revealed a rich temporal structure, including linear trends, a U-shaped profile tied to the overall stimulus duration, and importantly, rhythmic components at the time scale of 1-2 Hz. The collective evidence found here across four versions of the experiment supports the notion that rhythmic processes operating on the delta time scale structure how perception samples temporally extended acoustic scenes.
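The psychophysical reverse correlation used above to quantify perceptual weights can be sketched with a simulated observer. The U-shaped temporal weighting profile and all sizes below are assumptions for the demo (they echo one profile reported above but are not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stimulus: each trial is a sequence of 12 time bins whose
# frequency content fluctuates; the observer judges the overall change.
n_trials, n_bins = 4000, 12
evidence = rng.normal(size=(n_trials, n_bins))

# Simulated observer with a U-shaped temporal weighting profile
# (weights largest at stimulus onset and offset; invented for this demo).
true_w = 1.0 - 0.8 * np.sin(np.linspace(0.0, np.pi, n_bins))
choice = evidence @ true_w + rng.normal(size=n_trials) > 0.0

# Reverse correlation: mean evidence on "increase" choices minus mean
# evidence on "decrease" choices recovers the relative weight per time bin.
kernel = evidence[choice].mean(axis=0) - evidence[~choice].mean(axis=0)
kernel /= np.abs(kernel).max()
print(kernel.argmax() in (0, n_bins - 1))  # endpoints carry the largest weight
```

Rhythmic components such as those reported above would then be sought in the spectrum of this recovered kernel.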
Affiliation(s)
- Christoph Kayser
- Department for Cognitive Neuroscience & Cognitive Interaction Technology, Center of Excellence, Bielefeld University, Bielefeld, Germany
18
McNair SW, Kayser SJ, Kayser C. Consistent pre-stimulus influences on auditory perception across the lifespan. Neuroimage 2019; 186:22-32. [PMID: 30391564 PMCID: PMC6347568 DOI: 10.1016/j.neuroimage.2018.10.085] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Revised: 10/29/2018] [Accepted: 10/31/2018] [Indexed: 01/29/2023] Open
Abstract
As we get older, perception in cluttered environments becomes increasingly difficult as a result of changes in peripheral and central neural processes. Given the aging society, it is important to understand the neural mechanisms constraining perception in the elderly. In young participants, the state of rhythmic brain activity prior to a stimulus has been shown to modulate the neural encoding and perceptual impact of this stimulus - yet it remains unclear whether, and if so, how, the perceptual relevance of pre-stimulus activity changes with age. Using the auditory system as a model, we recorded EEG activity during a frequency discrimination task from younger and older human listeners. By combining single-trial EEG decoding with linear modelling we demonstrate consistent statistical relations between pre-stimulus power and the encoding of sensory evidence in short-latency EEG components, and more variable relations between pre-stimulus phase and subjects' decisions in longer-latency components. At the same time, we observed a significant slowing of auditory evoked responses and a flattening of the overall EEG frequency spectrum in the older listeners. Our results point to mechanistically consistent relations between rhythmic brain activity and sensory encoding that emerge despite changes in neural response latencies and the relative amplitude of rhythmic brain activity with age.
Affiliation(s)
- Steven W McNair
- Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, G12 8QB, United Kingdom
- Stephanie J Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany; Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Inspiration 1, 33615 Bielefeld, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany; Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Inspiration 1, 33615 Bielefeld, Germany
19
Intracortical Microstimulation Modulates Cortical Induced Responses. J Neurosci 2018; 38:7774-7786. [PMID: 30054394 DOI: 10.1523/jneurosci.0928-18.2018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2018] [Revised: 06/19/2018] [Accepted: 07/06/2018] [Indexed: 12/31/2022] Open
Abstract
Recent advances in cortical prosthetics have relied on intracortical microstimulation (ICMS) to activate the cortical neural network and convey information to the brain. Here we show that activity elicited by low-current ICMS modulates induced cortical responses to a sensory stimulus in the primary auditory cortex (A1). A1 processes sensory stimuli in a stereotyped manner, encompassing two types of activity: evoked activity (phase-locked to the stimulus) and induced activity (non-phase-locked to the stimulus). Time-frequency analyses of extracellular potentials recorded from all layers and the surface of the auditory cortex of anesthetized guinea pigs of both sexes showed that ICMS during the processing of a transient acoustic stimulus differentially affected the evoked and induced response. Specifically, ICMS enhanced the long-latency induced component, mimicking physiological gain-increasing top-down feedback processes. Furthermore, the phase of the local field potential at the time of stimulation was predictive of the response amplitude for acoustic stimulation, ICMS, as well as combined acoustic and electric stimulation. Together, this was interpreted as a sign that the response to electrical stimulation was integrated into the ongoing cortical processes rather than substituting them. Consequently, ICMS modulated the cortical response to a sensory stimulus. We propose such targeted modulation of cortical activity (as opposed to a stimulation that substitutes the ongoing processes) as an alternative approach for cortical prostheses.
SIGNIFICANCE STATEMENT: Intracortical microstimulation (ICMS) is commonly used to activate a specific subset of cortical neurons, without taking into account the ongoing activity at the time of stimulation. Here, we found that a low-current ICMS pulse modulated the way the auditory cortex processed a peripheral stimulus, by supra-additively combining the response to the ICMS with the cortical processing of the peripheral stimulus. This artificial modulation mimicked natural modulations of response magnitude such as attention or expectation. In contrast to what was implied in earlier studies, this shows that the response to electrical stimulation is not substituting ongoing cortical activity but is integrated into the natural processes.
20
Temporal Expectation Modulates the Cortical Dynamics of Short-Term Memory. J Neurosci 2018; 38:7428-7439. [PMID: 30012685 DOI: 10.1523/jneurosci.2928-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 07/09/2018] [Accepted: 07/10/2018] [Indexed: 11/21/2022] Open
Abstract
Increased memory load is often signified by enhanced neural oscillatory power in the alpha range (8-13 Hz), which is taken to reflect inhibition of task-irrelevant brain regions. The corresponding neural correlates of memory decay, however, are not yet well understood. In the current study, we investigated auditory short-term memory decay in humans using a delayed matching-to-sample task with pure-tone sequences. First, in a behavioral experiment, we modeled memory performance over six different delay-phase durations. Second, in a MEG experiment, we assessed alpha-power modulations over three different delay-phase durations. In both experiments, the temporal expectation for the to-be-remembered sound was manipulated so that it was either temporally expected or not. In both studies, memory performance declined over time, but this decline was weaker when the onset time of the to-be-remembered sound was expected. Similarly, patterns of alpha power in and alpha-tuned connectivity between sensory cortices changed parametrically with delay duration (i.e., decrease in occipitoparietal regions, increase in temporal regions). Temporal expectation not only counteracted alpha-power decline in heteromodal brain areas (i.e., supramarginal gyrus), but also had a beneficial effect on memory decay, counteracting memory performance decline. Correspondingly, temporal expectation also boosted alpha connectivity within attention networks known to play an active role during memory maintenance. The present data show how patterns of alpha power orchestrate short-term memory decay and encourage a more nuanced perspective on alpha power across brain space and time beyond its inhibitory role.
SIGNIFICANCE STATEMENT: Our sensory memories of the physical world fade quickly. We show here that this decay of short-term memory can be counteracted by so-called temporal expectation; that is, knowledge of when to expect a sensory event that an individual must remember. We also show that neural oscillations in the "alpha" (8-13 Hz) range index both the degree of memory decay (for brief sound patterns) and the respective memory benefit from temporal expectation. Spatially distributed cortical patterns of alpha power show opposing effects in auditory versus visual sensory cortices. Moreover, alpha-tuned connectivity changes within supramodal attention networks reflect the allocation of neural resources as short-term memory representations fade.
21
Kikuchi Y, Sedley W, Griffiths TD, Petkov CI. Evolutionarily conserved neural signatures involved in sequencing predictions and their relevance for language. Curr Opin Behav Sci 2018; 21:145-153. [PMID: 30057937 PMCID: PMC6058086 DOI: 10.1016/j.cobeha.2018.05.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Predicting the occurrence of future events from prior ones is vital for animal perception and cognition. Although how such sequence learning (a form of relational knowledge) relates to particular operations in language remains controversial, recent evidence shows that sequence learning is disrupted in frontal lobe damage associated with aphasia. Also, neural sequencing predictions at different temporal scales resemble those involved in language operations occurring at similar scales. Furthermore, comparative work in humans and monkeys highlights evolutionarily conserved frontal substrates and predictive oscillatory signatures in the temporal lobe processing learned sequences of speech signals. Altogether this evidence supports a relational knowledge hypothesis of language evolution, proposing that language processes in humans are functionally integrated with an ancestral neural system for predictive sequence learning.
Affiliation(s)
- Yukiko Kikuchi
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle Upon Tyne, UK
- William Sedley
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK
- Timothy D Griffiths
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK; Wellcome Trust Centre for Neuroimaging, University College London, UK; Department of Neurosurgery, University of Iowa, Iowa City, USA
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University Medical School, Newcastle Upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle Upon Tyne, UK
22
Nieus T, D'Andrea V, Amin H, Di Marco S, Safaai H, Maccione A, Berdondini L, Panzeri S. State-dependent representation of stimulus-evoked activity in high-density recordings of neural cultures. Sci Rep 2018; 8:5578. [PMID: 29615719 PMCID: PMC5882875 DOI: 10.1038/s41598-018-23853-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Accepted: 03/21/2018] [Indexed: 01/01/2023] Open
Abstract
Neuronal responses to external stimuli vary from trial to trial partly because they depend on continuous spontaneous variations in the state of neural circuits, reflected in variations of ongoing activity prior to stimulus presentation. Understanding how post-stimulus responses relate to pre-stimulus spontaneous activity is thus important for understanding how state dependence affects information processing and neural coding, and how state variations can be discounted to better decode single-trial neural responses. Here we exploited high-resolution CMOS electrode arrays to record simultaneously from thousands of electrodes in in-vitro cultures stimulated at specific sites. We used information-theoretic analyses to study how ongoing activity affects the information that neuronal responses carry about the location of the stimuli. We found that responses exhibited state dependence on the time between the last spontaneous burst and stimulus presentation, and that this dependence could be described with a linear model. Importantly, we found that a small number of selected neurons carries most of the stimulus information and contributes to the state-dependent information gain. This suggests that a major value of large-scale recordings is that they single out the small subset of neurons that carries most information and benefits most from knowledge of its state dependence.
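The core information-theoretic quantity above, how much a response tells us about the stimulation site, can be illustrated with a plug-in mutual-information estimate on simulated spike counts. The firing rates and binning below are invented for the demo, and the study's estimators include bias corrections omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(x, y):
    """Plug-in mutual information (bits) between two discrete variables."""
    joint = np.zeros((int(x.max()) + 1, int(y.max()) + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over responses
    py = joint.sum(axis=0, keepdims=True)   # marginal over stimuli
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Simulated trials: a stimulation site (0 or 1) and the binned spike count
# of one informative unit whose rate depends on the site (rates invented).
n = 2000
site = rng.integers(0, 2, n)
counts = np.minimum(rng.poisson(np.where(site == 0, 2.0, 6.0)), 10)

mi = mutual_information(site, counts)
shuffled = mutual_information(rng.permutation(site), counts)
print(mi > shuffled)  # site information well above the shuffled baseline
```

Comparing against a site-shuffled baseline, as in the last two lines, is a common way to separate genuine stimulus information from estimation bias.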
Affiliation(s)
- Thierry Nieus
- NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy; Department of Biomedical and Clinical Sciences "Luigi Sacco", Università di Milano, Milano, Italy
- Valeria D'Andrea
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Hayder Amin
- NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Stefano Di Marco
- NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy; Scienze cliniche applicate e biotecnologiche, Università dell'Aquila, L'Aquila, Italy
- Houman Safaai
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, Boston, MA 02115, USA
- Alessandro Maccione
- NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Luca Berdondini
- NetS3 Laboratory, Neuroscience and Brain Technologies Department, Istituto Italiano di Tecnologia, Genova, Italy
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
23
Keitel A, Gross J, Kayser C. Perceptually relevant speech tracking in auditory and motor cortex reflects distinct linguistic features. PLoS Biol 2018. [PMID: 29529019 PMCID: PMC5864086 DOI: 10.1371/journal.pbio.2004473] [Citation(s) in RCA: 141] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
During online speech processing, our brain tracks the acoustic fluctuations in speech at different timescales. Previous research has focused on generic timescales (for example, delta or theta bands) that are assumed to map onto linguistic features such as prosody or syllables. However, given the high intersubject variability in speaking patterns, such a generic association between the timescales of brain activity and speech properties can be ambiguous. Here, we analyse speech tracking in source-localised magnetoencephalographic data by directly focusing on timescales extracted from statistical regularities in our speech material. This revealed widespread significant tracking at the timescales of phrases (0.6–1.3 Hz), words (1.8–3 Hz), syllables (2.8–4.8 Hz), and phonemes (8–12.4 Hz). Importantly, when examining its perceptual relevance, we found stronger tracking for correctly comprehended trials in the left premotor (PM) cortex at the phrasal scale as well as in left middle temporal cortex at the word scale. Control analyses using generic bands confirmed that these effects were specific to the speech regularities in our stimuli. Furthermore, we found that the phase at the phrasal timescale coupled to power at beta frequency (13–30 Hz) in motor areas. This cross-frequency coupling presumably reflects top-down temporal prediction in ongoing speech perception. Together, our results reveal specific functional and perceptually relevant roles of distinct tracking and cross-frequency processes along the auditory–motor pathway.
How we comprehend speech—and how the brain encodes information from a continuous speech stream—is of interest for neuroscience, linguistics, and research on language disorders. Previous work that examined dynamic brain activity has addressed the issue of comprehension only indirectly, by contrasting intelligible speech with unintelligible speech or baseline activity. Recent work, however, suggests that brain areas can show similar stimulus-driven activity but differently contribute to perception or comprehension. To directly address the perceptual relevance of dynamic brain activity for speech encoding, we used a straightforward, single-trial comprehension measure. Furthermore, previous work has been vague regarding the analysed timescales. We therefore base our analysis directly on the timescales of phrases, words, syllables, and phonemes of our speech stimuli. By incorporating these two conceptual innovations, we demonstrate that different areas of the brain track acoustic information at the timescales of words and phrases. Moreover, our results suggest that the motor cortex uses a cross-frequency coupling mechanism to predict the timing of phrases in ongoing speech. Our findings suggest spatially and temporally distinct brain mechanisms that directly shape our comprehension.
Affiliation(s)
- Anne Keitel
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom; Cognitive Neuroscience, Bielefeld University, Bielefeld, Germany
24
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215 DOI: 10.1093/cercor/bhw160] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
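The decoding step above, identifying the attended frequency from response patterns by matching them against stimulus-driven templates, can be sketched with a simple correlation-based classifier on simulated patterns. The voxel counts, noise level, and classifier choice are assumptions for the demo; the study trained its own algorithm on measured BOLD patterns:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: one stimulus-driven template pattern per frequency
# (from single-frequency trials) and noisy per-trial patterns recorded
# while a listener attends to one of the frequencies. All values simulated.
n_freq, n_vox, n_trials = 4, 100, 200
templates = rng.normal(size=(n_freq, n_vox))
attended = rng.integers(0, n_freq, size=n_trials)
trials = templates[attended] + rng.normal(size=(n_trials, n_vox))

def decode(pattern, templates):
    """Assign a trial to the template with the highest spatial correlation."""
    r = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(r))

predicted = np.array([decode(tr, templates) for tr in trials])
accuracy = float((predicted == attended).mean())
print(accuracy > 1 / n_freq)  # decoding beats the 25% chance level
```

Above-chance accuracy of such a decoder is the kind of evidence the study uses to argue that attention reinstates stimulus-driven frequency representations.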
Collapse
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
25
Riecke L, Formisano E, Sorger B, Başkent D, Gaudrain E. Neural Entrainment to Speech Modulates Speech Intelligibility. Curr Biol 2017; 28:161-169.e5. [PMID: 29290557 DOI: 10.1016/j.cub.2017.11.033] [Citation(s) in RCA: 116] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 10/26/2017] [Accepted: 11/15/2017] [Indexed: 01/02/2023]
Abstract
Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and acoustic speech signal, listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has so far prevented clarifying whether speech-brain entrainment contributes functionally to (i.e., causes) speech intelligibility or is merely an epiphenomenon of it. To address this long-standing issue, we experimentally manipulated speech-brain entrainment without concomitant acoustic and task-related variations, using a brain stimulation approach that enables modulating listeners' neural activity with transcranial currents carrying speech-envelope information. Results from two experiments involving a cocktail-party-like scenario and a listening situation devoid of aural speech-amplitude envelope input reveal consistent effects on listeners' speech-recognition performance, demonstrating a causal role of speech-brain entrainment in speech intelligibility. Our findings imply that speech-brain entrainment is critical for auditory speech comprehension and suggest that transcranial stimulation with speech-envelope-shaped currents can be utilized to modulate speech comprehension in impaired listening conditions.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, 9700 RB Groningen, the Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, 9700 RB Groningen, the Netherlands; CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, 69366 Lyon Cedex 07, France
26
Meyer L. The neural oscillations of speech processing and language comprehension: state of the art and emerging mechanisms. Eur J Neurosci 2017; 48:2609-2621. [PMID: 29055058 DOI: 10.1111/ejn.13748] [Citation(s) in RCA: 150] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2017] [Revised: 09/14/2017] [Accepted: 10/09/2017] [Indexed: 12/17/2022]
Abstract
Neural oscillations subserve a broad range of functions in speech processing and language comprehension. On the one hand, speech contains somewhat repetitive trains of air pressure bursts that occur at three dominant amplitude modulation frequencies, physically marking the linguistically meaningful progressions of phonemes, syllables and intonational phrase boundaries. To these acoustic events, neural oscillations of isomorphous operating frequencies are thought to synchronise, presumably resulting in an implicit temporal alignment of periods of neural excitability to linguistically meaningful spectral information on the three low-level linguistic description levels. On the other hand, speech is a carrier signal that codes for high-level linguistic meaning, such as syntactic structure and semantic information, which cannot be read from stimulus acoustics, but must be acquired during language acquisition and decoded for language comprehension. Neural oscillations subserve the processing of both syntactic structure and semantic information. Here, I synthesise a mapping from each linguistic processing domain to a unique set of subserving oscillatory mechanisms; the mapping is plausible given the role ascribed to different oscillatory mechanisms in different subfunctions of cortical information processing and faithful to the underlying electrophysiology. In sum, the present article provides an accessible and extensive review of the functional mechanisms that neural oscillations subserve in speech processing and language comprehension.
Affiliation(s)
- Lars Meyer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1A, 04103 Leipzig, Germany
27
Abstract
In this issue of Neuron, Guo et al. (2017) describe a layer 6 corticothalamic circuit that alternately drives cortical states favoring either sensory detection or discrimination. They also identify a neural mechanism that resets the phase of low-frequency cortical oscillations.
Affiliation(s)
- Jennifer F Linden
- Ear Institute and Department of Neuroscience, Physiology & Pharmacology, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
28
Yague JG, Tsunematsu T, Sakata S. Distinct Temporal Coordination of Spontaneous Population Activity between Basal Forebrain and Auditory Cortex. Front Neural Circuits 2017; 11:64. [PMID: 28959191 PMCID: PMC5603709 DOI: 10.3389/fncir.2017.00064] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 08/31/2017] [Indexed: 12/19/2022] Open
Abstract
The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remain unclear. Here, recording neural population activity in the BF and comparing it with a simultaneously recorded cortical population under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population shows a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs) and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rate in the BF is also skewed, a proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms) and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fire rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to a consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms by which brain states are regulated.
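The Poisson comparison described in this abstract can be illustrated in miniature: for a homogeneous Poisson process, ISIs are exponentially distributed, so the expected fraction of ISIs at or below a threshold t is 1 − exp(−rate·t). The sketch below (a toy illustration, not the study's analysis; the 5 spikes/s rate and 20 ms threshold are assumptions) compares the observed short-ISI fraction of a simulated spike train against that prediction.

```python
import numpy as np

def short_isi_fraction(spike_times, threshold=0.020):
    """Fraction of inter-spike intervals at or below `threshold` seconds."""
    isis = np.diff(np.sort(spike_times))
    return float(np.mean(isis <= threshold))

def poisson_short_isi_fraction(rate_hz, threshold=0.020):
    """Expected short-ISI fraction for a homogeneous Poisson process:
    ISIs are exponential, so P(ISI <= t) = 1 - exp(-rate * t)."""
    return 1.0 - np.exp(-rate_hz * threshold)

rng = np.random.default_rng(0)
rate = 5.0  # spikes/s; hypothetical BF-like firing rate
# Simulate a Poisson spike train via cumulative exponential ISIs
spikes = np.cumsum(rng.exponential(1.0 / rate, size=5000))

observed = short_isi_fraction(spikes)
expected = poisson_short_isi_fraction(rate)
# For a true Poisson process the two quantities should agree closely
```

A real analysis would compare the observed fraction against this baseline per neuron; a large excess of short ISIs (as in AC) indicates burstiness beyond Poisson firing.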
Affiliation(s)
- Josue G Yague
- Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
- Tomomi Tsunematsu
- Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
- Shuzo Sakata
- Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, United Kingdom
29
Northoff G. “Paradox of slow frequencies” – Are slow frequencies in upper cortical layers a neural predisposition of the level/state of consciousness (NPC)? Conscious Cogn 2017; 54:20-35. [DOI: 10.1016/j.concog.2017.03.006] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2016] [Revised: 02/05/2017] [Accepted: 03/13/2017] [Indexed: 01/01/2023]
30
Cortical Representations of Speech in a Multitalker Auditory Scene. J Neurosci 2017; 37:9189-9196. [PMID: 28821680 DOI: 10.1523/jneurosci.0938-17.2017] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Revised: 07/20/2017] [Accepted: 08/08/2017] [Indexed: 11/21/2022] Open
Abstract
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT: Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
31
Guo W, Clause AR, Barth-Maron A, Polley DB. A Corticothalamic Circuit for Dynamic Switching between Feature Detection and Discrimination. Neuron 2017; 95:180-194.e5. [PMID: 28625486 PMCID: PMC5568886 DOI: 10.1016/j.neuron.2017.05.019] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2016] [Revised: 03/03/2017] [Accepted: 05/09/2017] [Indexed: 01/05/2023]
Abstract
Sensory processing must be sensitive enough to encode faint signals near the noise floor but selective enough to differentiate between similar stimuli. Here we describe a layer 6 corticothalamic (L6 CT) circuit in the mouse auditory forebrain that alternately biases sound processing toward hypersensitivity and improved behavioral sound detection or dampened excitability and enhanced sound discrimination. Optogenetic activation of L6 CT neurons could increase or decrease the gain and tuning precision in the thalamus and all layers of the cortical column, depending on the timing between L6 CT activation and sensory stimulation. The direction of neural and perceptual modulation - enhanced detection at the expense of discrimination or vice versa - arose from the interaction of L6 CT neurons and subnetworks of fast-spiking inhibitory neurons that reset the phase of low-frequency cortical rhythms. These findings suggest that L6 CT neurons contribute to the resolution of the competing demands of detection and discrimination.
Affiliation(s)
- Wei Guo
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA
- Amanda R Clause
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA
- Asa Barth-Maron
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology, Harvard Medical School, Boston, MA 02114, USA
32
De Feo V, Boi F, Safaai H, Onken A, Panzeri S, Vato A. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats. Front Neurosci 2017; 11:269. [PMID: 28620273 PMCID: PMC5449465 DOI: 10.3389/fnins.2017.00269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Accepted: 04/26/2017] [Indexed: 11/24/2022] Open
Abstract
Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity increased the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.
Affiliation(s)
- Vito De Feo
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Fabio Boi
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy; Nets3 Laboratory, Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, Genova, Italy
- Houman Safaai
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Arno Onken
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Stefano Panzeri
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Alessandro Vato
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
33
At What Latency Does the Phase of Brain Oscillations Influence Perception? eNeuro 2017; 4:eN-NWR-0078-17. [PMID: 28593191 PMCID: PMC5461555 DOI: 10.1523/eneuro.0078-17.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2017] [Revised: 05/08/2017] [Accepted: 05/15/2017] [Indexed: 01/09/2023] Open
Abstract
Recent evidence has shown a rhythmic modulation of perception: prestimulus ongoing electroencephalography (EEG) phase in the θ (4–8 Hz) and α (8–13 Hz) bands has been directly linked with fluctuations in target detection. In fact, the ongoing EEG phase directly reflects cortical excitability: it acts as a gating mechanism for information flow at the neuronal level. Consequently, the key phase modulating perception should be the one present in the brain when the stimulus is actually being processed. Most previous studies, however, reported phase modulation peaking 100 ms or more before target onset. To explain this discrepancy, we first use simulations showing that contamination of spontaneous oscillatory signals by target-evoked ERP and signal filtering (e.g., wavelet) can result in an apparent shift of the peak phase modulation towards earlier latencies, potentially reaching the prestimulus period. We then present a paradigm based on linear systems analysis which can uncover the true latency at which ongoing EEG phase influences perception. After measuring the impulse response function, we use it to reconstruct (rather than record) the brain activity of human observers during white noise sequences. We can then present targets in those sequences, and reliably estimate EEG phase around these targets without any influence of the target-evoked response. We find that in these reconstructed signals, the important phase for perception is that of fronto-occipital ∼6 Hz background oscillations at about 75 ms after target onset. These results confirm the causal influence of phase on perception at the time the stimulus is effectively processed in the brain.
34
Henry MJ, Herrmann B, Grahn JA. What can we learn about beat perception by comparing brain signals and stimulus envelopes? PLoS One 2017; 12:e0172454. [PMID: 28225796 PMCID: PMC5321456 DOI: 10.1371/journal.pone.0172454] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2015] [Accepted: 02/06/2017] [Indexed: 01/30/2023] Open
Abstract
Entrainment of neural oscillations on multiple time scales is important for the perception of speech. The perception of musical rhythms, and in particular of a regular beat, is also likely to rely on entrainment of neural oscillations. One recently proposed approach to studying beat perception in the context of neural entrainment and resonance (the "frequency-tagging" approach) has received an enthusiastic response from the scientific community. A specific version of the approach involves comparing frequency-domain representations of acoustic rhythm stimuli to the frequency-domain representations of neural responses to those rhythms (measured by electroencephalography, EEG). The relative amplitudes at specific EEG frequencies are compared to the relative amplitudes at the same stimulus frequencies, and enhancements at beat-related frequencies in the EEG signal are interpreted as reflecting an internal representation of the beat. Here, we show that frequency-domain representations of rhythms are sensitive to the acoustic features of the tones making up the rhythms (tone duration, onset/offset ramp duration); in fact, relative amplitudes at beat-related frequencies can be completely reversed by manipulating tone acoustics. Crucially, we show that changes to these acoustic tone features, and in turn changes to the frequency-domain representations of rhythms, do not affect beat perception. Instead, beat perception depends on the pattern of onsets (i.e., whether a rhythm has a simple or complex metrical structure). Moreover, we show that beat perception can differ for rhythms that have numerically identical frequency-domain representations. Thus, frequency-domain representations of rhythms are dissociable from beat perception. For this reason, we suggest caution in interpreting direct comparisons of rhythms and brain signals in the frequency domain. Instead, we suggest that combining EEG measurements of neural signals with creative behavioral paradigms is of more benefit to our understanding of beat perception.
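The confound the authors describe can be reproduced in a few lines: build a rhythm from a fixed onset pattern, vary only the tone duration, and observe that the envelope-spectrum amplitude at a beat-related frequency changes even though the onset pattern is identical. The sketch below is a toy illustration; the onset pattern, 500 Hz sample rate, grid spacing, and tone durations are all hypothetical values, not stimuli from the study.

```python
import numpy as np

fs = 500                         # Hz, envelope sample rate (assumed)
pattern = [1, 0, 1, 1, 0, 1, 1, 0]  # one rhythm cycle: 8 grid positions (assumed)
grid = 0.2                       # s per grid position -> 1.6 s cycle

def rhythm_envelope(tone_dur, n_cycles=16):
    """Amplitude envelope: rectangular tones of `tone_dur` s at onset positions."""
    sig = np.zeros(int(n_cycles * len(pattern) * grid * fs))
    for c in range(n_cycles):
        for i, on in enumerate(pattern):
            if on:
                start = int((c * len(pattern) + i) * grid * fs)
                sig[start:start + int(tone_dur * fs)] = 1.0
    return sig

def amp_at(sig, freq):
    """Amplitude of the envelope spectrum at `freq` Hz."""
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

beat = 2 / (len(pattern) * grid)  # 1.25 Hz, a beat-related harmonic of the cycle
short, long_ = rhythm_envelope(0.05), rhythm_envelope(0.18)
# Spectral amplitude at the beat-related frequency depends on tone duration,
# even though the onset pattern (and hence the perceived beat) is unchanged.
```

This is exactly the dissociation the paper warns about: the stimulus spectrum moves with tone acoustics while the metrical onset structure, which drives beat perception, stays fixed.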
Affiliation(s)
- Molly J. Henry
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
- Björn Herrmann
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
- Jessica A. Grahn
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
35
Meyer AF, Williamson RS, Linden JF, Sahani M. Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation. Front Syst Neurosci 2017; 10:109. [PMID: 28127278 PMCID: PMC5226961 DOI: 10.3389/fnsys.2016.00109] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2016] [Accepted: 12/19/2016] [Indexed: 11/13/2022] Open
Abstract
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods is made freely available.
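The classical linear receptive field model that this review takes as its starting point can be sketched as ridge-regularized least squares on a stimulus-history design matrix. The filter shape, history length, noise level, and regularization strength below are illustrative assumptions for a simulated white-noise experiment, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 20000, 12                 # time samples, filter length (history taps)
stim = rng.standard_normal(T)    # white-noise stimulus

# Design matrix: column d holds the stimulus delayed by d samples
X = np.stack([np.concatenate([np.zeros(d), stim[:T - d]]) for d in range(D)],
             axis=1)

true_rf = np.exp(-np.arange(D) / 3.0) * np.sin(np.arange(D) / 2.0)  # toy filter
resp = X @ true_rf + 0.5 * rng.standard_normal(T)  # noisy linear response

# Ridge-regularized least squares: w = (X'X + lam I)^(-1) X'y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(D), X.T @ resp)

similarity = np.corrcoef(w, true_rf)[0, 1]  # recovery quality of the filter
```

With Gaussian white-noise input and enough data, the ridge estimate recovers the underlying filter nearly exactly; the regularizer mainly matters for short or correlated stimuli, which is where the priors discussed in the review come in.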
Affiliation(s)
- Arne F Meyer
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Ross S Williamson
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, USA; Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
- Jennifer F Linden
- Ear Institute, University College London, London, UK; Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
36
Herrmann B, Parthasarathy A, Bartlett EL. Ageing affects dual encoding of periodicity and envelope shape in rat inferior colliculus neurons. Eur J Neurosci 2017; 45:299-311. [PMID: 27813207 PMCID: PMC5247336 DOI: 10.1111/ejn.13463] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2016] [Revised: 10/19/2016] [Accepted: 10/31/2016] [Indexed: 11/27/2022]
Abstract
Extracting temporal periodicities and envelope shapes of sounds is important for listening within complex auditory scenes but declines behaviorally with age. Here, we recorded local field potentials (LFPs) and spikes to investigate how ageing affects the neural representations of different modulation rates and envelope shapes in the inferior colliculus of rats. We specifically aimed to explore the input-output (LFP-spike) response transformations of inferior colliculus neurons. Our results show that envelope shapes up to 256-Hz modulation rates are represented in the neural synchronisation phase lags in younger and older animals. Critically, ageing was associated with (i) an enhanced gain in onset response magnitude from LFPs to spikes; (ii) an enhanced gain in neural synchronisation strength from LFPs to spikes for a low modulation rate (45 Hz); (iii) a decrease in LFP synchronisation strength for higher modulation rates (128 and 256 Hz) and (iv) changes in neural synchronisation strength to different envelope shapes. The current age-related changes are discussed in the context of an altered excitation-inhibition balance accompanying ageing.
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Aravindakshan Parthasarathy
- Departments of Biological Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47906, USA; Department of Otology and Laryngology, Harvard Medical School, and Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA
- Edward L. Bartlett
- Departments of Biological Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47906, USA
37
Onken A, Liu JK, Karunasekara PPCR, Delis I, Gollisch T, Panzeri S. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains. PLoS Comput Biol 2016; 12:e1005189. [PMID: 27814363 PMCID: PMC5096699 DOI: 10.1371/journal.pcbi.1005189] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Accepted: 10/11/2016] [Indexed: 11/21/2022] Open
Abstract
Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
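The space-by-time idea can be illustrated with a stripped-down matrix (rather than tensor) version: factorize a neurons-by-time rate matrix into non-negative spatial patterns and temporal activations. The sketch below uses plain multiplicative-update NMF as a simplified stand-in for the trial-wise tensor decompositions used in the paper; all sizes and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ground truth: 2 spatial firing patterns over 8 neurons and
# 2 temporal activation patterns over 30 time bins (hypothetical sizes)
spatial = np.abs(rng.standard_normal((8, 2)))
temporal = np.abs(rng.standard_normal((2, 30)))
V = spatial @ temporal + 0.01 * rng.random((8, 30))  # neurons x time "rates"

def nmf(V, k, n_iter=500, eps=1e-9):
    """Plain multiplicative-update NMF: V ~ W @ H with W, H >= 0."""
    r = np.random.default_rng(0)
    W = r.random((V.shape[0], k)) + eps
    H = r.random((k, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update temporal patterns
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update spatial patterns
    return W, H

W, H = nmf(V, 2)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
# Near-perfect reconstruction: the data are (noisy) rank-2 and non-negative
```

In the full space-by-time tensor setting, a third factor of per-trial activation coefficients is added, which is what enables single-trial decoding from the recovered patterns.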
Affiliation(s)
- Arno Onken
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Jian K. Liu
- Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany; Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
- P. P. Chamanthi R. Karunasekara
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy; Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Ioannis Delis
- Department of Biomedical Engineering, Columbia University, New York, New York, United States of America
- Tim Gollisch
- Department of Ophthalmology, University Medical Center Goettingen, Goettingen, Germany; Bernstein Center for Computational Neuroscience Goettingen, Goettingen, Germany
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
38
Perceptual Cycles. Trends Cogn Sci 2016; 20:723-735. [DOI: 10.1016/j.tics.2016.07.006] [Citation(s) in RCA: 396] [Impact Index Per Article: 49.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2016] [Revised: 07/22/2016] [Accepted: 07/29/2016] [Indexed: 11/21/2022]
39
Abstract
Human perception fluctuates with the phase of neural oscillations in the presence of environmental rhythmic structure to which neural oscillations become entrained. However, in the absence of predictability afforded by rhythmic structure, we hypothesize that the neural dynamical states associated with optimal psychophysical performance are more complex than what has been described previously for rhythmic stimuli. The current electroencephalography study characterized the brain dynamics associated with optimal detection of gaps embedded in narrow-band acoustic noise stimuli lacking low-frequency rhythmic structure. Optimal gap detection was associated with three spectrotemporally distinct delta-governed neural microstates. Individual microstates were characterized by unique instantaneous combinations of neural phase in the delta, theta, and alpha frequency bands. Critically, gap detection was not predictable from local fluctuations in stimulus acoustics. The current results suggest that, in the absence of rhythmic structure to entrain neural oscillations, good performance hinges on complex neural states that vary from moment to moment. Significance statement: Our ability to hear faint sounds fluctuates together with slow brain activity that synchronizes with environmental rhythms. However, it is so far not known how brain activity at different time scales might interact to influence perception when there is no rhythm with which brain activity can synchronize. Here, we used electroencephalography to measure brain activity while participants listened for short silences that interrupted ongoing noise. We examined brain activity in three different frequency bands: delta, theta, and alpha. Participants' ability to detect gaps depended on different numbers of frequency bands (sometimes one, sometimes two, and sometimes three) at different times. Changes in the number of frequency bands that predict perception are a hallmark of a complex neural system.
40
Panzeri S, Safaai H, De Feo V, Vato A. Implications of the Dependence of Neuronal Activity on Neural Network States for the Design of Brain-Machine Interfaces. Front Neurosci 2016; 10:165. [PMID: 27147955 PMCID: PMC4837323 DOI: 10.3389/fnins.2016.00165] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2016] [Accepted: 04/01/2016] [Indexed: 01/07/2023] Open
Abstract
Brain-machine interfaces (BMIs) can improve the quality of life of patients with sensory and motor disabilities by both decoding motor intentions expressed by neural activity, and by encoding artificially sensed information into patterns of neural activity elicited by causal interventions on the neural tissue. Yet, current BMIs can exchange relatively small amounts of information with the brain. This problem has proved difficult to overcome by simply increasing the number of recording or stimulating electrodes, because trial-to-trial variability of neural activity partly arises from intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation, and so is shared among neurons. Here we review recent progress in characterizing the state dependence of neural responses, in particular how neural responses depend on endogenous slow fluctuations of network excitability. We then elaborate on how this knowledge may be used to increase the amount of information that BMIs exchange with the brain. Knowledge of the network state can be used to fine-tune the stimulation pattern that should reliably elicit a target neural response used to encode information in the brain, and to discount part of the trial-by-trial variability of neural responses, so that they can be decoded more accurately.
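The "discounting" idea in this review can be sketched with synthetic data (this is our simplification, not the authors' method): regress a slow network-state covariate out of single-trial responses, then check that the residuals decode the stimulus better than the raw responses:

```python
# Toy demonstration of state-discounted decoding on synthetic trials.
# The generative model (signal, state, noise weights) is invented.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 400
stim = rng.integers(0, 2, n_trials)              # two stimulus classes
state = rng.standard_normal(n_trials)            # slow excitability proxy
resp = 1.0 * stim + 2.0 * state + 0.5 * rng.standard_normal(n_trials)

# Regress out the state covariate (least squares with an intercept).
X = np.column_stack([np.ones(n_trials), state])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
residual = resp - X @ beta

def decode_acc(feature, label):
    """Threshold-at-midpoint decoder for a 1-D feature."""
    thr = 0.5 * (feature[label == 0].mean() + feature[label == 1].mean())
    return np.mean((feature > thr) == (label == 1))

acc_raw = decode_acc(resp, stim)
acc_discounted = decode_acc(residual, stim)
```

Because the state-driven variability is shared and stimulus-independent here, removing it sharpens the class separation, which is the intuition behind state-aware BMI decoding.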
Affiliation(s)
- Stefano Panzeri
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Houman Safaai
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Vito De Feo
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
- Alessandro Vato
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto, Italy
41
Abstract
The roles that neural oscillations play in the auditory cortex of the human brain are becoming clearer.
Affiliation(s)
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
42
Prestimulus influences on auditory perception from sensory representations and decision processes. Proc Natl Acad Sci U S A 2016; 113:4842-7. [PMID: 27071110 DOI: 10.1073/pnas.1524087113] [Citation(s) in RCA: 49] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
The qualities of perception depend not only on the sensory inputs but also on the brain state before stimulus presentation. Although the collective evidence from neuroimaging studies for a relation between prestimulus state and perception is strong, the interpretation in the context of sensory computations or decision processes has remained difficult. In the auditory system, for example, previous studies have reported a wide range of effects in terms of the perceptually relevant frequency bands and state parameters (phase/power). To dissociate influences of state on earlier sensory representations and higher-level decision processes, we collected behavioral and EEG data in human participants performing two auditory discrimination tasks relying on distinct acoustic features. Using single-trial decoding, we quantified the relation between prestimulus activity, relevant sensory evidence, and choice in different task-relevant EEG components. Within auditory networks, we found that phase had no direct influence on choice, whereas power in task-specific frequency bands affected the encoding of sensory evidence. Within later-activated frontoparietal regions, theta and alpha phase had a direct influence on choice, without involving sensory evidence. These results delineate two consistent mechanisms by which prestimulus activity shapes perception. However, the timescales of the relevant neural activity depend on the specific brain regions engaged by the respective task.
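The reported dissociation (power, but not phase, predicting choice within auditory networks) can be illustrated on synthetic single trials; this is a hedged toy model, not the study's decoding pipeline:

```python
# Synthetic test of whether prestimulus power vs. phase predicts choice.
# The generative model (logistic dependence on power only) is invented.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
power = rng.standard_normal(n)            # z-scored prestimulus power
phase = rng.uniform(-np.pi, np.pi, n)     # prestimulus phase
# Choice depends on power only in this toy generative model.
p_choice = 1.0 / (1.0 + np.exp(-(0.8 * power)))
choice = (rng.uniform(size=n) < p_choice).astype(float)

def point_biserial(x, y):
    """Correlation between a continuous x and a binary y."""
    return np.corrcoef(x, y)[0, 1]

r_power = point_biserial(power, choice)
# For phase, correlate choice with cos(phase) as a simple linear proxy.
r_phase = point_biserial(np.cos(phase), choice)
```

In real data the circular nature of phase calls for circular statistics rather than the cosine proxy used here.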
43
Irregular Speech Rate Dissociates Auditory Cortical Entrainment, Evoked Responses, and Frontal Alpha. J Neurosci 2015; 35:14691-701. [PMID: 26538641 DOI: 10.1523/jneurosci.2243-15.2015] [Citation(s) in RCA: 78] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The entrainment of slow rhythmic auditory cortical activity to the temporal regularities in speech is considered to be a central mechanism underlying auditory perception. Previous work has shown that entrainment is reduced when the quality of the acoustic input is degraded, but has also linked rhythmic activity at similar time scales to the encoding of temporal expectations. To understand these bottom-up and top-down contributions to rhythmic entrainment, we manipulated the temporal predictive structure of speech by parametrically altering the distribution of pauses between syllables or words, thereby rendering the local speech rate irregular while preserving intelligibility and the envelope fluctuations of the acoustic signal. Recording EEG activity in human participants, we found that this manipulation did not alter neural processes reflecting the encoding of individual sound transients, such as evoked potentials. However, the manipulation significantly reduced the fidelity of auditory delta (but not theta) band entrainment to the speech envelope. It also reduced left frontal alpha power, and this alpha reduction was predictive of the reduced delta entrainment across participants. Our results show that rhythmic auditory entrainment in delta and theta bands reflect functionally distinct processes. Furthermore, they reveal that delta entrainment is under top-down control and likely reflects prefrontal processes that are sensitive to acoustical regularities rather than the bottom-up encoding of acoustic features. SIGNIFICANCE STATEMENT The entrainment of rhythmic auditory cortical activity to the speech envelope is considered to be critical for hearing. Previous work has proposed divergent views in which entrainment reflects either early evoked responses related to sound encoding or high-level processes related to expectation or cognitive selection. Using a manipulation of speech rate, we dissociated auditory entrainment at different time scales. Specifically, our results suggest that delta entrainment is controlled by frontal alpha mechanisms and thus support the notion that rhythmic auditory cortical entrainment is shaped by top-down mechanisms.
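"Entrainment fidelity" of the kind measured here is often quantified as spectral coherence between the speech envelope and the neural signal. A minimal sketch under assumed parameters (not the paper's analysis):

```python
# Coherence between a slow stimulus envelope and a neural signal that
# tracks it. Sampling rate, envelope rate, and SNR are assumed values.
import numpy as np
from scipy.signal import coherence

FS = 100.0
t = np.arange(0, 60, 1 / FS)                   # 60 s at 100 Hz
rng = np.random.default_rng(3)
envelope = np.sin(2 * np.pi * 3.0 * t)         # 3 Hz "speech envelope"
eeg = 0.8 * envelope + rng.standard_normal(t.size)  # tracking + noise

f, coh = coherence(envelope, eeg, fs=FS, nperseg=512)
delta_mask = (f >= 1) & (f <= 4)
theta_mask = (f >= 4) & (f <= 8)
delta_coh = coh[delta_mask].max()              # fidelity in the delta band
theta_coh = coh[theta_mask].mean()
```

Comparing such band-limited coherence across regular vs. irregular speech-rate conditions would mirror the delta/theta dissociation the abstract describes.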
44
Riecke L, Sack AT, Schroeder CE. Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming. Curr Biol 2015; 25:3196-201. [PMID: 26628008 DOI: 10.1016/j.cub.2015.10.045] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2015] [Revised: 10/01/2015] [Accepted: 10/19/2015] [Indexed: 11/30/2022]
Abstract
In many natural listening situations, meaningful sounds (e.g., speech) fluctuate in slow rhythms among other sounds. When a slow rhythmic auditory stream is selectively attended, endogenous delta (1‒4 Hz) oscillations in auditory cortex may shift their timing so that higher-excitability neuronal phases become aligned with salient events in that stream [1, 2]. As a consequence of this stream-brain phase entrainment [3], these events are processed and perceived more readily than temporally non-overlapping events [4-11], essentially enhancing the neural segregation between the attended stream and temporally noncoherent streams [12]. Stream-brain phase entrainment is robust to acoustic interference [13-20] provided that target stream-evoked rhythmic activity can be segregated from noncoherent activity evoked by other sounds [21], a process that usually builds up over time [22-27]. However, it has remained unclear whether stream-brain phase entrainment functionally contributes to this buildup of rhythmic streams or whether it is merely an epiphenomenon of it. Here, we addressed this issue directly by experimentally manipulating endogenous stream-brain phase entrainment in human auditory cortex with non-invasive transcranial alternating current stimulation (TACS) [28-30]. We assessed the consequences of these manipulations on the perceptual buildup of the target stream (the time required to recognize its presence in a noisy background), using behavioral measures in 20 healthy listeners performing a naturalistic listening task. Experimentally induced cyclic 4-Hz variations in stream-brain phase entrainment reliably caused a cyclic 4-Hz pattern in perceptual buildup time. Our findings demonstrate that strong endogenous delta/theta stream-brain phase entrainment accelerates the perceptual emergence of task-relevant rhythmic streams in noisy environments.
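The "cyclic 4-Hz pattern in perceptual buildup time" can be tested with a cosine fit of behavior against TACS phase; the following is a hypothetical sketch on synthetic behavior (effect size and trial counts are invented):

```python
# Fit a cosine to buildup time as a function of 4-Hz TACS phase at
# stream onset; the modulation depth indexes the cyclic effect.
import numpy as np

rng = np.random.default_rng(4)
phase = rng.uniform(0, 2 * np.pi, 600)          # TACS phase at stream onset
buildup = 2.0 + 0.3 * np.cos(phase) + 0.2 * rng.standard_normal(600)

# Least-squares cosine fit: buildup ~ b0 + a*cos(phase) + b*sin(phase).
X = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
coef, *_ = np.linalg.lstsq(X, buildup, rcond=None)
modulation_depth = np.hypot(coef[1], coef[2])    # amplitude of cyclic effect
```

A permutation test on `modulation_depth` (shuffling phases across trials) would give the significance of the cyclic pattern.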
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 Maastricht, the Netherlands
- Alexander T Sack
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 Maastricht, the Netherlands
- Charles E Schroeder
- Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Departments of Neurosurgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY 10032-2695, USA
45
Wilsch A, Obleser J. What works in auditory working memory? A neural oscillations perspective. Brain Res 2015; 1640:193-207. [PMID: 26556773 DOI: 10.1016/j.brainres.2015.10.054] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Revised: 10/28/2015] [Accepted: 10/30/2015] [Indexed: 11/16/2022]
Abstract
Working memory is a limited resource: brains can only maintain small amounts of sensory input (memory load) over a brief period of time (memory decay). The dynamics of slow neural oscillations as recorded using magneto- and electroencephalography (M/EEG) provide a window into the neural mechanics of these limitations. Especially oscillations in the alpha range (8-13Hz) are a sensitive marker for memory load. Moreover, according to current models, the resultant working memory load is determined by the relative noise in the neural representation of maintained information. The auditory domain allows memory researchers to apply and test the concept of noise quite literally: Employing degraded stimulus acoustics increases memory load and, at the same time, allows assessing the cognitive resources required to process speech in noise in an ecologically valid and clinically relevant way. The present review first summarizes recent findings on neural oscillations, especially alpha power, and how they reflect memory load and memory decay in auditory working memory. The focus is specifically on memory load resulting from acoustic degradation. These findings are then contrasted with contextual factors that benefit neural as well as behavioral markers of memory performance, by reducing representational noise. We end on discussing the functional role of alpha power in auditory working memory and suggest extensions of the current methodological toolkit. This article is part of a Special Issue entitled SI: Auditory working memory.
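Alpha power, the memory-load marker this review centers on, is typically extracted from M/EEG with a spectral estimate such as Welch's method. A minimal sketch on toy signals (sampling rate and oscillation amplitudes are assumptions):

```python
# Integrated 8-13 Hz power via Welch's method, compared across two toy
# "load" conditions that differ only in alpha amplitude.
import numpy as np
from scipy.signal import welch

FS = 250.0
rng = np.random.default_rng(5)
t = np.arange(0, 20, 1 / FS)

def alpha_power(sig, fs=FS):
    f, pxx = welch(sig, fs=fs, nperseg=1024)
    mask = (f >= 8) & (f <= 13)
    return pxx[mask].sum() * (f[1] - f[0])   # integrated 8-13 Hz power

low_load = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)
high_load = rng.standard_normal(t.size) + 1.5 * np.sin(2 * np.pi * 10 * t)

p_low = alpha_power(low_load)
p_high = alpha_power(high_load)
```

In a real retention-interval analysis the two conditions would be trials at different set sizes or degradation levels rather than signals with hand-set alpha amplitude.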
Affiliation(s)
- Anna Wilsch
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany.
46
Abstract
Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty-labeling, concatenation, cyclic transfer-alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human "cognome"-the set of computations performed by the nervous system-and new directions are suggested for how the dynamics of the brain (the "dynome") operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation.
Affiliation(s)
- Elliot Murphy
- Division of Psychology and Language Sciences, University College London, London, UK
47
Modeling the effect of locus coeruleus firing on cortical state dynamics and single-trial sensory processing. Proc Natl Acad Sci U S A 2015; 112:12834-9. [PMID: 26417078 DOI: 10.1073/pnas.1516539112] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Neuronal responses to sensory stimuli are not only driven by feedforward sensory pathways but also depend upon intrinsic factors (collectively known as the network state) that include ongoing spontaneous activity and neuromodulation. To understand how these factors together regulate cortical dynamics, we simultaneously recorded spontaneous and somatosensory-evoked multiunit activity from primary somatosensory cortex and from the locus coeruleus (LC), the neuromodulatory nucleus releasing norepinephrine, in urethane-anesthetized rats. We found that bursts of ipsilateral-LC firing preceded increases in cortical excitability by a few tens of milliseconds, and that the 1- to 10-Hz rhythmicity of LC discharge appeared to increase the power of delta-band (1-4 Hz) cortical synchronization. To investigate quantitatively how LC firing might causally influence spontaneous and stimulus-driven cortical dynamics, we then constructed and fitted to these data a model describing the dynamical interaction of stimulus drive, ongoing synchronized cortical activity, and noradrenergic neuromodulation. The model proposes a coupling between LC and cortex that can amplify delta-range cortical fluctuations, and shows how suitably timed phasic LC bursts can lead to enhanced cortical responses to weaker stimuli and increased temporal precision of cortical stimulus-evoked responses. Thus, the temporal structure of noradrenergic modulation may selectively and dynamically enhance or attenuate cortical responses to stimuli. Finally, using the model's prediction of single-trial stimulus-evoked responses to discount state-dependent variability increased the sensory information extracted from cortical responses by ∼70%. This suggests that downstream circuits may extract information more effectively after estimating the state of the circuit transmitting the sensory message.
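The core mechanism (a suitably timed phasic LC burst enhancing the cortical response to a weak stimulus) can be caricatured with a leaky rate unit whose input gain is transiently raised; this is our toy simplification, not the fitted model from the paper:

```python
# A phasic "LC burst" multiplies input gain for 100 ms; the same weak
# stimulus then evokes a larger response. All constants are invented.
import numpy as np

def cortical_response(stim_amp, lc_burst, tau=0.05, dt=0.001, t_end=0.3):
    """Leaky rate unit; the LC burst doubles input gain from 50-150 ms."""
    n = int(t_end / dt)
    r = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        gain = 2.0 if (lc_burst and 0.05 <= t < 0.15) else 1.0
        drive = stim_amp * gain if 0.10 <= t < 0.20 else 0.0  # stimulus window
        r[i] = r[i - 1] + dt / tau * (-r[i - 1] + drive)
    return r

weak = 0.5
resp_no_lc = cortical_response(weak, lc_burst=False)
resp_lc = cortical_response(weak, lc_burst=True)
```

The timing matters: shifting the burst so it no longer overlaps the stimulus window removes the enhancement, matching the abstract's emphasis on "suitably timed" phasic bursts.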
48
Temporal expectations and neural amplitude fluctuations in auditory cortex interactively influence perception. Neuroimage 2015; 124:487-497. [PMID: 26386347 DOI: 10.1016/j.neuroimage.2015.09.019] [Citation(s) in RCA: 49] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2015] [Revised: 08/07/2015] [Accepted: 09/09/2015] [Indexed: 02/02/2023] Open
Abstract
Alignment of neural oscillations with temporally regular input allows listeners to generate temporal expectations. However, it remains unclear how behavior is governed in the context of temporal variability: What role do temporal expectations play, and how do they interact with the strength of neural oscillatory activity? Here, human participants detected near-threshold targets in temporally variable acoustic sequences. Temporal expectation strength was estimated using an oscillator model, and pre-target neural amplitudes in auditory cortex were extracted from magnetoencephalography signals. Temporal expectations modulated target-detection performance, but only when neural delta-band amplitudes were large. Thus, slow neural oscillations act to gate influences of temporal expectation on perception. Furthermore, slow amplitude fluctuations governed linear and quadratic influences of auditory alpha-band activity on performance. By fusing a model of temporal expectation with neural oscillatory dynamics, the current findings show that human perception in temporally variable contexts relies on complex interactions between multiple neural frequency bands.
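The gating result (expectation matters only when delta amplitude is large) has a simple schematic form; the numbers below are invented for illustration, not taken from the fitted model:

```python
# Synthetic trials where temporal expectation raises hit rate only on
# high-amplitude trials, i.e., amplitude gates the expectation effect.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
expectation = rng.uniform(0, 1, n)              # model-derived expectation
amp_high = rng.uniform(size=n) < 0.5            # pre-target delta amplitude
# Detection probability: expectation matters only on high-amplitude trials.
p_hit = 0.5 + 0.3 * (expectation - 0.5) * amp_high
hit = rng.uniform(size=n) < p_hit

def expectation_effect(mask):
    """Hit-rate difference, high- vs. low-expectation trials, within mask."""
    hi = hit[mask & (expectation > 0.5)].mean()
    lo = hit[mask & (expectation <= 0.5)].mean()
    return hi - lo

effect_high_amp = expectation_effect(amp_high)
effect_low_amp = expectation_effect(~amp_high)
```

In the actual study this interaction was tested on continuous expectation and amplitude estimates rather than the binary split used here.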