1. Wei G, Tajik Mansouri Z, Wang X, Stevenson IH. Calibrating Bayesian Decoders of Neural Spiking Activity. J Neurosci 2024; 44:e2158232024. PMID: 38538143; PMCID: PMC11063820; DOI: 10.1523/jneurosci.2158-23.2024.
Abstract
Accurately decoding external variables from observations of neural activity is a major challenge in systems neuroscience. Bayesian decoders, which provide probabilistic estimates, are some of the most widely used. Here we show how, in many common settings, the probabilistic predictions made by traditional Bayesian decoders are overconfident. That is, the estimates for the decoded stimulus or movement variables are more certain than they should be. We then show how Bayesian decoding with latent variables, taking account of low-dimensional shared variability in the observations, can improve calibration, although additional correction for overconfidence is still needed. Using data from males, we examine (1) decoding the direction of grating stimuli from spike recordings in the primary visual cortex in monkeys, (2) decoding movement direction from recordings in the primary motor cortex in monkeys, (3) decoding natural images from multiregion recordings in mice, and (4) decoding position from hippocampal recordings in rats. For each setting, we characterize the overconfidence, and we describe a possible method to correct miscalibration post hoc. Properly calibrated Bayesian decoders may alter theoretical results on probabilistic population coding and lead to brain-machine interfaces that more accurately reflect confidence levels when identifying external variables.
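The overconfidence described here can be checked empirically: for a calibrated decoder, nominal X% credible intervals should cover the true value X% of the time. Below is a minimal toy sketch of such a coverage check (illustrative only, not the authors' method; all names and numbers are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decoder: the true estimation error has SD 1.0, but the decoder
# reports Gaussian posteriors with SD 0.5 -- i.e., it is overconfident.
true_sigma, stated_sigma = 1.0, 0.5
n_trials = 10_000
stim = rng.uniform(-5.0, 5.0, n_trials)              # true stimulus values
estimate = stim + rng.normal(0.0, true_sigma, n_trials)

# Empirical coverage of the nominal 95% credible interval.
half_width = 1.96 * stated_sigma
coverage = np.mean(np.abs(estimate - stim) <= half_width)
print(f"nominal 95% interval covers {coverage:.1%} of trials")
```

For a calibrated decoder the printed coverage would sit near 95%; here it falls well short, which is the kind of miscalibration the paper characterizes and corrects post hoc.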
Affiliation(s)
- Ganchao Wei
- Department of Statistical Science, Duke University, Durham, North Carolina 27708
- Ian H Stevenson
- Departments of Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269
- Psychological Sciences, University of Connecticut, Storrs, Connecticut 06269
- Connecticut Institute for Brain and Cognitive Science, University of Connecticut, Storrs, Connecticut 06269
2. Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. PMID: 38443626; DOI: 10.1038/s41583-024-00796-z.
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
3. Nakai S, Kitanishi T, Mizuseki K. Distinct manifold encoding of navigational information in the subiculum and hippocampus. Sci Adv 2024; 10:eadi4471. PMID: 38295173; PMCID: PMC10830115; DOI: 10.1126/sciadv.adi4471.
Abstract
The subiculum (SUB) plays a crucial role in spatial navigation and encodes navigational information differently from the hippocampal CA1 area. However, the representation of subicular population activity remains unknown. Here, we investigated the neuronal population activity recorded extracellularly from the CA1 and SUB of rats performing T-maze and open-field tasks. The trajectory of population activity in both areas was confined to low-dimensional neural manifolds homeomorphic to external space. The manifolds conveyed position, speed, and future path information with higher decoding accuracy in the SUB than in the CA1. The manifolds exhibited common geometry across rats and regions for the CA1 and SUB and between tasks in the SUB. During post-task ripples in slow-wave sleep, population activity represented reward locations/events more frequently in the SUB than in the CA1. Thus, the CA1 and SUB encode information distinctly into the neural manifolds that underlie navigational information processing during wakefulness and sleep.
Affiliation(s)
- Shinya Nakai
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Takuma Kitanishi
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- Komaba Institute for Science, The University of Tokyo, Meguro, Tokyo 153-8902, Japan
- PRESTO, Japan Science and Technology Agency (JST), Kawaguchi, Saitama 332-0012, Japan
- Kenji Mizuseki
- Department of Physiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka 545-8585, Japan
- Department of Physiology, Graduate School of Medicine, Osaka City University, Osaka 545-8585, Japan
4. Hatsopoulos N, Moore D, MacLean J, Walker J. A dynamic subset of network interactions underlies tuning to natural movements in marmoset sensorimotor cortex. Res Sq 2023:rs.3.rs-3750312 (preprint). PMID: 38234779; PMCID: PMC10793486; DOI: 10.21203/rs.3.rs-3750312/v1.
Abstract
Mechanisms of computation in sensorimotor cortex must be flexible and robust to support skilled motor behavior. Patterns of neuronal coactivity emerge as a result of computational processes. Pairwise spike-time statistical relationships, across the population, can be summarized as a functional network (FN) which retains single-unit properties. We record populations of single-unit neural activity in forelimb sensorimotor cortex during prey-capture and spontaneous behavior and use an encoding model incorporating kinematic trajectories and network features to predict single-unit activity during forelimb movements. The contribution of network features depends on structured connectivity within strongly connected functional groups. We identify a context-specific functional group that is highly tuned to kinematics and reorganizes its connectivity between spontaneous and prey-capture movements. In the remaining context-invariant group, interactions are comparatively stable across behaviors and units are less tuned to kinematics. This suggests different roles in producing natural forelimb movements and contextualizes single-unit tuning properties within population dynamics.
5. Sachdeva P, Bak JH, Livezey J, Kirst C, Frank L, Bhattacharyya S, Bouchard KE. Resolving Non-identifiability Mitigates Bias in Models of Neural Tuning and Functional Coupling. bioRxiv 2023:2023.07.11.548615 (preprint). PMID: 37503030; PMCID: PMC10370036; DOI: 10.1101/2023.07.11.548615.
Abstract
In the brain, all neurons are driven by the activity of other neurons, some of which may be simultaneously recorded, but most of which are not. As such, models of neuronal activity need to account for the simultaneously recorded neurons as well as the influences of unmeasured neurons. This can be done through inclusion of model terms for observed external variables (e.g., tuning to stimuli) as well as terms for latent sources of variability. Determining the influence of groups of neurons on each other, relative to other influences, is important for understanding brain function. The parameters of statistical models fit to data are commonly used to gain insight into the relative importance of those influences. Scientific interpretation of models hinges upon unbiased parameter estimates. However, evaluations of bias in inference are rarely performed, and sources of bias are poorly understood. Through extensive numerical study and analytic calculation, we show that common inference procedures and models are typically biased. We demonstrate that accurate parameter selection before estimation resolves model non-identifiability and mitigates bias. In diverse neurophysiology data sets, we found that common methods often overestimate the contributions of coupling to other neurons while underestimating tuning to exogenous variables. We explain heterogeneity in observed biases across data sets in terms of data statistics. Finally, counter to common intuition, we found that model non-identifiability contributes to bias, not variance, making it a particularly insidious form of statistical error. Together, our results identify the causes of statistical biases in common models of neural data, provide inference procedures to mitigate that bias, and reveal and explain the impact of those biases in diverse neural data sets.
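The overestimation of coupling reported here is closely related to classic omitted-variable bias, which a small simulation can illustrate (a hypothetical toy example, not the paper's analysis): when a "coupled" neuron shares stimulus drive with the target neuron, a model that omits the stimulus term misattributes tuning to coupling.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# A shared stimulus drives both the recorded "other" neuron x and the
# target neuron y; y has tuning but no true coupling to x.
s = rng.normal(size=n)                    # stimulus
x = s + rng.normal(size=n)                # other neuron, tuned to s
y = 2.0 * s + rng.normal(size=n)          # target neuron, tuning only

# Full model (tuning + coupling) recovers roughly [2, 0]; the
# coupling-only model misattributes shared stimulus drive to coupling.
A = np.column_stack([s, x])
beta_full = np.linalg.lstsq(A, y, rcond=None)[0]
beta_coupling_only = (x @ y) / (x @ x)
print(np.round(beta_full, 2), round(float(beta_coupling_only), 2))
```

The coupling-only fit reports a large spurious coupling weight even though none exists in the simulation, mirroring the overestimated coupling contributions the abstract describes.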
Affiliation(s)
- Pratik Sachdeva
- Physics Department, UC Berkeley
- Redwood Center for Theoretical Neuroscience, UC Berkeley
- Ji Hyun Bak
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Jesse Livezey
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Christoph Kirst
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Scientific Data Division, Lawrence Berkeley National Lab
- Department of Anatomy, UC San Francisco
- Loren Frank
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Departments of Physiology and Psychiatry, UC San Francisco
- Howard Hughes Medical Institute
- Kristofer E. Bouchard
- Redwood Center for Theoretical Neuroscience, UC Berkeley
- Kavli Institute for Fundamental Neuroscience, UC San Francisco
- Biological Systems and Engineering Division, Lawrence Berkeley National Lab
- Scientific Data Division, Lawrence Berkeley National Lab
- Helen Wills Neuroscience Institute, UC Berkeley
6. Sundiang M, Hatsopoulos NG, MacLean JN. Dynamic structure of motor cortical neuron coactivity carries behaviorally relevant information. Netw Neurosci 2023; 7:661-678. PMID: 37397877; PMCID: PMC10312288; DOI: 10.1162/netn_a_00298.
Abstract
Skillful, voluntary movements are underpinned by computations performed by networks of interconnected neurons in the primary motor cortex (M1). Computations are reflected by patterns of coactivity between neurons. Using pairwise spike time statistics, coactivity can be summarized as a functional network (FN). Here, we show that the structure of FNs constructed from an instructed-delay reach task in nonhuman primates is behaviorally specific: low-dimensional embedding and graph alignment scores show that FNs constructed from closer target reach directions are also closer in network space. Using short intervals across a trial, we constructed temporal FNs and found that temporal FNs traverse a low-dimensional subspace in a reach-specific trajectory. Alignment scores show that FNs become separable and correspondingly decodable shortly after the instruction cue. Finally, we observe that reciprocal connections in FNs transiently decrease following the instruction cue, consistent with the hypothesis that information external to the recorded population temporarily alters the structure of the network at this moment.
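As a rough illustration of the FN construction step described above (a simplified stand-in: the paper uses pairwise spike-time statistics, approximated here by correlations of binned spike counts, with all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spike rasters: n_neurons x n_bins binary matrix, with one pair
# of neurons made perfectly coactive so a strong functional edge exists.
n_neurons, n_bins = 20, 5000
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(float)
spikes[1] = spikes[0]                       # coactive pair (0, 1)

# Summarize pairwise coactivity as a functional network: correlate the
# binned spike counts and keep only strong edges.
corr = np.corrcoef(spikes)
np.fill_diagonal(corr, 0.0)
adjacency = corr > 0.2
print("functional edges:", int(adjacency.sum()) // 2)
```

The resulting adjacency matrix is the kind of object whose low-dimensional embedding and alignment across reach conditions the paper analyzes.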
Affiliation(s)
- Marina Sundiang
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Nicholas G. Hatsopoulos
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- University of Chicago Neuroscience Institute, Chicago, IL, USA
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, USA
- Jason N. MacLean
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, USA
- University of Chicago Neuroscience Institute, Chicago, IL, USA
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
7. Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724; PMCID: PMC9928778; DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
8. Sarmashghi M, Jadhav SP, Eden U. Efficient spline regression for neural spiking data. PLoS One 2021; 16:e0258321. PMID: 34644315; PMCID: PMC8513896; DOI: 10.1371/journal.pone.0258321.
Abstract
Point process generalized linear models (GLMs) provide a powerful tool for characterizing the coding properties of neural populations. Spline basis functions are often used in point process GLMs when the relationship between the spiking and driving signals is nonlinear, but common choices for the structure of these spline bases often lead to loss of statistical power and numerical instability when the signals that influence spiking are bounded above or below. In particular, history-dependent spike train models often suffer these issues at times immediately following a previous spike. This can make inferences related to refractoriness and bursting activity more challenging. Here, we propose a modified set of spline basis functions that assumes a flat derivative at the endpoints and show that this limits the uncertainty and numerical issues associated with cardinal splines. We illustrate the application of this modified basis to the problem of simultaneously estimating the place field and history-dependent properties of a set of neurons from the CA1 region of rat hippocampus, and compare it with other commonly used basis functions. We have made MATLAB code available to implement spike train regression using these modified basis functions.
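The flat-derivative endpoint modification can be sketched as follows (an illustrative Python re-implementation, not the authors' released MATLAB code; reflecting the out-of-range control points is one way to impose a zero derivative at the boundaries of a cardinal-spline basis):

```python
import numpy as np

def cardinal_spline_basis(x, knots, tension=0.5):
    """Design matrix mapping control-point heights at `knots` to spline
    values at `x`. Out-of-range control points are reflected, which
    forces the fitted curve to have a zero (flat) derivative at both
    boundary knots."""
    s = tension
    M = np.array([[-s, 2 - s, s - 2, s],
                  [2 * s, s - 3, 3 - 2 * s, -s],
                  [-s, 0.0, s, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    n = len(knots)

    def reflect(j):  # map indices -1 and n back inside [0, n-1]
        return -j if j < 0 else (2 * (n - 1) - j if j > n - 1 else j)

    X = np.zeros((len(x), n))
    for row, xi in enumerate(x):
        i = int(np.clip(np.searchsorted(knots, xi) - 1, 0, n - 2))
        u = (xi - knots[i]) / (knots[i + 1] - knots[i])
        weights = np.array([u**3, u**2, u, 1.0]) @ M
        for j, wj in zip([i - 1, i, i + 1, i + 2], weights):
            X[row, reflect(j)] += wj
    return X

knots = np.linspace(0.0, 1.0, 6)
x = np.linspace(0.0, 1.0, 101)
X = cardinal_spline_basis(x, knots)
print(X.shape, bool(np.allclose(X.sum(axis=1), 1.0)))
```

Each column of `X` can then serve as one regressor in a point process GLM; the rows sum to one, so the basis reproduces constant functions exactly.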
Affiliation(s)
- Mehrad Sarmashghi
- Systems Engineering, Boston University, Boston, Massachusetts, United States of America
- Shantanu P. Jadhav
- Psychology/Neuroscience, Brandeis University, Waltham, Massachusetts, United States of America
- Uri Eden
- Mathematics and Statistics, Boston University, Boston, Massachusetts, United States of America
9. Sachdeva PS, Livezey JA, Dougherty ME, Gu BM, Berke JD, Bouchard KE. Improved inference in coupling, encoding, and decoding models and its consequence for neuroscientific interpretation. J Neurosci Methods 2021; 358:109195. PMID: 33905791; DOI: 10.1016/j.jneumeth.2021.109195.
Abstract
BACKGROUND: A central goal of systems neuroscience is to understand the relationships amongst constituent units in neural populations, and their modulation by external factors, using high-dimensional and stochastic neural recordings. Parametric statistical models (e.g., coupling, encoding, and decoding models) play an instrumental role in accomplishing this goal. However, extracting conclusions from a parametric model requires that it is fit using an inference algorithm capable of selecting the correct parameters and properly estimating their values. Traditional approaches to parameter inference have been shown to suffer from failures in both selection and estimation. The recent development of algorithms that ameliorate these deficiencies raises the question of whether past work relying on such inference procedures has produced inaccurate systems neuroscience models, thereby impairing their interpretation.
NEW METHOD: We used algorithms based on Union of Intersections (UoI), a statistical inference framework based on stability principles, capable of improved selection and estimation.
COMPARISON: We fit functional coupling, encoding, and decoding models across a battery of neural datasets using both UoI and baseline inference procedures (e.g., ℓ1-penalized GLMs), and compared the structure of their fitted parameters.
RESULTS: Across recording modality, brain region, and task, we found that UoI inferred models with increased sparsity, improved stability, and qualitatively different parameter distributions, while maintaining predictive performance. We obtained highly sparse functional coupling networks with substantially different community structure, more parsimonious encoding models, and decoding models that relied on fewer single units.
CONCLUSIONS: Together, these results demonstrate that improved parameter inference, achieved via UoI, reshapes interpretation in diverse neuroscience contexts.
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Department of Physics, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Maximilian E Dougherty
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Bon-Mi Gu
- Department of Neurology, University of California, San Francisco, San Francisco, 94143, CA, USA
- Joshua D Berke
- Department of Neurology, University of California, San Francisco, San Francisco, 94143, CA, USA; Department of Psychiatry; Neuroscience Graduate Program; Kavli Institute for Fundamental Neuroscience; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, 94143, CA, USA
- Kristofer E Bouchard
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA; Computational Resources Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, 94720, CA, USA
10. Perich MG, Rajan K. Rethinking brain-wide interactions through multi-region 'network of networks' models. Curr Opin Neurobiol 2020; 65:146-151. PMID: 33254073; DOI: 10.1016/j.conb.2020.11.003.
Abstract
The neural control of behavior is distributed across many functionally and anatomically distinct brain regions even in small nervous systems. While classical neuroscience models treated these regions as a set of hierarchically isolated nodes, the brain comprises a recurrently interconnected network in which each region is intimately modulated by many others. Uncovering these interactions is now possible through experimental techniques that access large neural populations from many brain regions simultaneously. Harnessing these large-scale datasets, however, requires new theoretical approaches. Here, we review recent work to understand brain-wide interactions using multi-region 'network of networks' models and discuss how they can guide future experiments. We also emphasize the importance of multi-region recordings, and posit that studying individual components in isolation will be insufficient to understand the neural basis of behavior.
Affiliation(s)
- Matthew G Perich
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Kanaka Rajan
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
11. Griffin DM, Strick PL. The motor cortex uses active suppression to sculpt movement. Sci Adv 2020; 6:eabb8395. PMID: 32937371; PMCID: PMC7442473; DOI: 10.1126/sciadv.abb8395.
Abstract
Even the simplest movements are generated by a remarkably complex pattern of muscle activity. Fast, accurate movements at a single joint are produced by a stereotyped pattern that includes a decrease in any preexisting activity in antagonist muscles. This premovement suppression is necessary to prevent the antagonist muscle from opposing movement generated by the agonist muscle. Here, we provide evidence that the primary motor cortex (M1) sends a command signal that generates this premovement suppression. Thus, output neurons in M1 sculpt complex spatiotemporal patterns of motor output not only by actively turning on muscles but also by actively turning them off.
Affiliation(s)
- Darcy M Griffin
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Systems Neuroscience Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Brain Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Peter L Strick
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Systems Neuroscience Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Brain Institute, University of Pittsburgh, Pittsburgh, PA, USA
12. Ghanbari A, Ren N, Keine C, Stoelzel C, Englitz B, Swadlow HA, Stevenson IH. Modeling the Short-Term Dynamics of in Vivo Excitatory Spike Transmission. J Neurosci 2020; 40:4185-4202. PMID: 32303648; PMCID: PMC7244199; DOI: 10.1523/jneurosci.1482-19.2020.
Abstract
Information transmission in neural networks is influenced by both short-term synaptic plasticity (STP) as well as nonsynaptic factors, such as after-hyperpolarization currents and changes in excitability. Although these effects have been widely characterized in vitro using intracellular recordings, how they interact in vivo is unclear. Here, we develop a statistical model of the short-term dynamics of spike transmission that aims to disentangle the contributions of synaptic and nonsynaptic effects based only on observed presynaptic and postsynaptic spiking. The model includes a dynamic functional connection with short-term plasticity as well as effects due to the recent history of postsynaptic spiking and slow changes in postsynaptic excitability. Using paired spike recordings, we find that the model accurately describes the short-term dynamics of in vivo spike transmission at a diverse set of identified and putative excitatory synapses, including a pair of connected neurons within thalamus in mouse, a thalamocortical connection in a female rabbit, and an auditory brainstem synapse in a female gerbil. We illustrate the utility of this modeling approach by showing how the spike transmission patterns captured by the model may be sufficient to account for stimulus-dependent differences in spike transmission in the auditory brainstem (endbulb of Held). 
Finally, we apply this model to large-scale multielectrode recordings to illustrate how such an approach has the potential to reveal cell type-specific differences in spike transmission in vivo. Although STP parameters estimated from ongoing presynaptic and postsynaptic spiking are highly uncertain, our results are partially consistent with previous intracellular observations in these synapses.
SIGNIFICANCE STATEMENT: Although synaptic dynamics have been extensively studied and modeled using intracellular recordings of postsynaptic currents and potentials, inferring synaptic effects from extracellular spiking is challenging. Whether or not a synaptic current contributes to postsynaptic spiking depends not only on the amplitude of the current, but also on many other factors, including the activity of other, typically unobserved, synapses, the overall excitability of the postsynaptic neuron, and how recently the postsynaptic neuron has spiked. Here, we developed a model that, using only observations of presynaptic and postsynaptic spiking, aims to describe the dynamics of in vivo spike transmission by modeling both short-term synaptic plasticity (STP) and nonsynaptic effects. This approach may provide a novel description of fast, structured changes in spike transmission.
Affiliation(s)
- Naixin Ren
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Christian Keine
- Carver College of Medicine, Iowa Neuroscience Institute, Department of Anatomy and Cell Biology, University of Iowa, Iowa City, IA 52242
- Carl Stoelzel
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Bernhard Englitz
- Department of Neurophysiology, Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, Netherlands
- Harvey A Swadlow
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
- Ian H Stevenson
- Department of Biomedical Engineering
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06268
13. Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:e47012. PMID: 31850846; PMCID: PMC6989090; DOI: 10.7554/elife.47012.
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
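The Poisson generalized linear model that the CBEM reinterprets can be sketched in a few lines, with its linear, nonlinear, and Poisson stages made explicit. The stimulus, filter, and baseline here are illustrative values, not fitted parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D stimulus and a decaying linear filter (illustrative values)
T, K = 500, 10                       # time bins, filter length
stim = rng.standard_normal(T)
k = np.exp(-np.arange(K) / 3.0)
k /= np.linalg.norm(k)

# Linear stage: convolve the stimulus with the filter (causal)
drive = np.convolve(stim, k, mode="full")[:T]

# Nonlinear stage: exponential link gives a nonnegative rate per bin
dt = 0.001                           # 1 ms bins
rate = np.exp(-2.0 + drive)          # baseline keeps rates modest

# Poisson stage: conditionally independent spike counts per bin
spikes = rng.poisson(rate * dt)

print(spikes.shape, int(spikes.sum()))
```

In the CBEM, the single linear-nonlinear cascade above is replaced by separate excitatory and inhibitory conductance cascades; the GLM is recovered when the two are balanced.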
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
14
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. PMID: 31666513; PMCID: PMC6821748; DOI: 10.1038/s41467-019-12572-0.
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground-truth in vitro and in vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
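A minimal example of the model class being fit, a leaky integrate-and-fire neuron driven by a hidden input with some mean and variance, can be forward-simulated as below. Parameter values are illustrative, and this simulation is not the authors' likelihood-based inference method:

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire neuron with noisy input, Euler-integrated.
dt, T = 1e-4, 2.0                       # 0.1 ms steps, 2 s of simulation
tau_m, v_rest, v_th, v_reset = 0.02, -70.0, -50.0, -65.0
mu, sigma = 25.0, 4.0                   # mean and s.d. of hidden input (mV)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal()
    v += dt / tau_m * (v_rest - v + mu) + noise
    if v >= v_th:                       # threshold crossing: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(len(spike_times))
```

Inference would go the other way: given only the spike times, recover mu, sigma, and the coupling parameters via the derived likelihood.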
15
Ghanbari A, Lee CM, Read HL, Stevenson IH. Modeling stimulus-dependent variability improves decoding of population neural responses. J Neural Eng 2019; 16:066018. PMID: 31404915; DOI: 10.1088/1741-2552/ab3a68.
Abstract
OBJECTIVE Neural responses to repeated presentations of an identical stimulus often show substantial trial-to-trial variability. How the mean firing rate varies in response to different stimuli or during different movements (tuning curves) has been extensively modeled in a wide variety of neural systems. However, the variability of neural responses can also have clear tuning independent of the tuning in the mean firing rate. This suggests that the variability could contain information regarding the stimulus/movement beyond what is encoded in the mean firing rate. Here we demonstrate how taking variability into account can improve neural decoding. APPROACH In a typical neural coding model, spike counts are assumed to be Poisson with the mean response depending on an external variable, such as a stimulus or movement. Bayesian decoding methods then use the probabilities under these Poisson tuning models (the likelihood) to estimate the probability of each stimulus given the spikes on a given trial (the posterior). However, under the Poisson model, spike count variability is always exactly equal to the mean (Fano factor = 1). Here we use two alternative models, the Conway-Maxwell-Poisson (CMP) model and the negative binomial (NB) model, to more flexibly characterize how neural variability depends on external stimuli. These models both contain the Poisson distribution as a special case but have an additional parameter that allows the variance to be greater than the mean (Fano factor > 1) or, for the CMP model, less than the mean (Fano factor < 1). MAIN RESULTS We find that neural responses in primary motor (M1), visual (V1), and auditory (A1) cortices have diverse tuning in both their mean firing rates and response variability. Across cortical areas, we find that Bayesian decoders using the CMP or NB models improve stimulus/movement estimation accuracy by 4%-12% compared to the Poisson model.
SIGNIFICANCE Moreover, the uncertainty of the non-Poisson decoders more accurately reflects the magnitude of estimation errors. In addition to tuning curves that reflect average neural responses, stimulus-dependent response variability may be an important aspect of the neural code. Modeling this structure could, potentially, lead to improvements in brain-machine interfaces.
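The effect of overdispersion on Bayesian decoding can be illustrated with a toy negative binomial decoder over discrete stimuli. The tuning curve, Fano factor, and spike count below are hypothetical, and the CMP case is omitted for brevity:

```python
import numpy as np
from scipy.stats import poisson, nbinom

# Hypothetical tuning: mean spike count for each of 8 stimuli (one neuron)
means = np.array([2., 4., 8., 12., 8., 4., 2., 1.])
fano = 2.0                         # Fano factor > 1 (overdispersed)

observed = 9                       # spike count on one trial

# Poisson posterior under a flat prior: likelihood ∝ Poisson(observed; mean)
lik_p = poisson.pmf(observed, means)
post_p = lik_p / lik_p.sum()

# Negative binomial with matched mean and variance = fano * mean.
# With var = mu + mu^2/r:  r = mu / (fano - 1),  p = r / (r + mu)
r = means / (fano - 1.0)
p = r / (r + means)
lik_nb = nbinom.pmf(observed, r, p)
post_nb = lik_nb / lik_nb.sum()

print(post_p.round(3))
print(post_nb.round(3))
```

Because the NB likelihood is broader, its posterior is flatter, i.e. the decoder reports less (over)confidence for the same observation.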
Affiliation(s)
- Abed Ghanbari
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT, United States of America
16
Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. PMID: 30346943; PMCID: PMC6219819; DOI: 10.1371/journal.pcbi.1006490.
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the “hidden” portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have—or do not have—major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with “strong” coupling—connection weights that scale with 1/√N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions. No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network’s true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed.
For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured “effective” synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand and potentially alter circuit function—or dysfunction.
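For a purely linear network, the path-sum correction described above has a closed form: marginalizing the hidden units at steady state adds J_oh (I - J_hh)^{-1} J_ho to the observed-to-observed block, where (I - J_hh)^{-1} sums all paths through hidden units. A sketch under that linear assumption, with random illustrative weights rather than the paper's spiking model:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_hid = 3, 5

# Ground-truth weights of a linear-rate network, partitioned into
# observed (o) and hidden (h) blocks. Values are illustrative.
J_oo = 0.1 * rng.standard_normal((n_obs, n_obs))
J_oh = 0.1 * rng.standard_normal((n_obs, n_hid))
J_ho = 0.1 * rng.standard_normal((n_hid, n_obs))
J_hh = 0.1 * rng.standard_normal((n_hid, n_hid))

# Effective observed couplings: direct term plus the sum over all
# hidden paths, since (I - J_hh)^{-1} = sum_k J_hh^k.
J_eff = J_oo + J_oh @ np.linalg.inv(np.eye(n_hid) - J_hh) @ J_ho

print(np.round(J_eff - J_oo, 3))   # correction contributed by hidden paths
```

The matrix inverse and the truncated path sum agree whenever the hidden subnetwork is stable (spectral radius of J_hh below 1).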
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America; Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America; Allen Institute for Brain Science, Seattle, Washington, United States of America
17
Lawlor PN, Perich MG, Miller LE, Kording KP. Linear-nonlinear-time-warp-poisson models of neural activity. J Comput Neurosci 2018; 45:173-191. PMID: 30294750; DOI: 10.1007/s10827-018-0696-6.
Abstract
Prominent models of spike trains assume only one source of variability, stochastic (Poisson) spiking, when stimuli and behavior are fixed. However, spike trains may also reflect variability due to internal processes such as planning. For example, we can plan a movement at one point in time and execute it at some arbitrary later time. Neurons involved in planning may thus share an underlying time course that is not precisely locked to the actual movement. Here we combine the standard Linear-Nonlinear-Poisson (LNP) model with Dynamic Time Warping (DTW) to account for shared temporal variability. When applied to recordings from macaque premotor cortex, we find that time warping considerably improves predictions of neural activity. We suggest that such temporal variability is a widespread phenomenon in the brain that should be modeled.
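The time-warping idea can be illustrated with the classic DTW distance between a rate template and a temporally shifted copy; this toy alignment is not the authors' joint LNP-plus-DTW fitting procedure:

```python
import numpy as np

def dtw_cost(x, y):
    """Classic O(len(x)*len(y)) dynamic-time-warping alignment cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical shared rate template and a trial executed slightly later
t = np.linspace(0, 1, 50)
template = np.exp(-((t - 0.5) ** 2) / 0.02)       # planned time course
delayed = np.exp(-((t - 0.65) ** 2) / 0.02)       # same shape, shifted

print(dtw_cost(template, delayed) < np.abs(template - delayed).sum())
```

Warping absorbs the temporal offset, so the aligned cost is far below the bin-by-bin mismatch that a fixed-time LNP model would have to explain as rate variability.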
Affiliation(s)
- Patrick N Lawlor
- Division of Child Neurology, Children's Hospital of Philadelphia, Philadelphia, PA, USA.
- Lee E Miller
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Konrad P Kording
- Departments of Bioengineering and Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
18
Lara AH, Cunningham JP, Churchland MM. Different population dynamics in the supplementary motor area and motor cortex during reaching. Nat Commun 2018; 9:2754. PMID: 30013188; PMCID: PMC6048147; DOI: 10.1038/s41467-018-05146-z.
Abstract
Neural populations perform computations through their collective activity. Different computations likely require different population-level dynamics. We leverage this assumption to examine neural responses recorded from the supplementary motor area (SMA) and motor cortex. During visually guided reaching, the respective roles of these areas remain unclear; neurons in both areas exhibit preparation-related activity and complex patterns of movement-related activity. To explore population dynamics, we employ a novel "hypothesis-guided" dimensionality reduction approach. This approach reveals commonalities but also stark differences: linear population dynamics, dominated by rotations, are prominent in motor cortex but largely absent in SMA. In motor cortex, the observed dynamics produce patterns resembling muscle activity. Conversely, the non-rotational patterns in SMA co-vary with cues regarding when movement should be initiated. Thus, while SMA and motor cortex display superficially similar single-neuron responses during visually guided reaching, their different population dynamics indicate they are likely performing quite different computations.
Affiliation(s)
- A H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
- J P Cunningham
- Department of Statistics, Columbia University, New York, NY, 10027, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA
- Center for Theoretical Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA
- M M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, 10032, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, 10027, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, 10027, USA.
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, 10032, USA.
19
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. PMID: 29331232; PMCID: PMC6292525; DOI: 10.1016/j.heares.2017.12.021.
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed before the models can be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
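A basic STRF estimate of the static-filter kind covered in the tutorial portion can be sketched as ridge-regularized regression of a response on lagged stimulus history. The simulated white-noise "spectrogram" and ground-truth filter below are illustrative, not from any dataset in the review:

```python
import numpy as np

rng = np.random.default_rng(3)
T, F, H = 2000, 6, 8                 # time bins, frequency channels, history lags

# White-noise spectrogram and a known ground-truth STRF (illustrative)
stim = rng.standard_normal((T, F))
strf_true = np.outer(np.exp(-np.arange(H) / 2.0), np.sin(np.arange(F)))

# Lagged design matrix: each row holds the recent stimulus history
X = np.zeros((T, H * F))
for lag in range(H):
    X[lag:, lag * F:(lag + 1) * F] = stim[:T - lag]

rate = X @ strf_true.ravel()
resp = rate + 0.5 * rng.standard_normal(T)     # noisy linear response

# Ridge-regularized STRF estimate (regularized reverse correlation)
lam = 1.0
strf_hat = np.linalg.solve(X.T @ X + lam * np.eye(H * F), X.T @ resp)

r = np.corrcoef(strf_hat, strf_true.ravel())[0, 1]
print(round(r, 3))
```

Context-dependent variants discussed in the review would let the filter coefficients themselves vary with behavioral or sensory state rather than stay fixed.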
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States.
20
Smith RJ, Soares AB, Rouse AG, Schieber MH, Thakor NV. Modeling task-specific neuronal ensembles improves decoding of grasp. J Neural Eng 2018; 15:036006. PMID: 29393065; DOI: 10.1088/1741-2552/aaac93.
Abstract
OBJECTIVE Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. APPROACH In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. MAIN RESULTS Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p < 0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. 
SIGNIFICANCE These results suggest that variations in the functional connectivity of neuronal ensembles in motor-related cortical regions carry additional discriminative information about motor behavior. This information is relevant for decoding complex tasks such as grasping objects and may serve as the basis for more reliable and accurate neural prostheses.
Affiliation(s)
- Ryan J Smith
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
21
Abstract
Psychology moved beyond the stimulus response mapping of behaviorism by adopting an information processing framework. This shift from behavioral to cognitive science was partly inspired by work demonstrating that the concept of information could be defined and quantified (Shannon, 1948). This transition developed further from cognitive science into cognitive neuroscience, in an attempt to measure information in the brain. In the cognitive neurosciences, however, the term information is often used without a clear definition. This paper will argue that, if the formulation proposed by Shannon is applied to modern neuroimaging, then numerous results would be interpreted differently. More specifically, we argue that much modern cognitive neuroscience implicitly focuses on the question of how we can interpret the activations we record in the brain (experimenter-as-receiver), rather than on the core question of how the rest of the brain can interpret those activations (cortex-as-receiver). A clearer focus on whether activations recorded via neuroimaging can actually act as information in the brain would not only change how findings are interpreted but should also change the direction of empirical research in cognitive neuroscience.
22
Dyer EL, Gheshlaghi Azar M, Perich MG, Fernandes HL, Naufel S, Miller LE, Körding KP. A cryptography-based approach for movement decoding. Nat Biomed Eng 2017; 1:967-976. PMID: 31015712; PMCID: PMC8376093; DOI: 10.1038/s41551-017-0169-7.
Abstract
Brain decoders use neural recordings to infer the activity or intent of a user. To train a decoder, one generally needs to infer the measured variables of interest (covariates) from simultaneously measured neural activity. However, there are cases for which obtaining supervised data is difficult or impossible. Here, we describe an approach for movement decoding that does not require access to simultaneously measured neural activity and motor outputs. We use the statistics of movement-much like cryptographers use the statistics of language-to find a mapping between neural activity and motor variables, and then align the distribution of decoder outputs with the typical distribution of motor outputs by minimizing their Kullback-Leibler divergence. By using datasets collected from the motor cortex of three non-human primates performing either a reaching task or an isometric force-production task, we show that the performance of such a distribution-alignment decoding algorithm is comparable to the performance of supervised approaches. Distribution-alignment decoding promises to broaden the set of potential applications of brain decoding.
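The distribution-alignment objective can be illustrated by scoring candidate decoder output distributions against the typical movement distribution with a discrete KL divergence; the direction histograms below are made-up numbers, and minimizing this score over a family of mappings is the part this sketch omits:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL(p || q), with small smoothing to protect empty bins."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Typical distribution over 8 movement directions (illustrative)
target = np.array([.20, .15, .10, .05, .05, .10, .15, .20])

# Two candidate decoder mappings produce different output distributions
aligned    = np.array([.19, .16, .10, .05, .06, .09, .15, .20])
misaligned = np.array([.05, .05, .20, .20, .20, .20, .05, .05])

print(kl_divergence(aligned, target) < kl_divergence(misaligned, target))
```

A decoder trained this way prefers the mapping whose outputs look statistically like natural movements, with no paired neural-motor data required.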
Affiliation(s)
- Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA.
- Mohammad Gheshlaghi Azar
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Matthew G Perich
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Hugo L Fernandes
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL, USA
- Stephanie Naufel
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Lee E Miller
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Konrad P Körding
- Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA, USA
23
Yu S, Ribeiro TL, Meisel C, Chou S, Mitz A, Saunders R, Plenz D. Maintained avalanche dynamics during task-induced changes of neuronal activity in nonhuman primates. eLife 2017; 6. PMID: 29115213; PMCID: PMC5677367; DOI: 10.7554/elife.27119.
Abstract
Sensory events, cognitive processing and motor actions correlate with transient changes in neuronal activity. In cortex, these transients form widespread spatiotemporal patterns with largely unknown statistical regularities. Here, we show that activity associated with behavioral events carries the signature of scale-invariant spatiotemporal clusters, neuronal avalanches. Using high-density microelectrode arrays in nonhuman primates, we recorded extracellular unit activity and the local field potential (LFP) in premotor and prefrontal cortex during motor and cognitive tasks. Unit activity and negative LFP deflections (nLFP) consistently changed in rate at single electrodes during tasks. Accordingly, nLFP clusters on the array deviated from scale-invariance compared to ongoing activity. Scale-invariance was recovered using ‘adaptive binning’, that is, identifying clusters at a temporal resolution given by task-induced changes in nLFP rate. Measures of LFP synchronization confirmed our findings, and computer simulations detailed them. We suggest that optimization principles identified for avalanches during ongoing activity also apply to cortical information processing during behavior.
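Avalanche extraction at a fixed bin size can be sketched as collecting contiguous runs of nonzero population activity. This ignores the paper's adaptive binning but shows the basic clustering step whose output size distribution is then tested for scale-invariance:

```python
def avalanche_sizes(counts):
    """Sizes of contiguous runs of nonzero population activity per bin."""
    sizes, current = [], 0
    for c in counts:
        if c > 0:
            current += c
        elif current:            # an empty bin ends the avalanche
            sizes.append(current)
            current = 0
    if current:                  # close a run that reaches the end
        sizes.append(current)
    return sizes

# Toy binned population event counts (illustrative)
counts = [0, 2, 3, 0, 0, 1, 0, 4, 1, 1, 0]
print(avalanche_sizes(counts))   # → [5, 1, 6]
```

Adaptive binning amounts to recomputing `counts` with a bin width tied to the task-induced event rate before running this same extraction.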
Affiliation(s)
- Shan Yu
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Tiago L Ribeiro
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Christian Meisel
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Samantha Chou
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Andrew Mitz
- Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, United States
- Richard Saunders
- Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, United States
- Dietmar Plenz
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
24
Matsuda T, Kitajo K, Yamaguchi Y, Komaki F. A point process modeling approach for investigating the effect of online brain activity on perceptual switching. Neuroimage 2017; 152:50-59. PMID: 28242318; DOI: 10.1016/j.neuroimage.2017.02.068.
Abstract
When watching an ambiguous figure that allows for multiple interpretations, our interpretation spontaneously switches between the possible options. Such spontaneous switching is called perceptual switching and it is modulated by top-down selective attention. In this study, we propose a point process modeling approach for investigating the effects of online brain activity on perceptual switching, where we define online activity as continuous brain activity including spontaneous background and induced activities. Specifically, we modeled perceptual switching during Necker cube perception using electroencephalography (EEG) data. Our method is based on the framework of the point process model, a statistical model of a series of events. We regard the perceptual switching phenomenon as a stochastic process and construct its model in a data-driven manner. We develop a model called the online activity regression model, which makes it possible to determine whether online brain activity has excitatory or inhibitory effects on perceptual switching. By fitting online activity regression models to experimental data and applying likelihood ratio tests with correction for multiple comparisons, we explore the brain regions and frequency bands with significant effects on perceptual switching. The results demonstrate that the modulation of online occipital alpha activity mediates the suppression of perceptual switching to the non-attended interpretation. Thus, our method provides a dynamic description of the attentional process by naturally accounting for the entire time course of brain activity, which is difficult to resolve by focusing only on the brain activity around the time of perceptual switching.
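The core of such a point process regression can be sketched as a log-linear intensity model fit by Newton's method; the regressor, true coefficients, and bin size below are hypothetical, and this omits the paper's EEG preprocessing and multiple-comparison correction:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 5000

# Hypothetical standardized "online activity" regressor (e.g., alpha power)
x = rng.standard_normal(T)

# Ground truth: activity inhibits switching (negative coefficient)
b0_true, b1_true = -4.0, -0.8
lam = np.exp(b0_true + b1_true * x)
events = rng.poisson(lam)              # sparse event counts per small bin

# Fit log-linear intensity log(lam) = b0 + b1*x by Newton's method
X = np.column_stack([np.ones(T), x])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (events - mu)         # Poisson log-likelihood gradient
    hess = X.T @ (mu[:, None] * X)     # Fisher information
    beta += np.linalg.solve(hess, grad)

print(np.round(beta, 2))
```

A likelihood ratio test would then compare this fit against the intercept-only model to decide whether the regressor's effect is significant.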
Affiliation(s)
- Takeru Matsuda
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan.
- Keiichi Kitajo
- RIKEN BSI-Toyota Collaboration Center, RIKEN Brain Science Institute, Wako, Saitama, Japan; RIKEN Brain Science Institute, Wako, Saitama, Japan
- Fumiyasu Komaki
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; RIKEN Brain Science Institute, Wako, Saitama, Japan
25
Krucoff MO, Rahimpour S, Slutzky MW, Edgerton VR, Turner DA. Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation. Front Neurosci 2016; 10:584. PMID: 28082858; PMCID: PMC5186786; DOI: 10.3389/fnins.2016.00584.
Abstract
After an initial period of recovery, human neurological injury has long been thought to be static. In order to improve quality of life for those suffering from stroke, spinal cord injury, or traumatic brain injury, researchers have been working to restore the nervous system and reduce neurological deficits through a number of mechanisms. For example, neurobiologists have been identifying and manipulating components of the intra- and extracellular milieu to alter the regenerative potential of neurons, neuro-engineers have been producing brain-machine and neural interfaces that circumvent lesions to restore functionality, and neurorehabilitation experts have been developing new ways to revitalize the nervous system even in chronic disease. While each of these areas holds promise, their individual paths to clinical relevance remain difficult. Nonetheless, these methods are now able to synergistically enhance recovery of native motor function to levels which were previously believed to be impossible. Furthermore, such recovery can even persist after training, and for the first time there is evidence of functional axonal regrowth and rewiring in the central nervous system of animal models. To attain this type of regeneration, rehabilitation paradigms that pair cortically-based intent with activation of affected circuits and positive neurofeedback appear to be required, a phenomenon which raises new and far reaching questions about the underlying relationship between conscious action and neural repair. For this reason, we argue that multi-modal therapy will be necessary to facilitate a truly robust recovery, and that the success of investigational microscopic techniques may depend on their integration into macroscopic frameworks that include task-based neurorehabilitation.
We further identify critical components of future neural repair strategies and explore the most updated knowledge, progress, and challenges in the fields of cellular neuronal repair, neural interfacing, and neurorehabilitation, all with the goal of better understanding neurological injury and how to improve recovery.
Affiliation(s)
- Max O Krucoff
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Shervin Rahimpour
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA
- Marc W Slutzky
- Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA; Department of Neurology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- V Reggie Edgerton
- Department of Integrative Biology and Physiology, University of California, Los Angeles, Los Angeles, CA, USA
- Dennis A Turner
- Department of Neurosurgery, Duke University Medical Center, Durham, NC, USA; Department of Neurobiology, Duke University Medical Center, Durham, NC, USA; Research and Surgery Services, Durham Veterans Affairs Medical Center, Durham, NC, USA
26
Abstract
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
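The model class contrasted in the review above (the external variable projected onto a feature vector, spike history optionally included, and a nonlinear transform predicting spike occurrences) can be illustrated with a minimal simulated sketch. The filters, rates, and sizes below are our own hypothetical choices, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

T, lag = 5000, 20
stim = rng.standard_normal(T)                # external variable (1-D stimulus)
k = np.exp(-np.arange(lag) / 5.0)            # assumed stimulus feature vector
h = -1.0 * np.exp(-np.arange(10) / 2.0)      # assumed suppressive history filter

# Low-dimensional representation: project the stimulus onto the feature vector
drive = np.convolve(stim, k)[:T]

# Nonlinear transform of stimulus drive plus spike history gives the
# conditional intensity; spikes are then drawn from a Poisson distribution
spikes = np.zeros(T)
for t in range(T):
    hist = spikes[max(0, t - 10):t][::-1]    # most recent bin first
    rate = np.exp(-2.0 + 0.5 * drive[t] + hist @ h[:len(hist)])
    spikes[t] = rng.poisson(rate)
```

Fitting such a model to data then amounts to maximizing the Poisson likelihood of the observed spikes over the filter coefficients, which is what the generalized linear models discussed in the review do.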
Affiliation(s)
- Johnatan Aljadeff
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Benjamin J Lansdell
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA; WRF UW Institute for Neuroengineering, University of Washington, Seattle, WA 98195, USA
- David Kleinfeld
- Department of Physics, University of California, San Diego, San Diego, CA 92093, USA; Section of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA; Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA
27
Luongo FJ, Zimmerman CA, Horn ME, Sohal VS. Correlations between prefrontal neurons form a small-world network that optimizes the generation of multineuron sequences of activity. J Neurophysiol 2016; 115:2359-75. [PMID: 26888108] [DOI: 10.1152/jn.01043.2015]
Abstract
Sequential patterns of prefrontal activity are believed to mediate important behaviors, e.g., working memory, but it remains unclear exactly how they are generated. In accordance with previous studies of cortical circuits, we found that prefrontal microcircuits in young adult mice spontaneously generate many more stereotyped sequences of activity than expected by chance. However, the key question of whether these sequences depend on a specific functional organization within the cortical microcircuit, or emerge simply as a by-product of random interactions between neurons, remains unanswered. We observed that correlations between prefrontal neurons do follow a specific functional organization: they have a small-world topology. However, until now it has not been possible to directly link small-world topologies to specific circuit functions, e.g., sequence generation. Therefore, we developed a novel analysis to address this issue. Specifically, we constructed surrogate data sets that have identical levels of network activity at every point in time but nevertheless represent various network topologies. We call this method shuffling activity to rearrange correlations (SHARC). We found that only surrogate data sets based on the actual small-world functional organization of prefrontal microcircuits were able to reproduce the levels of sequences observed in actual data. As expected, small-world data sets contained many more sequences than surrogate data sets with randomly arranged correlations. Surprisingly, small-world data sets also outperformed data sets in which correlations were maximally clustered. Thus the small-world functional organization of cortical microcircuits, which effectively balances the random and maximally clustered regimes, is optimal for producing stereotyped sequential patterns of activity.
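The core constraint behind such surrogates can be sketched in simplified form: preserve the total population activity in every time bin while randomizing which neurons carry it. The toy below conveys that general idea only; it is not the authors' exact SHARC procedure (which additionally shapes the surrogate correlation structure), and the matrix sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary activity matrix: 200 time bins x 30 neurons
X = (rng.random((200, 30)) < 0.1).astype(int)

def shuffle_within_bins(X, rng):
    """Permute neuron identities independently in each time bin.

    This preserves the population activity level at every point in time
    but randomizes the pairwise correlations between neurons."""
    Y = X.copy()
    for t in range(Y.shape[0]):
        rng.shuffle(Y[t])   # in-place shuffle of one time bin (a view of Y)
    return Y

Y = shuffle_within_bins(X, rng)
# Y.sum(axis=1) equals X.sum(axis=1) bin by bin; correlations are scrambled
```

Counting stereotyped sequences in such surrogates and comparing against the real data then isolates the contribution of the correlation structure, since the activity levels are matched by construction.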
Affiliation(s)
- Francisco J Luongo
- Department of Psychiatry, University of California, San Francisco, California; Center for Integrative Neuroscience, University of California, San Francisco, California; Sloan-Swartz Center for Theoretical Neurobiology, University of California, San Francisco, California; Neuroscience Graduate Program, University of California, San Francisco, California
- Chris A Zimmerman
- Neuroscience Graduate Program, University of California, San Francisco, California
- Meryl E Horn
- Neuroscience Graduate Program, University of California, San Francisco, California
- Vikaas S Sohal
- Department of Psychiatry, University of California, San Francisco, California; Center for Integrative Neuroscience, University of California, San Francisco, California; Sloan-Swartz Center for Theoretical Neurobiology, University of California, San Francisco, California
28
Glaser JI, Kording KP. The Development and Analysis of Integrated Neuroscience Data. Front Comput Neurosci 2016; 10:11. [PMID: 26903852] [PMCID: PMC4749710] [DOI: 10.3389/fncom.2016.00011]
Abstract
There is a strong emphasis on developing novel neuroscience technologies, in particular on recording from more neurons. There has thus been increasing discussion about how to analyze the resulting big datasets. What has received less attention is that over the last 30 years, papers in neuroscience have progressively integrated more approaches, such as electrophysiology, anatomy, and genetics. As such, there has been little discussion on how to combine and analyze this multimodal data. Here, we describe the growth of multimodal approaches, and discuss the needed analysis advancements to make sense of this data.
Affiliation(s)
- Joshua I Glaser
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA
- Konrad P Kording
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA; Department of Applied Mathematics, Northwestern University, Chicago, IL, USA
29
Helmer M, Kozyrev V, Stephan V, Treue S, Geisel T, Battaglia D. Model-Free Estimation of Tuning Curves and Their Attentional Modulation, Based on Sparse and Noisy Data. PLoS One 2016; 11:e0146500. [PMID: 26785378] [PMCID: PMC4718600] [DOI: 10.1371/journal.pone.0146500]
Abstract
Tuning curves are the functions that relate the responses of sensory neurons to various values within one continuous stimulus dimension (such as the orientation of a bar in the visual domain or the frequency of a tone in the auditory domain). They are commonly determined by fitting a model (e.g., a Gaussian or another bell-shaped curve) to the measured responses to a small subset of discrete stimuli in the relevant dimension. However, as neuronal responses are irregular and experimental measurements noisy, it is often difficult to reliably determine the appropriate model from the data. We illustrate this general problem by fitting diverse models to representative recordings from area MT in rhesus monkey visual cortex during multiple attentional tasks involving complex composite stimuli. We find that all models can be well-fitted, that the best model generally varies between neurons and that statistical comparisons between neuronal responses across different experimental conditions are affected quantitatively and qualitatively by specific model choices. As a robust alternative to an often arbitrary model selection, we introduce a model-free approach, in which features of interest are extracted directly from the measured response data without the need to fit any model. In our attentional datasets, we demonstrate that data-driven methods provide descriptions of tuning curve features such as preferred stimulus direction or attentional gain modulations which are in agreement with fit-based approaches when a good fit exists. Furthermore, these methods naturally extend to the frequent cases of uncertain model selection. We show that model-free approaches can identify attentional modulation patterns, such as general alterations of the irregular shape of tuning curves, which cannot be captured by fitting stereotyped conventional models. Finally, by comparing datasets across different conditions, we demonstrate effects of attention that are cell- and even stimulus-specific.
Based on these proofs-of-concept, we conclude that our data-driven methods can reliably extract relevant tuning information from neuronal recordings, including cells whose seemingly haphazard response curves defy conventional fitting approaches.
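As an illustration of the model-free idea, a tuning feature such as the preferred direction can be read out directly from the measured responses with a response-weighted vector average, without fitting any curve. The responses below are made-up numbers, and this is only one simple instance of the data-driven readouts the paper describes:

```python
import numpy as np

# Hypothetical mean responses of one neuron at 8 motion directions (degrees)
dirs = np.arange(0, 360, 45)
resp = np.array([2.0, 5.0, 12.0, 25.0, 14.0, 6.0, 3.0, 2.0])  # peak near 135

# Model-free preferred direction: circular (vector-average) mean of the
# stimulus directions, weighted by the measured responses
theta = np.deg2rad(dirs)
vec = np.sum(resp * np.exp(1j * theta))
pref = np.rad2deg(np.angle(vec)) % 360
```

The same data-driven logic extends to other features, e.g., a tuning width from the circular spread of the weighted vectors or an attentional gain from response ratios, without committing to a Gaussian or von Mises model.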
Affiliation(s)
- Markus Helmer
- Max Planck Institute for Dynamics and Self-Organization, Department of Nonlinear Dynamics, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Vladislav Kozyrev
- Institute of Neuroinformatics, Ruhr-University Bochum, Bochum, Germany
- Cognitive Neuroscience Laboratory, German Primate Center, Göttingen, Germany
- Valeska Stephan
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Cognitive Neuroscience Laboratory, German Primate Center, Göttingen, Germany
- Stefan Treue
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Cognitive Neuroscience Laboratory, German Primate Center, Göttingen, Germany
- Theo Geisel
- Max Planck Institute for Dynamics and Self-Organization, Department of Nonlinear Dynamics, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Demian Battaglia
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Institut de Neurosciences des Systèmes, Aix-Marseille Université, Marseille, France
30
Distinct predictive performance of Rac1 and Cdc42 in cell migration. Sci Rep 2015; 5:17527. [PMID: 26634649] [PMCID: PMC4669460] [DOI: 10.1038/srep17527]
Abstract
We propose a new computation-based approach for elucidating how signaling molecules are decoded in cell migration. In this approach, we performed FRET time-lapse imaging of Rac1 and Cdc42, members of Rho GTPases which are responsible for cell motility, and quantitatively identified the response functions that describe the conversion from the molecular activities to the morphological changes. Based on the identified response functions, we clarified the profiles of how the morphology spatiotemporally changes in response to local and transient activation of Rac1 and Cdc42, and found that Rac1 and Cdc42 activation triggers laterally propagating membrane protrusion. The response functions were also endowed with the property of a differentiator, which is beneficial for maintaining sensitivity under adaptation to the mean level of input. Using the response function, we could predict the morphological change from molecular activity, and its predictive performance provides a new quantitative measure of how much the Rho GTPases participate in cell migration. Interestingly, we discovered distinct predictive performance of Rac1 and Cdc42 depending on the migration modes, indicating that Rac1 and Cdc42 contribute to persistent and random migration, respectively. Thus, our proposed predictive approach enabled us to uncover the hidden information processing rules of Rho GTPases in cell migration.
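The differentiator property attributed to the response functions can be sketched by convolving a transient activation with a biphasic kernel: the output then tracks the rate of change of the input rather than its level. The kernel and time course below are illustrative stand-ins, not the response functions identified in the paper:

```python
import numpy as np

t = np.arange(50.0)
activity = np.exp(-((t - 20.0) ** 2) / 18.0)   # transient Rac1-like activation

# Biphasic, differentiator-like kernel: recent mean minus earlier mean,
# so the output is large while the input is rising and negative while falling
kernel = np.concatenate([np.ones(3), -np.ones(3)]) / 3.0
protrusion = np.convolve(activity, kernel, mode="same")
```

A differentiator responds to changes rather than absolute levels, which is what preserves sensitivity when the system adapts to the mean level of input; here the sketch's output peaks during the rising phase, before the activation itself peaks.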
31
Hiremath SV, Chen W, Wang W, Foldes S, Yang Y, Tyler-Kabara EC, Collinger JL, Boninger ML. Brain computer interface learning for systems based on electrocorticography and intracortical microelectrode arrays. Front Integr Neurosci 2015; 9:40. [PMID: 26113812] [PMCID: PMC4462099] [DOI: 10.3389/fnint.2015.00040]
Abstract
A brain-computer interface (BCI) system transforms neural activity into control signals for external devices in real time. A BCI user needs to learn to generate specific cortical activity patterns to control external devices effectively. We call this process BCI learning, and it often requires significant effort and time. Therefore, it is important to study this process and develop novel and efficient approaches to accelerate BCI learning. This article reviews major approaches that have been used for BCI learning, including computer-assisted learning, co-adaptive learning, operant conditioning, and sensory feedback. We focus on BCIs based on electrocorticography and intracortical microelectrode arrays for restoring motor function. This article also explores the possibility of brain modulation techniques in promoting BCI learning, such as electrical cortical stimulation, transcranial magnetic stimulation, and optogenetics. Furthermore, as proposed by recent BCI studies, we suggest that BCI learning is in many ways analogous to motor and cognitive skill learning, and therefore skill learning should be a useful metaphor to model BCI learning.
Affiliation(s)
- Shivayogi V Hiremath
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Veterans Affairs, Human Engineering Research Laboratories, Pittsburgh, PA, USA
- Weidong Chen
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China
- Wei Wang
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA
- Stephen Foldes
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Veterans Affairs, Human Engineering Research Laboratories, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA
- Ying Yang
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA
- Elizabeth C Tyler-Kabara
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Jennifer L Collinger
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Veterans Affairs, Human Engineering Research Laboratories, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA
- Michael L Boninger
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Veterans Affairs, Human Engineering Research Laboratories, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Clinical and Translational Science Institute, University of Pittsburgh, Pittsburgh, PA, USA
32
Volgushev M, Ilin V, Stevenson IH. Identifying and tracking simulated synaptic inputs from neuronal firing: insights from in vitro experiments. PLoS Comput Biol 2015; 11:e1004167. [PMID: 25823000] [PMCID: PMC4379067] [DOI: 10.1371/journal.pcbi.1004167]
Abstract
Accurately describing synaptic interactions between neurons and how interactions change over time are key challenges for systems neuroscience. Although intracellular electrophysiology is a powerful tool for studying synaptic integration and plasticity, it is limited by the small number of neurons that can be recorded simultaneously in vitro and by the technical difficulty of intracellular recording in vivo. One way around these difficulties may be to use large-scale extracellular recording of spike trains and apply statistical methods to model and infer functional connections between neurons. These techniques have the potential to reveal large-scale connectivity structure based on the spike timing alone. However, the interpretation of functional connectivity is often approximate, since only a small fraction of presynaptic inputs are typically observed. Here we use in vitro current injection in layer 2/3 pyramidal neurons to validate methods for inferring functional connectivity in a setting where input to the neuron is controlled. In experiments with partially-defined input, we inject a single simulated input with known amplitude on a background of fluctuating noise. In a fully-defined input paradigm, we then control the synaptic weights and timing of many simulated presynaptic neurons. By analyzing the firing of neurons in response to these artificial inputs, we ask 1) How does functional connectivity inferred from spikes relate to simulated synaptic input? and 2) What are the limitations of connectivity inference? We find that individual current-based synaptic inputs are detectable over a broad range of amplitudes and conditions. Detectability depends on input amplitude and output firing rate, and excitatory inputs are detected more readily than inhibitory. Moreover, as we model increasing numbers of presynaptic inputs, we are able to estimate connection strengths more accurately and detect the presence of connections more quickly. 
These results illustrate the possibilities and outline the limits of inferring synaptic input from spikes.
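The logic of validating inference against a known, injected input can be sketched with a toy two-neuron simulation. The rates, coupling strength, and one-bin lag below are invented for illustration, and the spike-triggered comparison is a minimal stand-in for the paper's model-based coupling estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20000
pre = (rng.random(T) < 0.05).astype(float)   # known presynaptic spike train

# Postsynaptic neuron: baseline firing probability plus a transient boost
# one bin after each presynaptic spike (the "synaptic input" to detect)
p_post = 0.02 + 0.15 * np.roll(pre, 1)
p_post[0] = 0.02
post = (rng.random(T) < p_post).astype(float)

# Detect the input from spikes alone: postsynaptic firing probability
# conditioned on a presynaptic spike one bin earlier vs. the overall rate
p_given_pre = post[1:][pre[:-1] == 1].mean()
p_overall = post.mean()
```

Because the injected input is known, the inferred effect (here, the excess conditional firing probability) can be compared directly against ground truth, which is the core idea behind using controlled in vitro input to benchmark connectivity inference.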
Affiliation(s)
- Maxim Volgushev
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
- Vladimir Ilin
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
- Ian H. Stevenson
- Department of Psychology, University of Connecticut, Storrs, Connecticut, United States of America
33
Dunn B, Mørreaunet M, Roudi Y. Correlations and functional connections in a population of grid cells. PLoS Comput Biol 2015; 11:e1004052. [PMID: 25714908] [PMCID: PMC4340907] [DOI: 10.1371/journal.pcbi.1004052]
Abstract
We study the statistics of spike trains of simultaneously recorded grid cells in freely behaving rats. We evaluate pairwise correlations between these cells and, using a maximum entropy kinetic pairwise model (kinetic Ising model), study their functional connectivity. Even when we account for the covariations in firing rates due to overlapping fields, both the pairwise correlations and functional connections decay as a function of the shortest distance between the vertices of the spatial firing pattern of pairs of grid cells, i.e., their phase difference. They take positive values between cells with nearby phases and approach zero or negative values for larger phase differences. We find similar results when, in addition to correlations due to overlapping fields, we account for correlations due to theta oscillations and head directional inputs. The inferred connections between neurons in the same module and those from different modules can be both negative and positive, with a mean close to zero, but with the strongest inferred connections found between cells of the same module. Taken together, our results suggest that grid cells in the same module do indeed form a local network of interconnected neurons with a functional connectivity that supports a role for attractor dynamics in the generation of the grid pattern. The way mammals navigate in space is hypothesized to depend on neural structures in the temporal lobe including the hippocampus and medial entorhinal cortex (MEC). In particular, grid cells, neurons whose firing is mostly restricted to regions of space that form a hexagonal pattern, are believed to be an important part of this circuitry. Despite several years of work, not much is known about the correlated activity of neurons in the MEC and how grid cells are functionally coupled to each other.
Here, we have taken a statistical approach to these questions and studied pairwise correlations and functional connections between simultaneously recorded grid cells. Through careful statistical analysis, we demonstrate that grid cells with nearby firing vertices tend to have positive effects on eliciting responses in each other, while those further apart tend to have inhibitory or no effects. Cells that respond similarly to manipulations of the environment are considered to belong to the same module. Cells belonging to a module have stronger interactions with each other than those in different modules. These results are consistent with and shed light on the population-based mechanisms suggested by models for the generation of grid cell firing.
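A kinetic Ising analysis of this kind can be sketched end to end: simulate one-step spin dynamics with known couplings, then recover the couplings by gradient ascent on the transition log-likelihood. The network size, coupling scale, and learning rate below are arbitrary choices for the sketch, not the estimator or parameters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 5, 5000

# Ground-truth couplings; spins s take values in {-1, +1}
J = rng.normal(0.0, 0.4, (N, N))
s = np.ones((T, N))
for t in range(1, T):
    # Kinetic Ising transition: P(s_i(t) = +1) = 1 / (1 + exp(-2 J s(t-1)))
    p_up = 1.0 / (1.0 + np.exp(-2.0 * (J @ s[t - 1])))
    s[t] = np.where(rng.random(N) < p_up, 1.0, -1.0)

# Recover J by gradient ascent on the one-step log-likelihood:
# grad wrt J[i, j] is the average of (s_i(t+1) - tanh(theta_i(t))) * s_j(t)
J_hat = np.zeros((N, N))
for _ in range(300):
    drive = s[:-1] @ J_hat.T
    grad = (s[1:] - np.tanh(drive)).T @ s[:-1] / (T - 1)
    J_hat += 0.1 * grad
```

The inferred couplings can then be related to covariates of interest, which in the paper means plotting them against the phase difference between grid cells after accounting for field overlap, theta, and head direction.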
Affiliation(s)
- Benjamin Dunn
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, NTNU, Trondheim, Norway
- Maria Mørreaunet
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, NTNU, Trondheim, Norway
- Yasser Roudi
- Kavli Institute for Systems Neuroscience and Centre for Neural Computation, NTNU, Trondheim, Norway
- Nordita, KTH and Stockholm University, Stockholm, Sweden
34
Cadieu CF, Hong H, Yamins DLK, Pinto N, Ardila D, Solomon EA, Majaj NJ, DiCarlo JJ. Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLoS Comput Biol 2014; 10:e1003963. [PMID: 25521294] [PMCID: PMC4270441] [DOI: 10.1371/journal.pcbi.1003963]
Abstract
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. A major difficulty in producing such a comparison has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
Affiliation(s)
- Charles F. Cadieu
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Ha Hong
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Harvard–MIT Division of Health Sciences and Technology, Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Daniel L. K. Yamins
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Nicolas Pinto
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Diego Ardila
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Ethan A. Solomon
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Najib J. Majaj
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- James J. DiCarlo
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
35
Nakae K, Ikegaya Y, Ishikawa T, Oba S, Urakubo H, Koyama M, Ishii S. A statistical method of identifying interactions in neuron-glia systems based on functional multicell Ca2+ imaging. PLoS Comput Biol 2014; 10:e1003949. [PMID: 25393874] [PMCID: PMC4230777] [DOI: 10.1371/journal.pcbi.1003949]
Abstract
Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron-glia network. We attempted to identify neuron-glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron-glia network. The interactions in our interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We only accepted addition or removal when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron-glia systems.
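The accept/reject step based on cross-validated likelihood can be sketched for a single candidate connection in a Poisson GLM: a connection is kept only if adding it improves the held-out log-likelihood. The simulated signals and the Newton fitter below are our own minimal stand-ins for the authors' MAP-based pipeline, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 4000
x = (rng.random(T) < 0.1).astype(float)      # candidate input's binned activity
y = rng.poisson(np.exp(-2.0 + 1.0 * x))      # target cell truly driven by x

def fit_poisson(X, y, iters=20):
    """Newton's method for a Poisson GLM, y ~ Poisson(exp(X @ b))."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        b += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return b

def cv_loglik(y, X, folds=4):
    """Cross-validated Poisson log-likelihood (up to a constant)."""
    idx = np.arange(len(y))
    ll = 0.0
    for k in range(folds):
        test = idx % folds == k
        b = fit_poisson(X[~test], y[~test])
        eta = X[test] @ b
        ll += np.sum(y[test] * eta - np.exp(eta))
    return ll

X0 = np.ones((T, 1))                         # model without the connection
X1 = np.column_stack([np.ones(T), x])        # model with the connection
accept = cv_loglik(y, X1) > cv_loglik(y, X0) # keep only if held-out fit improves
```

Iterating this comparison over candidate additions and removals, as the paper does across the whole neuron-glia network, yields a connectivity estimate in which every retained connection has earned its place on held-out data.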
Affiliation(s)
- Ken Nakae
- Integrated Systems Biology Laboratory, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
- Yuji Ikegaya
- Laboratory of Chemical Pharmacology, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Center for Information and Neural Networks, Suita City, Osaka, Japan
- Tomoe Ishikawa
- Laboratory of Chemical Pharmacology, Graduate School of Pharmaceutical Sciences, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Shigeyuki Oba
- Integrated Systems Biology Laboratory, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
- Hidetoshi Urakubo
- Integrated Systems Biology Laboratory, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
- Masanori Koyama
- Integrated Systems Biology Laboratory, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
- Shin Ishii
- Integrated Systems Biology Laboratory, Graduate School of Informatics, Kyoto University, Sakyo-ku, Kyoto, Japan
36
Abstract
Coupling between sensory neurons impacts their tuning properties and correlations in their responses. How such coupling affects sensory representations and ultimately behavior remains unclear. We investigated the role of neuronal coupling during visual processing using a realistic biophysical model of the vertical system (VS) cell network in the blow fly. These neurons are thought to encode the horizontal rotation axis during rapid free-flight maneuvers. Experimental findings suggest that neurons of the VS are strongly electrically coupled, and that several downstream neurons driving motor responses to ego-rotation receive inputs primarily from a small subset of VS cells. These downstream neurons must decode information about the axis of rotation from a partial readout of the VS population response. To investigate the role of coupling, we simulated the VS response to a variety of rotating visual scenes and computed optimal Bayesian estimates from the relevant subset of VS cells. Our analysis shows that coupling leads to near-optimal estimates from a subpopulation readout. In contrast, coupling between VS cells has no impact on the quality of encoding in the response of the full population. We conclude that coupling at one level of the fly visual system allows for near-optimal decoding from partial information at the subsequent, premotor level. Thus, electrical coupling may provide a general mechanism to achieve near-optimal information transfer from neuronal subpopulations across organisms and modalities.
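The comparison between a full-population readout and a partial readout can be sketched with a toy tuned population and a maximum-likelihood decoder (equivalent to a Bayesian estimate under a flat prior). The tuning curves, cell counts, and the chosen subset below are invented, and no biophysical coupling is modeled in this sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
prefs = np.linspace(0.0, np.pi, 10, endpoint=False)  # preferred rotation axes

def tuning(axis):
    """Mean Poisson rates of all cells for a given rotation axis (pi-periodic)."""
    return 5.0 * np.exp(2.0 * np.cos(2.0 * (axis - prefs)))

def ml_estimate(counts, cells):
    """Maximum-likelihood axis estimate using only a subset of the cells."""
    grid = np.linspace(0.0, np.pi, 361)
    rates = 5.0 * np.exp(2.0 * np.cos(2.0 * (grid[:, None] - prefs[cells])))
    ll = counts[cells] * np.log(rates) - rates        # Poisson log-likelihood
    return grid[np.argmax(ll.sum(axis=1))]

true_axis = 0.9
counts = rng.poisson(tuning(true_axis))
est_full = ml_estimate(counts, np.arange(10))         # full-population readout
est_sub = ml_estimate(counts, np.array([3, 4, 5]))    # partial readout
```

Comparing the error of `est_sub` with and without correlations induced by coupling (absent here) is the kind of analysis the study performs to show that electrical coupling makes the subpopulation readout nearly as good as the full one.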
Collapse
|
37
|
Chakrabarti S, Martinez-Vazquez P, Gail A. Synchronization patterns suggest different functional organization in parietal reach region and dorsal premotor cortex. J Neurophysiol 2014; 112:3138-53. [PMID: 25231609 DOI: 10.1152/jn.00621.2013] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The parietal reach region (PRR) and dorsal premotor cortex (PMd) form part of the fronto-parietal reach network. While neural selectivity profiles of single-cell activity in these areas can be remarkably similar, other data suggest that the two areas serve different computational functions in visually guided reaching. Here we test the hypothesis that different neural functional organizations, characterized by different neural synchronization patterns, might underlie these putatively different functional roles. We use cross-correlation analysis on single-unit activity (SUA) and multiunit activity (MUA) to determine the prevalence of synchronized neural ensembles within each area. First, we reliably find synchronization in PRR but not in PMd. Second, we demonstrate that synchronization in PRR is present in different cognitive states, including "idle" states prior to task-relevant instructions and in the absence of neural tuning. Third, we show that local field potentials (LFPs) in PRR but not PMd are characterized by increased power and spike-field coherence in the beta frequency range (12-30 Hz), further indicating stronger synchrony in PRR compared with PMd. Finally, we show that neurons with similar tuning properties tend to be correlated in their random spike rate fluctuations in PRR but not in PMd. Our data support the idea that PRR and PMd, despite striking similarity in single-cell tuning properties, are characterized by unequal local functional organization, which likely reflects different network architectures supporting different functional roles within the fronto-parietal reach network.
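The cross-correlation analysis the study relies on can be illustrated with a minimal spike-time cross-correlogram: a histogram of lags between the spikes of two units, where an excess of near-zero-lag coincidences indicates synchrony. The spike times and bin settings below are made up for the example.

```python
def cross_correlogram(ref, target, max_lag=0.05, bin_width=0.005):
    """Histogram of spike-time lags (target - ref) within +/- max_lag seconds."""
    n_bins = int(2 * max_lag / bin_width)
    counts = [0] * n_bins
    for t_ref in ref:
        for t in target:
            lag = t - t_ref
            if -max_lag <= lag < max_lag:
                counts[int((lag + max_lag) / bin_width)] += 1
    return counts

# Two units firing near-synchronously (unit B follows A by ~1-3 ms).
unit_a = [0.100, 0.300, 0.520, 0.710, 0.950]
unit_b = [0.102, 0.303, 0.521, 0.712, 0.951]
ccg = cross_correlogram(unit_a, unit_b)
print(ccg)   # all coincidences pile into the 0 to +5 ms bin
```

In practice such raw counts are corrected for firing-rate expectations (e.g., by shuffling or jittering) before declaring a pair synchronized; the sketch shows only the counting step.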
Collapse
Affiliation(s)
- Shubhodeep Chakrabarti
- Bernstein Center for Computational Neuroscience, German Primate Center-Leibniz Institute for Primate Research, Göttingen, Germany; Systems Neurophysiology Group, Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; and Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Tübingen, Germany
| | - Pablo Martinez-Vazquez
- Bernstein Center for Computational Neuroscience, German Primate Center-Leibniz Institute for Primate Research, Göttingen, Germany
| | - Alexander Gail
- Bernstein Center for Computational Neuroscience, German Primate Center-Leibniz Institute for Primate Research, Göttingen, Germany;
| |
Collapse
|
38
|
Cha K, Zatorre RJ, Schönwiesner M. Frequency Selectivity of Voxel-by-Voxel Functional Connectivity in Human Auditory Cortex. Cereb Cortex 2014; 26:211-24. [PMID: 25183885 DOI: 10.1093/cercor/bhu193] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
While functional connectivity in the human cortex has been increasingly studied, its relationship to the cortical representation of sensory features remains less well documented. We used functional magnetic resonance imaging to demonstrate that voxel-by-voxel intrinsic functional connectivity (FC) is selective for the frequency preference of voxels in the human auditory cortex. Specifically, FC was significantly higher for voxels with similar frequency tuning than for voxels with dissimilar tuning functions. Frequency-selective FC, measured via the correlation of residual hemodynamic activity, was not explained by generic FC that depends on spatial distance over the cortex. This pattern remained even when FC was computed using residual activity taken from resting epochs. Further analysis showed that voxels in the core fields of the right hemisphere have higher frequency selectivity in within-area FC than their counterparts in the left hemisphere, or than noncore fields in the same hemisphere. Frequency-selective FC is consistent with previous findings of topographically organized FC in the human visual and motor cortices. The high degree of frequency selectivity in the right core area is in line with findings and theoretical proposals regarding the asymmetry of the human auditory cortex for spectral processing.
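The core measurement, correlating residual timecourses between voxels and comparing similarly versus dissimilarly tuned pairs, can be sketched with toy time series. The shared-signal model and noise levels are invented for illustration, not taken from the paper's data.

```python
import math
import random

random.seed(1)

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

T = 500
shared_low = [random.gauss(0, 1) for _ in range(T)]    # signal shared by "low-frequency" voxels
shared_high = [random.gauss(0, 1) for _ in range(T)]   # signal shared by "high-frequency" voxels

def voxel_residuals(shared):
    """A voxel's residual timecourse: tuning-related signal plus private noise."""
    return [s + random.gauss(0, 1) for s in shared]

v1 = voxel_residuals(shared_low)
v2 = voxel_residuals(shared_low)    # similar tuning to v1
v3 = voxel_residuals(shared_high)   # dissimilar tuning

fc_similar = pearson(v1, v2)
fc_dissimilar = pearson(v1, v3)
print(fc_similar, fc_dissimilar)
```

Voxels sharing a tuning-related signal show higher residual FC, which is the qualitative pattern the study reports.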
Collapse
Affiliation(s)
- Kuwook Cha
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, QC, Canada H3A 2B4 International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada H2V 4P3 Center for Research on Brain, Language and Music (CRBLM), Montréal, QC, Canada H3G 2A8
| | - Robert J Zatorre
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, QC, Canada H3A 2B4 International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada H2V 4P3 Center for Research on Brain, Language and Music (CRBLM), Montréal, QC, Canada H3G 2A8
| | - Marc Schönwiesner
- Département de Psychologie, Université de Montréal, Montréal, QC, Canada H2V 2S9 International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada H2V 4P3 Center for Research on Brain, Language and Music (CRBLM), Montréal, QC, Canada H3G 2A8
| |
Collapse
|
39
|
Xu J, Wu T, Liu W, Yang Z. A frequency shaping neural recorder with 3 pF input capacitance and 11 plus 4.5 bits dynamic range. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2014; 8:510-527. [PMID: 25073127 DOI: 10.1109/tbcas.2013.2293821] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper presents a frequency-shaping (FS) neural recording architecture and its implementation in a 0.13 μm CMOS process. Compared with its conventional counterpart, the proposed architecture inherently rejects electrode offset, increases input impedance 5- to 10-fold, compresses the neural data dynamic range (DR) by 4.5 bits, simultaneously records local field potentials (LFPs) and extracellular spikes, and is more suitable for long-term recording experiments. Measured at a 40 kHz sampling clock and ±0.6 V supply, the recorder consumes 50 μW/ch, of which 22 μW goes to each FS amplifier, 24 μW to each buffer, and 4 μW to each 11-bit successive approximation register analog-to-digital converter (SAR ADC). The input-referred noise levels for LFPs and extracellular spikes are 13 μVrms and 7 μVrms, respectively, which are sufficient to achieve high-fidelity full-spectrum neural data. In addition, the designed recorder has a 3 pF input capacitance and allows an "11+4.5"-bit neural data DR without system saturation, where the extra 4.5 bits owe to the FS technique. Its figure-of-merit (FOM) based on data DR reaches 36.0 fJ/conversion-step.
Collapse
|
40
|
Foster JD, Nuyujukian P, Freifeld O, Gao H, Walker R, I Ryu S, H Meng T, Murmann B, J Black M, Shenoy KV. A freely-moving monkey treadmill model. J Neural Eng 2014; 11:046020. [PMID: 24995476 DOI: 10.1088/1741-2560/11/4/046020] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Motor neuroscience and brain-machine interface (BMI) design are based on examining how the brain controls voluntary movement, typically by recording neural activity and behavior from animal models. Recording technologies used with these animal models have traditionally limited the range of behaviors that can be studied, and thus the generality of science and engineering research. We aim to design a freely-moving animal model using neural and behavioral recording technologies that do not constrain movement. APPROACH We have established a freely-moving rhesus monkey model employing technology that transmits neural activity from an intracortical array using a head-mounted device and records behavior through computer vision using markerless motion capture. We demonstrate the flexibility and utility of this new monkey model, including the first recordings from motor cortex while rhesus monkeys walk quadrupedally on a treadmill. MAIN RESULTS Using this monkey model, we show that multi-unit threshold-crossing neural activity encodes the phase of walking and that the average firing rate of the threshold crossings covaries with the speed of individual steps. On a population level, we find that neural state-space trajectories of walking at different speeds have similar rotational dynamics in some dimensions that evolve at the step rate of walking, yet robustly separate by speed in other state-space dimensions. SIGNIFICANCE Freely-moving animal models may allow neuroscientists to examine a wider range of behaviors and can provide a flexible experimental paradigm for examining the neural mechanisms that underlie movement generation across behaviors and environments. For BMIs, freely-moving animal models have the potential to aid prosthetic design by examining how neural encoding changes with posture, environment, and other real-world context changes. Understanding this new realm of behavior in more naturalistic settings is essential for overall progress in basic motor neuroscience and for the successful translation of BMIs to people with paralysis.
Collapse
Affiliation(s)
- Justin D Foster
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
Collapse
|
41
|
Lin CP, Chen YP, Hung CP. Tuning and spontaneous spike time synchrony share a common structure in macaque inferior temporal cortex. J Neurophysiol 2014; 112:856-69. [PMID: 24848472 DOI: 10.1152/jn.00485.2013] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Investigating the relationship between tuning and spike timing is necessary to understand how neuronal populations in anterior visual cortex process complex stimuli. Are tuning and spontaneous spike time synchrony linked by a common spatial structure (do some cells covary more strongly, even in the absence of visual stimulation?), and what is the object coding capability of this structure? Here, we recorded from spiking populations in macaque inferior temporal (IT) cortex under neurolept anesthesia. We report that, although most nearby IT neurons are weakly correlated, neurons with more similar tuning are also more synchronized during spontaneous activity. This link between tuning and synchrony was not simply due to cell separation distance. Instead, it expands on previous reports that neurons along an IT penetration are tuned to similar but slightly different features. This constraint on possible population firing rate patterns was consistent across stimulus sets, including animate vs. inanimate object categories. A classifier trained on this structure was able to generalize category "read-out" to untrained objects using only a few dimensions (a few patterns of site weightings per electrode array). We suggest that tuning and spike synchrony are linked by a common spatial structure that is highly efficient for object representation.
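The category "read-out" generalization result can be illustrated with a toy test: a nearest-centroid classifier (a simple stand-in, not necessarily the classifier the study used) trained on some simulated objects transfers to untrained objects that share the same low-dimensional category structure. Population size, signal strengths, and noise are invented.

```python
import math
import random

random.seed(3)

N_SITES = 20
animate_axis = [random.gauss(0, 1) for _ in range(N_SITES)]
inanimate_axis = [random.gauss(0, 1) for _ in range(N_SITES)]

def object_response(axis):
    """Category pattern plus object-specific idiosyncrasy."""
    return [a + 0.5 * random.gauss(0, 1) for a in axis]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

# Train on a handful of objects per category...
c_animate = centroid([object_response(animate_axis) for _ in range(5)])
c_inanimate = centroid([object_response(inanimate_axis) for _ in range(5)])

# ...then classify entirely new ("untrained") objects.
test_objects = ([("animate", object_response(animate_axis)) for _ in range(10)]
                + [("inanimate", object_response(inanimate_axis)) for _ in range(10)])
correct = sum(1 for label, resp in test_objects
              if (dist(resp, c_animate) < dist(resp, c_inanimate)) == (label == "animate"))
accuracy = correct / len(test_objects)
print(accuracy)
```

Because the category signal lives in a shared low-dimensional structure, the classifier generalizes to objects it never saw, mirroring the read-out result described above.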
Collapse
Affiliation(s)
- Chia-Pei Lin
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; RIKEN Brain Science Institute, Saitama, Japan
| | - Yueh-Peng Chen
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan
| | - Chou P Hung
- Institute of Neuroscience and Brain Research Center, National Yang-Ming University, Taipei, Taiwan; Department of Neuroscience, Georgetown University, Washington, District of Columbia; and
| |
Collapse
|
42
|
Mizuseki K, Diba K, Pastalkova E, Teeters J, Sirota A, Buzsáki G. Neurosharing: large-scale data sets (spike, LFP) recorded from the hippocampal-entorhinal system in behaving rats. F1000Res 2014; 3:98. [PMID: 25075302 PMCID: PMC4097350 DOI: 10.12688/f1000research.3895.1] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/22/2014] [Indexed: 12/02/2022] Open
Abstract
Using silicon-based recording electrodes, we recorded neuronal activity of the dorsal hippocampus and dorsomedial entorhinal cortex from behaving rats. The entorhinal neurons were classified as principal neurons and interneurons based on monosynaptic interactions and wave-shapes. The hippocampal neurons were classified as principal neurons and interneurons based on monosynaptic interactions, wave-shapes and burstiness. The data set contains recordings from 7,736 neurons (6,100 classified as principal neurons, 1,132 as interneurons, and 504 cells that did not clearly fit into either category) obtained during 442 recording sessions from 11 rats (a total of 204.5 hours) while they were engaged in one of eight different behaviours/tasks. Both original and processed data (time stamp of spikes, spike waveforms, result of spike sorting and local field potential) are included, along with metadata of behavioural markers. Community-driven data sharing may offer cross-validation of findings, refinement of interpretations and facilitate discoveries.
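One of the classification criteria mentioned above, burstiness, is commonly quantified as the fraction of inter-spike intervals shorter than a few milliseconds. The exact criterion used for this data set is not stated here, so the ~6 ms threshold below is a hypothetical illustration.

```python
def burst_index(spike_times_s, burst_isi=0.006):
    """Fraction of inter-spike intervals shorter than `burst_isi` seconds
    (a hypothetical burstiness measure; the paper's criterion may differ)."""
    isis = [b - a for a, b in zip(spike_times_s, spike_times_s[1:])]
    if not isis:
        return 0.0
    return sum(1 for isi in isis if isi < burst_isi) / len(isis)

bursty = [0.000, 0.004, 0.008, 0.500, 0.504, 0.508]   # complex-spike-like bursts
regular = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]              # tonic firing
print(burst_index(bursty), burst_index(regular))
```

A bursty pyramidal-like cell scores high on this index, while a tonically firing interneuron-like cell scores near zero, which is the kind of separation such a criterion exploits.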
Collapse
Affiliation(s)
- Kenji Mizuseki
- NYU Neuroscience Institute, Langone Medical Center, New York University, New York, NY, USA ; Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA ; Allen Institute for Brain Science, Seattle, WA, USA
| | - Kamran Diba
- Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA ; Department of Psychology, University of Wisconsin at Milwaukee, Milwaukee, WI, USA
| | - Eva Pastalkova
- Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA ; Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
| | - Jeff Teeters
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, USA
| | - Anton Sirota
- Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA ; Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
| | - György Buzsáki
- NYU Neuroscience Institute, Langone Medical Center, New York University, New York, NY, USA ; Center for Molecular and Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA ; Center for Neural Science, New York University, New York, NY, USA
| |
Collapse
|
43
|
Gerhard F, Kispersky T, Gutierrez GJ, Marder E, Kramer M, Eden U. Successful reconstruction of a physiological circuit with known connectivity from spiking activity alone. PLoS Comput Biol 2013; 9:e1003138. [PMID: 23874181 PMCID: PMC3708849 DOI: 10.1371/journal.pcbi.1003138] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2012] [Accepted: 05/31/2013] [Indexed: 11/18/2022] Open
Abstract
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. 
To appreciate how neural circuits control behaviors, we must understand two things: first, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions at the level of the membrane potential of biological neurons and share some of their features, such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one, a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
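The point-process idea at the heart of the method can be sketched with the simplest possible case: the postsynaptic neuron's per-bin spike probability is a logistic function of a single-lag presynaptic history term, and the effective coupling weight is recovered by maximum likelihood. Because the history covariate is binary, the MLE has a closed form (logits of the conditional spike probabilities). Rates, bin size, and the one-lag coupling are drastic simplifications of the paper's full model with temporal coupling filters.

```python
import math
import random

random.seed(2)

def sigm(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

# Simulate a pair with known positive (excitatory-like) coupling.
T = 50000
pre = [1 if random.random() < 0.1 else 0 for _ in range(T)]
base, coupling = -3.0, 2.0          # ground-truth GLM parameters
post = [0] + [1 if random.random() < sigm(base + coupling * pre[t - 1]) else 0
              for t in range(1, T)]

# Maximum likelihood with a binary covariate: conditional spike
# probabilities given the presynaptic history bin, then logits.
p0 = (sum(post[t] for t in range(1, T) if pre[t - 1] == 0)
      / sum(1 for t in range(1, T) if pre[t - 1] == 0))
p1 = (sum(post[t] for t in range(1, T) if pre[t - 1] == 1)
      / sum(1 for t in range(1, T) if pre[t - 1] == 1))
base_hat = logit(p0)
coupling_hat = logit(p1) - logit(p0)
print(base_hat, coupling_hat)
```

The recovered weight's sign and magnitude approximate the true coupling; the paper's full model fits whole coupling filters (strength across temporal delays) for every neuron pair in the same likelihood framework.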
Collapse
Affiliation(s)
- Felipe Gerhard
- Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
Collapse
|
44
|
Mahan MY, Georgopoulos AP. Motor directional tuning across brain areas: directional resonance and the role of inhibition for directional accuracy. Front Neural Circuits 2013; 7:92. [PMID: 23720612 PMCID: PMC3654201 DOI: 10.3389/fncir.2013.00092] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2013] [Accepted: 04/26/2013] [Indexed: 11/30/2022] Open
Abstract
Motor directional tuning (Georgopoulos et al., 1982) has been found in every brain area in which it has been sought during the past 30-odd years. It is typically broad, with widely distributed preferred directions and a population signal that accurately predicts the direction of an upcoming reaching movement or isometric force pulse (Georgopoulos et al., 1992). What is the basis for such ubiquitous directional tuning? How does the tuning come about? What are the implications of directional tuning for understanding the brain mechanisms of movement in space? This review addresses these questions in the light of accumulated knowledge in various sub-fields of neuroscience and motor behavior. It is argued (a) that direction in space encompasses many aspects, from vision to muscles, (b) that there is a directional congruence among the central representations of these distributed “directions,” arising from rough but orderly topographic connectivities among brain areas, (c) that broad directional tuning is the result of broad excitation limited by recurrent and non-recurrent (i.e., direct) inhibition within the preferred-direction loci in brain areas, and (d) that the width of the directional tuning curve, modulated by local inhibitory mechanisms, is a parameter that determines the accuracy of the directional command.
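The population signal the review refers to is the population-vector construction: each cosine-tuned cell "votes" for its preferred direction with a weight given by its rate modulation, and the vector sum predicts the movement direction. The cell count and tuning parameters below are invented for a noiseless illustration.

```python
import math

N = 32
PREFERRED = [2 * math.pi * i / N for i in range(N)]

def firing_rate(theta, pref, base=10.0, depth=8.0):
    """Broad cosine tuning around the preferred direction."""
    return base + depth * math.cos(theta - pref)

def population_vector(rates, base=10.0):
    """Weighted vector sum of preferred directions; weights are the
    deviations of each cell's rate from its baseline."""
    x = sum((r - base) * math.cos(p) for r, p in zip(rates, PREFERRED))
    y = sum((r - base) * math.sin(p) for r, p in zip(rates, PREFERRED))
    return math.atan2(y, x) % (2 * math.pi)

movement = 2.3                                  # radians
rates = [firing_rate(movement, p) for p in PREFERRED]
decoded = population_vector(rates)
print(decoded)
```

With uniformly distributed preferred directions and noiseless cosine tuning, the population vector recovers the movement direction exactly; with noisy rates it degrades gracefully, which is what makes the broad tuning described in (c) and (d) workable at the population level.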
Collapse
Affiliation(s)
- Margaret Y Mahan
- Graduate Program in Biomedical Informatics and Computational Biology, University of Minnesota Minneapolis, MN, USA
Collapse
|