1. Wu S, Zhang X, Wang Y. Neural Manifold Constraint for Spike Prediction Models Under Behavioral Reinforcement. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2772-2781. PMID: 39074025. DOI: 10.1109/TNSRE.2024.3435568.
Abstract
Spike prediction models predict downstream spike trains from upstream neural activity for neural prostheses. Such prostheses could potentially restore damaged neural communication pathways by using the predicted patterns to guide electrical stimulation of downstream regions. Since ground-truth downstream neural activity is unavailable in subjects with such damage, reinforcement learning (RL) with behavior-level rewards becomes necessary for model training. However, existing models place no constraint on the generated firing patterns and neglect the correlations among neural activities. The model outputs can therefore deviate greatly from the natural range of neural activity, raising concerns for clinical use. This study proposes a neural manifold constraint to solve this problem, shaping RL-generated spike trains in the feature space. The constraint terms describe the first- and second-order statistics of the neural manifold estimated from neural recordings made while subjects moved freely. The models can then be optimized within the neural manifold by behavioral reinforcement. We test the method by predicting primary motor cortex (M1) spikes from medial prefrontal cortex (mPFC) spikes while rats perform a two-lever discrimination task. Results show that the neural activity generated by constrained models resembles real M1 recordings. Compared with unconstrained models, our approach achieves similar behavioral success rates but reduces the mean squared error of neural firing by 61%. The constraints also increase the model's robustness across data segments and induce realistic neural correlations. Our method provides a promising tool for restoring transregional communication with high behavioral performance and more realistic microscopic patterns.
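The first- and second-order manifold constraint described in this abstract can be sketched as a penalty on the statistics of generated activity. The data, neuron count, and quadratic penalty form below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference statistics of "natural" activity, estimated from recordings
# made during a freely moving period (synthetic stand-in here).
recorded = rng.poisson(3.0, size=(5000, 20)).astype(float)
mu_ref = recorded.mean(axis=0)
cov_ref = np.cov(recorded, rowvar=False)

def manifold_penalty(generated):
    """Distance between the first/second-order statistics of generated
    spike counts and those of the recorded activity."""
    mu = generated.mean(axis=0)
    cov = np.cov(generated, rowvar=False)
    return np.sum((mu - mu_ref) ** 2) + np.sum((cov - cov_ref) ** 2)

# Activity matching the recorded statistics is penalized far less than
# activity outside the natural range of firing.
similar = rng.poisson(3.0, size=(5000, 20)).astype(float)
deviant = rng.poisson(9.0, size=(5000, 20)).astype(float)
print(manifold_penalty(similar), manifold_penalty(deviant))
```

In the RL setting the abstract describes, such a penalty would be combined with the behavior-level reward so that optimization stays within the estimated manifold.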
2. Vaccari FE, Diomedi S, De Vitis M, Filippini M, Fattori P. Similar neural states, but dissimilar decoding patterns for motor control in parietal cortex. Netw Neurosci 2024; 8:486-516. PMID: 38952818. PMCID: PMC11146678. DOI: 10.1162/netn_a_00364.
Abstract
Discrete neural states are associated with reaching movements across the fronto-parietal network. Here, a Hidden Markov Model (HMM) applied to spiking activity of the somato-motor parietal area PE revealed a sequence of states similar to those of the contiguous visuomotor areas PEc and V6A. Using a coupled clustering and decoding approach, we show that these neural states carried spatiotemporal information about behaviour in all three posterior parietal areas. However, comparing decoding accuracy, PE was less informative than V6A and PEc. In addition, V6A outperformed PEc in target inference, indicating functional differences among the parietal areas. To check the consistency of these differences, we used both a supervised and an unsupervised variant of the HMM and compared their performance with two more common classifiers, a Support Vector Machine and a Long Short-Term Memory network. The differences in decoding between areas were invariant to the algorithm used, matching the dissimilarities found with the HMM and indicating that they are intrinsic to the information encoded by parietal neurons. These results highlight that, when decoding from the parietal cortex, for example in brain-machine interface implementations, care should be taken in selecting the most suitable source of neural signals, given the great heterogeneity of this cortical sector.
Affiliation(s)
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Marina De Vitis
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Italy
3. Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv [Preprint] 2024: 2024.04.18.589952. PMID: 38712193. PMCID: PMC11071278. DOI: 10.1101/2024.04.18.589952.
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
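A minimal sketch of the re-aiming idea described above, assuming a fixed linear decoder and a linear map from a low-dimensional input variable to the recorded population (all sizes and maps below are hypothetical, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 50                                  # recorded neurons
D = rng.normal(size=(2, N))             # fixed BCI decoder: activity -> cursor velocity
B = rng.normal(size=(N, 10))            # map from upstream inputs to the population

# Re-aiming: learning searches only a K-dimensional subspace of task-relevant
# inputs, rather than adjusting every upstream parameter.
K = 2
M = D @ B[:, :K]                        # effective map from re-aiming variable to cursor
target = np.array([1.0, 0.5])           # desired cursor velocity

a = np.zeros(K)                         # low-dimensional re-aiming variable
for _ in range(20000):
    err = M @ a - target
    a -= 1e-3 * M.T @ err               # gradient descent on cursor error

print(np.linalg.norm(M @ a - target))   # residual cursor error after re-aiming
```

The point of the sketch is only that a 2-D search suffices to control the cursor through the fixed decoder; the theory's contribution is showing such low-dimensional learning explains the experimental phenomena.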
4. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. PMID: 38491696. DOI: 10.1103/PhysRevE.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns the brain actually uses, and how those sets of patterns change as properties of neural circuitry change, is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive and to demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
5. Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328. DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics to address these questions. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. The dynamics arising in these manifolds are proved to be topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
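The low-rank prediction can be illustrated with a toy rate network; the rank-1 connectivity, constant input, and integration parameters below are invented for illustration and are not taken from the letter:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 200
# Rank-1 connectivity W = m n^T / N: the theory predicts activity is
# attracted to the low-dimensional subspace spanned by m.
m = rng.normal(size=N)
n = rng.normal(size=N)
W = np.outer(m, n) / N

x = rng.normal(size=N)                    # arbitrary initial state
dt, tau = 0.1, 1.0
for _ in range(2000):
    # Standard rate dynamics, with a constant input along m to keep the
    # network away from the trivial fixed point at the origin.
    x += dt / tau * (-x + W @ np.tanh(x) + 2.0 * m)

# Cosine similarity between the final state and m (the predicted manifold).
frac = abs(x @ m) / (np.linalg.norm(x) * np.linalg.norm(m))
print(frac)  # approaches 1: the trajectory has collapsed onto span{m}
```

Components of the state orthogonal to `m` decay under these dynamics, which is the one-dimensional analogue of the globally attracting manifolds discussed in the abstract.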
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain
6. Si H, Sun X. Inter-areal transmission of multiple neural signals through frequency-division-multiplexing communication. Cogn Neurodyn 2023; 17:1153-1165. PMID: 37786658. PMCID: PMC10542065. DOI: 10.1007/s11571-022-09914-y.
Abstract
Inter-areal information transmission in the cortex underlies cognitive function. Previous research has focused on the transmission of activity patterns and on the gating or routing of signals in neuronal networks. However, the mechanism by which multiple neural signals are transmitted simultaneously through the same channel across networks remains unclear. In this work, we construct a two-layer feedforward neuronal network (sender-receiver) in which each layer's intrinsic rhythms consist of slow (low-frequency) and fast-gamma (high-frequency) rhythms, and investigate how simultaneous transmission of multiple signals can be realized in neuronal systems. With the aid of resonance and frequency analysis, we show that low- and high-frequency signals can be transmitted simultaneously in such a feedforward network through frequency-division-multiplexing (FDM) communication. Transmission performance depends on local resonance, connectivity, and background noise. Moreover, low- and high-frequency signals can also be gated or selected by appropriate adjustment of recurrent connection strength, delay, and background noise. Our model might provide novel insight into the mechanisms of complex signal communication between cortical areas.
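The FDM idea can be sketched with a toy signal mixture; the frequencies, noise level, and FFT-mask receiver below are illustrative assumptions rather than the paper's spiking-network model:

```python
import numpy as np

rng = np.random.default_rng(5)

fs, T = 1000, 2.0                        # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

# Two "neural" signals in separate bands: slow (5 Hz) and fast-gamma (80 Hz).
slow = np.sin(2 * np.pi * 5 * t)
fast = 0.5 * np.sin(2 * np.pi * 80 * t)
channel = slow + fast + 0.1 * rng.normal(size=t.size)   # one shared channel

# FDM receiver: separate the multiplexed signals with FFT band masks.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec = np.fft.rfft(channel)
slow_rec = np.fft.irfft(np.where(freqs < 40, spec, 0), n=t.size)
fast_rec = np.fft.irfft(np.where((freqs >= 40) & (freqs < 150), spec, 0), n=t.size)

# Each recovered signal correlates strongly with its original.
r_slow = np.corrcoef(slow, slow_rec)[0, 1]
r_fast = np.corrcoef(fast, fast_rec)[0, 1]
print(r_slow, r_fast)
```

In the paper this separation is done by the network itself, via resonance of the receiving populations at the slow and gamma rhythms; the FFT masks here only illustrate why non-overlapping bands make simultaneous transmission possible.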
Affiliation(s)
- Hao Si
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xiaojuan Sun
- School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
7. Naik S, Dehaene-Lambertz G, Battaglia D. Repairing Artifacts in Neural Activity Recordings Using Low-Rank Matrix Estimation. Sensors (Basel) 2023; 23:4847. PMID: 37430760. DOI: 10.3390/s23104847.
Abstract
Electrophysiology recordings are frequently affected by artifacts (e.g., subject motion or eye movements), which reduce the number of available trials and the statistical power. When artifacts are unavoidable and data are scarce, signal-reconstruction algorithms that allow sufficient trials to be retained become crucial. Here, we present one such algorithm that exploits the large spatiotemporal correlations in neural signals and solves a low-rank matrix completion problem to fix artifactual entries. The method uses gradient descent in a lower-dimensional space to learn the missing entries and provide a faithful reconstruction of the signal. We carried out numerical simulations to benchmark the method and estimate optimal hyperparameters for actual EEG data. The fidelity of reconstruction was assessed by detecting event-related potentials (ERPs) in a highly artifacted EEG time series from human infants. The proposed method significantly improved the standardized error of the mean in ERP group analysis and in a between-trial variability analysis compared with a state-of-the-art interpolation technique. This improvement increased the statistical power and revealed significant effects that would have been deemed insignificant without reconstruction. The method can be applied to any time-continuous neural signal in which artifacts are sparse and spread out across epochs and channels, increasing data retention and statistical power.
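A minimal sketch of low-rank completion by gradient descent on the observed entries, assuming synthetic low-rank data; matrix sizes, rank, mask density, and learning rate are invented for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "recording": 40 channels x 200 samples with rank-3 structure,
# mimicking the large spatiotemporal correlations the method exploits.
X = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 200))

# Artifact mask: ~10% of entries are corrupted and treated as missing.
mask = rng.random(X.shape) > 0.1          # True = clean entry
X_obs = np.where(mask, X, 0.0)

# Factorized low-rank model fit by gradient descent on clean entries only.
rank, lr = 3, 1e-3
A = 0.5 * rng.normal(size=(40, rank))
B = 0.5 * rng.normal(size=(rank, 200))
for _ in range(20000):
    R = mask * (A @ B - X_obs)            # residual on observed entries
    A -= lr * R @ B.T
    B -= lr * A.T @ R

# Repair only the artifactual entries, keeping clean data untouched.
X_fixed = np.where(mask, X_obs, A @ B)
rel_err = np.linalg.norm(X_fixed - X) / np.linalg.norm(X)
print(rel_err)
```

Because the clean entries tightly constrain the low-rank factors, the masked entries are filled in close to their true values; real EEG additionally requires the hyperparameter tuning the paper describes.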
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Demian Battaglia
- Institut de Neurosciences des Systèmes, U1106, Centre National de la Recherche Scientifique (CNRS), Aix-Marseille Université, F-13005 Marseille, France
- Institute for Advanced Studies, University of Strasbourg (USIAS), F-67000 Strasbourg, France
8. Thivierge JP, Pilzak A. Estimating null and potent modes of feedforward communication in a computational model of cortical activity. Sci Rep 2022; 12:742. PMID: 35031628. PMCID: PMC8760251. DOI: 10.1038/s41598-021-04684-9.
Abstract
Communication across anatomical areas of the brain is key to both sensory and motor processes. Dimensionality reduction approaches have shown that the covariation of activity across cortical areas follows well-delimited patterns. Some of these patterns fall within the "potent space" of neural interactions and generate downstream responses; other patterns fall within the "null space" and prevent the feedforward propagation of synaptic inputs. Despite growing evidence for the role of null space activity in visual processing as well as preparatory motor control, a mechanistic understanding of its neural origins is lacking. Here, we developed a mean-rate model that allowed for the systematic control of feedforward propagation by potent and null modes of interaction. In this model, altering the number of null modes led to no systematic changes in firing rates, pairwise correlations, or mean synaptic strengths across areas, making it difficult to characterize feedforward communication with common measures of functional connectivity. A novel measure termed the null ratio captured the proportion of null modes relayed from one area to another. Applied to simultaneous recordings of primate cortical areas V1 and V2 during image viewing, the null ratio revealed that feedforward interactions have a broad null space that may reflect properties of visual stimuli.
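The potent/null decomposition of a feedforward mapping can be sketched with an SVD; the population sizes and the linear sender-to-receiver map below are invented for illustration, not the paper's mean-rate model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Feedforward map from a 20-neuron "sender" to a 5-neuron "receiver".
# Its rank is at most 5, so the sender has a 5-D potent space and a
# 15-D null space.
W = rng.normal(size=(5, 20))

# Row space of W = potent space; its orthogonal complement = null space.
_, _, Vt = np.linalg.svd(W)
potent = Vt[:5].T      # (20, 5) basis of directions that drive the receiver
null = Vt[5:].T        # (20, 15) basis of directions the receiver ignores

x_null = null @ rng.normal(size=15)      # sender activity in the null space
x_potent = potent @ rng.normal(size=5)

print(np.linalg.norm(W @ x_null))        # ~0: no downstream response
print(np.linalg.norm(W @ x_potent))      # nonzero: propagates downstream
```

The paper's null ratio then quantifies, for recorded activity, how much of the sender's variance lies in the null space rather than the potent space.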
Affiliation(s)
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
- Artem Pilzak
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
9. Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. PMID: 34648749. PMCID: PMC8639641. DOI: 10.1016/j.neuron.2021.09.005.
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
10. Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. PMID: 34843461. PMCID: PMC8659648. DOI: 10.1371/journal.pcbi.1008591.
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
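The estimation problem can be sketched with a simple linear baseline: synthetic activity with known intrinsic dimensionality, and a 95%-variance PCA criterion (one of many possible estimators; the neuron count, latent dimensionality, and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "recording": 100 neurons whose activity is a linear embedding
# of a 5-D latent manifold, plus observation noise.
latent = rng.normal(size=(5, 1000))
embed = rng.normal(size=(100, 5))
X = embed @ latent + 0.1 * rng.normal(size=(100, 1000))

# Linear estimate: number of principal components explaining 95% variance.
Xc = X - X.mean(axis=1, keepdims=True)
eig = np.linalg.svd(Xc, compute_uv=False) ** 2
frac = np.cumsum(eig) / eig.sum()
dim_est = int(np.searchsorted(frac, 0.95)) + 1
print(dim_est)  # recovers the latent dimensionality (5) in this easy case
```

This linear, low-noise case is where such estimators behave well; the study's point is that nonlinear embeddings, noise, high intrinsic dimensionality, and limited data all break estimators like this one.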
Affiliation(s)
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Sara A. Solla
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
- Lee E. Miller
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
- Eric J. Perreault
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
11. Liu YH, Zhu J, Constantinidis C, Zhou X. Emergence of prefrontal neuron maturation properties by training recurrent neural networks in cognitive tasks. iScience 2021; 24:103178. PMID: 34667944. PMCID: PMC8506971. DOI: 10.1016/j.isci.2021.103178.
Abstract
Working memory and response inhibition are functions that mature relatively late in life, after adolescence, paralleling the maturation of the prefrontal cortex. The link between behavioral and neural maturation is not obvious, however, making it challenging to understand how neural activity underlies the maturation of cognitive function. To gain insight into the nature of observed changes in prefrontal activity between adolescence and adulthood, we investigated the progressive changes in unit activity of recurrent neural networks (RNNs) as they were trained to perform working memory and response inhibition tasks. These changes included increased delay-period activity during working memory tasks and increased activation in antisaccade tasks. These findings reveal universal properties underlying the neuronal computations behind cognitive tasks and explicate the nature of changes that occur as the result of developmental maturation. Highlights: properties of RNNs during training offer insights into prefrontal maturation; fully trained networks exhibit higher levels of activity in working memory tasks; trained networks also exhibit higher activation in antisaccade tasks; partially trained RNNs can generate accurate predictions of immature PFC activity.
Affiliation(s)
- Yichen Henry Liu
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Junda Zhu
- Neuroscience Program, Vanderbilt University, Nashville, TN 37235, USA
- Christos Constantinidis
- Neuroscience Program, Vanderbilt University, Nashville, TN 37235, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA; Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Xin Zhou
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA; Data Science Institute, Vanderbilt University, Nashville, TN 37235, USA
12. Vyas S, Golub MD, Sussillo D, Shenoy KV. Computation Through Neural Population Dynamics. Annu Rev Neurosci 2020; 43:249-275.
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
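The dynamical-systems perspective can be sketched on synthetic data: a low-dimensional latent system observed through a population, with the dynamics recovered from the recorded activity. The latent dimensionality, rotational motif, and readout below are illustrative assumptions, not a result from the review:

```python
import numpy as np

rng = np.random.default_rng(7)

# Latent 2-D rotational dynamics (a motif often reported in motor cortex):
# z_{t+1} = A z_t, a slight contraction combined with a rotation.
theta = 0.1
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
z = np.zeros((2, 300))
z[:, 0] = [1.0, 0.0]
for t in range(299):
    z[:, t + 1] = A @ z[:, t]

# Observed "population activity": 50 neurons read out the latent state.
C = rng.normal(size=(50, 2))
X = C @ z

# Recover the latent subspace by PCA, then fit the dynamics by least squares.
U, _, _ = np.linalg.svd(X, full_matrices=False)
z_hat = U[:, :2].T @ X
A_hat = z_hat[:, 1:] @ np.linalg.pinv(z_hat[:, :-1])

# The estimated dynamics share the eigenvalue magnitudes of the true A,
# up to the unknown change of basis introduced by PCA.
print(np.sort(np.abs(np.linalg.eigvals(A_hat))))
```

This noise-free case recovers the flow exactly; the analytical tools the review surveys address the realistic setting with noise, inputs, and nonlinearity.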
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
13. Zhou J, Huang H. Weakly correlated synapses promote dimension reduction in deep neural networks. Phys Rev E 2021; 103:012315. PMID: 33601541. DOI: 10.1103/PhysRevE.103.012315.
Abstract
By controlling synaptic and neural correlations, deep learning has achieved empirical successes in improving classification performances. How synaptic correlations affect neural correlations to produce disentangled hidden representations remains elusive. Here we propose a simplified model of dimension reduction, taking into account pairwise correlations among synapses, to reveal the mechanism underlying how the synaptic correlations affect dimension reduction. Our theory determines the scaling of synaptic correlations requiring only mathematical self-consistency for both binary and continuous synapses. The theory also predicts that weakly correlated synapses encourage dimension reduction compared to their orthogonal counterparts. In addition, these synapses attenuate the decorrelation process along the network depth. These two computational roles are explained by a proposed mean-field equation. The theoretical predictions are in excellent agreement with numerical simulations, and the key features are also captured by deep learning with Hebbian rules.
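The dimensionality of a hidden representation can be quantified with the participation ratio; the toy one-layer network below (invented sizes, correlation structure, and scaling, not the paper's model) illustrates the qualitative link between correlated synapses and reduced hidden dimensionality:

```python
import numpy as np

rng = np.random.default_rng(8)

def participation_ratio(X):
    """PR = (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X))
    return lam.sum() ** 2 / (lam ** 2).sum()

inputs = rng.normal(size=(100, 5000))

# Correlated rows of W share a common component, making output units
# redundant; orthogonal-like random rows serve as the comparison.
shared = rng.normal(size=(1, 100))
W_corr = 0.6 * np.repeat(shared, 100, axis=0) + 0.8 * rng.normal(size=(100, 100))
W_orth = rng.normal(size=(100, 100))

h_corr = np.tanh(W_corr @ inputs / 10)
h_orth = np.tanh(W_orth @ inputs / 10)
print(participation_ratio(h_corr), participation_ratio(h_orth))
```

Note the paper's actual claim concerns *weakly* correlated synapses under a specific scaling analyzed with mean-field theory; this sketch only demonstrates the dimensionality measure and the direction of the effect for strongly shared weights.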
Affiliation(s)
- Jianwen Zhou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
14. Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. PMID: 33544700. PMCID: PMC7864452. DOI: 10.1371/journal.pcbi.1008621.
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
15
Tafazoli S, MacDowell CJ, Che Z, Letai KC, Steinhardt CR, Buschman TJ. Learning to control the brain through adaptive closed-loop patterned stimulation. J Neural Eng 2020; 17:056007. [PMID: 32927437] [DOI: 10.1088/1741-2552/abb860]
Abstract
OBJECTIVE: Stimulation of neural activity is an important scientific and clinical tool, causally testing hypotheses and treating neurodegenerative and neuropsychiatric diseases. However, current stimulation approaches cannot flexibly control the pattern of activity in populations of neurons. To address this, we developed a model-free, adaptive, closed-loop stimulation (ACLS) system that learns to use multi-site electrical stimulation to control the pattern of activity of a population of neurons.
APPROACH: The ACLS system combined multi-electrode electrophysiological recordings with multi-site electrical stimulation to simultaneously record the activity of a population of 5-15 multiunit neurons and deliver spatially-patterned electrical stimulation across 4-16 sites. Using a closed-loop learning system, ACLS iteratively updated the pattern of stimulation to reduce the difference between the observed neural response and a specific target pattern of firing rates in the recorded multiunits.
MAIN RESULTS: In silico and in vivo experiments showed that ACLS learned to produce specific patterns of neural activity (in ∼15 min) and was robust to noise and drift in neural responses. In visual cortex of awake mice, ACLS learned electrical stimulation patterns that produced responses similar to the natural response evoked by visual stimuli. Just as repetition of a visual stimulus causes adaptation of the neural response, the response to electrical stimulation was adapted when it was preceded by the associated visual stimulus.
SIGNIFICANCE: Our results show an ACLS system that can learn, in real time, to generate specific patterns of neural activity. This work provides a framework for using model-free closed-loop learning to control neural activity.
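The closed-loop learning idea described here — iteratively updating a stimulation pattern to shrink the distance between observed and target firing rates — can be sketched as a model-free hill-climbing loop on a simulated response map. The linear response model, noise level, step size, and acceptance rule are all illustrative assumptions, not the authors' ACLS algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites, n_units = 8, 12

# Unknown stimulation-to-response mapping (stands in for the real circuit).
true_map = np.abs(rng.standard_normal((n_units, n_sites)))

def neural_response(stim):
    """Noisy firing-rate response of the recorded units to a stimulation pattern."""
    return true_map @ stim + 0.05 * rng.standard_normal(n_units)

# A reachable target pattern of firing rates.
target = true_map @ rng.uniform(0.0, 1.0, n_sites)

# Model-free closed-loop learning: perturb the stimulation pattern and
# keep the perturbation whenever it reduces the error to the target.
stim = np.full(n_sites, 0.5)
err = np.linalg.norm(neural_response(stim) - target)
for _ in range(2000):
    trial = np.clip(stim + 0.05 * rng.standard_normal(n_sites), 0.0, 1.0)
    trial_err = np.linalg.norm(neural_response(trial) - target)
    if trial_err < err:
        stim, err = trial, trial_err
print(f"final error: {err:.3f}")
```

Because the update uses only measured responses, the loop needs no model of the mapping from stimulation sites to units — the same property that lets a closed-loop system remain robust to noise and slow drift.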
Affiliation(s)
- Sina Tafazoli
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, United States of America. Lead contact and corresponding author
16
Gallego JA, Perich MG, Chowdhury RH, Solla SA, Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nat Neurosci 2020; 23:260-270. [PMID: 31907438] [PMCID: PMC7007364] [DOI: 10.1038/s41593-019-0555-4]
Abstract
Animals readily execute learned behaviors in a consistent manner over long periods of time, and yet no equally stable neural correlate has been demonstrated. How does the cortex achieve this stable control? Using the sensorimotor system as a model of cortical processing, we investigated the hypothesis that the dynamics of neural latent activity, which captures the dominant co-variation patterns within the neural population, must be preserved across time. We recorded from populations of neurons in premotor, primary motor and somatosensory cortices as monkeys performed a reaching task, for up to 2 years. Intriguingly, despite a steady turnover in the recorded neurons, the low-dimensional latent dynamics remained stable. The stability allowed reliable decoding of behavioral features for the entire timespan, while fixed decoders based directly on the recorded neural activity degraded substantially. We posit that stable latent cortical dynamics within the manifold are the fundamental building blocks underlying consistent behavioral execution.
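The idea of stable latent dynamics despite turnover in the recorded neurons can be sketched by expressing one set of latent trajectories through two different "recorded" populations, extracting principal-component latents from each, and linearly aligning them. The simulation and the least-squares alignment are illustrative assumptions (the paper aligns latent dynamics across days with CCA):

```python
import numpy as np

rng = np.random.default_rng(3)
n_time, n_latent = 800, 4

# One set of latent trajectories expressed through two different
# "recorded" neural populations (as if electrodes shifted between days).
latents = np.cumsum(rng.standard_normal((n_time, n_latent)), axis=0)
day1 = latents @ rng.standard_normal((n_latent, 60)) + 0.1 * rng.standard_normal((n_time, 60))
day2 = latents @ rng.standard_normal((n_latent, 45)) + 0.1 * rng.standard_normal((n_time, 45))

def pca_latents(x, k):
    """Project activity onto its top-k principal components."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:k].T

z1 = pca_latents(day1, n_latent)
z2 = pca_latents(day2, n_latent)

# Linearly align day-2 latents to day-1 latents (least squares).
w, *_ = np.linalg.lstsq(z2, z1, rcond=None)
aligned = z2 @ w
r = np.corrcoef(aligned.ravel(), z1.ravel())[0, 1]
print(f"correlation after alignment: {r:.3f}")
```

Although the two "days" share no neurons, the aligned latents match closely — the sense in which low-dimensional dynamics can stay stable while the recorded population turns over, allowing a fixed decoder on aligned latents to keep working.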
Affiliation(s)
- Juan A Gallego
- Neural and Cognitive Engineering Group, Center for Automation and Robotics, Spanish National Research Council, Arganda del Rey, Spain.
- Department of Physiology, Northwestern University, Chicago, IL, USA.
- Department of Bioengineering, Imperial College London, London, UK.
- Matthew G Perich
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Raeed H Chowdhury
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Sara A Solla
- Department of Physiology, Northwestern University, Chicago, IL, USA
- Department of Physics and Astronomy, Northwestern University, Evanston, IL, USA
- Lee E Miller
- Department of Physiology, Northwestern University, Chicago, IL, USA.
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA.
- Department of Physical Medicine and Rehabilitation, Northwestern University, and Shirley Ryan Ability Lab, Chicago, IL, USA.
17
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics. Curr Opin Neurobiol 2019; 58:122-129. [DOI: 10.1016/j.conb.2019.09.001]