1. Rajalingham R, Sohn H, Jazayeri M. Dynamic tracking of objects in the macaque dorsomedial frontal cortex. Nat Commun 2025;16:346. PMID: 39746908; PMCID: PMC11696028; DOI: 10.1038/s41467-024-54688-y
Abstract
A central tenet of cognitive neuroscience is that humans build an internal model of the external world and use mental simulation of the model to perform physical inferences. Decades of human experiments have shown that behaviors in many physical reasoning tasks are consistent with predictions from the mental simulation theory. However, evidence for the defining feature of mental simulation - that neural population dynamics reflect simulations of physical states in the environment - is limited. We test the mental simulation hypothesis by combining a naturalistic ball-interception task, large-scale electrophysiology in non-human primates, and recurrent neural network modeling. We find that neurons in the monkeys' dorsomedial frontal cortex (DMFC) represent task-relevant information about the ball position in a multiplexed fashion. At a population level, the activity pattern in DMFC comprises a low-dimensional neural embedding that tracks the ball both when it is visible and invisible, serving as a neural substrate for mental simulation. A systematic comparison of different classes of task-optimized RNN models with the DMFC data provides further evidence supporting the mental simulation hypothesis. Our findings provide evidence that neural dynamics in the frontal cortex are consistent with internal simulation of external states in the environment.
Affiliation(s)
- Rishi Rajalingham
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Reality Labs, Meta, 390 9th Ave, New York, NY, USA
- Hansem Sohn
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Suwon, Republic of Korea
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
2. Soldado-Magraner J, Mante V, Sahani M. Inferring context-dependent computations through linear approximations of prefrontal cortex dynamics. Sci Adv 2024;10:eadl4743. PMID: 39693450; DOI: 10.1126/sciadv.adl4743
Abstract
The complex neural activity of prefrontal cortex (PFC) is a hallmark of cognitive processes. How these rich dynamics emerge and support neural computations is largely unknown. Here, we infer mechanisms underlying the context-dependent integration of sensory inputs by fitting dynamical models to PFC population responses of behaving monkeys. A class of models implementing linear dynamics driven by external inputs accurately captured PFC responses within contexts and revealed equally performing mechanisms. One model implemented context-dependent recurrent dynamics and relied on transient input amplification; the other relied on subtle contextual modulations of the inputs, providing constraints on the attentional effects in sensory areas required to explain flexible PFC responses and behavior. Both models revealed properties of inputs and recurrent dynamics that were not apparent from qualitative descriptions of PFC responses. By revealing mechanisms that are quantitatively consistent with complex cortical dynamics, our modeling approach provides a principled and general framework to link neural population activity and computation.
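The core modeling idea, identifying input-driven linear dynamics from population trajectories, can be illustrated with plain least squares on synthetic data. This is a minimal sketch under stated assumptions: the dimensions, ground-truth dynamics, and one-step regression below are illustrative, not the paper's fitting procedure, which handles noise and context dependence.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, m = 500, 4, 2                                       # timesteps, latent dim, input dim

# Ground-truth stable linear dynamics driven by an external input u
A_true = 0.9 * np.linalg.qr(rng.normal(size=(n, n)))[0]   # eigenvalues of magnitude 0.9
B_true = rng.normal(size=(n, m))
u = rng.normal(size=(T, m))
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + B_true @ u[t]

# Least-squares fit of [A, B] from one-step transitions x[t] -> x[t+1]
Z = np.hstack([x[:-1], u[:-1]])                           # regressors, shape (T-1, n+m)
theta, *_ = np.linalg.lstsq(Z, x[1:], rcond=None)
A_hat, B_hat = theta[:n].T, theta[n:].T                   # recovered dynamics and input matrices
```

In this noiseless, persistently excited toy system the fit is exact; comparing the recovered dynamics and input matrices across task contexts is one way such models can expose whether flexibility lives in the recurrence or in the inputs.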
Affiliation(s)
- Joana Soldado-Magraner
- Gatsby Computational Neuroscience Unit, University College London, 25 Howland St, London W1T 4JG, UK
- Valerio Mante
- Institute of Neuroinformatics, ETH Zurich and University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, 25 Howland St, London W1T 4JG, UK
3. Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. bioRxiv [Preprint] 2024:2024.01.05.574379. PMID: 38260512; PMCID: PMC10802267; DOI: 10.1101/2024.01.05.574379
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and network parameters. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the heterogeneity of neural responses in the piriform cortex and in the striatum during unstructured, naturalistic experiments. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural activity.
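The generative model being unrolled is convolutional sparse coding: an observed trace is a sparse event train convolved with a stereotyped kernel, and each network layer mirrors one proximal-gradient (ISTA) step. The sketch below runs plain ISTA on synthetic data; the kernel shape, penalty, and step size are illustrative assumptions, not DUNL's actual architecture or training.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
T, K = 200, 15
kernel = np.exp(-np.arange(K) / 4.0)           # decaying response kernel
kernel /= np.linalg.norm(kernel)

x_true = np.zeros(T)
x_true[[30, 90, 150]] = [2.0, 1.5, 2.5]        # sparse event train
y = np.convolve(x_true, kernel)[:T]            # observed single-trial signal

# ISTA on 0.5*||y - k*x||^2 + lam*||x||_1; each loop iteration = one unrolled layer
L = np.sum(kernel) ** 2                        # step-size bound via the kernel's DC gain
lam, x = 0.1, np.zeros(T)
for _ in range(300):
    residual = y - np.convolve(x, kernel)[:T]
    # adjoint of the truncated convolution = correlation with the zero-padded residual
    grad = np.correlate(np.concatenate([residual, np.zeros(K - 1)]), kernel, 'valid')
    x = soft_threshold(x + grad / L, lam / L)

events = np.sort(np.argsort(x)[-3:])           # three largest deconvolved events
```

Replacing the fixed iterations with learned, layer-specific kernels and thresholds is what turns this loop into an interpretable network whose weights retain a generative-model meaning.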
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI 02906
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Venkatesh N. Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Department of Psychology, McGill University, Montréal, QC H3A 1G1
- Demba Ba
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138
4. Xiao G, Cai Y, Zhang Y, Xie J, Wu L, Xie H, Wu J, Dai Q. Mesoscale neuronal granular trial variability in vivo illustrated by nonlinear recurrent network in silico. Nat Commun 2024;15:9894. PMID: 39548098; PMCID: PMC11567969; DOI: 10.1038/s41467-024-54346-3
Abstract
Large-scale neural recording with single-neuron resolution has revealed the functional complexity of neural systems. However, even under well-designed task conditions, the cortex-wide network exhibits highly dynamic trial variability, posing challenges to conventional trial-averaged analysis. To study mesoscale trial variability, we conducted a comparative study between fluorescence imaging of layer-2/3 neurons in vivo and network simulation in silico. We imaged the responses of up to 40,000 cortical neurons triggered by deep brain stimulation (DBS), and built an in silico network that reproduces the biological phenomena we observed in vivo. We show that trial variability is ineluctable and is influenced by input amplitude and range. Moreover, we demonstrate that a spatially heterogeneous coding community supports more reliable inter-trial coding despite single-unit trial variability. A deeper understanding of trial variability from the perspective of dynamical systems may help uncover intellectual abilities such as parallel coding and creativity.
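The contrast the study draws, unreliable single units versus reliable population-level coding, can be hedged into a toy split-half analysis. All numbers below are synthetic and purely illustrative, not the paper's imaging data or statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons = 50, 200
signal = rng.normal(size=n_neurons)                             # stimulus-driven response profile
trials = signal + 0.8 * rng.normal(size=(n_trials, n_neurons))  # independent per-trial noise

# Single-unit view: trial-to-trial variance is comparable to the signal itself
unit_trial_var = trials.var(axis=0).mean()

# Population view: split-half reliability of the trial-averaged response pattern
half_a = trials[::2].mean(axis=0)
half_b = trials[1::2].mean(axis=0)
reliability = np.corrcoef(half_a, half_b)[0, 1]
```

Averaging over trials and reading out across many neurons suppresses independent noise, so the population pattern stays highly reproducible even when individual units fluctuate strongly from trial to trial.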
Affiliation(s)
- Guihua Xiao
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yeyi Cai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Lifan Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
5. Karpowicz BM, Bhaduri B, Nason-Tomaszewski SR, Jacques BG, Ali YH, Flint RD, Bechefsky PH, Hochberg LR, AuYong N, Slutzky MW, Pandarinath C. Reducing power requirements for high-accuracy decoding in iBCIs. J Neural Eng 2024;21:066001. PMID: 39423832; PMCID: PMC11528220; DOI: 10.1088/1741-2552/ad88a4
Abstract
Objective. Current intracortical brain-computer interfaces (iBCIs) rely predominantly on threshold crossings ('spikes') for decoding neural activity into a control signal for an external device. Spiking data can yield high-accuracy online control during complex behaviors; however, its dependence on high-sampling-rate data collection can pose challenges. An alternative signal for iBCI decoding is the local field potential (LFP), a continuous-valued signal that can be acquired simultaneously with spiking activity. However, LFPs are seldom used alone for online iBCI control, as their decoding performance has yet to achieve parity with spikes.
Approach. Here, we present a strategy to improve the performance of LFP-based decoders by first training a neural dynamics model to use LFPs to reconstruct the firing rates underlying spiking data, and then decoding from the estimated rates. We test these models on previously collected macaque data during center-out and random-target reaching tasks, as well as data collected from a human iBCI participant during attempted speech.
Main results. In all cases, training models from LFPs enables firing rate reconstruction with accuracy comparable to spiking-based dynamics models. In addition, LFP-based dynamics models enable decoding performance exceeding that of LFPs alone and approaching that of spiking-based models. In all applications except speech, LFP-based dynamics models also facilitate decoding accuracy exceeding that of direct decoding from spikes.
Significance. Because LFP-based dynamics models operate on lower-bandwidth, lower-sampling-rate data than spiking models, our findings indicate that iBCI devices can be designed to operate with lower power requirements than devices dependent on recorded spiking activity, without sacrificing high-accuracy decoding.
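The two-stage strategy, first mapping LFP features to estimated firing rates and then decoding behavior from those estimates, can be sketched with plain ridge regressions on synthetic data. The paper trains a neural dynamics model for stage one; ridge is a hedged stand-in here, and every signal dimension below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_lfp, n_units, n_beh = 1000, 16, 40, 2

# Synthetic world: latent kinematics drive firing rates; LFP channels mix the rates
beh = rng.normal(size=(T, n_beh))
rates = np.clip(beh @ rng.normal(size=(n_beh, n_units)) + 5.0, 0.0, None)
lfp = 0.1 * rates @ rng.normal(size=(n_units, n_lfp)) + 0.05 * rng.normal(size=(T, n_lfp))

def ridge(X, Y, alpha=1e-2):
    # Closed-form ridge regression: (X'X + aI)^-1 X'Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

train, test = slice(0, 800), slice(800, None)

W_rates = ridge(lfp[train], rates[train])     # stage 1: LFP -> estimated firing rates
rates_hat = lfp @ W_rates
W_dec = ridge(rates_hat[train], beh[train])   # stage 2: estimated rates -> behavior
beh_hat = rates_hat[test] @ W_dec

ss_res = ((beh[test] - beh_hat) ** 2).sum()
ss_tot = ((beh[test] - beh[test].mean(axis=0)) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                    # held-out decoding accuracy
```

The point of the design is that stage one only needs the low-bandwidth LFP at runtime: the high-sampling-rate spike data is consumed once, during training, to supervise the rate reconstruction.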
Affiliation(s)
- Brianna M Karpowicz
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Bareesh Bhaduri
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Samuel R Nason-Tomaszewski
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Brandon G Jacques
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Yahia H Ali
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Robert D Flint
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Payton H Bechefsky
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Leigh R Hochberg
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Veterans Affairs Rehabilitation Research & Development Center for Neurorestoration and Neurotechnology, Providence VA Medical Center, Providence, RI, USA
- Robert J. & Nancy D. Carney Institute for Brain Science and School of Engineering, Brown University, Providence, RI, USA
- Nicholas AuYong
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Department of Neurosurgery, Emory University, Atlanta, GA, USA
- Department of Cell Biology, Emory University, Atlanta, GA, USA
- Marc W Slutzky
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Neuroscience, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Shirley Ryan AbilityLab, Chicago, IL, USA
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Department of Neurosurgery, Emory University, Atlanta, GA, USA
6. Karpowicz BM, Ye J, Fan C, Tostado-Marcos P, Rizzoglio F, Washington C, Scodeler T, de Lucena D, Nason-Tomaszewski SR, Mender MJ, Ma X, Arneodo EM, Hochberg LR, Chestek CA, Henderson JM, Gentner TQ, Gilja V, Miller LE, Rouse AG, Gaunt RA, Collinger JL, Pandarinath C. Few-shot Algorithms for Consistent Neural Decoding (FALCON) Benchmark. bioRxiv [Preprint] 2024:2024.09.15.613126. PMID: 39345641; PMCID: PMC11429771; DOI: 10.1101/2024.09.15.613126
Abstract
Intracortical brain-computer interfaces (iBCIs) can restore movement and communication abilities to individuals with paralysis by decoding their intended behavior from neural activity recorded with an implanted device. While this activity yields high-performance decoding over short timescales, neural data are often nonstationary, which can lead to decoder failure if not accounted for. To maintain performance, users must frequently recalibrate decoders, which requires the arduous collection of new neural and behavioral data. Aiming to reduce this burden, several approaches have been developed that either limit recalibration data requirements (few-shot approaches) or eliminate explicit recalibration entirely (zero-shot approaches). However, progress is limited by a lack of standardized datasets and comparison metrics, causing methods to be compared in an ad hoc manner. Here we introduce the FALCON benchmark suite (Few-shot Algorithms for COnsistent Neural decoding) to standardize evaluation of iBCI robustness. FALCON curates five datasets of neural and behavioral data that span movement and communication tasks to focus on behaviors of interest to modern-day iBCIs. Each dataset includes calibration data, optional few-shot recalibration data, and private evaluation data. We implement a flexible evaluation platform which only requires user-submitted code to return behavioral predictions on unseen data. We also seed the benchmark by applying baseline methods spanning several classes of possible approaches. FALCON aims to provide rigorous selection criteria for robust iBCI decoders, easing their translation to real-world devices.
7. Mathis MW, Perez Rotondo A, Chang EF, Tolias AS, Mathis A. Decoding the brain: From neural representations to mechanistic models. Cell 2024;187:5814-5832. PMID: 39423801; PMCID: PMC11637322; DOI: 10.1016/j.cell.2024.08.051
Abstract
A central principle in neuroscience is that neurons within the brain act in concert to produce perception, cognition, and adaptive behavior. Neurons are organized into specialized brain areas, dedicated to different functions to varying extents, and their function relies on distributed circuits to continuously encode relevant environmental and body-state features, enabling other areas to decode (interpret) these representations for computing meaningful decisions and executing precise movements. Thus, the distributed brain can be thought of as a series of computations that act to encode and decode information. In this perspective, we detail important concepts of neural encoding and decoding and highlight the mathematical tools used to measure them, including deep learning methods. We provide case studies where decoding concepts enable foundational and translational science in motor, visual, and language processing.
Affiliation(s)
- Mackenzie Weygandt Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Adriana Perez Rotondo
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
- Edward F Chang
- Department of Neurological Surgery, UCSF, San Francisco, CA, USA
- Andreas S Tolias
- Department of Ophthalmology, Byers Eye Institute, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Stanford BioX, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Alexander Mathis
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Neuro-X Institute, École Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland
8. Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024;27:2033-2045. PMID: 39242944; PMCID: PMC11452342; DOI: 10.1038/s41593-024-01731-2
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
9. McCart JD, Sedler AR, Versteeg C, Mifsud D, Rigotti-Thompson M, Pandarinath C. Diffusion-Based Generation of Neural Activity from Disentangled Latent Codes. arXiv [Preprint] 2024:arXiv:2407.21195v1. PMID: 39130199; PMCID: PMC11312623
Abstract
Recent advances in recording technology have allowed neuroscientists to monitor activity from thousands of neurons simultaneously. Latent variable models are increasingly valuable for distilling these recordings into compact and interpretable representations. Here we propose a new approach to neural data analysis that leverages advances in conditional generative modeling to enable the unsupervised inference of disentangled behavioral variables from recorded neural activity. Our approach builds on InfoDiffusion, which augments diffusion models with a set of latent variables that capture important factors of variation in the data. We apply our model, called Generating Neural Observations Conditioned on Codes with High Information (GNOCCHI), to time series neural data and test its application to synthetic and biological recordings of neural activity during reaching. In comparison to a VAE-based sequential autoencoder, GNOCCHI learns higher-quality latent spaces that are more clearly structured and more disentangled with respect to key behavioral variables. These properties enable accurate generation of novel samples (unseen behavioral conditions) through simple linear traversal of the latent spaces produced by GNOCCHI. Our work demonstrates the potential of unsupervised, information-based models for the discovery of interpretable latent spaces from neural data, enabling researchers to generate high-quality samples from unseen conditions.
Affiliation(s)
- Jonathan D. McCart
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Andrew R. Sedler
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Domenick Mifsud
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Chethan Pandarinath
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Department of Neurosurgery, Emory University School of Medicine
10. Silva AB, Littlejohn KT, Liu JR, Moses DA, Chang EF. The speech neuroprosthesis. Nat Rev Neurosci 2024;25:473-492. PMID: 38745103; PMCID: PMC11540306; DOI: 10.1038/s41583-024-00819-9
Abstract
Loss of speech after paralysis is devastating, but circumventing motor-pathway injury by directly decoding speech from intact cortical activity has the potential to restore natural communication and self-expression. Recent discoveries have defined how key features of speech production are facilitated by the coordinated activity of vocal-tract articulatory and motor-planning cortical representations. In this Review, we highlight such progress and how it has led to successful speech decoding, first in individuals implanted with intracranial electrodes for clinical epilepsy monitoring and subsequently in individuals with paralysis as part of early feasibility clinical trials to restore speech. We discuss high-spatiotemporal-resolution neural interfaces and the adaptation of state-of-the-art speech computational algorithms that have driven rapid and substantial progress in decoding neural activity into text, audible speech, and facial movements. Although restoring natural speech is a long-term goal, speech neuroprostheses already have performance levels that surpass communication rates offered by current assistive-communication technology. Given this accelerated rate of progress in the field, we propose key evaluation metrics for speed and accuracy, among others, to help standardize across studies. We finish by highlighting several directions to more fully explore the multidimensional feature space of speech and language, which will continue to accelerate progress towards a clinically viable speech neuroprosthesis.
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
11. Wimalasena LN, Pandarinath C, Yong NA. Spinal interneuron population dynamics underlying flexible pattern generation. bioRxiv [Preprint] 2024:2024.06.20.599927. PMID: 38948833; PMCID: PMC11213001; DOI: 10.1101/2024.06.20.599927
Abstract
The mammalian spinal locomotor network is composed of diverse populations of interneurons that collectively orchestrate and execute a range of locomotor behaviors. Despite the identification of many classes of spinal interneurons constituting the locomotor network, it remains unclear how the network's collective activity computes and modifies locomotor output on a step-by-step basis. To investigate this, we analyzed lumbar interneuron population recordings and multi-muscle electromyography from spinalized cats performing air stepping and used artificial intelligence methods to uncover state space trajectories of spinal interneuron population activity on single step cycles and at millisecond timescales. Our analyses of interneuron population trajectories revealed that traversal of specific state space regions held millisecond-timescale correspondence to the timing adjustments of extensor-flexor alternation. Similarly, we found that small variations in the path of state space trajectories were tightly linked to single-step, microvolt-scale adjustments in the magnitude of muscle output.
One-sentence summary: Features of spinal interneuron state space trajectories capture variations in the timing and magnitude of muscle activations across individual step cycles, with precision on the scales of milliseconds and microvolts, respectively.
12. Pellegrino A, Stein H, Cayco-Gajic NA. Dimensionality reduction beyond neural subspaces with slice tensor component analysis. Nat Neurosci 2024;27:1199-1210. PMID: 38710876; PMCID: PMC11537991; DOI: 10.1038/s41593-024-01626-2
Abstract
Recent work has argued that large-scale neural recordings are often well described by patterns of coactivation across neurons. Yet the view that neural variability is constrained to a fixed, low-dimensional subspace may overlook higher-dimensional structure, including stereotyped neural sequences or slowly evolving latent spaces. Here we argue that task-relevant variability in neural data can also cofluctuate over trials or time, defining distinct 'covariability classes' that may co-occur within the same dataset. To demix these covariability classes, we develop sliceTCA (slice tensor component analysis), a new unsupervised dimensionality reduction method for neural data tensors. In three example datasets, including motor cortical activity during a classic reaching task in primates and recent multiregion recordings in mice, we show that sliceTCA can capture more task-relevant structure in neural data using fewer components than traditional methods. Overall, our theoretical framework extends the classic view of low-dimensional population activity by incorporating additional classes of latent variables capturing higher-dimensional structure.
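The "slice" structure targeted by sliceTCA can be sketched as follows; shapes and variable names are illustrative, and a real fit would learn many such components by optimization rather than building one by hand:

```python
# A neuron-mode slice component: a loading vector over neurons, outer-
# multiplied with a shared (trial x time) slice. Illustrative sizes only.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_neurons, n_time = 5, 8, 20

loadings = rng.random(n_neurons)                   # weight per neuron
trial_time_slice = rng.random((n_trials, n_time))  # shared covariability pattern

# Reconstruct a (trials, neurons, time) tensor from this single component.
data = np.einsum('n,kt->knt', loadings, trial_time_slice)

assert data.shape == (n_trials, n_neurons, n_time)
# Each neuron's (trial x time) pattern is a scaled copy of the shared slice.
assert np.allclose(data[:, 3, :] / trial_time_slice, loadings[3])
```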
Affiliation(s)
- Arthur Pellegrino
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France.
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK.
| | - Heike Stein
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
| | - N Alex Cayco-Gajic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Département D'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France.
| |
|
13
|
Lee WH, Karpowicz BM, Pandarinath C, Rouse AG. Identifying Distinct Neural Features between the Initial and Corrective Phases of Precise Reaching Using AutoLFADS. J Neurosci 2024; 44:e1224232024. [PMID: 38538142 PMCID: PMC11097258 DOI: 10.1523/jneurosci.1224-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 03/11/2024] [Accepted: 03/11/2024] [Indexed: 04/09/2024] Open
Abstract
Many initial movements require subsequent corrective movements, but how the motor cortex transitions to make corrections and how similar the encoding is to initial movements remain unclear. In our study, we explored how the brain's motor cortex signals both initial and corrective movements during a precision reaching task. We recorded a large population of neurons from two male rhesus macaques across multiple sessions to examine the neural firing rates during not only initial movements but also subsequent corrective movements. AutoLFADS, an autoencoder-based deep-learning model, was applied to provide a clearer picture of neurons' activity on individual corrective movements across sessions. Decoding of reach velocity generalized poorly from initial to corrective submovements. Unlike initial movements, it was challenging to predict the velocity of corrective movements using traditional linear methods in a single, global neural space. We identified several locations in the neural space where corrective submovements originated after the initial reaches, signifying firing rates different from the baseline before initial movements. To improve corrective movement decoding, we demonstrate that a state-dependent decoder incorporating the population firing rates at the initiation of correction improved performance, highlighting the diverse neural features of corrective movements. In summary, we show neural differences between initial and corrective submovements and how the neural activity encodes specific combinations of velocity and position. These findings are inconsistent with assumptions that neural correlations with kinematic features are global and independent, emphasizing that traditional methods often fall short in describing these diverse neural processes for online corrective movements.
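The state-dependent decoding idea described in the abstract can be sketched with toy weights; the matrices and the onset-rate threshold below are illustrative, not the decoders fitted in the study:

```python
# Pick a velocity decoder based on the population state at submovement
# onset (hypothetical two-phase rule; weights are made up for illustration).
import numpy as np

W_initial = np.array([[1.0, 0.0], [0.0, 1.0]])
W_corrective = np.array([[0.2, 0.8], [0.8, 0.2]])

def decode_velocity(rates, onset_rates):
    # Use firing rates at the initiation of the submovement to pick a phase.
    phase = 'corrective' if onset_rates.mean() > 0.5 else 'initial'
    W = W_corrective if phase == 'corrective' else W_initial
    return W @ rates, phase

v, phase = decode_velocity(np.array([1.0, 0.0]), onset_rates=np.array([0.9, 0.8]))
assert phase == 'corrective'
assert np.allclose(v, [0.2, 0.8])
```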
Affiliation(s)
- Wei-Hsien Lee
- Bioengineering Program, University of Kansas, Lawrence, Kansas 66045
| | - Brianna M Karpowicz
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
| | - Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
- Department of Neurosurgery, Emory University, Atlanta, Georgia 30322
| | - Adam G Rouse
- Bioengineering Program, University of Kansas, Lawrence, Kansas 66045
- Neurosurgery Department, University of Kansas Medical Center, Kansas City, Kansas 66160
- Electrical Engineering and Computer Science Department, University of Kansas, Lawrence, Kansas 66045
- Cell Biology and Physiology Department, University of Kansas Medical Center, Kansas City, Kansas 66160
| |
|
14
|
Rosenthal IA, Bashford L, Bjånes D, Pejsa K, Lee B, Liu C, Andersen RA. Visual context affects the perceived timing of tactile sensations elicited through intra-cortical microstimulation. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.05.13.593529. [PMID: 38798438 PMCID: PMC11118490 DOI: 10.1101/2024.05.13.593529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2024]
Abstract
Intra-cortical microstimulation (ICMS) is a technique to provide tactile sensations for a somatosensory brain-machine interface (BMI). A viable BMI must function within the rich, multisensory environment of the real world, but how ICMS is integrated with other sensory modalities is poorly understood. To investigate how ICMS percepts are integrated with visual information, ICMS and visual stimuli were delivered at varying times relative to one another. Both visual context and ICMS current amplitude were found to bias the qualitative experience of ICMS. In two tetraplegic participants, ICMS and visual stimuli were more likely to be experienced as occurring simultaneously when visual stimuli were more realistic, demonstrating an effect of visual context on the temporal binding window. The peak of the temporal binding window varied but was consistently offset from zero, suggesting that multisensory integration with ICMS can suffer from temporal misalignment. Recordings from primary somatosensory cortex (S1) during catch trials where visual stimuli were delivered without ICMS demonstrated that S1 represents visual information related to ICMS across visual contexts.
Affiliation(s)
- Isabelle A Rosenthal
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- T&C Chen Brain-machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
- Lead Contact
| | - Luke Bashford
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- T&C Chen Brain-machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
- Biosciences Institute, Newcastle University, UK
| | - David Bjånes
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- T&C Chen Brain-machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
| | - Kelsie Pejsa
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- T&C Chen Brain-machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
| | - Brian Lee
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
| | - Charles Liu
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Rancho Los Amigos National Rehabilitation Center, Downey, CA 90242, USA
| | - Richard A Andersen
- Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- T&C Chen Brain-machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
| |
|
15
|
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258 PMCID: PMC10873612 DOI: 10.1073/pnas.2212887121] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 12/03/2023] [Indexed: 02/12/2024] Open
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
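A linear dynamical model with measured inputs, of the kind the abstract argues for, can be sketched in a few lines (illustrative matrices; the paper's method learns such models analytically from neural-behavioral data):

```python
# x[t+1] = A x[t] + B u[t]; y[t] = C x[t]. With zero initial state and no
# input, nothing is excited, making input-driven structure easy to see in
# this toy setting. Matrices are illustrative only.
import numpy as np

A = np.array([[0.9, -0.1], [0.1, 0.9]])             # intrinsic dynamics
B = np.array([[0.5], [0.0]])                        # measured-input matrix
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # neural readout

def simulate(u, T=50):
    x, X = np.zeros(2), []
    for t in range(T):
        x = A @ x + B @ u[t]
        X.append(x)
    return np.array(X)

x_on = simulate(np.ones((50, 1)))    # latent state driven by input
x_off = simulate(np.zeros((50, 1)))  # no input: latent stays at rest
y = x_on @ C.T                       # simulated neural observations

assert np.allclose(x_off, 0.0)
assert np.abs(x_on).max() > 0.0
assert y.shape == (50, 3)
```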
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| | - Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| | - Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
| |
|
16
|
Lee WH, Karpowicz BM, Pandarinath C, Rouse AG. Identifying distinct neural features between the initial and corrective phases of precise reaching using AutoLFADS. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.06.30.547252. [PMID: 38352314 PMCID: PMC10862710 DOI: 10.1101/2023.06.30.547252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/09/2024]
Abstract
Many initial movements require subsequent corrective movements, but how the motor cortex transitions to make corrections and how similar the encoding is to initial movements remain unclear. In our study, we explored how the brain's motor cortex signals both initial and corrective movements during a precision reaching task. We recorded a large population of neurons from two male rhesus macaques across multiple sessions to examine the neural firing rates during not only initial movements but also subsequent corrective movements. AutoLFADS, an autoencoder-based deep-learning model, was applied to provide a clearer picture of neurons' activity on individual corrective movements across sessions. Decoding of reach velocity generalized poorly from initial to corrective submovements. Unlike initial movements, it was challenging to predict the velocity of corrective movements using traditional linear methods in a single, global neural space. We identified several locations in the neural space where corrective submovements originated after the initial reaches, signifying firing rates different from the baseline before initial movements. To improve corrective movement decoding, we demonstrate that a state-dependent decoder incorporating the population firing rates at the initiation of correction improved performance, highlighting the diverse neural features of corrective movements. In summary, we show neural differences between initial and corrective submovements and how the neural activity encodes specific combinations of velocity and position. These findings are inconsistent with assumptions that neural correlations with kinematic features are global and independent, emphasizing that traditional methods often fall short in describing these diverse neural processes for online corrective movements.
Significance statement: We analyzed submovement neural population dynamics during precision reaching. Using an autoencoder-based deep-learning model, AutoLFADS, we examined neural activity on a single-trial basis. Our study shows distinct neural dynamics between initial and corrective submovements. We demonstrate the existence of unique neural features within each submovement class that encode complex combinations of position and reach direction. Our study also highlights the benefit of state-specific decoding strategies, which consider the neural firing rates at the onset of any given submovement, when decoding complex motor tasks such as corrective submovements.
|
17
|
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 PMCID: PMC11735406 DOI: 10.1038/s41551-023-01106-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
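DFINE's separation of tractable linear dynamics from a nonlinear manifold can be illustrated with a toy embedding; in the actual model both maps are learned jointly as neural networks rather than written down:

```python
# Linear rotation dynamics in a 2D latent, nonlinearly embedded into a 3D
# "neural" space. The third coordinate is a nonlinear function of the
# latent, yet stays constant because the rotation preserves radius.
import numpy as np

theta = 0.2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # linear latent dynamics

def manifold(x):
    return np.array([x[0], x[1], x[0] ** 2 + x[1] ** 2])  # nonlinear embedding

x = np.array([1.0, 0.0])
traj = []
for _ in range(10):
    x = A @ x
    traj.append(manifold(x))
traj = np.array(traj)

assert traj.shape == (10, 3)
assert np.allclose(traj[:, 2], 1.0)  # manifold curvature, linear dynamics
```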
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
| |
|
18
|
Meghanath G, Jimenez B, Makin JG. Inferring population dynamics in macaque cortex. J Neural Eng 2023; 20:056041. [PMID: 37875104 DOI: 10.1088/1741-2552/ad0651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2023] [Accepted: 10/24/2023] [Indexed: 10/26/2023]
Abstract
Objective. The proliferation of multi-unit cortical recordings over the last two decades, especially in macaques and during motor-control tasks, has generated interest in neural 'population dynamics': the time evolution of neural activity across a group of neurons working together. A good model of these dynamics should be able to infer the activity of unobserved neurons within the same population and of the observed neurons at future times. Accordingly, Pandarinath and colleagues have introduced a benchmark to evaluate models on these two (and related) criteria: four data sets, each consisting of firing rates from a population of neurons, recorded from macaque cortex during movement-related tasks. Approach. Since this is a discriminative-learning task, we hypothesize that general-purpose architectures based on recurrent neural networks (RNNs) trained with masking can outperform more 'bespoke' models. To capture long-distance dependencies without sacrificing the autoregressive bias of recurrent networks, we also propose a novel, hybrid architecture ('TERN') that augments the RNN with self-attention, as in transformer networks. Main results. Our RNNs outperform all published models on all four data sets in the benchmark. The hybrid architecture improves performance further still. Pure transformer models fail to achieve this level of performance, either in our work or that of other groups. Significance. We argue that the autoregressive bias imposed by RNNs is critical for achieving the highest levels of performance, and establish the state of the art on the neural latents benchmark. We conclude, however, by proposing that the benchmark be augmented with an alternative evaluation of latent dynamics that favors generative over discriminative models like the ones we propose in this report.
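The masking-based evaluation the benchmark uses (infer held-out neurons from observed ones) can be sketched with a trivial predictor on synthetic data; the paper trains RNNs for this inference, and the toy data below are constructed so the prediction is exact:

```python
# Hold out two neurons and infer their rates from the observed population.
# Synthetic population of identical neurons, so the mean predictor is exact.
import numpy as np

rng = np.random.default_rng(2)
rates = np.tile(rng.random((1, 20)), (8, 1))  # 8 identical neurons x 20 bins

mask = np.ones(8, dtype=bool)
mask[:2] = False                               # mask (hold out) two neurons

prediction = rates[mask].mean(axis=0)          # predict from observed neurons
assert np.allclose(prediction, rates[~mask])   # exact here by construction
```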
Affiliation(s)
- Ganga Meghanath
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
| | - Bryan Jimenez
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
| | - Joseph G Makin
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
| |
|
19
|
Kim TD, Luo TZ, Can T, Krishnamurthy K, Pillow JW, Brody CD. Flow-field inference from neural data using deep recurrent networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.11.14.567136. [PMID: 38014290 PMCID: PMC10680687 DOI: 10.1101/2023.11.14.567136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
Computations involved in processes such as decision-making, working memory, and motor control are thought to emerge from the dynamics governing the collective activity of neurons in large populations. But the estimation of these dynamics remains a significant challenge. Here we introduce Flow-field Inference from Neural Data using deep Recurrent networks (FINDR), an unsupervised deep learning method that can infer low-dimensional nonlinear stochastic dynamics underlying neural population activity. Using population spike train data from frontal brain regions of rats performing an auditory decision-making task, we demonstrate that FINDR outperforms existing methods in capturing the heterogeneous responses of individual neurons. We further show that FINDR can discover interpretable low-dimensional dynamics when it is trained to disentangle task-relevant and irrelevant components of the neural population activity. Importantly, the low-dimensional nature of the learned dynamics allows for explicit visualization of flow fields and attractor structures. We suggest FINDR as a powerful method for revealing the low-dimensional task-relevant dynamics of neural populations and their associated computations.
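The explicit flow-field visualization that low-dimensional models like FINDR allow can be illustrated by evaluating a hand-written 2D vector field on a grid and locating its fixed point (a toy system, not one inferred from data):

```python
# Evaluate dx/dt over a grid; the point with ~zero speed is a fixed point
# of the flow (here a stable point at the origin of a toy nonlinear system).
import numpy as np

def flow(x):
    return -x + 0.1 * np.array([x[1] ** 2, -x[0] * x[1]])

grid = [np.array([a, b], dtype=float) for a in (-1, 0, 1) for b in (-1, 0, 1)]
speeds = [np.linalg.norm(flow(x)) for x in grid]

assert np.isclose(min(speeds), 0.0)  # the origin is a fixed point
assert max(speeds) > 0.5             # flow is nonzero away from it
```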
Affiliation(s)
| | - Thomas Zhihao Luo
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
| | - Tankut Can
- School of Natural Sciences, Institute for Advanced Study, Princeton, NJ
| | - Kamesh Krishnamurthy
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ
| | - Jonathan W Pillow
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
| | - Carlos D Brody
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Howard Hughes Medical Institute, Princeton University, Princeton, NJ
| |
|
20
|
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/18/2023] [Indexed: 10/06/2023]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
| | - Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
|
21
|
Mirfathollahi A, Ghodrati MT, Shalchyan V, Zarrindast MR, Daliri MR. Decoding hand kinetics and kinematics using somatosensory cortex activity in active and passive movement. iScience 2023; 26:107808. [PMID: 37736040 PMCID: PMC10509302 DOI: 10.1016/j.isci.2023.107808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 07/20/2023] [Accepted: 08/30/2023] [Indexed: 09/23/2023] Open
Abstract
Area 2 of the primary somatosensory cortex (S1) encodes proprioceptive information of limbs. Several studies investigated the encoding of movement parameters in this area. However, the single-trial decoding of these parameters, which can provide additional knowledge about the amount of information available in sub-regions of this area about instantaneous limb movement, has not been well investigated. We decoded kinematic and kinetic parameters of active and passive hand movement during a center-out task using conventional and state-based decoders. Our results show that this area can be used to accurately decode position, velocity, force, moment, and joint angles of the hand. Kinematic parameters were decoded more accurately than kinetic parameters, and active trials were decoded more accurately than passive trials. Although the state-based decoder outperformed the conventional decoder in the active task, the opposite held in the passive task. These results can be used in intracortical micro-stimulation procedures to provide proprioceptive feedback to BCI subjects.
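Single-trial linear decoding of kinematics, as evaluated above, can be sketched with a least-squares readout on synthetic firing rates (not the study's conventional or state-based decoders):

```python
# Fit a linear readout from rates to hand velocity by least squares.
# Synthetic, noise-free data, so the true weights are recovered exactly.
import numpy as np

rng = np.random.default_rng(1)
rates = rng.random((200, 10))        # trials x neurons
W_true = rng.random((10, 2))         # hypothetical ground-truth readout
velocity = rates @ W_true            # trials x (vx, vy)

W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
assert np.allclose(W_hat, W_true, atol=1e-8)
```

With real recordings the fit would be regularized and cross-validated; this sketch only shows the shape of the decoding problem.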
Affiliation(s)
- Alavie Mirfathollahi
- Institute for Cognitive Science Studies (ICSS), Pardis 16583-44575, Tehran, Iran
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
| | - Mohammad Taghi Ghodrati
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
| | - Vahid Shalchyan
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
| | - Mohammad Reza Zarrindast
- Institute for Cognitive Science Studies (ICSS), Pardis 16583-44575, Tehran, Iran
- Department of Pharmacology, School of Medicine, Tehran University of Medical Sciences, Tehran 14166-34793, Iran
| | - Mohammad Reza Daliri
- Institute for Cognitive Science Studies (ICSS), Pardis 16583- 44575 Tehran, Iran
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
| |
|
22
|
Versteeg C, Sedler AR, McCart JD, Pandarinath C. Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity. ARXIV 2023:arXiv:2309.06402v1. [PMID: 37744459 PMCID: PMC10516113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
The advent of large-scale neural recordings has enabled new approaches that aim to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these neural dynamics cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which learns to capture latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, we show that ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
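The injectivity issue the abstract raises can be seen with toy matrices: a readout with a null space maps distinct latent states to identical neural activity, so latent changes need not affect the observations:

```python
# A non-injective linear readout: two different latent states produce the
# same "neural" output, because their difference lies in the null space.
import numpy as np

C = np.array([[1.0, 1.0]])  # 2D latent -> 1D readout (cannot be injective)
z1 = np.array([1.0, 0.0])
z2 = np.array([0.0, 1.0])

assert not np.allclose(z1, z2)      # distinct latent states...
assert np.allclose(C @ z1, C @ z2)  # ...indistinguishable after readout
```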
Affiliation(s)
- Christopher Versteeg
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
| | - Andrew R Sedler
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
| | - Jonathan D McCart
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
| | - Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
| |
|
23
|
Schneider S, Lee JH, Mathis MW. Learnable latent embeddings for joint behavioural and neural analysis. Nature 2023; 617:360-368. [PMID: 37138088 PMCID: PMC10172131 DOI: 10.1038/s41586-023-06031-6] [Citation(s) in RCA: 66] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 03/28/2023] [Indexed: 05/05/2023]
Abstract
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations [1-3]. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behaviour and neural data to uncover neural dynamics [3-5]. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis- or (self-supervised) discovery-driven manner to produce both consistent and high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and the inferred latents can be used for decoding. We validate its accuracy and demonstrate our tool's utility for both calcium and electrophysiology datasets, across sensory and motor tasks and in simple or complex behaviours across species. It allows leverage of single- and multi-session datasets for hypothesis testing or can be used label free. Lastly, we show that CEBRA can be used for the mapping of space, uncovering complex kinematic features, for the production of consistent latent spaces across two-photon and Neuropixels data, and can provide rapid, high-accuracy decoding of natural videos from visual cortex.
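The contrastive principle behind joint behavioural-neural embedding methods like CEBRA can be sketched with a generic InfoNCE-style objective; this is an illustrative sketch, not the library's actual implementation, and all names and vectors here are made up:

```python
import math

# Generic InfoNCE-style contrastive loss: an embedding is trained so that an
# anchor sample scores higher against its behaviourally similar "positive"
# than against dissimilar "negatives". Sketch only, not CEBRA's code.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def infonce_loss(anchor, positive, negatives, temperature=1.0):
    """-log softmax score of the positive among [positive] + negatives."""
    scores = [dot(anchor, positive) / temperature]
    scores += [dot(anchor, n) / temperature for n in negatives]
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return -(scores[0] - log_z)

# A well-aligned positive yields a lower loss than a misaligned one.
good = infonce_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
bad = infonce_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimising such a loss over many (anchor, positive, negatives) triplets is what pushes behaviourally similar samples together in the latent space.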
Affiliation(s)
- Steffen Schneider
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Jin Hwa Lee
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Mackenzie Weygandt Mathis
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
24
Sedler AR, Versteeg C, Pandarinath C. Expressive architectures enhance interpretability of dynamics-based neural population models. Neurons, Behavior, Data Analysis, and Theory 2023; 2023:10.51628/001c.73987. [PMID: 38699512 PMCID: PMC11065448 DOI: 10.51628/001c.73987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/05/2024]
Abstract
Artificial neural networks that can recover latent dynamics from recorded neural activity may provide a powerful avenue for identifying and interpreting the dynamical motifs underlying biological computation. Given that neural variance alone does not uniquely determine a latent dynamical system, interpretable architectures should prioritize accurate and low-dimensional latent dynamics. In this work, we evaluated the performance of sequential autoencoders (SAEs) in recovering latent chaotic attractors from simulated neural datasets. We found that SAEs with widely-used recurrent neural network (RNN)-based dynamics were unable to infer accurate firing rates at the true latent state dimensionality, and that larger RNNs relied upon dynamical features not present in the data. On the other hand, SAEs with neural ordinary differential equation (NODE)-based dynamics inferred accurate rates at the true latent state dimensionality, while also recovering latent trajectories and fixed point structure. Ablations reveal that this is mainly because NODEs (1) allow use of higher-capacity multi-layer perceptrons (MLPs) to model the vector field and (2) predict the derivative rather than the next state. Decoupling the capacity of the dynamics model from its latent dimensionality enables NODEs to learn the requisite low-D dynamics where RNN cells fail. Additionally, the fact that the NODE predicts derivatives imposes a useful autoregressive prior on the latent states. The suboptimal interpretability of widely-used RNN-based dynamics may motivate substitution for alternative architectures, such as NODE, that enable learning of accurate dynamics in low-dimensional latent spaces.
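The core NODE idea in this abstract, predicting the derivative of the latent state and integrating it forward rather than predicting the next state directly, can be sketched as follows (the vector field, weights, and step size are illustrative placeholders, not from the paper; the paper's MLP vector field is replaced here by a fixed tanh layer):

```python
import math

def vector_field(z):
    """dz/dt = tanh(W @ z); W is a fixed rotation-like matrix (illustrative)."""
    W = [[0.0, -1.0],
         [1.0, 0.0]]
    return [math.tanh(sum(w * zj for w, zj in zip(row, z))) for row in W]

def node_rollout(z0, steps, dt=0.01):
    """Forward-Euler integration of the latent trajectory: z <- z + dt * f(z)."""
    traj = [list(z0)]
    for _ in range(steps):
        z = traj[-1]
        dz = vector_field(z)  # the network predicts the derivative, not z(t+1)
        traj.append([zj + dt * dzj for zj, dzj in zip(z, dz)])
    return traj

traj = node_rollout([1.0, 0.0], steps=100)
```

Because consecutive states differ only by `dt * f(z)`, small-step integration acts as the autoregressive smoothness prior on latent trajectories that the abstract mentions.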
Affiliation(s)
- Andrew R. Sedler
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Christopher Versteeg
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Chethan Pandarinath
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
25
Patel AN, Sedler AR, Huang J, Pandarinath C, Gilja V. High-performance neural population dynamics modeling enabled by scalable computational infrastructure. Journal of Open Source Software 2023; 8:5023. [PMID: 37520691 PMCID: PMC10374446 DOI: 10.21105/joss.05023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/01/2023]
Affiliation(s)
- Aashish N Patel
- Department of Electrical and Computer Engineering, University of California San Diego, United States of America
- Institute for Neural Computation, University of California San Diego, United States of America
- Andrew R Sedler
- Center for Machine Learning, Georgia Institute of Technology, United States of America
- Department of Biomedical Engineering, Georgia Institute of Technology, United States of America
- Jingya Huang
- Department of Electrical and Computer Engineering, University of California San Diego, United States of America
- Chethan Pandarinath
- Center for Machine Learning, Georgia Institute of Technology, United States of America
- Department of Biomedical Engineering, Georgia Institute of Technology, United States of America
- Department of Neurosurgery, Emory University, United States of America
- These authors contributed equally
- Vikash Gilja
- Department of Electrical and Computer Engineering, University of California San Diego, United States of America
- These authors contributed equally
26
Chen ZS, Wilson MA. How our understanding of memory replay evolves. J Neurophysiol 2023; 129:552-580. [PMID: 36752404 PMCID: PMC9988534 DOI: 10.1152/jn.00454.2022] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 11/02/2022] [Revised: 01/20/2023] [Accepted: 01/20/2023] [Indexed: 02/09/2023]
Abstract
Memory reactivations and replay, widely reported in the hippocampus and cortex across species, have been implicated in memory consolidation, planning, and spatial and skill learning. Technological advances in electrophysiology, calcium imaging, and human neuroimaging techniques have enabled neuroscientists to measure large-scale neural activity with increasing spatiotemporal resolution and have provided opportunities for developing robust analytic methods to identify memory replay. In this article, we first review a large body of historically important and representative memory replay studies from the animal and human literature. We then discuss our current understanding of memory replay functions in learning, planning, and memory consolidation and further discuss the progress in computational modeling that has contributed to these improvements. Next, we review past and present analytic methods for replay analyses and discuss their limitations and challenges. Finally, looking ahead, we discuss some promising analytic methods for detecting nonstereotypical, behaviorally nondecodable structures from large-scale neural recordings. We argue that seamless integration of multisite recordings, real-time replay decoding, and closed-loop manipulation experiments will be essential for delineating the role of memory replay in a wide range of cognitive and motor functions.
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, United States
- Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, New York, United States
- Neuroscience Institute, New York University Grossman School of Medicine, New York, New York, United States
- Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York, United States
- Matthew A Wilson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States