1
Sihn D, Chae S, Kim SP. A method to find temporal structure of neuronal coactivity patterns with across-trial correlations. J Neurosci Methods 2024; 408:110172. [PMID: 38782124 DOI: 10.1016/j.jneumeth.2024.110172] [Received: 02/05/2024] [Revised: 05/08/2024] [Accepted: 05/17/2024] [Indexed: 05/25/2024]
Abstract
BACKGROUND: The across-trial correlation of neurons' coactivity patterns has emerged as important for information coding, but methods for finding its temporal structure remain largely unexplored. NEW METHOD: In the present study, we propose a method to find time clusters in which coactivity patterns of neurons are correlated across trials. We transform the multidimensional neural activity at each time point into a coactivity pattern of binary states and predict the coactivity patterns at other time points. We devise a method suited to these coactivity pattern predictions, which we call general event prediction. Cross-temporal prediction accuracy is then used to estimate across-trial correlations between coactivity patterns at two time points. We extract time clusters from the cross-temporal prediction accuracy with a modified k-means algorithm. RESULTS: The feasibility of the proposed method is verified through simulations with known ground truth. We apply the method to a calcium imaging dataset recorded from the motor cortex of mice and demonstrate time clusters of motor cortical coactivity patterns during a motor task. COMPARISON WITH EXISTING METHODS: While the existing cosine similarity method, which does not account for across-trial correlation, shows temporal structure only for contralateral neural responses, the proposed method reveals it for both contralateral and ipsilateral responses, demonstrating the effect of across-trial correlations. CONCLUSIONS: This study introduces a novel method for measuring the temporal structure of neuronal ensemble activity.
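A toy sketch of the pipeline the NEW METHOD paragraph describes: binarize activity into coactivity patterns, score cross-temporal agreement across trials, then group time bins. The agreement score and the thresholding step below are simplified stand-ins for the paper's general event prediction and modified k-means, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-based recordings: trials x time x neurons,
# with an early and a late epoch whose coactivity differs.
n_trials, n_time, n_neurons = 40, 20, 12
rates = rng.random((n_trials, n_time, n_neurons))
rates[:, :10, :6] += 0.8    # early epoch: neurons 0-5 coactive
rates[:, 10:, 6:] += 0.8    # late epoch: neurons 6-11 coactive

# Binary coactivity patterns (active = above that neuron's median).
binary = rates > np.median(rates, axis=(0, 1))

# Cross-temporal agreement across trials: how well does the pattern at
# time t predict the pattern at time s? (A simplified stand-in for the
# paper's cross-temporal prediction accuracy.)
acc = np.zeros((n_time, n_time))
for t in range(n_time):
    for s in range(n_time):
        acc[t, s] = np.mean(binary[:, t, :] == binary[:, s, :])

# Crude time clustering: threshold one row of the accuracy matrix
# (a stand-in for the paper's modified k-means over the full matrix).
labels = (acc[0] > acc[0].mean()).astype(int)
print(labels)
```

On this synthetic example the first ten time bins fall into one cluster and the last ten into another, mirroring the two planted coactivity epochs.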
Affiliation(s)
- Duho Sihn
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, the Republic of Korea
- Soyoung Chae
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, the Republic of Korea
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, the Republic of Korea.
2
Chen Y, Chien J, Dai B, Lin D, Chen ZS. Identifying behavioral links to neural dynamics of multifiber photometry recordings in a mouse social behavior network. J Neural Eng 2024; 21:10.1088/1741-2552/ad5702. [PMID: 38861996 PMCID: PMC11246699 DOI: 10.1088/1741-2552/ad5702] [Received: 01/09/2024] [Accepted: 06/11/2024] [Indexed: 06/13/2024]
Abstract
Objective. Distributed hypothalamic-midbrain neural circuits help orchestrate complex behavioral responses during social interactions. Given rapid advances in optical imaging, how population-averaged neural activity measured by multi-fiber photometry (MFP) of calcium fluorescence signals correlates with social behaviors is a fundamental question. This paper aims to investigate the correspondence between MFP data and social behaviors. Approach. We propose a state-space analysis framework to characterize mouse MFP data based on dynamic latent variable models, which include a continuous-state linear dynamical system and a discrete-state hidden semi-Markov model. We validate these models on extensive MFP recordings during aggressive and mating behaviors in male-male and male-female interactions, respectively. Main results. Our results show that these models are capable of capturing both temporal behavioral structure and associated neural states, and produce interpretable latent states. Our approach is also validated in computer simulations in the presence of known ground truth. Significance. Overall, these analysis approaches provide a state-space framework to examine neural dynamics underlying social behaviors and reveal mechanistic insights into the relevant networks.
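The continuous-state half of the framework can be illustrated with a minimal linear dynamical system sketch. The model, dimensions, and noise levels below are invented for illustration, and the sketch omits the paper's hidden semi-Markov component entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear dynamical system x_{t+1} = A x_t + w_t, y_t = C x_t + v_t,
# standing in for population-averaged photometry signals.
A = np.array([[0.95, 0.1], [-0.1, 0.95]])
C = rng.standard_normal((8, 2))          # 8 "fibers", 2 latent states
T = 500
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + 0.05 * rng.standard_normal(2)
y = x @ C.T + 0.05 * rng.standard_normal((T, 8))

# Recover latents by PCA, then fit the transition matrix by least squares.
y0 = y - y.mean(0)
_, _, Vt = np.linalg.svd(y0, full_matrices=False)
z = y0 @ Vt[:2].T                        # 2-D latent estimate
M, *_ = np.linalg.lstsq(z[:-1], z[1:], rcond=None)

# Eigenvalue magnitudes are invariant to the unknown latent rotation, so
# the estimated dynamics should match the true |eigenvalues| (~0.955).
ev_hat = np.sort(np.abs(np.linalg.eigvals(M.T)))
print(np.abs(np.linalg.eigvals(A)), ev_hat)
```

Comparing eigenvalue magnitudes rather than the matrices themselves sidesteps the fact that PCA recovers the latent space only up to an invertible linear transform.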
Affiliation(s)
- Yibo Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Program in Artificial Intelligence, University of Science and Technology of China, Hefei, Anhui, China
- Equal contributions (Y.C. and J.C.)
- Jonathan Chien
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Equal contributions (Y.C. and J.C.)
- Bing Dai
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Dayu Lin
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Zhe Sage Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
3
Manley J, Lu S, Barber K, Demas J, Kim H, Meyer D, Traub FM, Vaziri A. Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number. Neuron 2024; 112:1694-1709.e5. [PMID: 38452763 PMCID: PMC11098699 DOI: 10.1016/j.neuron.2024.02.011] [Received: 11/23/2022] [Revised: 05/18/2023] [Accepted: 02/14/2024] [Indexed: 03/09/2024]
Abstract
The brain's remarkable properties arise from the collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, can such low-dimensional representations truly explain the vast range of brain activity, and if not, what is the appropriate resolution and scale of recording to capture them? Imaging neural activity at cellular resolution and near-simultaneously across the mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number in populations up to 1 million neurons. Although half of the neural variance is contained within sixteen dimensions correlated with behavior, our discovered scaling of dimensionality corresponds to an ever-increasing number of neuronal ensembles without immediate behavioral or sensory correlates. The activity patterns underlying these higher dimensions are fine-grained and cortex-wide, highlighting that large-scale, cellular-resolution recording is required to uncover the full substrates of neuronal computations.
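The idea of dimensionality growing with the number of sampled neurons can be sketched with a toy participation-ratio estimate. The power-law latent spectrum and random readout below are assumptions chosen for illustration; this is not the paper's shared-variance analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent signals with a slowly decaying variance spectrum: many weak
# dimensions, so a small neuron sample cannot see them all.
T, n_latent = 2000, 256
spectrum = np.arange(1, n_latent + 1) ** -0.5
latents = rng.standard_normal((T, n_latent)) * np.sqrt(spectrum)

def participation_ratio(X):
    """Effective dimensionality: (sum eig)^2 / sum eig^2 of the covariance."""
    eig = np.linalg.eigvalsh(np.cov(X.T))
    return eig.sum() ** 2 / (eig ** 2).sum()

# "Record" progressively more neurons via a random linear readout.
dims = []
for n_neurons in (32, 128, 512):
    W = rng.standard_normal((n_latent, n_neurons)) / np.sqrt(n_latent)
    dims.append(participation_ratio(latents @ W))
print(dims)  # estimated dimensionality rises with neuron count
```

Because the latent spectrum has no sharp cutoff, each larger sample of "neurons" exposes additional weak dimensions, which is the qualitative scaling behavior the abstract reports.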
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Sihao Lu
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Kevin Barber
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- David Meyer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA.
4
Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. [PMID: 38513289 DOI: 10.1088/1741-2552/ad3678] [Received: 11/15/2023] [Accepted: 03/21/2024] [Indexed: 03/23/2024]
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
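The matched-filter baseline the abstract starts from can be sketched for the purely Gaussian case. The MED itself extends this to joint Gaussian and point-process likelihoods, which this toy example does not attempt; the template and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Known template with unknown delay in Gaussian noise: the classical
# matched-filter detection setting described in the abstract.
template = np.hanning(25)                 # known event waveform
T, true_onset = 400, 137
signal = 0.2 * rng.standard_normal(T)
signal[true_onset:true_onset + 25] += template

# Matched filter: cross-correlate with the template and take the lag
# with the maximal response as the detected event time.
scores = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(scores))
print(detected)  # close to the true onset of 137
```

With a multimodal recording, the analogous step would replace this single cross-correlation with the combined Gaussian and point-process log-likelihood the paper derives.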
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
5
Ali YH, Bodkin K, Rigotti-Thompson M, Patel K, Card NS, Bhaduri B, Nason-Tomaszewski SR, Mifsud DM, Hou X, Nicolas C, Allcroft S, Hochberg LR, Au Yong N, Stavisky SD, Miller LE, Brandman DM, Pandarinath C. BRAND: a platform for closed-loop experiments with deep network models. J Neural Eng 2024; 21:026046. [PMID: 38579696 PMCID: PMC11021878 DOI: 10.1088/1741-2552/ad3b3a] [Received: 08/11/2023] [Revised: 01/27/2024] [Accepted: 04/05/2024] [Indexed: 04/07/2024]
Abstract
Objective. Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++). Approach. To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. Main results. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems. Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
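The node-and-stream pattern described here can be imitated in a few lines. This sketch uses Python threads and in-process queues purely as stand-ins for BRAND's Linux processes and Redis streams; none of the names below come from the BRAND API.

```python
import queue
import threading

# Two "streams" connecting three stages of a toy decoding graph.
acq_to_decode = queue.Queue()
decode_to_task = queue.Queue()

def acquisition_node(n_chunks):
    """Produce fake fixed-size data chunks, then an end-of-stream marker."""
    for i in range(n_chunks):
        acq_to_decode.put({"chunk": i, "samples": [i] * 4})
    acq_to_decode.put(None)

def decoder_node():
    """Consume chunks and emit 'decoded' values (stand-in for an ANN pass)."""
    while (msg := acq_to_decode.get()) is not None:
        decode_to_task.put(sum(msg["samples"]))
    decode_to_task.put(None)

threads = [threading.Thread(target=acquisition_node, args=(5,)),
           threading.Thread(target=decoder_node)]
for t in threads:
    t.start()

# The "task" stage: collect decoder outputs from the second stream.
out = []
while (v := decode_to_task.get()) is not None:
    out.append(v)
for t in threads:
    t.join()
print(out)  # → [0, 4, 8, 12, 16]
```

The key property this mimics is the asynchronous decoupling: each stage runs at its own pace and only synchronizes through the streams, which is what lets acquisition, decoding, and task control operate at different timescales.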
Affiliation(s)
- Yahia H Ali
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Kevin Bodkin
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Mattia Rigotti-Thompson
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Kushant Patel
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Nicholas S Card
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Bareesh Bhaduri
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Samuel R Nason-Tomaszewski
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Domenick M Mifsud
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Xianda Hou
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Claire Nicolas
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- Shane Allcroft
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
- Leigh R Hochberg
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Veterans Affairs Rehabilitation Research & Development Center for Neurorestoration and Neurotechnology, Providence VA Medical Center, Providence, RI, United States of America
- Nicholas Au Yong
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
- Sergey D Stavisky
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
- David M Brandman
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
6
Tlaie A, Shapcott K, van der Plas TL, Rowland J, Lees R, Keeling J, Packer A, Tiesinga P, Schölvinck ML, Havenith MN. What does the mean mean? A simple test for neuroscience. PLoS Comput Biol 2024; 20:e1012000. [PMID: 38640119 PMCID: PMC11062559 DOI: 10.1371/journal.pcbi.1012000] [Received: 10/03/2023] [Revised: 05/01/2024] [Accepted: 03/12/2024] [Indexed: 04/21/2024]
Abstract
Trial-averaged metrics, e.g. tuning curves or population response vectors, are a ubiquitous way of characterizing neuronal activity. But how relevant are such trial-averaged responses to neuronal computation itself? Here we present a simple test to estimate whether average responses reflect aspects of neuronal activity that contribute to neuronal processing. The test probes two assumptions implicitly made whenever average metrics are treated as meaningful representations of neuronal activity. Reliability: neuronal responses repeat consistently enough across trials that they convey a recognizable reflection of the average response to downstream regions. Behavioural relevance: if a single-trial response is more similar to the average template, it is more likely to evoke correct behavioural responses. We apply this test to two data sets: (1) two-photon recordings in primary somatosensory cortices (S1 and S2) of mice trained to detect optogenetic stimulation in S1; and (2) electrophysiological recordings from 71 brain areas in mice performing a contrast discrimination task. Under the highly controlled settings of Data set 1, both assumptions were largely fulfilled. In contrast, the less restrictive paradigm of Data set 2 met neither assumption. Simulations predict that the larger diversity of neuronal response preferences, rather than higher cross-trial reliability, drives the better performance of Data set 1. We conclude that when behaviour is less tightly restricted, average responses do not seem particularly relevant to neuronal computation, potentially because information is encoded more dynamically. Most importantly, we encourage researchers to apply this simple test of computational relevance whenever using trial-averaged neuronal metrics, in order to gauge how representative cross-trial averages are in a given context.
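Both assumptions can be probed on synthetic data in a few lines. The data, the similarity measure, and the "correct trial" rule below are all invented for illustration and are not the paper's exact test.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic trials: a fixed response profile plus trial-to-trial noise.
n_trials, n_neurons = 200, 30
template = rng.standard_normal(n_neurons)       # "true" response profile
trials = template + rng.standard_normal((n_trials, n_neurons))
avg = trials.mean(axis=0)

# Reliability: correlate each single-trial response with the average.
sim = np.array([np.corrcoef(tr, avg)[0, 1] for tr in trials])

# Behavioural relevance: label trials "correct" when they land close to
# the true template, then ask whether correct trials look more average.
dist = np.linalg.norm(trials - template, axis=1)
correct = dist < np.median(dist)
print(sim.mean(), sim[correct].mean(), sim[~correct].mean())
```

In this construction both assumptions hold by design: single trials correlate substantially with the average, and "correct" trials resemble it more. On real data either quantity can collapse, which is exactly what the test is meant to reveal.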
Affiliation(s)
- Alejandro Tlaie
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Technical University of Madrid, Madrid, Spain
- Thijs L. van der Plas
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- James Rowland
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Robert Lees
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Joshua Keeling
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Adam Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
- Paul Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Nijmegen, The Netherlands
- Martha N. Havenith
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
7
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. [PMID: 38016450 PMCID: PMC10913727 DOI: 10.1088/1741-2552/ad1053] [Received: 06/02/2023] [Revised: 10/23/2023] [Accepted: 11/28/2023] [Indexed: 11/30/2023]
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while better identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
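The modeling setting, a shared latent state observed through Gaussian "field" channels and Poisson spike counts, can be simulated in a few lines. The naive z-score-and-stack fusion below is a stand-in for illustration; it is not the paper's analytical multiscale SID derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Shared 2-D latent state with slow linear dynamics.
T = 3000
A = np.array([[0.97, 0.05], [-0.05, 0.97]])
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + 0.1 * rng.standard_normal(2)

# Two modalities driven by the same latent: Gaussian field channels and
# Poisson spike counts with a log-linear rate.
C_field = rng.standard_normal((6, 2))
fields = x @ C_field.T + 0.1 * rng.standard_normal((T, 6))
C_spk = 0.5 * rng.standard_normal((10, 2))
spikes = rng.poisson(np.exp(0.5 + x @ C_spk.T))

# Naive multimodal fusion: z-score each modality, stack, and take a
# common SVD so both contribute to the shared-subspace estimate.
stacked = np.hstack([(m - m.mean(0)) / m.std(0) for m in (fields, spikes)])
sv = np.linalg.svd(stacked, compute_uv=False)
print(sv[:4] / sv[0])  # two dominant directions: the shared 2-D latent
```

Even this crude fusion recovers the two shared dimensions across the discrete and continuous modalities; the paper's contribution is doing the equivalent identification with proper Poisson-Gaussian statistics and valid noise constraints.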
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
8
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258 PMCID: PMC10873612 DOI: 10.1073/pnas.2212887121] [Received: 07/28/2022] [Accepted: 12/03/2023] [Indexed: 02/12/2024]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
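The core pitfall, temporally structured inputs masquerading as intrinsic dynamics, can be reproduced in a one-dimensional toy model. The system and numbers below are invented for illustration, and the fix shown is plain least squares, not the paper's learning method.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fit x_{t+1} = A x_t (+ B u_t) with and without the measured input and
# compare the recovered "intrinsic" dynamics.
T = 4000
A_true, B_true = 0.5, 1.0
u = np.sin(np.arange(T) * 0.05)          # slow, temporally structured input
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = A_true * x[t] + B_true * u[t] + 0.1 * rng.standard_normal()

# Ignoring the input: the slow input dynamics leak into the estimate of A,
# which looks far more persistent than the true intrinsic dynamics.
A_no_input = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

# Accounting for the input: ordinary least squares on [x_t, u_t] recovers
# the true intrinsic dynamics.
X = np.column_stack([x[:-1], u[:-1]])
A_with_input, B_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]
print(A_no_input, A_with_input)  # biased (~1) vs. unbiased (~0.5)
```

The input-blind fit confuses the input's slow oscillation with intrinsic persistence, which is the misinterpretation the abstract warns against when inputs are left unmodeled.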
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
9
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988] [Indexed: 02/20/2024]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
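The "sparse in time, orthogonal dimensions" idea can be illustrated with a toy alternation between soft-thresholding factor time courses and an orthogonal Procrustes refit. This is an invented sketch, not the authors' SCA algorithm, and all data and constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two temporally sparse latent factors mixed into 20 "neurons".
T = 300
f1 = np.zeros(T); f1[50:80] = np.hanning(30)      # factor active early
f2 = np.zeros(T); f2[200:240] = np.hanning(40)    # factor active late
F_true = np.column_stack([f1, f2])
X = F_true @ rng.standard_normal((2, 20)) + 0.02 * rng.standard_normal((T, 20))

# Alternate: soft-threshold the factor time courses (sparsity in time),
# then refit the nearest orthonormal mixing (Procrustes via SVD).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:2].T                                      # init: top-2 PCA dims
for _ in range(20):
    F = X @ W
    F = np.sign(F) * np.maximum(np.abs(F) - 0.1, 0.0)   # sparsify in time
    Us, _, Vs = np.linalg.svd(X.T @ F, full_matrices=False)
    W = Us @ Vs                                   # nearest orthonormal W

F = X @ W
active = np.abs(F) > 0.2
print(active.sum(axis=0))
```

Because the planted factors occupy disjoint time windows, the recovered factors are active only inside those windows, giving the kind of temporally parcellated description the abstract reports for neural data.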
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
| | - Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
| | - Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
| |
Collapse
|
10
|
Chen S, Liu Y, Wang ZA, Colonell J, Liu LD, Hou H, Tien NW, Wang T, Harris T, Druckmann S, Li N, Svoboda K. Brain-wide neural activity underlying memory-guided movement. Cell 2024; 187:676-691.e16. [PMID: 38306983 DOI: 10.1016/j.cell.2023.12.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 09/19/2023] [Accepted: 12/27/2023] [Indexed: 02/04/2024]
Abstract
Behavior relies on activity in structured neural circuits that are distributed across the brain, but most experiments probe neurons in a single area at a time. Using multiple Neuropixels probes, we recorded from multi-regional loops connected to the anterior lateral motor cortex (ALM), a circuit node mediating memory-guided directional licking. Neurons encoding sensory stimuli, choices, and actions were distributed across the brain. However, choice coding was concentrated in the ALM and subcortical areas receiving input from the ALM in an ALM-dependent manner. Diverse orofacial movements were encoded in the hindbrain; midbrain; and, to a lesser extent, forebrain. Choice signals were first detected in the ALM and the midbrain, followed by the thalamus and other brain areas. At movement initiation, choice-selective activity collapsed across the brain, followed by new activity patterns driving specific actions. Our experiments provide the foundation for neural circuit models of decision-making and movement initiation.
Affiliation(s)
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Yi Liu
- Stanford University, Palo Alto, CA, USA
- Jennifer Colonell
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Liu D Liu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Baylor College of Medicine, Houston, TX, USA
- Han Hou
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Allen Institute for Neural Dynamics, Seattle, WA, USA
- Nai-Wen Tien
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tim Wang
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Allen Institute for Neural Dynamics, Seattle, WA, USA
- Timothy Harris
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Johns Hopkins University, Baltimore, MD, USA
- Shaul Druckmann
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Stanford University, Palo Alto, CA, USA
- Nuo Li
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Baylor College of Medicine, Houston, TX, USA
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Allen Institute for Neural Dynamics, Seattle, WA, USA
|
11
|
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. [PMID: 38348287 PMCID: PMC10859875 DOI: 10.3389/fncom.2024.1273053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Accepted: 01/09/2024] [Indexed: 02/15/2024] Open
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
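The core idea of a time-varying Poisson GLM can be illustrated with a minimal sketch (our own toy construction, not the paper's estimator): let the spike count in each time bin depend on the stimulus through a weight that itself drifts over the trial, and fit a separate Poisson regression per bin across trials.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_bins = 400, 20
w_true = np.linspace(0.2, 1.5, n_bins)           # sensitivity drifts over the trial
x = rng.uniform(-1, 1, size=(n_trials, n_bins))  # stimulus value per trial and bin
y = rng.poisson(np.exp(0.1 + w_true * x))        # spike counts under a log link

def fit_bin(xb, yb, lr=0.05, iters=2000):
    """Gradient ascent on the (concave) Poisson log-likelihood for one time bin."""
    b = w = 0.0
    for _ in range(iters):
        mu = np.exp(b + w * xb)                  # conditional rate
        b += lr * np.mean(yb - mu)               # d logL / d b
        w += lr * np.mean((yb - mu) * xb)        # d logL / d w
    return w

# A time-varying weight profile, recovered bin by bin.
w_hat = np.array([fit_bin(x[:, t], y[:, t]) for t in range(n_bins)])
```

Fitting per bin is the crudest possible "time-varying" extension; the reviewed methods instead share statistical strength across time, but the recovered drift in `w_hat` shows what a time-invariant GLM would miss.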
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
|
12
|
Manley J, Demas J, Kim H, Traub FM, Vaziri A. Simultaneous, cortex-wide and cellular-resolution neuronal population dynamics reveal an unbounded scaling of dimensionality with neuron number. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.15.575721. [PMID: 38293036 PMCID: PMC10827059 DOI: 10.1101/2024.01.15.575721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
The brain's remarkable properties arise from collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, what would be the biological utility of such a redundant and metabolically costly encoding scheme and what is the appropriate resolution and scale of neural recording to understand brain function? Imaging the activity of one million neurons at cellular resolution and near-simultaneously across mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number. While half of the neural variance lies within sixteen behavior-related dimensions, we find this unbounded scaling of dimensionality to correspond to an ever-increasing number of internal variables without immediate behavioral correlates. The activity patterns underlying these higher dimensions are fine-grained and cortex-wide, highlighting that large-scale recording is required to uncover the full neural substrates of internal and potentially cognitive processes.
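The scaling claim above can be made concrete with a toy simulation (entirely our own construction, not the paper's analysis): when latent variance follows a slowly decaying power-law spectrum, the number of principal components needed to explain 90% of the variance keeps growing as more neurons are sampled, rather than saturating at a fixed "latent" dimension.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 1000, 256
spectrum = 1.0 / np.arange(1, N + 1)                # slowly decaying power law
latents = rng.normal(size=(T, N)) * np.sqrt(spectrum)
X = latents @ rng.normal(size=(N, N)) / np.sqrt(N)  # mix latents into "neurons"

def dims_for_variance(X, frac=0.9):
    """Number of principal components needed to reach `frac` of the variance."""
    s = np.linalg.svd(X - X.mean(0), compute_uv=False) ** 2
    return int(np.searchsorted(np.cumsum(s) / s.sum(), frac)) + 1

# Dimensionality as a function of how many neurons we record.
dims = [dims_for_variance(X[:, :n]) for n in (32, 64, 128, 256)]
```

With a 1/k spectrum the 90%-variance dimensionality grows with every doubling of the sampled population, which is the signature of "unbounded scaling" in this toy setting.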
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Lead Contact
|
13
|
Chen Y, Chien J, Dai B, Lin D, Chen ZS. Identifying behavioral links to neural dynamics of multifiber photometry recordings in a mouse social behavior network. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.12.25.573308. [PMID: 38234793 PMCID: PMC10793434 DOI: 10.1101/2023.12.25.573308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2024]
Abstract
Distributed hypothalamic-midbrain neural circuits orchestrate complex behavioral responses during social interactions. How population-averaged neural activity measured by multi-fiber photometry (MFP) for calcium fluorescence signals correlates with social behaviors is a fundamental question. We propose a state-space analysis framework to characterize mouse MFP data based on dynamic latent variable models, which include continuous-state linear dynamical system (LDS) and discrete-state hidden semi-Markov model (HSMM). We validate these models on extensive MFP recordings during aggressive and mating behaviors in male-male and male-female interactions, respectively. Our results show that these models are capable of capturing both temporal behavioral structure and associated neural states. Overall, these analysis approaches provide an unbiased strategy to examine neural dynamics underlying social behaviors and reveals mechanistic insights into the relevant networks.
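The continuous-state building block mentioned above, the linear dynamical system, is typically inferred with a Kalman filter. The following is a generic one-dimensional textbook sketch (noise scales and dynamics are made up, and this is not the authors' pipeline):

```python
import numpy as np

def kalman_filter(y, a=0.95, c=1.0, q=0.1, r=0.5):
    """Filtered posterior mean of x_t given y_1..t for the LDS
    x_t = a*x_{t-1} + w, y_t = c*x_t + v, with w~N(0,q), v~N(0,r)."""
    m, p = 0.0, 1.0
    means = []
    for obs in y:
        m, p = a * m, a * a * p + q          # predict
        k = p * c / (c * c * p + r)          # Kalman gain
        m = m + k * (obs - c * m)            # update mean with innovation
        p = (1 - k * c) * p                  # update variance
        means.append(m)
    return np.array(means)

# Simulate a matched LDS and filter its noisy observations.
rng = np.random.default_rng(3)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.95 * x[t - 1] + rng.normal(scale=np.sqrt(0.1))
y = x + rng.normal(scale=np.sqrt(0.5), size=300)
m = kalman_filter(y)
```

Because the filter pools information over time, its estimate of the latent state is more accurate than the raw observations; the HSMM component of the framework plays the analogous role for discrete behavioral states.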
Affiliation(s)
- Yibo Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Program in Artificial Intelligence, University of Science and Technology of China, Hefei, Anhui, China
- Jonathan Chien
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Bing Dai
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Dayu Lin
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Zhe Sage Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
|
14
|
Sellers KK, Cohen JL, Khambhati AN, Fan JM, Lee AM, Chang EF, Krystal AD. Closed-loop neurostimulation for the treatment of psychiatric disorders. Neuropsychopharmacology 2024; 49:163-178. [PMID: 37369777 PMCID: PMC10700557 DOI: 10.1038/s41386-023-01631-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 05/31/2023] [Accepted: 06/02/2023] [Indexed: 06/29/2023]
Abstract
Despite increasing prevalence and huge personal and societal burden, psychiatric diseases still lack treatments which can control symptoms for a large fraction of patients. Increasing insight into the neurobiology underlying these diseases has demonstrated wide-ranging aberrant activity and functioning in multiple brain circuits and networks. Together with varied presentation and symptoms, this makes one-size-fits-all treatment a challenge. There has been a resurgence of interest in the use of neurostimulation as a treatment for psychiatric diseases. Initial studies using continuous open-loop stimulation, in which clinicians adjusted stimulation parameters during patient visits, showed promise but also mixed results. Given the periodic nature and fluctuations of symptoms often observed in psychiatric illnesses, the use of device-driven closed-loop stimulation may provide more effective therapy. The use of a biomarker, which is correlated with specific symptoms, to deliver stimulation only during symptomatic periods allows for the personalized therapy needed for such heterogeneous disorders. Here, we provide the reader with background motivating the use of closed-loop neurostimulation for the treatment of psychiatric disorders. We review foundational studies of open- and closed-loop neurostimulation for neuropsychiatric indications, focusing on deep brain stimulation, and discuss key considerations when designing and implementing closed-loop neurostimulation.
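The biomarker-gated principle described above, stimulate only during symptomatic periods rather than continuously, can be stated in a few lines. Everything here (the signal, the threshold, the controller) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical symptom biomarker: slow oscillation plus measurement noise.
biomarker = np.abs(np.sin(0.02 * np.arange(1000))) + 0.1 * rng.random(1000)

def closed_loop(biomarker, threshold=0.8):
    """Binary stimulation schedule: on only while the biomarker exceeds
    threshold, unlike open-loop stimulation, which is always on."""
    return (biomarker > threshold).astype(int)

stim = closed_loop(biomarker)
duty_cycle = stim.mean()   # fraction of time stimulation is delivered
```

A real system adds hysteresis, ramping, and safety limits; the point of the sketch is only that the duty cycle is driven by the patient's own state rather than a clinician-set schedule.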
Affiliation(s)
- Kristin K Sellers
- Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Joshua L Cohen
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Ankit N Khambhati
- Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Joline M Fan
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Department of Neurology, University of California, San Francisco, CA, USA
- A Moses Lee
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Andrew D Krystal
- Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
|
15
|
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 DOI: 10.1038/s41551-023-01106-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
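The modelling idea, nonlinearity confined to a manifold while the dynamics stay linear on it, has a tiny illustration (our own toy, not DFINE itself): observations live on a circle, but the latent coordinate on that circle follows simple linear dynamics, so after "unwrapping" the manifold a plain AR(1) fit captures the dynamics well.

```python
import numpy as np

rng = np.random.default_rng(4)
# Latent coordinate with linear drift dynamics: z_t = z_{t-1} + 0.05 + noise.
z = np.cumsum(0.05 + 0.01 * rng.normal(size=500))
# Observations lie on a nonlinear manifold (a circle) plus sensor noise.
Y = np.c_[np.cos(z), np.sin(z)] + 0.01 * rng.normal(size=(500, 2))

# "Manifold" step: recover the coordinate on the circle.
z_hat = np.unwrap(np.arctan2(Y[:, 1], Y[:, 0]))

# "Dynamics" step: fit a linear AR(1) with drift, z_t ~ a*z_{t-1} + b.
A = np.c_[z_hat[:-1], np.ones(len(z_hat) - 1)]
a, b = np.linalg.lstsq(A, z_hat[1:], rcond=None)[0]
resid = z_hat[1:] - (a * z_hat[:-1] + b)
```

A purely linear model fit directly to `Y` would fail here; splitting the problem into manifold recovery plus linear latent dynamics is the separation the abstract describes, with DFINE learning both stages jointly from data.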
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
|
16
|
Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. [PMID: 38083862 PMCID: PMC10714100 DOI: 10.1088/1741-2552/ad038d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Revised: 09/15/2023] [Accepted: 10/16/2023] [Indexed: 12/18/2023]
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that can enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations, called the Poisson Cubature Filter (PCF), and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data, and unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes, unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
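The deterministic-sampling idea behind cubature filtering is worth a small concrete check. Below is the generic third-degree spherical-radial rule (not the paper's full PCF): expectations under a Gaussian are approximated by 2n equally weighted points placed at the mean plus or minus sqrt(n) times the columns of a covariance square root.

```python
import numpy as np

def cubature_expectation(f, mean, cov):
    """Third-degree spherical-radial cubature: approximate E[f(x)] for
    x ~ N(mean, cov) using 2n equally weighted deterministic points."""
    n = len(mean)
    L = np.linalg.cholesky(cov)                       # covariance square root
    pts = [mean + np.sqrt(n) * L[:, i] for i in range(n)]
    pts += [mean - np.sqrt(n) * L[:, i] for i in range(n)]
    return sum(f(p) for p in pts) / (2 * n)

# Closed-form check with a Poisson-style exponential rate:
# for x ~ N(mu, var), E[exp(x)] = exp(mu + var/2).
mu, var = 0.3, 0.2
approx = cubature_expectation(lambda p: np.exp(p[0]),
                              np.array([mu]), np.array([[var]]))
exact = np.exp(mu + var / 2)
```

This is exactly the kind of Gaussian expectation of a nonlinear (exponential) function that arises in Poisson filtering, which is why a cubature rule is a natural substitute for the Laplace approximation.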
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
|
17
|
Tian Y, Yin J, Wang C, He Z, Xie J, Feng X, Zhou Y, Ma T, Xie Y, Li X, Yang T, Ren C, Li C, Zhao Z. An Ultraflexible Electrode Array for Large-Scale Chronic Recording in the Nonhuman Primate Brain. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2302333. [PMID: 37870175 PMCID: PMC10667845 DOI: 10.1002/advs.202302333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 09/08/2023] [Indexed: 10/24/2023]
Abstract
Single-unit (SU) recording in nonhuman primates (NHPs) is indispensable in the quest to understand how the brain works, yet electrodes currently used for the NHP brain are limited in signal longevity, stability, and spatial coverage. Using new structural materials, microfabrication, and penetration techniques, we develop a mechanically robust, ultraflexible, 1-µm-thin electrode array (MERF) that enables pial penetration and high-density, large-scale, and chronic recording of neurons along both vertical and horizontal cortical axes in the nonhuman primate brain. Recording from three monkeys yields 2,913 SUs from 1,065 functional recording channels (up to 240 days), with some SUs tracked for up to 2 months. Recording from the primary visual cortex (V1) reveals that neurons with similar orientation preferences for visual stimuli exhibited higher spike correlation. Furthermore, simultaneously recorded neurons in different cortical layers of the primary motor cortex (M1) show preferential firing for hand movements of different directions. Finally, it is shown that a linear decoder trained with neuronal spiking activity across M1 layers during the monkey's hand movements can be used to achieve on-line control of cursor movement. Thus, the MERF electrode array offers a new tool for basic neuroscience studies and brain-machine interface (BMI) applications in the primate brain.
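The linear cursor decoder mentioned at the end has a standard form that is easy to sketch. The following uses generic ridge regression on synthetic data; all shapes, rates, and weights are invented for illustration and this is not the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(5)
n_samples, n_units = 2000, 50

# Synthetic binned spike counts and a 2-D cursor velocity linearly related
# to them (hypothetical ground-truth mapping W_true).
W_true = rng.normal(size=(n_units, 2))
spikes = rng.poisson(2.0, size=(n_samples, n_units)).astype(float)
vel = spikes @ W_true + 0.5 * rng.normal(size=(n_samples, 2))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression with a bias column."""
    X = np.c_[X, np.ones(len(X))]
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Train on the first 1500 bins, evaluate on the held-out 500.
W = ridge_fit(spikes[:1500], vel[:1500])
pred = np.c_[spikes[1500:], np.ones(500)] @ W
r = np.corrcoef(pred[:, 0], vel[1500:, 0])[0, 1]
```

In an on-line BMI the same linear map is applied bin by bin to drive the cursor, usually after causal smoothing of the spike counts.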
Affiliation(s)
- Yixin Tian
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jiapeng Yin
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201602, China
- Lingang Laboratory, Shanghai 200031, China
- Chengyao Wang
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhenliang He
- Lingang Laboratory, Shanghai 200031, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Jingyi Xie
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaoshan Feng
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Yang Zhou
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Tianyu Ma
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Xie
- Lingang Laboratory, Shanghai 200031, China
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Xue Li
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Tianming Yang
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Chi Ren
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Chengyu Li
- Lingang Laboratory, Shanghai 200031, China
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhengtuo Zhao
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
|
18
|
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121 DOI: 10.1038/s41583-023-00740-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/18/2023] [Indexed: 10/06/2023]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
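A minimal flavor of training a recurrent network on measured data can be given with an echo-state network, a simple stand-in for the RNN training approaches reviewed (all sizes and scales here are arbitrary choices): a fixed random recurrent network is driven by the observed signal, and only a linear readout is fit to predict the next observation.

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_res = 1000, 100
signal = np.sin(0.1 * np.arange(T + 1))        # the observed "system" to reconstruct

# Fixed random reservoir, scaled to spectral radius 0.9 (echo-state property).
W_in = rng.normal(scale=0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir with the signal and record its states.
h = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    h = np.tanh(W @ h + W_in * signal[t])
    states[t] = h

# Ridge-regression readout: reservoir state at t -> observation at t+1.
target = signal[1:]
lam = 1e-6
w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
```

Full dynamical system reconstruction, as discussed in the Perspective, trains the recurrent weights themselves (e.g., by backpropagation through time or teacher forcing) so the trained network can be run autonomously and analyzed as a surrogate dynamical system; the readout-only version above just shows the ingredients.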
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
|
19
|
Chang EH, Gabalski AH, Huerta TS, Datta-Chaudhuri T, Zanos TP, Zanos S, Grill WM, Tracey KJ, Al-Abed Y. The Fifth Bioelectronic Medicine Summit: today's tools, tomorrow's therapies. Bioelectron Med 2023; 9:21. [PMID: 37794457 PMCID: PMC10552422 DOI: 10.1186/s42234-023-00123-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 09/04/2023] [Indexed: 10/06/2023] Open
Abstract
The emerging field of bioelectronic medicine (BEM) is poised to make a significant impact on the treatment of several neurological and inflammatory disorders. With several BEM therapies being recently approved for clinical use and others in late-phase clinical trials, the 2022 BEM summit was a timely scientific meeting convening a wide range of experts to discuss the latest developments in the field. The BEM Summit was held over two days in New York with more than thirty-five invited speakers and panelists, comprising researchers and experts from both academia and industry. The goal of the meeting was to bring international leaders together to discuss advances and cultivate collaborations in this emerging field that incorporates aspects of neuroscience, physiology, molecular medicine, engineering, and technology. This Meeting Report recaps the latest findings discussed at the Meeting and summarizes the main developments in this rapidly advancing interdisciplinary field. Our hope is that this Meeting Report will encourage researchers from academia and industry to push the field forward and generate new multidisciplinary collaborations that will form the basis of new discoveries that we can discuss at the next BEM Summit.
Affiliation(s)
- Eric H Chang
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA.
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA.
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA.
| | - Arielle H Gabalski
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
| | - Tomas S Huerta
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
| | - Timir Datta-Chaudhuri
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Theodoros P Zanos
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Stavros Zanos
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Warren M Grill
- Department of Biomedical Engineering, Fitzpatrick CIEMAS, Duke University, Room 1427, 101 Science Drive, Box 90281, Durham, NC, 27708, USA
- Kevin J Tracey
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Yousef Al-Abed
- Feinstein Institutes for Medical Research, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY, 11549, USA
- The Elmezzi Graduate School of Molecular Medicine, Northwell Health, 350 Community Drive, Manhasset, NY, 11030, USA
20
Rezaei MR, Jeoung H, Gharamani A, Saha U, Bhat V, Popovic MR, Yousefi A, Chen R, Lankarany M. Inferring cognitive state underlying conflict choices in verbal Stroop task using heterogeneous input discriminative-generative decoder model. J Neural Eng 2023; 20:056016. [PMID: 37473753 DOI: 10.1088/1741-2552/ace932] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 07/20/2023] [Indexed: 07/22/2023]
Abstract
Objective. The subthalamic nucleus (STN) of the basal ganglia interacts with the medial prefrontal cortex (mPFC) to form a control loop, particularly when the brain receives contradictory information from different sensory systems, or when sensory inputs conflict with prior knowledge. Experimental studies have demonstrated that significant increases in theta activity (2-8 Hz) in both the STN and mPFC, as well as increased phase synchronization between the mPFC and STN, are prominent features of conflict processing. While these neural features reflect the importance of STN-mPFC circuitry in conflict processing, a low-dimensional representation of the mPFC-STN interaction, referred to as a cognitive state, that links neural activities generated by these sub-regions to behavioral signals (e.g. the response time) remains to be identified. Approach. Here, we propose a new model, the heterogeneous input discriminative-generative decoder (HI-DGD), to infer a cognitive state underlying decision-making from neural activity (STN and mPFC) and behavioral signals (individuals' response times) recorded in ten Parkinson's disease (PD) patients while they performed a Stroop task. Conflict processing in PD patients may differ quantitatively (and in some cases qualitatively) from that in healthy populations. Main results. Using extensive synthetic and experimental data, we showed that the HI-DGD model can fuse information from neural and behavioral data simultaneously and estimate cognitive states underlying conflict and non-conflict trials significantly better than traditional methods. Additionally, the HI-DGD model identified which neural features made significant contributions to conflict and non-conflict choices. Interestingly, the estimated features match well with those reported in experimental studies. Significance. Finally, we highlight the capability of the HI-DGD model to estimate a cognitive state from a single trial of observations, which makes it suitable for use in closed-loop neuromodulation systems.
Affiliation(s)
- Mohammad R Rezaei
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- KITE Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Haseul Jeoung
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Ayda Gharamani
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Worcester Polytechnic Institute, MA, United States of America
- Utpal Saha
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Venkat Bhat
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Department of Psychiatry, University Health Network and University of Toronto, Toronto, ON, Canada
- Milos R Popovic
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Ali Yousefi
- Worcester Polytechnic Institute, MA, United States of America
- Robert Chen
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- Milad Lankarany
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Krembil Research Institute, University Health Network (UHN), Toronto, ON, Canada
- KITE Research Institute, University Health Network (UHN), Toronto, ON, Canada
21
Sadras N, Sani OG, Ahmadipour P, Shanechi MM. Post-stimulus encoding of decision confidence in EEG: toward a brain-computer interface for decision making. J Neural Eng 2023; 20:056012. [PMID: 37524073 DOI: 10.1088/1741-2552/acec14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Accepted: 07/31/2023] [Indexed: 08/02/2023]
Abstract
Objective. When making decisions, humans can evaluate how likely they are to be correct. If this subjective confidence could be reliably decoded from brain activity, it would be possible to build a brain-computer interface (BCI) that improves decision performance by automatically providing more information to the user if needed based on their confidence. But this possibility depends on whether confidence can be decoded right after stimulus presentation and before the response, so that a corrective action can be taken in time. Although prior work has shown that decision confidence is represented in brain signals, it is unclear whether the representation is stimulus-locked or response-locked, and whether stimulus-locked pre-response decoding is sufficiently accurate to enable such a BCI. Approach. We investigate the neural correlates of confidence by collecting high-density electroencephalography (EEG) during a perceptual decision task with realistic stimuli. Importantly, we design our task to include a post-stimulus gap that prevents the confounding of stimulus-locked activity by response-locked activity and vice versa, and then compare with a task without this gap. Main results. We perform event-related potential and source-localization analyses. Our analyses suggest that the neural correlates of confidence are stimulus-locked, and that the absence of a post-stimulus gap can cause these correlates to incorrectly appear response-locked. By preventing response-locked activity from confounding stimulus-locked activity, we then show that confidence can be reliably decoded from single-trial, stimulus-locked, pre-response EEG alone. We also identify a high-performance classification algorithm by comparing a battery of algorithms. Lastly, we design a simulated BCI framework to show that the EEG classification is accurate enough to build a BCI, and that the decoded confidence could be used to improve decision-making performance, particularly when task difficulty and the cost of errors are high. Significance. Our results show the feasibility of non-invasive EEG-based BCIs for improving human decision making.
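The confidence-gated assistance idea described in this abstract can be illustrated with a toy simulation. All numbers here (first-pass accuracy, assisted accuracy, the confidence model, the threshold) are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_confidence_bci(n_trials, p_first, p_assisted, threshold, rng):
    # Toy confidence-gated BCI: when decoded confidence falls below threshold,
    # extra information is shown and the decision is retried at higher accuracy.
    correct_first = rng.random(n_trials) < p_first
    # Decoded confidence: noisy, but higher on average when the first-pass
    # decision was correct (the stimulus-locked signal decoded in the paper).
    confidence = 0.4 * correct_first + 0.6 * rng.random(n_trials)
    retry = confidence < threshold
    correct_final = np.where(retry, rng.random(n_trials) < p_assisted, correct_first)
    return correct_final.mean(), retry.mean()

accuracy, retry_rate = simulate_confidence_bci(
    20_000, p_first=0.70, p_assisted=0.90, threshold=0.5, rng=rng)
print(f"final accuracy {accuracy:.2f} vs 0.70 unaided; retried {retry_rate:.0%} of trials")
```

Because the decoded confidence is informative about first-pass correctness, selectively retrying low-confidence trials lifts overall accuracy above the unaided baseline while leaving most trials untouched.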
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program University of Southern California, Los Angeles, CA, United States of America
22
Ali YH, Bodkin K, Rigotti-Thompson M, Patel K, Card NS, Bhaduri B, Nason-Tomaszewski SR, Mifsud DM, Hou X, Nicolas C, Allcroft S, Hochberg LR, Yong NA, Stavisky SD, Miller LE, Brandman DM, Pandarinath C. BRAND: A platform for closed-loop experiments with deep network models. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.08.08.552473. [PMID: 37609167 PMCID: PMC10441362 DOI: 10.1101/2023.08.08.552473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g., Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g., C and C++). To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termed nodes, which communicate with each other in a graph via streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes. In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1-millisecond chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 milliseconds of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems.
By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
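The node-and-stream architecture described above can be sketched in miniature. BRAND itself uses Redis streams for inter-process communication; as a self-contained stand-in, this hypothetical sketch wires an "acquisition node" to a "decoder node" with a multiprocessing queue and measures one-way message latency (a toy illustration of the pattern, not the BRAND API):

```python
import multiprocessing as mp
import time

def producer(q, n_msgs):
    # "Acquisition node": emit timestamped 1 KB chunks, standing in for
    # 1-millisecond packets of neural data.
    for i in range(n_msgs):
        q.put((i, time.perf_counter(), bytes(1024)))
    q.put(None)  # sentinel: end of stream

def consumer(q, out):
    # "Decoder node": receive chunks and record one-way inter-process latency.
    latencies = []
    while True:
        msg = q.get()
        if msg is None:
            break
        _, t_sent, _ = msg
        latencies.append(time.perf_counter() - t_sent)
    out.put(latencies)

if __name__ == "__main__":
    q, out = mp.Queue(), mp.Queue()
    consumer_proc = mp.Process(target=consumer, args=(q, out))
    producer_proc = mp.Process(target=producer, args=(q, 100))
    consumer_proc.start()
    producer_proc.start()
    latencies = out.get()
    producer_proc.join()
    consumer_proc.join()
    latencies.sort()
    print(f"median latency: {latencies[len(latencies) // 2] * 1e6:.0f} us")
```

Swapping the queue for a Redis stream adds persistence and fan-out to many subscriber nodes, which is what lets BRAND run acquisition, decoding, and graphics as independent processes on different timescales.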
23
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. [PMID: 37402376 PMCID: PMC10527529 DOI: 10.1016/j.cub.2023.06.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/24/2023] [Accepted: 06/09/2023] [Indexed: 07/06/2023]
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether, our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
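The core claim of this abstract, one invariant dynamics rule governing activity-pattern transitions across movements, can be caricatured by fitting a single transition matrix to trajectories from two simulated "movements". This is a hypothetical linear toy under assumed rotational dynamics, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(A, x0, T, noise=0.05):
    # Roll out linear latent dynamics: x_{t+1} = A x_t + noise.
    X = [x0]
    for _ in range(T - 1):
        X.append(A @ X[-1] + noise * rng.standard_normal(len(x0)))
    return np.array(X)

def fit_shared_dynamics(trajs):
    # Pool transitions from all movements and solve min_A ||X_next - X_prev A^T||^2.
    X_prev = np.vstack([tr[:-1] for tr in trajs])
    X_next = np.vstack([tr[1:] for tr in trajs])
    M, *_ = np.linalg.lstsq(X_prev, X_next, rcond=None)
    return M.T

theta = 0.3  # assumed shared rotational dynamics
A_true = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
# Two "movements": different initial activity patterns, one shared dynamics rule.
movements = [simulate(A_true, rng.standard_normal(2), 200) for _ in range(2)]
A_hat = fit_shared_dynamics(movements)
print(np.round(A_hat, 3))
```

Because both trajectories obey the same transition rule, a single pooled least-squares fit recovers it, even though the activity patterns themselves differ between the two movements.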
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA.
- Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
- Departments of Bioengineering, Electrical and Computer Engineering, University of Washington, Seattle, Seattle, WA 98195, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA.
24
Huang Y, Zhang X, Wang Y. Decoding Ensemble Spike States from Extracellular Field Potentials. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083630 DOI: 10.1109/embc40787.2023.10341044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Behaviors are encoded by multi-scale brain signals, from microscopic spike signals to macroscopic extracellular field potentials (FPs). Extracting neuronal spike information from FPs is an important yet challenging problem, because FPs stem from the summed contributions of a large population of neurons. Previous work inferred single-neuron spiking activity from FPs using a generalized linear model (GLM). However, FPs reflect the states of neural ensembles more than single-neuron spike trains. In this paper, we propose a computational model to decode ensemble spike states from FPs. The framework first extracts transient features in the FPs, then detects typical ensemble spike patterns and assigns state labels accordingly. Finally, we use a neural network to decode the ensemble spike states from the FP neuromodulations. This FP-spike decoder is tested on FP and spike data from the M1 area of a Sprague-Dawley (SD) rat. We show that our model can effectively decode multi-neuron spike states. Compared with the GLM method for single-neuron spike prediction, our model exhibits 37% lower decoding error for ensemble spike patterns. These preliminary results show that informative spike states can be decoded from FPs, suggesting that the decoded results can further benefit long-term stable brain-machine interfaces.
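The GLM baseline this paper compares against can be sketched as a Bernoulli GLM with a logistic link, predicting a binary spike state from FP features. The data and feature choices below are synthetic illustrations, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_bernoulli_glm(X, y, lr=0.1, n_iter=2000):
    # Logistic-link GLM: P(spike = 1 | x) = sigmoid(x @ w + b), fit by
    # gradient ascent on the Bernoulli log-likelihood.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w += lr * X.T @ (y - p) / len(y)
        b += lr * np.mean(y - p)
    return w, b

# Synthetic "FP features" (e.g. band powers) driving spike probability.
n, w_true = 2000, np.array([1.5, -2.0, 0.5])
X = rng.standard_normal((n, len(w_true)))
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

w_hat, b_hat = fit_bernoulli_glm(X, y)
accuracy = np.mean(((X @ w_hat + b_hat) > 0) == (y > 0.5))
print(f"recovered weights {np.round(w_hat, 1)}, accuracy {accuracy:.2f}")
```

The proposed method replaces this per-neuron regression with pattern detection over the ensemble plus a neural network decoder; the GLM above is only the single-neuron point of comparison.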
25
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.26.542509. [PMID: 37398400 PMCID: PMC10312539 DOI: 10.1101/2023.05.26.542509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/04/2023]
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach and grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while better identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity.
Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
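The observation model described above, Poisson spike counts and Gaussian field potentials driven by one shared latent state, can be simulated directly. All dimensions and parameters below are arbitrary illustrations, and the learning algorithm itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_spike_field(A, C_spk, C_lfp, T):
    # One shared latent state x_t drives both modalities:
    #   spike counts     ~ Poisson(exp(C_spk @ x_t))   (discrete)
    #   field potentials ~ N(C_lfp @ x_t, I)           (continuous)
    x = np.zeros(A.shape[0])
    spikes, fields = [], []
    for _ in range(T):
        x = A @ x + 0.5 * rng.standard_normal(len(x))
        spikes.append(rng.poisson(np.exp(C_spk @ x)))
        fields.append(C_lfp @ x + rng.standard_normal(C_lfp.shape[0]))
    return np.array(spikes), np.array(fields)

A = np.array([[0.95, 0.10],
              [-0.10, 0.95]])               # stable 2-D latent dynamics
C_spk = 0.3 * rng.standard_normal((5, 2))   # 5 spiking units, log link
C_lfp = rng.standard_normal((4, 2))         # 4 LFP channels, linear link
spikes, fields = simulate_spike_field(A, C_spk, C_lfp, T=1000)
print(spikes.shape, fields.shape)
```

Data of this discrete-continuous form is what multiscale SID is built to model jointly; fusing both observation streams constrains the shared latent state better than either one alone.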
26
Schneider S, Lee JH, Mathis MW. Learnable latent embeddings for joint behavioural and neural analysis. Nature 2023; 617:360-368. [PMID: 37138088 PMCID: PMC10172131 DOI: 10.1038/s41586-023-06031-6] [Citation(s) in RCA: 42] [Impact Index Per Article: 42.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 03/28/2023] [Indexed: 05/05/2023]
Abstract
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our ability to record large-scale neural and behavioural data increases, there is growing interest in modelling neural dynamics during adaptive behaviours to probe neural representations [1-3]. In particular, although neural latent embeddings can reveal underlying correlates of behaviour, we lack nonlinear techniques that can explicitly and flexibly leverage joint behavioural and neural data to uncover neural dynamics [3-5]. Here, we fill this gap with a new encoding method, CEBRA, that jointly uses behavioural and neural data in a (supervised) hypothesis-driven or (self-supervised) discovery-driven manner to produce consistent, high-performance latent spaces. We show that consistency can be used as a metric for uncovering meaningful differences, and that the inferred latents can be used for decoding. We validate its accuracy and demonstrate the tool's utility for both calcium imaging and electrophysiology datasets, across sensory and motor tasks, and in simple and complex behaviours across species. CEBRA can leverage single- and multi-session datasets for hypothesis testing or be used label-free. Lastly, we show that CEBRA can be used to map space, uncover complex kinematic features, produce consistent latent spaces across two-photon and Neuropixels data, and provide rapid, high-accuracy decoding of natural videos from visual cortex.
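The behaviour-supervised contrastive idea behind CEBRA can be caricatured in a few lines: positive pairs are timepoints with similar behaviour values, negatives are random timepoints, and a contrastive loss scores an embedding by how well it separates the two. This is a toy pairwise loss on synthetic data, not CEBRA's actual objective, architecture, or API:

```python
import numpy as np

rng = np.random.default_rng(3)

def contrastive_loss(emb, anchors, positives, negatives, temp=1.0):
    # Pairwise contrastive score: low when anchor-positive pairs are more
    # similar (cosine) than anchor-negative pairs.
    def cos(a, b):
        a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-9)
        b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-9)
        return np.sum(a * b, axis=1) / temp
    pos = cos(emb[anchors], emb[positives])
    neg = cos(emb[anchors], emb[negatives])
    return float(np.mean(np.log1p(np.exp(neg - pos))))

# Behaviour-supervised sampling: each anchor's positive is the timepoint with
# the closest behaviour value; negatives are random timepoints.
T = 500
behavior = np.r_[np.full(T // 2, -2.0), np.full(T // 2, 2.0)]  # two conditions
behavior += 0.3 * rng.standard_normal(T)
neural = np.c_[behavior, -behavior] + 0.1 * rng.standard_normal((T, 2))

anchors = rng.integers(0, T, size=100)
positives = np.array([np.argsort(np.abs(behavior - behavior[a]))[1] for a in anchors])
negatives = rng.integers(0, T, size=100)

loss_matched = contrastive_loss(neural, anchors, positives, negatives)
loss_shuffled = contrastive_loss(neural, anchors, rng.permutation(T)[:100], negatives)
print(f"behavior-matched positives: {loss_matched:.3f}  shuffled: {loss_shuffled:.3f}")
```

Because the neural signal here actually encodes the behaviour, behaviour-matched positives score a lower loss than shuffled ones; CEBRA trains a nonlinear encoder to minimize an objective of this general flavour.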
Affiliation(s)
- Steffen Schneider
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Jin Hwa Lee
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Mackenzie Weygandt Mathis
- Brain Mind Institute & Neuro X Institute, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland.
27
Disse GD, Nandakumar B, Pauzin FP, Blumenthal GH, Kong Z, Ditterich J, Moxon KA. Neural ensemble dynamics in trunk and hindlimb sensorimotor cortex encode for the control of postural stability. Cell Rep 2023; 42:112347. [PMID: 37027302 DOI: 10.1016/j.celrep.2023.112347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 02/09/2023] [Accepted: 03/21/2023] [Indexed: 04/08/2023] Open
Abstract
The cortex has a disputed role in monitoring postural equilibrium and intervening in cases of major postural disturbance. Here, we investigate the patterns of neural activity in the cortex that underlie neural dynamics during unexpected postural perturbations. In both the primary sensory (S1) and motor (M1) cortices of the rat, distinct neuronal classes differentially covary their responses to distinguish different characteristics of the applied perturbations; however, there is substantial information gain in M1, demonstrating a role for higher-order computations in motor control. A dynamical systems model of M1 activity and the forces generated by the limbs reveals that these neuronal classes contribute to a low-dimensional manifold composed of separate subspaces, enabled by congruent and incongruent neural firing patterns, that define different computations depending on the postural response. These results inform how the cortex engages in postural control and direct work aimed at understanding postural instability after neurological disease.
Affiliation(s)
- Gregory D Disse
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
- Francois P Pauzin
- Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA
- Gary H Blumenthal
- School of Biomedical Engineering Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
- Zhaodan Kong
- Mechanical and Aerospace Engineering, University of California, Davis, Davis, CA 95616, USA
- Jochen Ditterich
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA 95616, USA
- Karen A Moxon
- Neuroscience Graduate Group, University of California, Davis, Davis, CA 95616, USA; Biomedical Engineering, University of California, Davis, Davis, CA 95616, USA.
28
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.13.532479. [PMID: 36993605 PMCID: PMC10054986 DOI: 10.1101/2023.03.13.532479] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
29
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.14.532554. [PMID: 36993213 PMCID: PMC10055042 DOI: 10.1101/2023.03.14.532554] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
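The opening caution of this abstract, that temporally structured inputs can masquerade as intrinsic dynamics, is easy to reproduce in one dimension. This is a hypothetical linear example of the failure mode, not the authors' learning method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent dynamics with a measured input: x_{t+1} = a x_t + b u_t + noise.
a_true, b_true, T = 0.8, 1.0, 5000
u = np.sin(0.05 * np.arange(T))       # temporally structured input (e.g. task cue)
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + b_true * u[t] + 0.1 * rng.standard_normal()

# Fit intrinsic dynamics while accounting for the measured input...
coef, *_ = np.linalg.lstsq(np.c_[x[:-1], u[:-1]], x[1:], rcond=None)
a_with_input = coef[0]
# ...and while ignoring it: the slow input rhythm is then absorbed into the
# fitted dynamics, inflating the apparent intrinsic timescale.
a_no_input = np.linalg.lstsq(x[:-1, None], x[1:], rcond=None)[0][0]
print(f"a with input: {a_with_input:.3f}   a ignoring input: {a_no_input:.3f}")
```

The input-aware fit recovers the true intrinsic coefficient, while the input-blind fit reports much slower dynamics, precisely the misattribution the paper's method is designed to avoid.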
30
Chen ZS, Wilson MA. How our understanding of memory replay evolves. J Neurophysiol 2023; 129:552-580. [PMID: 36752404 PMCID: PMC9988534 DOI: 10.1152/jn.00454.2022] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 01/20/2023] [Accepted: 01/20/2023] [Indexed: 02/09/2023] Open
Abstract
Memory reactivations and replay, widely reported in the hippocampus and cortex across species, have been implicated in memory consolidation, planning, and spatial and skill learning. Technological advances in electrophysiology, calcium imaging, and human neuroimaging techniques have enabled neuroscientists to measure large-scale neural activity with increasing spatiotemporal resolution and have provided opportunities for developing robust analytic methods to identify memory replay. In this article, we first review a large body of historically important and representative memory replay studies from the animal and human literature. We then discuss our current understanding of memory replay functions in learning, planning, and memory consolidation and further discuss the progress in computational modeling that has contributed to these improvements. Next, we review past and present analytic methods for replay analyses and discuss their limitations and challenges. Finally, looking ahead, we discuss some promising analytic methods for detecting nonstereotypical, behaviorally nondecodable structures from large-scale neural recordings. We argue that seamless integration of multisite recordings, real-time replay decoding, and closed-loop manipulation experiments will be essential for delineating the role of memory replay in a wide range of cognitive and motor functions.
Affiliation(s)
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, New York, United States
- Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, New York, United States
- Neuroscience Institute, New York University Grossman School of Medicine, New York, New York, United States
- Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, New York, United States
- Matthew A Wilson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
31
Galgali AR, Sahani M, Mante V. Residual dynamics resolves recurrent contributions to neural computation. Nat Neurosci 2023; 26:326-338. [PMID: 36635498 DOI: 10.1038/s41593-022-01230-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 11/08/2022] [Indexed: 01/14/2023]
Abstract
Relating neural activity to behavior requires an understanding of how neural computations arise from the coordinated dynamics of distributed, recurrently connected neural populations. However, inferring the nature of recurrent dynamics from partial recordings of a neural circuit presents considerable challenges. Here we show that some of these challenges can be overcome by a fine-grained analysis of the dynamics of neural residuals, that is, trial-by-trial variability around the mean neural population trajectory for a given task condition. Residual dynamics in macaque prefrontal cortex (PFC) in a saccade-based perceptual decision-making task reveal recurrent dynamics that are time-dependent but consistently stable, and suggest that pronounced rotational structure in PFC trajectories during saccades is driven by inputs from upstream areas. The properties of residual dynamics restrict the possible contributions of PFC to decision-making and saccade generation and suggest a path toward fully characterizing distributed neural computations with large-scale neural recordings and targeted causal perturbations.
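The residual-dynamics analysis described above has a simple skeleton: subtract the condition-mean trajectory, then fit a linear map per time step to the trial-by-trial deviations. The sketch below applies that skeleton to simulated data with assumed parameters; it is an illustration of the idea, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(5)

def residual_dynamics(trials):
    # trials: (n_trials, T, n_units). Residuals are deviations from the
    # condition-mean trajectory; fit one linear map per time step,
    # r_{t+1} ~ A_t r_t, by least squares across trials.
    resid = trials - trials.mean(axis=0, keepdims=True)
    A_t = []
    for t in range(trials.shape[1] - 1):
        M, *_ = np.linalg.lstsq(resid[:, t, :], resid[:, t + 1, :], rcond=None)
        A_t.append(M.T)
    return np.array(A_t)

# Simulate trials sharing a mean trajectory, with trial-to-trial variability
# that evolves under stable linear dynamics.
n_trials, T, n_units = 200, 20, 2
A_true = 0.9 * np.eye(n_units)
mean_traj = np.cumsum(rng.standard_normal((T, n_units)), axis=0)
trials = np.zeros((n_trials, T, n_units))
for k in range(n_trials):
    r = rng.standard_normal(n_units)
    for t in range(T):
        trials[k, t] = mean_traj[t] + r
        r = A_true @ r + 0.3 * rng.standard_normal(n_units)
A_hat = residual_dynamics(trials)
print(np.round(A_hat.mean(axis=0), 2))
```

Because the condition mean is removed first, the recovered maps reflect only the trial-by-trial variability, which is what lets this analysis separate recurrent dynamics from condition-locked input structure.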
Affiliation(s)
- Aniruddh R Galgali
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
- Maneesh Sahani
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Valerio Mante
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich & ETH Zurich, Zurich, Switzerland.
32
Fang H, Yang Y. Predictive neuromodulation of cingulo-frontal neural dynamics in major depressive disorder using a brain-computer interface system: A simulation study. Front Comput Neurosci 2023; 17:1119685. [PMID: 36950505 PMCID: PMC10025398 DOI: 10.3389/fncom.2023.1119685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Accepted: 02/15/2023] [Indexed: 03/08/2023] Open
Abstract
Introduction Deep brain stimulation (DBS) is a promising therapy for treatment-resistant major depressive disorder (MDD). MDD involves the dysfunction of a brain network that can exhibit complex nonlinear neural dynamics in multiple frequency bands. However, current open-loop and responsive DBS methods cannot track the complex multiband neural dynamics in MDD, leading to imprecise regulation of symptoms, variable treatment effects among patients, and high battery power consumption. Methods Here, we develop a closed-loop brain-computer interface (BCI) system of predictive neuromodulation for treating MDD. We first use a biophysically plausible ventral anterior cingulate cortex (vACC)-dorsolateral prefrontal cortex (dlPFC) neural mass model of MDD to simulate nonlinear and multiband neural dynamics in response to DBS. We then use offline system identification to build a dynamic model that predicts the DBS effect on neural activity. We next use the offline identified model to design an online BCI system of predictive neuromodulation. The online BCI system consists of a dynamic brain state estimator and a model predictive controller. The brain state estimator estimates the MDD brain state from the history of neural activity and previously delivered DBS patterns. The predictive controller takes the estimated MDD brain state as the feedback signal and optimally adjusts DBS to regulate the MDD neural dynamics to therapeutic targets. We use the vACC-dlPFC neural mass model as a simulation testbed to test the BCI system and compare it with state-of-the-art open-loop and responsive DBS treatments of MDD. Results We demonstrate that our dynamic model accurately predicts nonlinear and multiband neural activity. Consequently, the predictive neuromodulation system accurately regulates the neural dynamics in MDD, resulting in significantly smaller control errors and lower DBS battery power consumption than open-loop and responsive DBS. Discussion Our results have implications for developing future precisely-tailored clinical closed-loop DBS treatments for MDD.
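The estimator-plus-predictive-controller loop described in this abstract can be caricatured in a few lines: a previously identified linear model predicts the stimulation effect, and at each step the controller minimizes a one-step cost trading tracking error against stimulation energy (a proxy for battery use), which has a closed-form solution. The scalar model and all parameter values below are hypothetical, not taken from the paper.

```python
# One-step "model predictive" sketch of closed-loop DBS control.
a, b = 0.9, 0.5       # identified dynamics: state[t+1] = a*state[t] + b*dbs[t]
lam = 0.01            # penalty on stimulation amplitude (battery use)
target = 1.0          # therapeutic target brain state

state = 0.0
trajectory = []
for t in range(50):
    # Minimize (a*state + b*u - target)^2 + lam*u^2 over u (closed form).
    u = b * (target - a * state) / (b * b + lam)
    state = a * state + b * u       # plant assumed to follow the model
    trajectory.append(state)
```

With a small energy penalty the state settles just short of the target; raising `lam` trades residual control error for lower stimulation power, the same trade-off the abstract highlights.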
Affiliation(s)
- Hao Fang
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, United States
- Yuxiao Yang
- Ministry of Education (MOE) Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, Zhejiang, China
- State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, Zhejiang, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Neurosurgery, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Correspondence: Yuxiao Yang
33
Overcoming the Domain Gap in Neural Action Representations. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01713-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Relating behavior to brain activity in animals is a fundamental goal in neuroscience, with practical applications in building robust brain-machine interfaces. However, the domain gap between individuals is a major issue that prevents the training of general models that work on unlabeled subjects. Since 3D pose data can now be reliably extracted from multi-view video sequences without manual intervention, we propose to use it to guide the encoding of neural action representations, together with a set of neural and behavioral augmentations exploiting the properties of microscopy imaging. To test our method, we collect a large dataset that features flies and their neural activity. To reduce the domain gap, during training we mix features of neural and behavioral data across flies that appear to be performing similar actions. To show that our method can generalize to further neural modalities and other downstream tasks, we test it on a human electrocorticography dataset and an RGB video dataset of human activities filmed from different viewpoints. We believe our work will enable more robust neural decoding algorithms to be used in future brain-machine interfaces.
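The across-subject feature-mixing step described in this abstract can be sketched as a mixup-style augmentation: when two animals are labeled as performing similar actions, their feature vectors are convexly blended so a decoder cannot latch onto subject identity. The function name and values are illustrative assumptions, not the paper's code.

```python
import random

random.seed(1)

def mix_features(feat_a, feat_b, low=0.5):
    """Convex (mixup-style) blend of two feature vectors."""
    lam = random.uniform(low, 1.0)
    return [lam * fa + (1 - lam) * fb for fa, fb in zip(feat_a, feat_b)]

# Two subjects performing the same labeled action, with a subject-specific
# offset that the augmentation should wash out during training.
fly1 = [1.0, 2.0, 3.0]
fly2 = [1.4, 2.4, 3.4]
mixed = mix_features(fly1, fly2)
```

Each blended coordinate lies between the two subjects' values, so repeated mixing interpolates away subject-specific offsets while preserving the action-related structure shared by both.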
34
Song CY, Hsieh HL, Pesaran B, Shanechi MM. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. J Neural Eng 2022; 19. [PMID: 36261030 DOI: 10.1088/1741-2552/ac9b94] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 10/19/2022] [Indexed: 01/11/2023]
Abstract
Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime-detection methods were developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching, addressing only stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared to prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
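As a minimal illustration of regime inference from multiscale observations, the sketch below fuses a binary spike stream and a continuous field signal in a single discrete-regime forward filter. The authors' model additionally carries continuous latent brain states; this toy, with invented parameters, only shows how the two observation scales combine in one likelihood.

```python
import math
import random

random.seed(2)

# Two regimes with different spike probability and field-potential mean.
p_spike = [0.1, 0.6]
mu_field = [0.0, 1.0]
sigma = 1.0
T = [[0.95, 0.05], [0.05, 0.95]]   # sticky regime transition probabilities

# Simulate: regime 0 for 100 steps, then regime 1 for 100 steps.
true_regime = [0] * 100 + [1] * 100
spikes = [1 if random.random() < p_spike[z] else 0 for z in true_regime]
fields = [random.gauss(mu_field[z], sigma) for z in true_regime]

def gauss_pdf(y, mu, s):
    return math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Forward (filtering) pass over the regime, fusing both observation scales:
# Bernoulli likelihood for spikes times Gaussian likelihood for the field.
belief = [0.5, 0.5]
estimates = []
for s_obs, y_obs in zip(spikes, fields):
    pred = [sum(T[i][j] * belief[i] for i in range(2)) for j in range(2)]
    like = [(p_spike[j] if s_obs else 1 - p_spike[j])
            * gauss_pdf(y_obs, mu_field[j], sigma) for j in range(2)]
    post = [pred[j] * like[j] for j in range(2)]
    norm = sum(post)
    belief = [p / norm for p in post]
    estimates.append(0 if belief[0] > 0.5 else 1)

accuracy = sum(e == z for e, z in zip(estimates, true_regime)) / len(true_regime)
```

Because each step multiplies likelihoods from both scales, the filter detects the regime switch faster and more reliably than either the spike stream or the field signal would alone, the qualitative point the abstract makes about multiscale fusion.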
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Han-Lin Hsieh
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
35
Machado TA, Kauvar IV, Deisseroth K. Multiregion neuronal activity: the forest and the trees. Nat Rev Neurosci 2022; 23:683-704. [PMID: 36192596 PMCID: PMC10327445 DOI: 10.1038/s41583-022-00634-0] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/25/2022] [Indexed: 12/12/2022]
Abstract
The past decade has witnessed remarkable advances in the simultaneous measurement of neuronal activity across many brain regions, enabling fundamentally new explorations of the brain-spanning cellular dynamics that underlie sensation, cognition and action. These recently developed multiregion recording techniques have provided many experimental opportunities, but thoughtful consideration of methodological trade-offs is necessary, especially regarding field of view, temporal acquisition rate and ability to guarantee cellular resolution. When applied in concert with modern optogenetic and computational tools, multiregion recording has already made possible fundamental biological discoveries - in part via the unprecedented ability to perform unbiased neural activity screens for principles of brain function, spanning dozens of brain areas and from local to global scales.
Affiliation(s)
- Timothy A Machado
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Isaac V Kauvar
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Karl Deisseroth
- Department of Bioengineering, Stanford University, Stanford, CA, USA.
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA.
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
36
Sheth SA, Bijanki KR, Metzger B, Allawala A, Pirtle V, Adkinson JA, Myers J, Mathura RK, Oswalt D, Tsolaki E, Xiao J, Noecker A, Strutt AM, Cohn JF, McIntyre CC, Mathew SJ, Borton D, Goodman W, Pouratian N. Deep Brain Stimulation for Depression Informed by Intracranial Recordings. Biol Psychiatry 2022; 92:246-251. [PMID: 35063186 PMCID: PMC9124238 DOI: 10.1016/j.biopsych.2021.11.007] [Citation(s) in RCA: 49] [Impact Index Per Article: 24.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 11/01/2021] [Accepted: 11/02/2021] [Indexed: 11/02/2022]
Abstract
The success of deep brain stimulation (DBS) for treating Parkinson's disease has led to its application to several other disorders, including treatment-resistant depression. Results with DBS for treatment-resistant depression have been heterogeneous, with inconsistencies largely driven by incomplete understanding of the brain networks regulating mood, especially on an individual basis. We report results from the first subject treated with DBS for treatment-resistant depression using an approach that incorporates intracranial recordings to personalize understanding of network behavior and its response to stimulation. These recordings enabled calculation of individually optimized DBS stimulation parameters using a novel inverse solution approach. In the ensuing double-blind, randomized phase incorporating these bespoke parameter sets, DBS led to remission of symptoms and dramatic improvement in quality of life. Results from this initial case demonstrate the feasibility of this personalized platform, which may be used to improve surgical neuromodulation for a vast array of neurologic and psychiatric disorders.
Affiliation(s)
- Sameer A. Sheth
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Corresponding author: Sameer A. Sheth, MD, PhD, 7200 Cambridge Street, Suite 9B, Houston, TX 77030
- Kelly R. Bijanki
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Brian Metzger
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Anusha Allawala
- Department of Engineering, Brown University, Providence, RI, 02912 USA
- Victoria Pirtle
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Josh A. Adkinson
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- John Myers
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Raissa K. Mathura
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Denise Oswalt
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Evangelia Tsolaki
- Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, 90095 USA
- Jiayang Xiao
- Department of Neurosurgery, Baylor College of Medicine, Houston TX, 77030 USA
- Angela Noecker
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106 USA
- Adriana M. Strutt
- Department of Neurology, Baylor College of Medicine, Houston TX, 77030 USA
- Jeffrey F. Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 19104 USA
- Cameron C. McIntyre
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106 USA
- Sanjay J. Mathew
- Department of Psychiatry, Baylor College of Medicine, Houston TX, 77030 USA
- David Borton
- Department of Engineering, Brown University, Providence, RI, 02912 USA
- Wayne Goodman
- Department of Psychiatry, Baylor College of Medicine, Houston TX, 77030 USA
- Nader Pouratian
- Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, 90095 USA
37
Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2022. [PMID: 35876720 DOI: 10.1113/jp282757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Accepted: 07/20/2022] [Indexed: 11/08/2022] Open
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work has also shown that functionally relevant dynamics of neural circuit activity are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds that restrict circuit activity to functionally relevant subspaces, and it is suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds.
Abstract figure legend: What are the relationships between noisy and highly variable microscopic neural circuit variables on the one hand and the generation of behaviour on the other? Here it is proposed that an intermediate level of description exists where this relationship can be understood in terms of low-dimensional dynamics. Recordings of neural activity during unconstrained behaviour and the development of new machine learning methods will help to uncover these links.
Affiliation(s)
- Matthias H Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh
38
A hybrid autoencoder framework of dimensionality reduction for brain-computer interface decoding. Comput Biol Med 2022; 148:105871. [DOI: 10.1016/j.compbiomed.2022.105871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 06/20/2022] [Accepted: 07/09/2022] [Indexed: 11/19/2022]
39
Fang H, Yang Y. Designing and Validating a Robust Adaptive Neuromodulation Algorithm for Closed-Loop Control of Brain States. J Neural Eng 2022; 19. [PMID: 35576912 DOI: 10.1088/1741-2552/ac7005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/16/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Neuromodulation systems that use closed-loop brain stimulation to control brain states can provide new therapies for brain disorders. To date, closed-loop brain stimulation has largely used linear time-invariant controllers. However, nonlinear time-varying brain network dynamics and external disturbances can appear during real-time stimulation, collectively leading to real-time model uncertainty. Real-time model uncertainty can degrade the performance or even cause instability of time-invariant controllers. Three problems need to be resolved to enable accurate and stable control under model uncertainty. First, an adaptive controller is needed to track the model uncertainty. Second, the adaptive controller additionally needs to be robust to noise and disturbances. Third, theoretical analyses of stability and robustness are needed as prerequisites for stable operation of the controller in practical applications. APPROACH We develop a robust adaptive neuromodulation algorithm that solves the above three problems. First, we develop a state-space brain network model that explicitly includes nonlinear terms of real-time model uncertainty and design an adaptive controller to track and cancel the model uncertainty. Second, to improve the robustness of the adaptive controller, we design two linear filters to increase steady-state control accuracy and reduce sensitivity to high-frequency noise and disturbances. Third, we conduct theoretical analyses to prove the stability of the neuromodulation algorithm and establish a trade-off between stability and robustness, which we further use to optimize the algorithm design. Finally, we validate the algorithm using comprehensive Monte Carlo simulations that span a broad range of model nonlinearity, uncertainty, and complexity. MAIN RESULTS The robust adaptive neuromodulation algorithm accurately tracks various types of target brain state trajectories, enables stable and robust control, and significantly outperforms state-of-the-art neuromodulation algorithms. SIGNIFICANCE Our algorithm has implications for future designs of precise, stable, and robust closed-loop brain stimulation systems to treat brain disorders and facilitate brain functions.
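The core adaptation idea, tracking real-time model uncertainty online and canceling it in the control law, can be illustrated with a scalar plant whose dynamics are unknown to the controller. This sketch omits the paper's robustness filters and stability proofs and uses invented parameters throughout.

```python
# Adaptive-control toy: plant gain 'a' is unknown to the controller; an
# online estimate a_hat tracks it and the control law cancels the
# estimated dynamics to drive the brain state to a target.
a = 0.7                # true (unknown) plant: x[t+1] = a*x[t] + u[t]
gamma = 0.2            # adaptation gain
target = 1.0

x, a_hat = 0.0, 0.0
for t in range(200):
    u = target - a_hat * x        # certainty-equivalence control
    x_next = a * x + u
    error = x_next - target       # equals (a - a_hat) * x
    a_hat += gamma * error * x    # gradient step to cancel model uncertainty
    x = x_next
```

The tracking error and the parameter error shrink together: as `a_hat` approaches the true `a`, the cancellation becomes exact and the state settles at the target, the behavior a fixed (non-adaptive) controller cannot guarantee when the plant drifts.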
Affiliation(s)
- Hao Fang
- University of Central Florida, Research 1 Room 334, 313/316, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States
- Yuxiao Yang
- Department of Electrical and Computer Engineering, University of Central Florida, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States
40
Pandarinath C, Bensmaia SJ. The science and engineering behind sensitized brain-controlled bionic hands. Physiol Rev 2022; 102:551-604. [PMID: 34541898 PMCID: PMC8742729 DOI: 10.1152/physrev.00034.2020] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 09/07/2021] [Accepted: 09/13/2021] [Indexed: 12/13/2022] Open
Abstract
Advances in our understanding of brain function, along with the development of neural interfaces that allow for the monitoring and activation of neurons, have paved the way for brain-machine interfaces (BMIs), which harness neural signals to reanimate the limbs via electrical activation of the muscles or to control extracorporeal devices, thereby bypassing the muscles and senses altogether. BMIs consist of reading out motor intent from the neuronal responses monitored in motor regions of the brain and executing intended movements with bionic limbs, reanimated limbs, or exoskeletons. BMIs also allow for the restoration of the sense of touch by electrically activating neurons in somatosensory regions of the brain, thereby evoking vivid tactile sensations and conveying feedback about object interactions. In this review, we discuss the neural mechanisms of motor control and somatosensation in able-bodied individuals and describe approaches to use neuronal responses as control signals for movement restoration and to activate residual sensory pathways to restore touch. Although the focus of the review is on intracortical approaches, we also describe alternative signal sources for control and noninvasive strategies for sensory restoration.
Affiliation(s)
- Chethan Pandarinath
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia
- Department of Neurosurgery, Emory University, Atlanta, Georgia
- Sliman J Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, University of Chicago, Chicago, Illinois
41
Howland JG, Ito R, Lapish CC, Villaruel FR. The rodent medial prefrontal cortex and associated circuits in orchestrating adaptive behavior under variable demands. Neurosci Biobehav Rev 2022; 135:104569. [PMID: 35131398 PMCID: PMC9248379 DOI: 10.1016/j.neubiorev.2022.104569] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 12/17/2021] [Accepted: 02/01/2022] [Indexed: 11/28/2022]
Abstract
Emerging evidence implicates rodent medial prefrontal cortex (mPFC) in tasks requiring adaptation of behavior to changing information from external and internal sources. However, the computations within mPFC and subsequent outputs that determine behavior are incompletely understood. We review the involvement of mPFC subregions, and their projections to the striatum and amygdala in two broad types of tasks in rodents: 1) appetitive and aversive Pavlovian and operant conditioning tasks that engage mPFC-striatum and mPFC-amygdala circuits, and 2) foraging-based tasks that require decision making to optimize reward. We find support for region-specific function of the mPFC, with dorsal mPFC and its projections to the dorsomedial striatum supporting action control with higher cognitive demands, and ventral mPFC engagement in translating affective signals into behavior via discrete projections to the ventral striatum and amygdala. However, we also propose that defined mPFC subdivisions operate as a functional continuum rather than segregated functional units, with crosstalk that allows distinct subregion-specific inputs (e.g., internal, affective) to influence adaptive behavior supported by other subregions.
Affiliation(s)
- John G Howland
- Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, SK, Canada.
- Rutsuko Ito
- Department of Psychology, University of Toronto-Scarborough, Toronto, ON, Canada.
- Christopher C Lapish
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA.
- Franz R Villaruel
- Department of Psychology, Concordia University, Montreal, QC, Canada.
42
Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives in which the motor cortex is involved in motor control via neuronal preference for kinetics and kinematics, a dynamical system perspective emerging in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and controversy from both empirical and computational points of view. Here, we aim to reconcile the above perspectives, and evaluate their theoretical impact, future direction, and potential applications in brain-machine interfaces.
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
43
Thome J, Steinbach R, Grosskreutz J, Durstewitz D, Koppe G. Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics. Hum Brain Mapp 2022; 43:681-699. [PMID: 34655259 PMCID: PMC8720197 DOI: 10.1002/hbm.25679] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 09/27/2021] [Indexed: 12/19/2022] Open
Abstract
Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35-61.66%. Multimodal classifiers outperformed unimodal classifiers achieving accuracies of 62.85-66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underlines that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.
Affiliation(s)
- Janine Thome
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Robert Steinbach
- Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
- Julian Grosskreutz
- Precision Neurology, Department of Neurology, University of Luebeck, Luebeck, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
44
Wang C, Pesaran B, Shanechi MM. Modeling multiscale causal interactions between spiking and field potential signals during behavior. J Neural Eng 2022; 19. [DOI: 10.1088/1741-2552/ac4e1c] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 01/24/2022] [Indexed: 11/12/2022]
Abstract
Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHP) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
|
45
|
Abstract
Investigating how an artificial network of neurons controls a simulated arm suggests that rotational patterns of activity in the motor cortex may rely on sensory feedback from the moving limb.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, United States
| |
|
46
|
Kalidindi HT, Cross KP, Lillicrap TP, Omrani M, Falotico E, Sabes PN, Scott SH. Rotational dynamics in motor cortex are consistent with a feedback controller. eLife 2021; 10:e67256. [PMID: 34730516 PMCID: PMC8691841 DOI: 10.7554/elife.67256] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 10/28/2021] [Indexed: 11/13/2022] Open
Abstract
Recent studies have identified rotational dynamics in motor cortex (MC), which many assume arise from intrinsic connections in MC. However, behavioral and neurophysiological studies suggest that MC behaves like a feedback controller where continuous sensory feedback and interactions with other brain areas contribute substantially to MC processing. We investigated these apparently conflicting theories by building recurrent neural networks that controlled a model arm and received sensory feedback from the limb. Networks were trained to counteract perturbations to the limb and to reach toward spatial targets. Network activities and sensory feedback signals to the network exhibited rotational structure even when the recurrent connections were removed. Furthermore, neural recordings in monkeys performing similar tasks also exhibited rotational structure not only in MC but also in somatosensory cortex. Our results argue that rotational structure may also reflect dynamics throughout the voluntary motor system involved in online control of motor actions.
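The rotational structure discussed above can be probed, in the spirit of jPCA, by fitting linear dynamics to a state trajectory and checking for complex eigenvalues. A minimal sketch on a toy trajectory (the least-squares fit and all names are illustrative simplifications, not the paper's analysis):

```python
import numpy as np

dt = 0.01
A_true = np.array([[0.0, -2.0], [2.0, 0.0]])  # generator of a pure rotation

# Simulate a 2D "neural state" trajectory with Euler steps.
X = np.zeros((500, 2))
X[0] = [1.0, 0.0]
for t in range(499):
    X[t + 1] = X[t] + dt * (A_true @ X[t])

# Fit linear dynamics dx/dt ~ M x by least squares (states as rows).
dX = (X[1:] - X[:-1]) / dt
M_T, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
M = M_T.T

# Rotational structure shows up as complex eigenvalues of M.
eigvals = np.linalg.eigvals(M)
rotational = np.max(np.abs(eigvals.imag)) > 1e-3
```

The paper's point is that such rotational signatures can arise from sensory feedback loops as well as intrinsic recurrence, so this test detects rotation without identifying its source.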
Affiliation(s)
| | - Kevin P Cross
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
| | - Timothy P Lillicrap
- Centre for Computation, Mathematics and Physics, University College London, London, United Kingdom
| | - Mohsen Omrani
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
| | - Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
| | - Philip N Sabes
- Department of Physiology, University of California, San Francisco, San Francisco, United States
| | - Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
| |
|
47
|
Kim MK, Sohn JW, Kim SP. Finding Kinematics-Driven Latent Neural States From Neuronal Population Activity for Motor Decoding. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2027-2036. [PMID: 34550888 DOI: 10.1109/tnsre.2021.3114367] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
While intracortical brain-machine interfaces (BMIs) demonstrate the feasibility of restoring mobility to people with paralysis, maintaining high-performance decoding in clinical BMIs remains challenging. One of the main obstacles to high-performance BMIs is the noise-prone nature of traditional decoding methods that connect neural responses explicitly with physical quantities such as velocity. In contrast, recently developed latent neural state models enable a robust readout of the contents of large-scale neuronal population activity. However, these latent neural states do not necessarily contain kinematic information useful for decoding. Therefore, this study proposes a new approach to finding kinematics-dependent latent factors by extracting the latent factors' kinematics-dependent components using linear regression. We estimated these components from the population activity through nonlinear mapping. The proposed kinematics-dependent latent factors generate neural trajectories that discriminate latent neural states before and after motion onset. We compared the decoding performance of the proposed analysis model with that of other popular models: factor analysis (FA), Gaussian process factor analysis (GPFA), latent factor analysis via dynamical systems (LFADS), preferential subspace identification (PSID), and neuronal population firing rates. The proposed analysis model yields higher decoding accuracy than the others ( % improvement on average). Our approach may pave a new way to extract latent neural states specific to kinematic information from motor cortices, potentially improving decoding performance for online intracortical BMIs.
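The core pipeline, latent factors followed by a linear-regression extraction of their kinematics-dependent components, can be sketched with PCA standing in for the factor-analysis step (toy data; the paper additionally uses a nonlinear mapping, omitted here, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_neurons, n_factors = 300, 40, 5

# Toy kinematics (2D velocity) driving a neural population.
kin = np.column_stack([np.sin(0.05 * np.arange(T)),
                       np.cos(0.05 * np.arange(T))])
W = rng.standard_normal((2, n_neurons))
activity = kin @ W + 0.5 * rng.standard_normal((T, n_neurons))

# Step 1: latent factors (PCA here, standing in for factor analysis).
Xc = activity - activity.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
factors = Xc @ Vt[:n_factors].T               # (T, n_factors)

# Step 2: kinematics-dependent component of the factors via linear regression.
B, *_ = np.linalg.lstsq(kin, factors, rcond=None)
factors_kin = kin @ B                          # kinematics-driven part only

# Sanity check: the kinematics-driven factors retain velocity information.
D, *_ = np.linalg.lstsq(factors_kin, kin, rcond=None)
r2 = 1 - np.sum((kin - factors_kin @ D) ** 2) / np.sum((kin - kin.mean(0)) ** 2)
```

Here the check is circular by construction (kinematics in, kinematics out); in the paper the value of the extracted components is established by out-of-sample decoding against FA, GPFA, LFADS, and PSID.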
|
48
|
Whiteway MR, Biderman D, Friedman Y, Dipoppa M, Buchanan EK, Wu A, Zhou J, Bonacchi N, Miska NJ, Noel JP, Rodriguez E, Schartner M, Socha K, Urai AE, Salzman CD, Cunningham JP, Paninski L. Partitioning variability in animal behavioral videos using semi-supervised variational autoencoders. PLoS Comput Biol 2021; 17:e1009439. [PMID: 34550974 PMCID: PMC8489729 DOI: 10.1371/journal.pcbi.1009439] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 10/04/2021] [Accepted: 09/09/2021] [Indexed: 12/02/2022] Open
Abstract
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
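A linear stand-in for the partitioning idea above: explain high-dimensional behavioral features with supervised pose labels by regression, then summarize the residual with unsupervised dimensionality reduction. The paper does this with a semi-supervised variational autoencoder; everything below is an illustrative toy, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)
T, D, n_pose = 200, 50, 4

pose = rng.standard_normal((T, n_pose))           # tracked keypoints (labels)
hidden = rng.standard_normal((T, 2))              # untracked behavioral source
frames = (pose @ rng.standard_normal((n_pose, D))
          + hidden @ rng.standard_normal((2, D))
          + 0.1 * rng.standard_normal((T, D)))    # high-dim video features

# Supervised partition: the part of the features explained by pose.
B, *_ = np.linalg.lstsq(pose, frames, rcond=None)
residual = frames - pose @ B

# Unsupervised partition: low-dim summary of what pose cannot explain.
Rc = residual - residual.mean(0)
_, _, Vt = np.linalg.svd(Rc, full_matrices=False)
unsup_dims = Rc @ Vt[:2].T                         # (T, 2)

# The unsupervised dims should recover the untracked source.
D2, *_ = np.linalg.lstsq(unsup_dims, hidden, rcond=None)
r2_unsup = 1 - (np.sum((hidden - unsup_dims @ D2) ** 2)
                / np.sum((hidden - hidden.mean(0)) ** 2))
```

The nonlinear VAE version serves the same purpose when pose and appearance mix nonlinearly in pixel space.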
Affiliation(s)
- Matthew R. Whiteway
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| | - Dan Biderman
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| | - Yoni Friedman
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, Massachusetts, United States of America
| | - Mario Dipoppa
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
| | - E. Kelly Buchanan
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| | - Anqi Wu
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| | - John Zhou
- Department of Computer Science, Columbia University, New York, New York, United States of America
| | | | - Nathaniel J. Miska
- Sainsbury-Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
| | - Jean-Paul Noel
- Center for Neural Science, New York University, New York, New York, United States of America
| | - Erica Rodriguez
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| | | | - Karolina Socha
- Institute of Ophthalmology, University College London, London, United Kingdom
| | - Anne E. Urai
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
| | - C. Daniel Salzman
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
- Department of Psychiatry, Columbia University, New York, New York, United States of America
- New York State Psychiatric Institute, New York, New York, United States of America
- Kavli Institute for Brain Sciences, New York, New York, United States of America
| | | | - John P. Cunningham
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
| | - Liam Paninski
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University, New York, New York, United States of America
| |
|
49
|
Noel JP, Caziot B, Bruni S, Fitzgerald NE, Avila E, Angelaki DE. Supporting generalization in non-human primate behavior by tapping into structural knowledge: Examples from sensorimotor mappings, inference, and decision-making. Prog Neurobiol 2021; 201:101996. [PMID: 33454361 PMCID: PMC8096669 DOI: 10.1016/j.pneurobio.2021.101996] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 12/15/2020] [Accepted: 01/12/2021] [Indexed: 02/05/2023]
Abstract
The complex behaviors we ultimately wish to understand are far from those currently used in systems neuroscience laboratories. A salient difference is the closed loop between action and perception, prominent in natural but not laboratory behaviors. The framework of reinforcement learning and control naturally spans action and perception, and is thus poised to inform the neurosciences of tomorrow, not only as a data analysis and modeling framework but also in guiding experimental design. We argue that this theoretical framework emphasizes active sensing, dynamic planning, and the leveraging of structural regularities as key operations for intelligent behavior within uncertain, time-varying environments. Similarly, we argue that we may study natural task strategies and their neural circuits without over-training animals when the tasks we use tap into an animal's structural knowledge. As proof of principle, we teach animals to navigate through a virtual environment - i.e., explore a well-defined and repetitive structure governed by the laws of physics - using a joystick. Once these animals have learned to 'drive', without further training they naturally (i) show zero- or one-shot learning of novel sensorimotor contingencies, (ii) infer the evolving path of dynamically changing latent variables, and (iii) make decisions consistent with maximizing reward rate. Such task designs allow for the study of flexible and generalizable, yet controlled, behaviors. In turn, they allow for the exploitation of the pillars of intelligence (flexibility, prediction, and generalization), properties whose neural underpinnings have remained elusive.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, USA
| | - Baptiste Caziot
- Center for Neural Science, New York University, New York, USA
| | - Stefania Bruni
- Center for Neural Science, New York University, New York, USA
| | | | - Eric Avila
- Center for Neural Science, New York University, New York, USA
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, USA; Tandon School of Engineering, New York University, New York, USA.
| |
|
50
|
Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021; 18. [PMID: 33254159 DOI: 10.1088/1741-2552/abcefd] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 11/30/2020] [Indexed: 12/29/2022]
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary, for example due to learning, plasticity or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
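The role of the adaptation learning rate can be illustrated with a deliberately simplified toy: LMS tracking of a drifting AR(1) coefficient on an observed state. The actual RO-adaptive LSSM adapts latent state-space parameters and optimizes the rate online; all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4000
a_true = np.linspace(0.3, 0.8, T)        # slowly drifting true dynamics
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t] * x[t - 1] + 0.1 * rng.standard_normal()

def tracking_error(mu):
    """Sum of squared parameter error after burn-in, for learning rate mu."""
    a_hat, err = 0.0, 0.0
    for t in range(1, T):
        e = x[t] - a_hat * x[t - 1]      # one-step prediction error
        a_hat += mu * e * x[t - 1]       # LMS gradient step on squared error
        if t > T // 2:                   # score only after burn-in
            err += (a_hat - a_true[t]) ** 2
    return err

# A well-chosen learning rate tracks the drift; a near-zero rate cannot.
good_rate_wins = tracking_error(1.0) < tracking_error(1e-5)
```

Too large a rate would instead amplify observation noise, which is why the paper optimizes the rate in real time rather than fixing it by hand.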
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; These authors contributed equally to this work
| | - Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; These authors contributed equally to this work
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
| |
|