1
Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024; 27:2033-2045. PMID: 39242944; PMCID: PMC11452342; DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
2
Koloski MF, Hulyalkar S, Barnes SA, Mishra J, Ramanathan DS. Cortico-striatal beta oscillations as a reward-related signal. Cogn Affect Behav Neurosci 2024; 24:839-859. PMID: 39147929; PMCID: PMC11390840; DOI: 10.3758/s13415-024-01208-6.
Abstract
The value associated with reward is sensitive to external factors, such as the time between the choice and reward delivery, as classically manipulated in temporal discounting tasks. Subjective preference between two reward options depends on the objective variables of reward magnitude and reward delay. Single-neuron correlates of reward value have been observed in regions including the ventral striatum and orbital and medial prefrontal cortex. Brain imaging studies show cortico-striatal-limbic network activity related to subjective preferences. To explore how oscillatory dynamics represent reward processing across brain regions, we measured local field potentials of rats performing a temporal discounting task. Our goal was to use a data-driven approach to identify an electrophysiological marker that correlates with reward preference. We found that reward-locked oscillations at beta frequencies signaled the magnitude of reward and decayed with longer temporal delays. Electrodes in orbitofrontal/medial prefrontal cortex, anterior insula, ventral striatum, and amygdala individually increased power and were functionally connected at beta frequencies during reward outcome. Beta power during reward outcome correlated with subjective value as defined by a computational model fit to the discounting behavior. These data suggest that cortico-striatal beta oscillations are a reward-related signal that may represent subjective value, with potential to serve as a biomarker and therapeutic target.
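As a rough illustration of the kind of band-limited LFP measure this abstract describes, beta power can be estimated from the FFT of a recording. The 15-30 Hz beta range, sampling rate, and synthetic one-channel signal below are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of a 1-D signal within band = (lo, hi) Hz, estimated via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].mean()

# Synthetic "LFP": a 20 Hz beta oscillation buried in noise.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 20.0 * t) + 0.5 * rng.standard_normal(t.size)

beta_power = band_power(lfp, fs, (15.0, 30.0))   # band carrying the oscillation
gamma_power = band_power(lfp, fs, (40.0, 80.0))  # control band (noise only)
```

In a real analysis this would be computed per electrode around reward outcome and correlated with model-derived subjective value.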
Affiliation(s)
- M F Koloski
- Mental Health Service, VA San Diego Healthcare System, La Jolla, CA, USA
- Department of Psychiatry, UC San Diego, La Jolla, CA, USA
- S Hulyalkar
- Mental Health Service, VA San Diego Healthcare System, La Jolla, CA, USA
- Department of Psychiatry, UC San Diego, La Jolla, CA, USA
- S A Barnes
- Department of Psychiatry, UC San Diego, La Jolla, CA, USA
- J Mishra
- Department of Psychiatry, UC San Diego, La Jolla, CA, USA
- D S Ramanathan
- Mental Health Service, VA San Diego Healthcare System, La Jolla, CA, USA
- Department of Psychiatry, UC San Diego, La Jolla, CA, USA
3
Wu S, Zhang X, Wang Y. Neural Manifold Constraint for Spike Prediction Models Under Behavioral Reinforcement. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2772-2781. PMID: 39074025; DOI: 10.1109/tnsre.2024.3435568.
Abstract
Spike prediction models effectively predict downstream spike trains from upstream neural activity for neural prostheses. Such prostheses could potentially restore damaged neural communication pathways by using predicted patterns to guide electrical stimulation of downstream regions. Since the ground truth of downstream neural activity is unavailable for subjects with the damage, reinforcement learning (RL) with behavior-level rewards becomes necessary for model training. However, existing models do not impose any constraint on the generated firing patterns and neglect the correlations among neural activities. Thus, the model outputs can deviate greatly from the natural range of neural activities, raising concerns for clinical usage. This study proposes a neural manifold constraint to solve this problem, shaping RL-generated spike trains in the feature space. The constraint terms describe the first- and second-order statistics of the neural manifold estimated from neural recordings made while subjects move freely. The models can then be optimized within the neural manifold by behavioral reinforcement. We test the method by predicting primary motor cortex (M1) spikes from medial prefrontal cortex (mPFC) spikes while rats perform a two-lever discrimination task. Results show that the neural activity generated by constrained models resembles the real M1 recordings. Compared with models without constraints, our approach achieves similar behavioral success rates but reduces the mean squared error of neural firing by 61%. The constraints also increase the model's robustness across data segments and induce realistic neural correlations. Our method provides a promising tool to restore transregional communication with high behavioral performance and more realistic microscopic patterns.
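A first- and second-order statistics constraint of the kind the abstract mentions can be sketched as a penalty on the mean and covariance of generated activity relative to reference recordings. The exact loss in the paper is not reproduced here; the `manifold_penalty` function and the Gaussian toy data are illustrative assumptions.

```python
import numpy as np

def manifold_penalty(generated, reference):
    """Squared deviation of generated activity's first-order (mean) and
    second-order (covariance) statistics from those of reference recordings.

    Both inputs are (samples, channels) arrays."""
    mean_term = np.sum((generated.mean(axis=0) - reference.mean(axis=0)) ** 2)
    cov_term = np.sum((np.cov(generated, rowvar=False)
                       - np.cov(reference, rowvar=False)) ** 2)
    return mean_term + cov_term

rng = np.random.default_rng(0)
reference = rng.standard_normal((500, 3))           # stand-in for recorded activity
on_manifold = rng.standard_normal((500, 3))         # matches the reference statistics
off_manifold = rng.standard_normal((500, 3)) + 2.0  # mean shifted off the manifold

p_on = manifold_penalty(on_manifold, reference)
p_off = manifold_penalty(off_manifold, reference)
```

Added to a behavior-level RL objective, such a term would push generated firing patterns toward the statistics of natural recordings.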
4
Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. PMID: 38513289; DOI: 10.1088/1741-2552/ad3678.
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, the template can be detected with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data. Here we address this need by developing the multimodal event detector (MED) algorithm, which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye-movement onset and classify eye-movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
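For the unimodal Gaussian case the abstract contrasts with, the matched filter reduces to cross-correlating the known template against the recording and taking the peak lag. The template shape, noise level, and delay below are illustrative assumptions; the MED's multimodal likelihood is not reproduced here.

```python
import numpy as np

def matched_filter_delay(signal, template):
    """Maximum-likelihood delay estimate for a known template in white
    Gaussian noise: the lag that maximizes template-signal correlation."""
    scores = np.correlate(signal, template, mode="valid")
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
template = np.hanning(50)          # assumed event waveform
true_delay = 120
signal = 0.1 * rng.standard_normal(500)
signal[true_delay:true_delay + template.size] += template

estimated_delay = matched_filter_delay(signal, template)
```

The MED generalizes this idea by replacing the Gaussian correlation score with a joint Gaussian plus point-process likelihood maximized over both event time and event class.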
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
5
Chang YJ, Chen YI, Yeh HC, Santacruz SR. Neurobiologically realistic neural network enables cross-scale modeling of neural dynamics. Sci Rep 2024; 14:5145. PMID: 38429297; PMCID: PMC10907713; DOI: 10.1038/s41598-024-54593-w.
Abstract
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Whereas brain activity has been studied at the micro- to meso-scale to reveal connections between dynamical patterns and behavior, investigations of neural population dynamics are mainly limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics that can infer and map neural data from multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (2 weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behavior agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to a comprehensive understanding of brain computation, where network mechanisms of multi-scale activity are critical.
Affiliation(s)
- Yin-Jui Chang
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Yuan-I Chen
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Hsin-Chih Yeh
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
- Samantha R Santacruz
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
6
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450; PMCID: PMC10913727; DOI: 10.1088/1741-2552/ad1053.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach-and-grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared with using a single modality. Finally, compared with existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while better identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
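The combined Poisson-Gaussian observation model can be made concrete with a joint log-likelihood of both modalities given a latent state. The observation matrices, latent values, and noise-free toy data below are illustrative assumptions; the subspace identification procedure itself is not sketched.

```python
import numpy as np

def joint_log_likelihood(x, spikes, lfp, C_spk, C_lfp, R):
    """Log-likelihood (up to constants) of multimodal observations given latent x:
    spike counts ~ Poisson(exp(C_spk @ x)), LFP ~ Gaussian(C_lfp @ x, R)."""
    log_rate = C_spk @ x
    ll_spikes = np.sum(spikes * log_rate - np.exp(log_rate))  # drops log(k!) term
    resid = lfp - C_lfp @ x
    ll_lfp = -0.5 * resid @ np.linalg.solve(R, resid)
    return ll_spikes + ll_lfp

# Illustrative observation matrices and a true 2-D latent state.
C_spk = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5], [-0.5, 0.5]])
C_lfp = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
R = np.eye(4)
x_true = np.array([0.5, -0.3])

spikes = np.round(np.exp(C_spk @ x_true))  # counts near the expected rates
lfp = C_lfp @ x_true                       # noise-free for illustration

ll_true = joint_log_likelihood(x_true, spikes, lfp, C_spk, C_lfp, R)
ll_wrong = joint_log_likelihood(x_true + 2.0, spikes, lfp, C_spk, C_lfp, R)
```

Inference of the latent state amounts to maximizing this joint likelihood, which fuses the discrete spike and continuous field modalities in one objective.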
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
7
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612; DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
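The modeling issue can be illustrated with a toy linear latent model driven by a measured input: a model fit to the trajectory without accounting for the input would attribute the input's temporal structure to intrinsic dynamics. All matrices and the sinusoidal input below are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

# Latent linear dynamics with a measured input u_t:
#   x_{t+1} = A x_t + B u_t + w_t
rng = np.random.default_rng(2)
A = np.array([[0.95, 0.05], [-0.05, 0.95]])  # intrinsic dynamics (slow stable rotation)
B = np.array([[0.5], [0.0]])                 # input coupling

T = 400
u = np.sin(2 * np.pi * np.arange(T) / 40.0)[:, None]  # temporally structured input

def simulate(drive):
    """Roll the latent dynamics forward under a given input sequence."""
    x = np.zeros((T, 2))
    for t in range(T - 1):
        x[t + 1] = A @ x[t] + (B @ drive[t]) + 0.01 * rng.standard_normal(2)
    return x

x_with_input = simulate(u)                    # intrinsic + input-driven dynamics
x_without_input = simulate(np.zeros_like(u))  # intrinsic dynamics alone
```

The input-driven trajectory carries a strong 40-sample oscillation that is absent from the intrinsic-only run; a learner that ignores `u` would have to explain that oscillation with spurious intrinsic modes.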
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
8
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. PMID: 38347042; PMCID: PMC10861525; DOI: 10.1038/s41598-024-53907-2.
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been the subject of experimental and theoretical investigations, generating numerous interpretations of the mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. The dynamical model uses the jPCA method, which holistically characterizes oscillatory activity in neuronal populations by maximizing the rotational dynamics in the data. Different interpretations of the rotational dynamics revealed by the jPCA approach have been proposed, yet the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found that rotational dynamics were consistently accounted for by a traveling-wave pattern. To quantify rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing the extent of rotation in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomenon, so previous interpretations that treated them as separate entities need reevaluation.
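One rough way to quantify how rotational a latent trajectory is, in the spirit of jPCA though not the paper's gyration number, is to fit one-step linear dynamics and compare the skew-symmetric (purely rotational) part of the fitted matrix against its symmetric part. The ratio and the two toy trajectories are illustrative assumptions.

```python
import numpy as np

def rotation_index(X):
    """Fit one-step linear dynamics dX = X M by least squares and return the
    fraction of M attributable to its skew-symmetric (rotational) part."""
    dX = np.diff(X, axis=0)
    M, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
    skew = 0.5 * (M - M.T)
    sym = 0.5 * (M + M.T)
    return np.linalg.norm(skew) / (np.linalg.norm(skew) + np.linalg.norm(sym))

# A rotating trajectory (rows are states) versus a purely decaying one.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
rotation = np.zeros((200, 2))
rotation[0] = [1.0, 0.0]
for t in range(199):
    rotation[t + 1] = R @ rotation[t]

decay = np.array([[0.9 ** t, 0.8 ** t] for t in range(200)])

idx_rot = rotation_index(rotation)  # near 1: dynamics are dominated by rotation
idx_dec = rotation_index(decay)     # near 0: dynamics are purely contractive
```

A traveling wave across a population produces exactly the kind of phase-shifted activity that scores high on such rotational measures, which is the paper's central observation.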
Affiliation(s)
- Ekaterina Kuzmina
- Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
- Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia, 194223
9
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors, such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
10
Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. PMID: 38083862; PMCID: PMC10714100; DOI: 10.1088/1741-2552/ad038d.
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of the densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations, called the Poisson cubature filter (PCF), and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace-approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data, and unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes, unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
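The deterministic-sampling idea behind cubature can be illustrated with the standard third-degree spherical-radial rule for a Gaussian: 2n symmetric points approximate Gaussian expectations without any random sampling. The test functions below are illustrative choices, not the PCF's actual Poisson-observation terms.

```python
import numpy as np

def cubature_expectation(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov) with the third-degree cubature
    rule: 2n equally weighted points at mean +/- sqrt(n) * L[:, i],
    where L is the Cholesky factor of cov."""
    n = mean.size
    L = np.linalg.cholesky(cov)
    points = []
    for i in range(n):
        offset = np.sqrt(n) * L[:, i]
        points.append(mean + offset)
        points.append(mean - offset)
    return np.mean([f(p) for p in points], axis=0)

mean = np.zeros(2)
cov = np.eye(2)

# Exact for polynomials up to degree 3: E[x_0^2] = 1.
second_moment = cubature_expectation(lambda x: x[0] ** 2, mean, cov)
# Approximate for general nonlinear f: E[exp(x_0)] = exp(0.5).
exp_moment = cubature_expectation(lambda x: np.exp(x[0]), mean, cov)
```

In a filter, such rules replace the intractable expectations in the measurement update, which is what the abstract means by deterministic sampling for Poisson observations.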
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
11
Hosoda K, Seno S, Kamiura R, Murakami N, Kondoh M. Biodiversity and Constrained Information Dynamics in Ecosystems: A Framework for Living Systems. Entropy (Basel) 2023; 25:1624. PMID: 38136504; PMCID: PMC10742641; DOI: 10.3390/e25121624.
Abstract
The increase in ecosystem biodiversity can be perceived as one of the universal processes converting energy into information across a wide range of living systems. This study examines the dynamics of living systems, highlighting the distinction between ex post adaptation, typically associated with natural selection, and its proactive counterpart, ex ante adaptability. Through coalescence experiments using synthetic ecosystems, we (i) quantified ecosystem stability, (ii) identified correlations between several biodiversity indices and that stability, (iii) proposed a mechanism for increasing biodiversity through moderate inter-ecosystem interactions, and (iv) inferred that the information carrier of ecosystems is species composition, or merged genomic information. We additionally suggest that (v) changes in ecosystems are constrained to a low-dimensional state space, with three distinct alteration trajectories (fluctuations, rapid environmental responses, and long-term changes) all converging into this state space. These findings suggest that daily fluctuations may predict broader ecosystem changes. Our experimental insights, coupled with an exploration of the information dynamics of living systems from an ecosystem perspective, enhance our predictive capabilities for natural ecosystem behavior, providing a universal framework for understanding a broad spectrum of living systems.
Affiliation(s)
- Kazufumi Hosoda
- RIKEN Center for Biosystems Dynamics Research, 6-2-3 Furuedai, Suita, Osaka 565-0874, Japan
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Osaka 565-0871, Japan
- Institute for Transdisciplinary Graduate Degree Programs, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
- Life and Medical Sciences Area, Health Sciences Discipline, Kobe University, Tomogaoka 7-10-2, Suma-ku, Kobe, Hyogo 654-0142, Japan
- Shigeto Seno
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
- Rikuto Kamiura
- RIKEN Center for Biosystems Dynamics Research, 6-2-3 Furuedai, Suita, Osaka 565-0874, Japan
- Naomi Murakami
- RIKEN Center for Biosystems Dynamics Research, 6-2-3 Furuedai, Suita, Osaka 565-0874, Japan
- Michio Kondoh
- Graduate School of Life Sciences, Tohoku University, 6-3 Aoba, Aramaki, Aoba-ku, Sendai 980-8578, Japan
12
Sadras N, Sani OG, Ahmadipour P, Shanechi MM. Post-stimulus encoding of decision confidence in EEG: toward a brain-computer interface for decision making. J Neural Eng 2023; 20:056012. PMID: 37524073; DOI: 10.1088/1741-2552/acec14.
Abstract
Objective. When making decisions, humans can evaluate how likely they are to be correct. If this subjective confidence could be reliably decoded from brain activity, it would be possible to build a brain-computer interface (BCI) that improves decision performance by automatically providing more information to the user, if needed, based on their confidence. But this possibility depends on whether confidence can be decoded right after stimulus presentation and before the response so that a corrective action can be taken in time. Although prior work has shown that decision confidence is represented in brain signals, it is unclear whether the representation is stimulus-locked or response-locked, and whether stimulus-locked pre-response decoding is sufficiently accurate to enable such a BCI. Approach. We investigate the neural correlates of confidence by collecting high-density electroencephalography (EEG) during a perceptual decision task with realistic stimuli. Importantly, we design our task to include a post-stimulus gap that prevents the confounding of stimulus-locked activity by response-locked activity and vice versa, and then compare it with a task without this gap. Main results. We perform event-related potential and source-localization analyses. Our analyses suggest that the neural correlates of confidence are stimulus-locked, and that the absence of a post-stimulus gap could cause these correlates to incorrectly appear response-locked. By preventing response-locked activity from confounding stimulus-locked activity, we then show that confidence can be reliably decoded from single-trial stimulus-locked pre-response EEG alone. We also identify a high-performance classification algorithm by comparing a battery of algorithms. Lastly, we design a simulated BCI framework to show that the EEG classification is accurate enough to build a BCI and that the decoded confidence could be used to improve decision-making performance, particularly when the task difficulty and cost of errors are high. Significance. Our results show the feasibility of noninvasive EEG-based BCIs for improving human decision making.
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
13
Xiao J, Provenza NR, Asfouri J, Myers J, Mathura RK, Metzger B, Adkinson JA, Allawala AB, Pirtle V, Oswalt D, Shofty B, Robinson ME, Mathew SJ, Goodman WK, Pouratian N, Schrater PR, Patel AB, Tolias AS, Bijanki KR, Pitkow X, Sheth SA. Decoding Depression Severity From Intracranial Neural Activity. Biol Psychiatry 2023; 94:445-453. [PMID: 36736418 PMCID: PMC10394110 DOI: 10.1016/j.biopsych.2023.01.020] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Revised: 01/09/2023] [Accepted: 01/25/2023] [Indexed: 02/05/2023]
Abstract
BACKGROUND Disorders of mood and cognition are prevalent, disabling, and notoriously difficult to treat. Fueling this challenge in treatment is a significant gap in our understanding of their neurophysiological basis. METHODS We recorded high-density neural activity from intracranial electrodes implanted in depression-relevant prefrontal cortical regions in 3 human subjects with severe depression. Neural recordings were labeled with depression severity scores across a wide dynamic range using an adaptive assessment that allowed sampling with a temporal frequency greater than that possible with typical rating scales. We modeled these data using regularized regression techniques with region selection to decode depression severity from the prefrontal recordings. RESULTS Across prefrontal regions, we found that reduced depression severity is associated with decreased low-frequency neural activity and increased high-frequency activity. When constraining our model to decode using a single region, spectral changes in the anterior cingulate cortex best predicted depression severity in all 3 subjects. Relaxing this constraint revealed unique, individual-specific sets of spatiospectral features predictive of symptom severity, reflecting the heterogeneous nature of depression. CONCLUSIONS The ability to decode depression severity from neural activity increases our fundamental understanding of how depression manifests in the human brain and provides a target neural signature for personalized neuromodulation therapies.
Affiliation(s)
- Jiayang Xiao
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas; Department of Neuroscience, Baylor College of Medicine, Houston, Texas
| | - Nicole R Provenza
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Joseph Asfouri
- Department of Electrical and Computer Engineering, Rice University, Houston, Texas
| | - John Myers
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Raissa K Mathura
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Brian Metzger
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Joshua A Adkinson
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Victoria Pirtle
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Denise Oswalt
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Ben Shofty
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Meghan E Robinson
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Sanjay J Mathew
- Department of Psychiatry, Baylor College of Medicine, Houston, Texas
| | - Wayne K Goodman
- Department of Psychiatry, Baylor College of Medicine, Houston, Texas
| | - Nader Pouratian
- Department of Neurological Surgery, UT Southwestern Medical Center, Dallas, Texas
| | - Paul R Schrater
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota; Department of Psychology, University of Minnesota, Minneapolis, Minnesota
| | - Ankit B Patel
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
| | - Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
| | - Kelly R Bijanki
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas
| | - Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Electrical and Computer Engineering, Rice University, Houston, Texas; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, Texas
| | - Sameer A Sheth
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas.
14
Dong Y, Wang S, Huang Q, Berg RW, Li G, He J. Neural Decoding for Intracortical Brain-Computer Interfaces. CYBORG AND BIONIC SYSTEMS 2023; 4:0044. [PMID: 37519930 PMCID: PMC10380541 DOI: 10.34133/cbsystems.0044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Accepted: 07/04/2023] [Indexed: 08/01/2023] Open
Abstract
Brain-computer interfaces have revolutionized the field of neuroscience by providing a solution for paralyzed patients to control external devices and improve their quality of daily life. To control effectors accurately and stably, it is important for decoders to recognize an individual's motor intention from neural activity recorded either noninvasively or intracortically. Intracortical recording is an invasive way of measuring neural electrical activity with high temporal and spatial resolution. Herein, we review recent developments in neural signal decoding methods for intracortical brain-computer interfaces. These methods have achieved good performance in analyzing neural activity and in controlling robots and prostheses in nonhuman primates and humans. For more complex paradigms in motor rehabilitation or other clinical applications, there remains considerable room for further improvement of decoders.
Affiliation(s)
- Yuanrui Dong
- School of Mechatronical Engineering and Beijing Advanced Innovation Center for Intelligent Robots, Beijing Institute of Technology, Beijing 100081, China
| | - Shirong Wang
- School of Mechatronical Engineering and Beijing Advanced Innovation Center for Intelligent Robots, Beijing Institute of Technology, Beijing 100081, China
| | - Qiang Huang
- School of Mechatronical Engineering and Beijing Advanced Innovation Center for Intelligent Robots, Beijing Institute of Technology, Beijing 100081, China
| | - Rune W. Berg
- Department of Neuroscience, University of Copenhagen, Copenhagen 2200, Denmark
| | - Guanghui Li
- Department of Neuroscience, University of Copenhagen, Copenhagen 2200, Denmark
| | - Jiping He
- School of Mechatronical Engineering and Beijing Advanced Innovation Center for Intelligent Robots, Beijing Institute of Technology, Beijing 100081, China
15
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. [PMID: 37402376 PMCID: PMC10527529 DOI: 10.1016/j.cub.2023.06.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/24/2023] [Accepted: 06/09/2023] [Indexed: 07/06/2023]
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether, our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
| | - Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA.
| | - Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
| | - Amy L Orsborn
- Departments of Bioengineering and Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
| | - Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA.
| | - Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA.
16
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.26.542509. [PMID: 37398400 PMCID: PMC10312539 DOI: 10.1101/2023.05.26.542509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/04/2023]
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach-and-grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared with using a single modality. Finally, compared with existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while identifying the dynamical modes more accurately and predicting neural activity with similar or better accuracy. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
17
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.13.532479. [PMID: 36993605 PMCID: PMC10054986 DOI: 10.1101/2023.03.13.532479] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
18
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.14.532554. [PMID: 36993213 PMCID: PMC10055042 DOI: 10.1101/2023.03.14.532554] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying the neural computations underlying a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior, may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task, whereas other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task-instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
19
Saalmann YB, Mofakham S, Mikell CB, Djuric PM. Microscale multicircuit brain stimulation: Achieving real-time brain state control for novel applications. CURRENT RESEARCH IN NEUROBIOLOGY 2022; 4:100071. [PMID: 36619175 PMCID: PMC9816916 DOI: 10.1016/j.crneur.2022.100071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 11/30/2022] [Accepted: 12/19/2022] [Indexed: 12/30/2022] Open
Abstract
Neurological and psychiatric disorders typically result from dysfunction across multiple neural circuits. Most of these disorders lack a satisfactory neuromodulation treatment. However, deep brain stimulation (DBS) has been successful in a limited number of disorders; DBS typically targets one or two brain areas with single contacts on relatively large electrodes, allowing for only coarse modulation of circuit function. Because most neuropsychiatric disorders are characterized by dysfunction in distributed neural circuits, each requiring fine, tailored modulation, this approach holds limited promise. To develop the next generation of neuromodulation therapies, we will have to achieve fine-grained, closed-loop control over multiple neural circuits. Recent work has demonstrated spatial and frequency selectivity using microstimulation with many small, closely spaced contacts, mimicking endogenous neural dynamics. Using custom electrode design and stimulation parameters, it should be possible to achieve bidirectional control over behavioral outcomes, such as increasing or decreasing arousal during central thalamic stimulation. Here, we discuss one possible approach, which we term microscale multicircuit brain stimulation (MMBS). We discuss how machine learning leverages behavioral and neural data to find optimal stimulation parameters across multiple contacts, driving the brain toward desired states associated with behavioral goals. We expound a mathematical framework for MMBS in which behavioral and neural responses adjust the model in real time, allowing stimulation to be adjusted in real time as well. These technologies will be critical to the development of the next generation of neurostimulation therapies, which will allow us to treat problems such as disorders of consciousness and cognition.
Affiliation(s)
- Yuri B. Saalmann
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Wisconsin National Primate Research Center, University of Wisconsin-Madison, Madison, WI, USA
| | - Sima Mofakham
- Department of Neurological Surgery, Stony Brook University Hospital, Stony Brook, NY, USA
- Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY, USA
| | - Charles B. Mikell
- Department of Neurological Surgery, Stony Brook University Hospital, Stony Brook, NY, USA
| | - Petar M. Djuric
- Department of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY, USA
20
Overcoming the Domain Gap in Neural Action Representations. Int J Comput Vis 2022. [DOI: 10.1007/s11263-022-01713-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Relating behavior to brain activity in animals is a fundamental goal in neuroscience, with practical applications in building robust brain-machine interfaces. However, the domain gap between individuals is a major issue that prevents the training of general models that work on unlabeled subjects. Since 3D pose data can now be reliably extracted from multi-view video sequences without manual intervention, we propose to use it to guide the encoding of neural action representations, together with a set of neural and behavioral augmentations exploiting the properties of microscopy imaging. To test our method, we collect a large dataset that features flies and their neural activity. To reduce the domain gap, during training, we mix features of neural and behavioral data across flies that appear to be performing similar actions. To show that our method can generalize to further neural modalities and other downstream tasks, we test it on a human electrocorticography dataset, as well as RGB video data of human activities from different viewpoints. We believe our work will enable more robust neural decoding algorithms to be used in future brain-machine interfaces.
21
Song CY, Hsieh HL, Pesaran B, Shanechi MM. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. J Neural Eng 2022; 19. [PMID: 36261030 DOI: 10.1088/1741-2552/ac9b94] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 10/19/2022] [Indexed: 01/11/2023]
Abstract
Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime detection methods were developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching and are for stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared with prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Han-Lin Hsieh
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States of America
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
22
Gallego-Carracedo C, Perich MG, Chowdhury RH, Miller LE, Gallego JÁ. Local field potentials reflect cortical population dynamics in a region-specific and frequency-dependent manner. eLife 2022; 11:73155. [PMID: 35968845 PMCID: PMC9470163 DOI: 10.7554/elife.73155] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 08/02/2022] [Indexed: 11/13/2022] Open
Abstract
The spiking activity of populations of cortical neurons is well described by the dynamics of a small number of population-wide covariance patterns, the 'latent dynamics'. These latent dynamics are largely driven by the same correlated synaptic currents across the circuit that determine the generation of local field potentials (LFPs). Yet, the relationship between latent dynamics and LFPs remains largely unexplored. Here, we characterised this relationship for three different regions of primate sensorimotor cortex during reaching. The correlation between latent dynamics and LFPs was frequency-dependent and varied across regions. However, for any given region, this relationship remained stable throughout the behaviour: in each of primary motor and premotor cortices, the LFP-latent dynamics correlation profile was remarkably similar between movement planning and execution. These robust associations between LFPs and neural population latent dynamics help bridge the wealth of studies reporting neural correlates of behaviour using either type of recording.
Affiliation(s)
| | - Matthew G Perich
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
| | - Raeed H Chowdhury
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
| | - Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Evanston, United States
| | - Juan Álvaro Gallego
- Department of Bioengineering, Imperial College London, London, United Kingdom
23
Fang H, Yang Y. Designing and Validating a Robust Adaptive Neuromodulation Algorithm for Closed-Loop Control of Brain States. J Neural Eng 2022; 19. [PMID: 35576912 DOI: 10.1088/1741-2552/ac7005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/16/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Neuromodulation systems that use closed-loop brain stimulation to control brain states can provide new therapies for brain disorders. To date, closed-loop brain stimulation has largely used linear time-invariant controllers. However, nonlinear time-varying brain network dynamics and external disturbances can appear during real-time stimulation, collectively leading to real-time model uncertainty. Real-time model uncertainty can degrade the performance or even cause instability of time-invariant controllers. Three problems need to be resolved to enable accurate and stable control under model uncertainty. First, an adaptive controller is needed to track the model uncertainty. Second, the adaptive controller additionally needs to be robust to noise and disturbances. Third, theoretical analyses of stability and robustness are needed as prerequisites for stable operation of the controller in practical applications. APPROACH We develop a robust adaptive neuromodulation algorithm that solves the above three problems. First, we develop a state-space brain network model that explicitly includes nonlinear terms of real-time model uncertainty and design an adaptive controller to track and cancel the model uncertainty. Second, to improve the robustness of the adaptive controller, we design two linear filters to increase steady-state control accuracy and reduce sensitivity to high-frequency noise and disturbances. Third, we conduct theoretical analyses to prove the stability of the neuromodulation algorithm and establish a trade-off between stability and robustness, which we further use to optimize the algorithm design. Finally, we validate the algorithm using comprehensive Monte Carlo simulations that span a broad range of model nonlinearity, uncertainty, and complexity. 
MAIN RESULTS The robust adaptive neuromodulation algorithm accurately tracks various types of target brain state trajectories, enables stable and robust control, and significantly outperforms state-of-the-art neuromodulation algorithms. SIGNIFICANCE Our algorithm has implications for future designs of precise, stable, and robust closed-loop brain stimulation systems to treat brain disorders and facilitate brain functions.
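The uncertainty-cancelling adaptive-control idea described in this abstract can be sketched in toy form. This is a minimal illustration, not the authors' algorithm: the scalar plant, the sin(x) uncertainty term, and all gains below are invented for demonstration.

```python
import numpy as np

# Toy brain-state model x[t+1] = a*x[t] + b*u[t] + d(x[t]) with an unknown
# nonlinearity d(x) = c_true*sin(x) standing in for "real-time model
# uncertainty". The adaptive controller estimates d online and cancels it.
a, b, c_true = 0.9, 1.0, 0.4             # assumed toy dynamics
T = 500
x_ref = np.sin(0.05 * np.arange(T + 1))  # target brain-state trajectory

x, theta, eta = 0.0, 0.0, 0.5            # state, uncertainty estimate, learning rate
errors = []
for t in range(T):
    phi = np.sin(x)                      # basis function for the uncertainty
    # control law: cancel the known dynamics and the *estimated* uncertainty
    u = (x_ref[t + 1] - a * x - theta * phi) / b
    x_next = a * x + b * u + c_true * phi          # true plant response
    pred_err = x_next - (a * x + b * u + theta * phi)
    theta += eta * pred_err * phi        # gradient-style adaptive update
    errors.append(abs(x_next - x_ref[t + 1]))
    x = x_next

early_err = float(np.mean(errors[:100]))
late_err = float(np.mean(errors[-100:]))
```

As the estimate converges to the true uncertainty gain, tracking error shrinks; the paper's contribution lies in adding robustness filtering and stability guarantees on top of this basic adaptation loop.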
Affiliation(s)
- Hao Fang
- University of Central Florida, Research 1 Room 334, 313/316, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States
- Yuxiao Yang
- Department of Electrical and Computer Engineering, University of Central Florida, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States

24
Peterson SM, Singh SH, Dichter B, Scheid M, Rao RPN, Brunton BW. AJILE12: Long-term naturalistic human intracranial neural recordings and pose. Sci Data 2022; 9:184. [PMID: 35449141 PMCID: PMC9023453 DOI: 10.1038/s41597-022-01280-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 03/25/2022] [Indexed: 12/22/2022] Open
Abstract
Understanding the neural basis of human movement in naturalistic scenarios is critical for expanding neuroscience research beyond constrained laboratory paradigms. Here, we describe our Annotated Joints in Long-term Electrocorticography for 12 human participants (AJILE12) dataset, the largest human neurobehavioral dataset that is publicly available; the dataset was recorded opportunistically during passive clinical epilepsy monitoring. AJILE12 includes synchronized intracranial neural recordings and upper body pose trajectories across 55 semi-continuous days of naturalistic movements, along with relevant metadata, including thousands of wrist movement events and annotated behavioral states. Neural recordings are available at 500 Hz from at least 64 electrodes per participant, for a total of 1280 hours. Pose trajectories at 9 upper-body keypoints were estimated from 118 million video frames. To facilitate data exploration and reuse, we have shared AJILE12 on The DANDI Archive in the Neurodata Without Borders (NWB) data standard and developed a browser-based dashboard.
Affiliation(s)
- Steven M Peterson
- University of Washington, Department of Biology, Seattle, 98195, USA; University of Washington, eScience Institute, Seattle, USA
- Satpreet H Singh
- University of Washington, Department of Electrical and Computer Engineering, Seattle, USA
- Rajesh P N Rao
- University of Washington, Paul G. Allen School of Computer Science and Engineering, Seattle, USA; University of Washington, Center for Neurotechnology, Seattle, USA
- Bingni W Brunton
- University of Washington, Department of Biology, Seattle, 98195, USA; University of Washington, eScience Institute, Seattle, USA

25
Marshall JD, Li T, Wu JH, Dunn TW. Leaving flatland: Advances in 3D behavioral measurement. Curr Opin Neurobiol 2022; 73:102522. [PMID: 35453000 DOI: 10.1016/j.conb.2022.02.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 01/25/2022] [Accepted: 02/02/2022] [Indexed: 01/10/2023]
Abstract
Animals move in three dimensions (3D). Thus, 3D measurement is necessary to report the true kinematics of animal movement. Existing 3D measurement techniques draw on specialized hardware, such as motion capture or depth cameras, as well as deep multi-view and monocular computer vision. Continued advances at the intersection of deep learning and computer vision will facilitate 3D tracking across more anatomical features, with less training data, in additional species, and within more natural, occlusive environments. 3D behavioral measurement enables unique applications in phenotyping, investigating the neural basis of behavior, and designing artificial agents capable of imitating animal behavior.
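The core geometric step behind multi-view 3D pose measurement is triangulation. As a hedged sketch (the camera intrinsics, pose, and keypoint below are made up for illustration), here is direct-linear-transform triangulation of one keypoint from two synchronized views:

```python
import numpy as np

def project(P, X):
    """Project a homogeneous 3D point X through a 3x4 camera matrix P."""
    p = P @ X
    return p[:2] / p[2]

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: solve A X = 0 built from both views."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

# Invented camera rig: shared intrinsics, second camera rotated and translated
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
ang = np.deg2rad(10)
R = np.array([[np.cos(ang), 0, np.sin(ang)],
              [0, 1, 0],
              [-np.sin(ang), 0, np.cos(ang)]])
t = np.array([[-1.0], [0.0], [0.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

X_true = np.array([0.2, -0.1, 4.0, 1.0])   # a hypothetical keypoint in 3D
uv1, uv2 = project(P1, X_true), project(P2, X_true)
X_est = triangulate(P1, P2, uv1, uv2)
```

In practice the 2D keypoints come from learned detectors and more than two views are stacked into the same linear system, but the recovery step is this one.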
Affiliation(s)
- Jesse D Marshall
- Harvard University, Department of Organismic and Evolutionary Biology, Cambridge, MA 02138, USA
- Tianqing Li
- Duke University, Pratt School of Engineering, Department of Biomedical Engineering, Durham, NC 27708, USA
- Joshua H Wu
- Duke University, Pratt School of Engineering, Department of Biomedical Engineering, Durham, NC 27708, USA
- Timothy W Dunn
- Duke University, Pratt School of Engineering, Department of Biomedical Engineering, Durham, NC 27708, USA

26
O'Reilly D, Delis I. A network information theoretic framework to characterise muscle synergies in space and time. J Neural Eng 2022; 19. [PMID: 35108699 DOI: 10.1088/1741-2552/ac5150] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 02/02/2022] [Indexed: 11/12/2022]
Abstract
Objective Current approaches to muscle synergy extraction rely on linear dimensionality reduction algorithms that make specific assumptions on the underlying signals. However, to capture nonlinear, time-varying, large-scale but also muscle-specific interactions, a more generalised approach is required. Approach Here we developed a novel framework for muscle synergy extraction that relaxes model assumptions by using a combination of information theory, network theory and dimensionality reduction. We first quantify informational dynamics between muscles, time-samples or muscle-time pairings using a novel mutual information formulation. We then model these pairwise interactions as multiplex networks and identify modules representing the network architecture. We employ this modularity criterion as the input parameter for dimensionality reduction, which verifiably extracts the identified modules, and also to characterise salient structures within each module. Main results This novel framework captures spatial, temporal and spatiotemporal interactions across two benchmark datasets of reaching movements, producing distinct spatial groupings and both tonic and phasic temporal patterns. Readily interpretable muscle synergies spanning multiple spatial and temporal scales were identified, demonstrating significant task dependence, the ability to capture trial-to-trial fluctuations and concordance across participants. Furthermore, our framework identifies submodular structures that represent the distributed networks of co-occurring signal interactions across scales. Significance The capabilities of this framework are illustrated through its continuity with previous research and the novelty of the insights gained. Several previous limitations are circumvented, including the extraction of functionally meaningful and multiplexed pairwise muscle couplings under relaxed model assumptions.
The extracted synergies provide a holistic view of the movement while important details of task performance are readily interpretable. The identified muscle groupings transcend biomechanical constraints and the temporal patterns reveal characteristics of fundamental motor control mechanisms. We conclude that this framework opens new opportunities for muscle synergy research and can constitute a bridge between existing models and recent network-theoretic endeavours.
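The pairwise-coupling step of such a framework can be sketched with a plain histogram mutual-information estimate. This is a simplified stand-in, not the paper's formulation (the channel groupings and signal model below are invented):

```python
import numpy as np

# Synthetic "EMG": muscles 0-1 share one neural drive, muscles 2-3 another.
rng = np.random.default_rng(0)
n = 5000
drive_a, drive_b = rng.normal(size=n), rng.normal(size=n)
emg = np.stack([
    drive_a + 0.3 * rng.normal(size=n),
    drive_a + 0.3 * rng.normal(size=n),
    drive_b + 0.3 * rng.normal(size=n),
    drive_b + 0.3 * rng.normal(size=n),
])

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# Pairwise MI matrix serves as a weighted "muscle network" adjacency,
# which module-detection algorithms could then partition.
mi = np.array([[mutual_info(emg[i], emg[j]) for j in range(4)] for i in range(4)])
within = (mi[0, 1] + mi[2, 3]) / 2    # pairs sharing a drive
across = (mi[0, 2] + mi[1, 3]) / 2    # independent pairs
```

Coupled muscle pairs show markedly higher MI than independent ones, which is the structure the network-modularity step then exploits.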
Affiliation(s)
- David O'Reilly
- University of Leeds, Faculty of Biological Sciences, Leeds, LS2 9JT, United Kingdom
- Ioannis Delis
- University of Leeds, Faculty of Biological Sciences, Leeds, LS2 9JT, United Kingdom

27
Wang C, Pesaran B, Shanechi MM. Modeling multiscale causal interactions between spiking and field potential signals during behavior. J Neural Eng 2022; 19. [DOI: 10.1088/1741-2552/ac4e1c] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 01/24/2022] [Indexed: 11/12/2022]
Abstract
Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHP) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. 
Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
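The spike-prediction component can be sketched as a Bernoulli (point-process) GLM with one lag of LFP and one lag of spiking history. This is a hedged toy version: the paper uses richer history terms and real recordings, whereas the weights and signals below are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 20000
lfp = np.sin(0.05 * np.arange(T)) + 0.2 * rng.normal(size=T)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Simulate spikes: lagged LFP excites, own recent spike suppresses
# (refractoriness). w_true = [bias, LFP weight, history weight].
w_true = np.array([-3.0, 1.5, -2.0])
spikes = np.zeros(T)
for t in range(1, T):
    p = sigmoid(w_true @ np.array([1.0, lfp[t - 1], spikes[t - 1]]))
    spikes[t] = rng.random() < p

# Design matrix: intercept, lagged LFP, lagged spike
X = np.column_stack([np.ones(T - 1), lfp[:-1], spikes[:-1]])
y = spikes[1:]

# Fit by Newton's method (iteratively reweighted least squares)
w = np.zeros(3)
for _ in range(25):
    p = sigmoid(X @ w)
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(3)
    w += np.linalg.solve(hess, grad)
```

The fitted weights recover the excitatory LFP coupling and the inhibitory self-history, the kind of cross-scale dependence the causality analysis quantifies.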
28
Abstract
Investigating how an artificial network of neurons controls a simulated arm suggests that rotational patterns of activity in the motor cortex may rely on sensory feedback from the moving limb.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, United States

29
Guo X, Wang J. Low-Dimensional Dynamics of Brain Activity Associated with Manual Acupuncture in Healthy Subjects. SENSORS 2021; 21:s21227432. [PMID: 34833508 PMCID: PMC8619579 DOI: 10.3390/s21227432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 11/03/2021] [Accepted: 11/06/2021] [Indexed: 11/24/2022]
Abstract
Acupuncture is one of the oldest traditional medical treatments in Asian countries. However, the scientific explanation for the therapeutic effect of acupuncture is still unknown. A much-discussed hypothesis is that acupuncture’s effects are mediated via autonomic neural networks; nevertheless, the dynamic brain activity involved in the acupuncture response has still not been characterized. In this work, we hypothesized that there exists a lower-dimensional subspace of dynamic brain activity across subjects, underpinning the brain’s response to manual acupuncture stimulation. To this end, we employed a variational auto-encoder to probe the latent variables from multichannel EEG signals associated with acupuncture stimulation at the ST36 acupoint. The experimental results demonstrate that manual acupuncture stimuli can reduce the dimensionality of brain activity, which results from the enhancement of oscillatory activity in the delta and alpha frequency bands induced by acupuncture. Moreover, it was found that large-scale brain activity could be constrained within a low-dimensional neural subspace, which is spanned by the “acupuncture mode”. In each neural subspace, the steady dynamics of the brain in response to acupuncture stimuli converge to topologically similar elliptic-shaped attractors across different subjects. The attractor morphology is closely related to the frequency of the acupuncture stimulation. These results shed light on the large-scale brain response to manual acupuncture stimuli.
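The paper probes latent dimensionality with a variational auto-encoder; as a much simpler linear stand-in, the "stimulation reduces dimensionality" effect can be illustrated with the PCA participation ratio. All signals below are synthetic and the effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_ch = 2000, 16

def participation_ratio(X):
    """Effective number of principal components: (sum lam)^2 / sum lam^2."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    lam = np.clip(lam, 0, None)           # guard against tiny negative eigenvalues
    return float(lam.sum() ** 2 / np.sum(lam ** 2))

# "Baseline": 8 independent latent sources mixed into 16 channels.
baseline = rng.normal(size=(T, 8)) @ rng.normal(size=(8, n_ch))

# "Stimulation": activity dominated by 2 shared oscillatory sources,
# mimicking enhanced band-limited oscillations across channels.
tt = np.arange(T)
sources = np.stack([np.sin(0.1 * tt), np.sin(0.05 * tt + 1.0)], axis=1)
stim = sources @ rng.normal(size=(2, n_ch))

pr_base = participation_ratio(baseline)
pr_stim = participation_ratio(stim)
```

Activity dominated by a few shared oscillations yields a lower participation ratio, a linear analogue of the low-dimensional "acupuncture mode" subspace the VAE identifies.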
Affiliation(s)
- Xinmeng Guo
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China

30
Kalidindi HT, Cross KP, Lillicrap TP, Omrani M, Falotico E, Sabes PN, Scott SH. Rotational dynamics in motor cortex are consistent with a feedback controller. eLife 2021; 10:e67256. [PMID: 34730516 PMCID: PMC8691841 DOI: 10.7554/elife.67256] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 10/28/2021] [Indexed: 11/13/2022] Open
Abstract
Recent studies have identified rotational dynamics in motor cortex (MC), which many assume arise from intrinsic connections in MC. However, behavioral and neurophysiological studies suggest that MC behaves like a feedback controller where continuous sensory feedback and interactions with other brain areas contribute substantially to MC processing. We investigated these apparently conflicting theories by building recurrent neural networks that controlled a model arm and received sensory feedback from the limb. Networks were trained to counteract perturbations to the limb and to reach toward spatial targets. Network activities and sensory feedback signals to the network exhibited rotational structure even when the recurrent connections were removed. Furthermore, neural recordings in monkeys performing similar tasks also exhibited rotational structure not only in MC but also in somatosensory cortex. Our results argue that rotational structure may also reflect dynamics throughout the voluntary motor system involved in online control of motor actions.
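A hedged illustration of the kind of analysis behind "rotational dynamics": fit a linear dynamics matrix to a trajectory and measure how close it is to a pure rotation (skew-symmetric generator), loosely in the spirit of jPCA. The 2-D oscillation below is synthetic, not neural data.

```python
import numpy as np

dt, w = 0.01, 2 * np.pi * 0.5                     # time step, 0.5 Hz rotation
t = np.arange(0, 20, dt)
X = np.stack([np.cos(w * t), np.sin(w * t)])      # latent state, shape (2, T)
dX = (X[:, 1:] - X[:, :-1]) / dt                  # finite-difference derivative
Xc = X[:, :-1]

# Unconstrained least-squares fit of dX ~ M @ X, then its rotational part
M = dX @ Xc.T @ np.linalg.inv(Xc @ Xc.T)
M_skew = (M - M.T) / 2                            # skew-symmetric (rotational) part

def r2(M_hat):
    resid = dX - M_hat @ Xc
    return float(1 - np.sum(resid**2) / np.sum(dX**2))

r2_full, r2_skew = r2(M), r2(M_skew)
skew_fraction = np.linalg.norm(M_skew) / np.linalg.norm(M)
```

For rotational data the skew part alone explains nearly all the dynamics; the paper's point is that such structure also arises in feedback signals, so high rotational fit does not by itself imply intrinsic recurrent dynamics.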
Affiliation(s)
- Kevin P Cross
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
- Timothy P Lillicrap
- Centre for Computation, Mathematics and Physics, University College London, London, United Kingdom
- Mohsen Omrani
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Philip N Sabes
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada

31
Bilodeau G, Gagnon-Turcotte G, Gagnon LL, Keramidis I, Timofeev I, De Koninck Y, Ethier C, Gosselin B. A Wireless Electro-Optic Platform for Multimodal Electrophysiology and Optogenetics in Freely Moving Rodents. Front Neurosci 2021; 15:718478. [PMID: 34504415 PMCID: PMC8422428 DOI: 10.3389/fnins.2021.718478] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 07/19/2021] [Indexed: 11/25/2022] Open
Abstract
This paper presents the design and the utilization of a wireless electro-optic platform to perform simultaneous multimodal electrophysiological recordings and optogenetic stimulation in freely moving rodents. The developed system can capture neural action potentials (AP), local field potentials (LFP) and electromyography (EMG) signals with up to 32 channels in parallel while providing four optical stimulation channels. The platform uses commercial off-the-shelf (COTS) components and a low-power digital field-programmable gate array (FPGA) to separate the AP, LFP and EMG signals digitally in real time, while performing signal detection and compression to mitigate wireless bandwidth and power consumption limitations. The different signal modalities collected on the 32 channels are time-multiplexed into a single data stream to decrease power consumption and optimize resource utilization. The data reduction strategy is based on signal processing and real-time data compression. Digital filtering, signal detection, and wavelet data compression are used inside the platform to separate the different electrophysiological signal modalities, namely the local field potentials (1–500 Hz), EMG (30–500 Hz), and the action potentials (300–5,000 Hz), and to perform data reduction before transmitting the data. The platform achieves a measured data reduction ratio of 7.77 (for a firing rate of 50 AP/second) and weighs 4.7 g with a 100-mAh battery, an on/off switch and a protective plastic enclosure. To validate the performance of the platform, we measured distinct electrophysiology signals and performed optogenetic stimulation in vivo in freely moving rodents. We recorded AP and LFP signals with the platform using a 16-microelectrode array implanted in the primary motor cortex of a Long Evans rat, both in anesthetized and freely moving conditions.
EMG responses to optogenetic Channelrhodopsin-2 induced activation of motor cortex via optical fiber were also recorded in freely moving rodents.
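The band-separation step can be sketched offline with a windowed-sinc FIR low-pass and its complement. This is a hedged illustration of the idea only: the sampling rate, cutoff, and test signals below are invented, and the actual device separates three bands on an FPGA.

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
lfp_true = np.sin(2 * np.pi * 10 * t)                       # slow field potential
burst = (np.sin(2 * np.pi * 3 * t) > 0.7).astype(float)     # spiking epochs
ap_true = 0.5 * np.sin(2 * np.pi * 2000 * t) * burst        # fast "spike band"
x = lfp_true + ap_true                                      # composite recording

# 401-tap windowed-sinc low-pass with 300 Hz cutoff isolates the LFP band
n_taps, fc = 401, 300 / fs
n = np.arange(n_taps)
h = np.sinc(2 * fc * (n - (n_taps - 1) / 2)) * np.hamming(n_taps)
h /= h.sum()                                                # unity DC gain

lfp_est = np.convolve(x, h, mode="same")
ap_est = x - lfp_est                                        # complementary high-pass

core = slice(500, -500)                                     # ignore filter edge effects
corr_lfp = np.corrcoef(lfp_est[core], lfp_true[core])[0, 1]
corr_ap = np.corrcoef(ap_est[core], ap_true[core])[0, 1]
```

Each separated stream can then be detected and compressed independently, which is what makes the time-multiplexed wireless data budget tractable.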
Affiliation(s)
- Guillaume Bilodeau
- Smart Biomedical Microsystems Laboratory, Department of Electrical Engineering, Université Laval, Québec, QC, Canada
- Gabriel Gagnon-Turcotte
- Smart Biomedical Microsystems Laboratory, Department of Electrical Engineering, Université Laval, Québec, QC, Canada
- Léonard L Gagnon
- Smart Biomedical Microsystems Laboratory, Department of Electrical Engineering, Université Laval, Québec, QC, Canada
- Iason Keramidis
- Department of Psychiatry and Neuroscience, CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Igor Timofeev
- Department of Psychiatry and Neuroscience, CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Yves De Koninck
- Department of Psychiatry and Neuroscience, CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Christian Ethier
- Department of Psychiatry and Neuroscience, CERVO Brain Research Centre, Université Laval, Québec, QC, Canada
- Benoit Gosselin
- Smart Biomedical Microsystems Laboratory, Department of Electrical Engineering, Université Laval, Québec, QC, Canada; Department of Psychiatry and Neuroscience, CERVO Brain Research Centre, Université Laval, Québec, QC, Canada

32
Lu HY, Lorenc ES, Zhu H, Kilmarx J, Sulzer J, Xie C, Tobler PN, Watrous AJ, Orsborn AL, Lewis-Peacock J, Santacruz SR. Multi-scale neural decoding and analysis. J Neural Eng 2021; 18. [PMID: 34284369 PMCID: PMC8840800 DOI: 10.1088/1741-2552/ac160f] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
Objective. Complex spatiotemporal neural activity encodes rich information related to behavior and cognition. Conventional research has focused on neural activity acquired using one of many different measurement modalities, each of which provides useful but incomplete assessment of the neural code. Multi-modal techniques can overcome tradeoffs in the spatial and temporal resolution of a single modality to reveal a deeper and more comprehensive understanding of system-level neural mechanisms. Uncovering multi-scale dynamics is essential for a mechanistic understanding of brain function and for harnessing neuroscientific insights to develop more effective clinical treatment. Approach. We discuss conventional methodologies used for characterizing neural activity at different scales and review contemporary examples of how these approaches have been combined. Then we present our case for integrating activity across multiple scales to benefit from the combined strengths of each approach and elucidate a more holistic understanding of neural processes. Main results. We examine various combinations of neural activity at different scales and analytical techniques that can be used to integrate or illuminate information across scales, as well as the technologies that enable such exciting studies. We conclude with challenges facing future multi-scale studies, and a discussion of the power and potential of these approaches. Significance. This roadmap will lead the readers toward a broad range of multi-scale neural decoding techniques and their benefits over single-modality analyses. This Review article highlights the importance of multi-scale analyses for systematically interrogating complex spatiotemporal mechanisms underlying cognition and behavior.
Affiliation(s)
- Hung-Yun Lu
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America
- Elizabeth S Lorenc
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Hanlin Zhu
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Justin Kilmarx
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America
- James Sulzer
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Chong Xie
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Philippe N Tobler
- University of Zurich, Neuroeconomics and Social Neuroscience, Zurich, Switzerland
- Andrew J Watrous
- The University of Texas at Austin, Neurology, Austin, TX, United States of America
- Amy L Orsborn
- University of Washington, Electrical and Computer Engineering, Seattle, WA, United States of America; University of Washington, Bioengineering, Seattle, WA, United States of America; Washington National Primate Research Center, Seattle, WA, United States of America
- Jarrod Lewis-Peacock
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Samantha R Santacruz
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America

33
Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021; 18. [PMID: 33254159 DOI: 10.1088/1741-2552/abcefd] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 11/30/2020] [Indexed: 12/29/2022]
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary, for example due to learning, plasticity or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance.
These algorithms can be used to study time-varying neural dynamics underlying various brain functions and enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
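The learning-rate trade-off at the heart of this abstract can be sketched in toy form. This is a hedged illustration, not the RO-adaptive LSSM: a drifting observation gain is tracked with an LMS-style recursive update, the state is taken as observed for simplicity (the paper's contribution is handling a latent state and optimizing the rate in real time), and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
x = rng.normal(size=T)                                # "state" (observed here)
c_true = 1.0 + 0.5 * np.sin(0.005 * np.arange(T))     # slow non-stationarity
y = c_true * x + 0.1 * rng.normal(size=T)             # noisy observations

def track(lr):
    """Recursive gradient (LMS) update of the gain estimate at rate lr."""
    c_hat, err = 0.0, np.zeros(T)
    for tt in range(T):
        pred = c_hat * x[tt]
        c_hat += lr * (y[tt] - pred) * x[tt]          # adaptive update
        err[tt] = (c_hat - c_true[tt]) ** 2
    return float(err[1000:].mean())                   # skip initial transient

mse_fast = track(0.05)     # learning rate matched to the drift speed
mse_slow = track(0.001)    # too small: the estimator lags the drift
```

A learning rate that is too small cannot follow the drift (large tracking error), while a well-chosen rate tracks it closely; too large a rate would instead amplify observation noise, which is exactly the trade-off the real-time learning-rate optimization addresses.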
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; these authors contributed equally to this work
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; these authors contributed equally to this work
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America