1
Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. PMID: 38513289. DOI: 10.1088/1741-2552/ad3678.
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
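To make the estimator concrete, here is a minimal sketch of this kind of multimodal matched filter: it scans candidate onsets, sums a Gaussian log-likelihood for the continuous (LFP-like) channel and a binned-Poisson log-likelihood for the point-process channel scaled by a cross-modal weight, and takes the maximum-likelihood (time, class) pair. This is an illustrative reconstruction from the abstract, not the authors' MED implementation; the function names, the binned-Poisson approximation of the point process, and the `alpha` weighting knob are all assumptions.

```python
import numpy as np

def multimodal_event_loglik(lfp, spikes, lfp_templates, rate_templates,
                            noise_var=1.0, alpha=1.0):
    """Joint log-likelihood of an event of each class at each candidate onset.

    lfp: (T,) continuous signal; spikes: (T,) spike counts per time bin.
    lfp_templates[c]: (L,) Gaussian-mean template for class c.
    rate_templates[c]: (L,) positive Poisson rates for class c.
    alpha: cross-modal scaling of the point-process term (illustrative knob).
    """
    T = len(lfp)
    classes = list(lfp_templates)
    L = len(lfp_templates[classes[0]])
    scores = {}
    for c in classes:
        mu, lam = lfp_templates[c], rate_templates[c]
        s = np.empty(T - L + 1)
        for t in range(T - L + 1):
            g = -0.5 * np.sum((lfp[t:t + L] - mu) ** 2) / noise_var  # Gaussian term
            p = np.sum(spikes[t:t + L] * np.log(lam) - lam)          # Poisson term
            s[t] = g + alpha * p
        scores[c] = s
    return scores

def detect(scores):
    """Maximum-likelihood event onset and class over all (time, class) pairs."""
    best = {c: (int(s.argmax()), s.max()) for c, s in scores.items()}
    c_hat = max(best, key=lambda c: best[c][1])
    return best[c_hat][0], c_hat
```

For a signal containing one template-shaped LFP bump and a coincident burst of spikes, `detect` returns the onset bin and class whose combined Gaussian-plus-Poisson score is highest, which is how the two modalities reinforce each other.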
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
2
Chang YJ, Chen YI, Yeh HC, Santacruz SR. Neurobiologically realistic neural network enables cross-scale modeling of neural dynamics. Sci Rep 2024; 14:5145. PMID: 38429297. PMCID: PMC10907713. DOI: 10.1038/s41598-024-54593-w.
Abstract
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Although brain activity has been studied at the micro- to meso-scale to reveal connections between dynamical patterns and behavior, investigations of neural population dynamics have mainly been limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics that can infer and map the neural data from multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (2 weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behaviour agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to a comprehensive understanding of brain computation, where network mechanisms of multi-scale activity are critical.
Affiliation(s)
- Yin-Jui Chang
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Yuan-I Chen
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Hsin-Chih Yeh
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
- Samantha R Santacruz
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
3
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450. PMCID: PMC10913727. DOI: 10.1088/1741-2552/ad1053.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better in identifying the dynamical modes and having a better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
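The observation model at the heart of this setup — one latent linear state read out through both Gaussian field channels and log-linear Poisson spike counts — can be sketched as a simulator. The paper's contribution, an analytical subspace identification method for this model, is not reproduced here; this toy only illustrates the discrete-continuous data that such a learner would consume, and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spike_field(A, C_field, C_spike, b, T, q=0.01, r=0.05):
    """Simulate a latent linear state observed through Gaussian fields and Poisson spikes.

    All parameter names are illustrative; a multiscale identification method
    would estimate A, C_field, C_spike, b from the observations (Y, N) alone.
    """
    n = A.shape[0]
    x = np.zeros(n)
    X, Y, N = [], [], []
    for _ in range(T):
        x = A @ x + rng.normal(0.0, np.sqrt(q), n)                       # latent dynamics
        y = C_field @ x + rng.normal(0.0, np.sqrt(r), C_field.shape[0])  # Gaussian field channel
        lam = np.exp(C_spike @ x + b)                                    # log-linear spike rate
        N.append(rng.poisson(lam))                                       # Poisson spike counts
        X.append(x.copy())
        Y.append(y)
    return np.array(X), np.array(Y), np.array(N)
```

The heterogeneity the abstract refers to is visible directly in the outputs: `Y` is real-valued while `N` is non-negative integer counts, so a single Gaussian likelihood cannot describe both.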
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
4
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258. PMCID: PMC10873612. DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
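The failure mode motivating this work — fitting intrinsic dynamics while ignoring a temporally structured measured input — can be demonstrated in one dimension with ordinary least squares. This is a toy omitted-variable-bias demonstration with arbitrary numbers, not the paper's learning method:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1-D latent state driven by a measured, temporally structured input u_t.
a_true, b_true, T = 0.5, 1.0, 5000
u = np.sin(0.05 * np.arange(T))          # slow sinusoidal input
x = np.zeros(T)
for t in range(T - 1):
    x[t + 1] = a_true * x[t] + b_true * u[t] + 0.1 * rng.standard_normal()

# Ignoring the input: regression of x_{t+1} on x_t alone absorbs the input's
# temporal structure into the dynamics estimate (omitted-variable bias).
a_no_input = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

# Accounting for the input: joint regression on [x_t, u_t] recovers the
# intrinsic dynamics separately from the input drive.
Phi = np.column_stack([x[:-1], u[:-1]])
a_with_input, b_hat = np.linalg.lstsq(Phi, x[1:], rcond=None)[0]
```

Here `a_no_input` lands far above the true 0.5 because the slow input makes the state look more persistent than its intrinsic dynamics are, while `a_with_input` stays near 0.5 — a one-dimensional caricature of the misinterpretation the paper addresses.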
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
5
Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. PMID: 38083862. PMCID: PMC10714100. DOI: 10.1088/1741-2552/ad038d.
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that can enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations called the Poisson Cubature Filter (PCF) and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data and unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
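The deterministic-sampling idea can be illustrated with the standard spherical-radial cubature rule applied to a Poisson measurement update. This is a generic cubature-Kalman-style sketch under a minimum-mean-squared-error (moment-matching) update, not the paper's exact PCF derivation; the log-linear rate model and all names are assumptions.

```python
import numpy as np

def cubature_points(mean, cov):
    """Spherical-radial cubature: 2n equally weighted points matching mean and cov."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    pts = [mean + np.sqrt(n) * S[:, i] for i in range(n)]
    pts += [mean - np.sqrt(n) * S[:, i] for i in range(n)]
    return np.array(pts)  # implicit weights: 1 / (2n) each

def poisson_cubature_update(x_pred, P_pred, count, c, b):
    """One MMSE-style measurement update for a Poisson count with log-rate c @ x + b.

    Expectations that have no closed form under the Gaussian prior are replaced
    by averages over the deterministic cubature points.
    """
    pts = cubature_points(x_pred, P_pred)
    lam = np.exp(pts @ c + b)                              # rate at each cubature point
    lam_bar = lam.mean()                                   # predicted mean count
    Pxy = (pts - x_pred).T @ (lam - lam_bar) / len(pts)    # state-observation cross-cov
    Pyy = np.mean((lam - lam_bar) ** 2) + lam_bar          # rate variance + Poisson noise
    K = Pxy / Pyy                                          # MMSE gain
    x_post = x_pred + K * (count - lam_bar)
    P_post = P_pred - np.outer(K, K) * Pyy
    return x_post, P_post
```

Observing a count above the predicted mean rate pulls the state estimate up and shrinks its variance, exactly the qualitative behavior a filter-based inference step needs inside an unsupervised learning loop.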
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
6
Akella S, Bastos AM, Miller EK, Principe JC. Measurable fields-to-spike causality and its dependence on cortical layer and area. bioRxiv [Preprint] 2023:2023.01.17.524451. PMID: 37577637. PMCID: PMC10418085. DOI: 10.1101/2023.01.17.524451.
Abstract
Distinct dynamics in different cortical layers are apparent in neuronal and local field potential (LFP) patterns, yet their associations in the context of laminar processing have been sparingly analyzed. Here, we study the laminar organization of spike-field causal flow within and across visual (V4) and frontal areas (PFC) of monkeys performing a visual task. Using an event-based quantification of LFPs and a directed information estimator, we found area and frequency specificity in the laminar organization of spike-field causal connectivity. Gamma bursts (40-80 Hz) in the superficial layers of V4 largely drove intralaminar spiking. These gamma influences also fed forward up the cortical hierarchy to modulate laminar spiking in PFC. In PFC, the direction of intralaminar information flow was from spikes → fields where these influences dually controlled top-down and bottom-up processing. Our results, enabled by innovative methodologies, emphasize the complexities of spike-field causal interactions amongst multiple brain areas and behavior.
Affiliation(s)
- Shailaja Akella
- Allen Institute, Seattle, WA, United States
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- André M. Bastos
- Department of Psychology and Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
- Earl K. Miller
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, United States
- Jose C. Principe
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
7
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. bioRxiv [Preprint] 2023:2023.03.14.532554. PMID: 36993213. PMCID: PMC10055042. DOI: 10.1101/2023.03.14.532554.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
8
Song CY, Hsieh HL, Pesaran B, Shanechi MM. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. J Neural Eng 2022; 19. PMID: 36261030. DOI: 10.1088/1741-2552/ac9b94.
Abstract
Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime detection methods are developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching and are for stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared to prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime-detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
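The generative structure described here — a Markov regime state dictating the dynamical parameters of a latent brain state that is observed through both a Gaussian field channel and a Poisson spike channel — can be sketched as a toy simulator. This illustrates the model class only; the paper's switching filtering and smoothing methods are not reproduced, and all parameter values and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_switching_multiscale(T=200):
    """Toy switching spike-field model: a Markov regime state picks the latent
    dynamics, which drive a Gaussian field channel and a Poisson spike channel."""
    P = np.array([[0.95, 0.05], [0.05, 0.95]])     # regime transition probabilities
    A = [0.9, -0.9]                                 # regime-dependent latent dynamics
    s, x = 0, 0.0
    regimes, fields, spikes = [], [], []
    for _ in range(T):
        s = rng.choice(2, p=P[s])                   # regime switch (unobserved in practice)
        x = A[s] * x + rng.normal(0.0, 0.3)         # latent brain state
        fields.append(x + rng.normal(0.0, 0.1))     # continuous field observation
        spikes.append(rng.poisson(np.exp(0.5 * x))) # point-process observation
        regimes.append(s)
    return np.array(regimes), np.array(fields), np.array(spikes)
```

A switching multiscale filter would take only `fields` and `spikes` as input and jointly estimate the hidden `regimes` and latent state, which is the inference problem the paper solves.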
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Han-Lin Hsieh
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America