1. Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. PMID: 38513289. DOI: 10.1088/1741-2552/ad3678. Received 11/15/2023; accepted 03/21/2024.
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data; our objective in this work is to develop one. Here we address this challenge by developing the multimodal event detector (MED) algorithm, which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
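The multimodal likelihood idea can be illustrated with a toy single-class sketch (not the authors' implementation; the MED also estimates event classes and includes a cross-modal scaling parameter, both omitted here): score every candidate onset time with the sum of a Gaussian log-likelihood for the field segment and a discrete-time point-process log-likelihood ratio for the spikes, then take the maximum. All signal parameters below are illustrative.

```python
import numpy as np

def multimodal_event_scores(y, spikes, template, event_rate, base_rate, noise_var, dt=0.001):
    """Score each candidate event onset with a combined Gaussian + point-process
    log-likelihood (hypothetical single-class, MED-style sketch)."""
    L, T = len(template), len(y)
    lam_e, lam_b = event_rate * dt, base_rate * dt
    w1 = np.log(lam_e / lam_b)               # log-ratio contributed by a spike
    w0 = np.log((1 - lam_e) / (1 - lam_b))   # log-ratio contributed by a silence
    scores = np.empty(T - L + 1)
    for tau in range(T - L + 1):
        seg_y, seg_n = y[tau:tau + L], spikes[tau:tau + L]
        ll_field = -0.5 * np.sum((seg_y - template) ** 2) / noise_var
        ll_spike = np.sum(seg_n * w1 + (1 - seg_n) * w0)
        scores[tau] = ll_field + ll_spike
    return scores

# Toy data: a field template plus elevated spiking embedded at onset 120 (1 ms bins)
rng = np.random.default_rng(0)
template = 2.0 * np.hanning(50)
y = rng.normal(0.0, 0.5, 500)
y[120:170] += template
spikes = (rng.random(500) < 0.005).astype(int)         # 5 Hz baseline
spikes[120:170] = (rng.random(50) < 0.08).astype(int)  # 80 Hz during the event
scores = multimodal_event_scores(y, spikes, template, 80.0, 5.0, 0.25)
tau_hat = int(np.argmax(scores))
```

Because both modalities enter the score additively in the log domain, either one alone can drive detection, and fusing them simply sums their evidence.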
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
2. Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450. PMCID: PMC10913727. DOI: 10.1088/1741-2552/ad1053. Received 06/02/2023; revised 10/23/2023; accepted 11/28/2023.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
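For intuition on why subspace identification is computationally cheap, consider the continuous-only special case (the paper's contribution is extending SID analytically to combined Poisson-Gaussian observations, which this sketch does not attempt): for a linear-Gaussian system, the latent dimension can be read off the singular values of a Hankel matrix of output autocovariances, with no iterative EM passes. All system matrices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [-0.2, 0.9]])   # stable 2-state latent dynamics
C = rng.normal(size=(5, 2))               # 5 observed channels
T = 20000
x = np.zeros(2)
Y = np.empty((T, 5))
for t in range(T):
    Y[t] = C @ x + 0.1 * rng.normal(size=5)   # Gaussian observations
    x = A @ x + 0.5 * rng.normal(size=2)      # latent state transition

def hankel_of_covs(Y, i=5):
    """Hankel matrix of output autocovariances at lags 1..2i-1; its rank
    (in the noiseless limit) equals the latent state dimension."""
    Yc = Y - Y.mean(0)
    T = len(Yc)
    covs = [Yc[k:].T @ Yc[:T - k] / (T - k) for k in range(1, 2 * i)]
    return np.block([[covs[a + b] for b in range(i)] for a in range(i)])

s = np.linalg.svd(hankel_of_covs(Y), compute_uv=False)
# a large gap after the second singular value reveals the true latent dimension (2)
```

The full multiscale SID additionally transforms Poisson spike-count moments so that spikes and fields can enter a shared analytical identification step; that moment transformation is beyond this sketch.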
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
3. Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. PMID: 38083862. PMCID: PMC10714100. DOI: 10.1088/1741-2552/ad038d. Received 04/24/2023; revised 09/15/2023; accepted 10/16/2023.
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that can enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations, called the Poisson Cubature Filter (PCF), and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. The PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data, and while unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes, unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
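The deterministic-sampling idea behind the PCF can be sketched for a single measurement update (a simplification; the paper embeds such updates in full filtering, smoothing, and unsupervised learning): approximate the minimum-MSE posterior mean of a Gaussian-prior state under a Poisson observation by weighting third-degree cubature points with the observation likelihood. The log-linear rate model below is an illustrative choice.

```python
import numpy as np
from math import lgamma

def cubature_points(m, P):
    """Third-degree spherical-radial cubature: 2n equally weighted sigma points."""
    n = len(m)
    S = np.linalg.cholesky(P)
    pts = [m + np.sqrt(n) * S[:, i] for i in range(n)]
    pts += [m - np.sqrt(n) * S[:, i] for i in range(n)]
    return np.array(pts)

def poisson_loglik(y, x, c, d):
    """Log-likelihood of count y under rate exp(c.x + d)."""
    rate = np.exp(c @ x + d)
    return y * np.log(rate) - rate - lgamma(y + 1)

def pcf_update(m, P, y, c, d):
    """Approximate MMSE posterior mean for y ~ Poisson(exp(c.x + d)) with
    prior x ~ N(m, P): cubature points weighted by the Poisson likelihood
    (simplified PCF-style sketch, posterior mean only)."""
    pts = cubature_points(m, P)
    w = np.exp(np.array([poisson_loglik(y, x, c, d) for x in pts]))
    w /= w.sum()
    return (w[:, None] * pts).sum(0)
```

For example, with prior N(0, 1), rate exp(x), and an observed count of 5, the update pulls the state estimate strongly positive, as the likelihood weighting concentrates on the high-rate cubature point.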
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
4. Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. bioRxiv [preprint] 2023:2023.05.26.542509. PMID: 37398400. PMCID: PMC10312539. DOI: 10.1101/2023.05.26.542509.
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach and grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while being better at identifying the dynamical modes and achieving better or similar accuracy in predicting neural activity. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
5. Fang H, Yang Y. Predictive neuromodulation of cingulo-frontal neural dynamics in major depressive disorder using a brain-computer interface system: A simulation study. Front Comput Neurosci 2023; 17:1119685. PMID: 36950505. PMCID: PMC10025398. DOI: 10.3389/fncom.2023.1119685. Received 12/13/2022; accepted 02/15/2023. Open access.
Abstract
Introduction. Deep brain stimulation (DBS) is a promising therapy for treatment-resistant major depressive disorder (MDD). MDD involves the dysfunction of a brain network that can exhibit complex nonlinear neural dynamics in multiple frequency bands. However, current open-loop and responsive DBS methods cannot track the complex multiband neural dynamics in MDD, leading to imprecise regulation of symptoms, variable treatment effects among patients, and high battery power consumption. Methods. Here, we develop a closed-loop brain-computer interface (BCI) system of predictive neuromodulation for treating MDD. We first use a biophysically plausible ventral anterior cingulate cortex (vACC)-dorsolateral prefrontal cortex (dlPFC) neural mass model of MDD to simulate nonlinear and multiband neural dynamics in response to DBS. We then use offline system identification to build a dynamic model that predicts the DBS effect on neural activity. We next use the offline identified model to design an online BCI system of predictive neuromodulation. The online BCI system consists of a dynamic brain state estimator and a model predictive controller. The brain state estimator estimates the MDD brain state from the history of neural activity and previously delivered DBS patterns. The predictive controller takes the estimated MDD brain state as the feedback signal and optimally adjusts DBS to regulate the MDD neural dynamics to therapeutic targets. We use the vACC-dlPFC neural mass model as a simulation testbed to test the BCI system and compare it with state-of-the-art open-loop and responsive DBS treatments of MDD. Results. We demonstrate that our dynamic model accurately predicts nonlinear and multiband neural activity. Consequently, the predictive neuromodulation system accurately regulates the neural dynamics in MDD, resulting in significantly smaller control errors and lower DBS battery power consumption than open-loop and responsive DBS. Discussion. Our results have implications for developing future precisely tailored clinical closed-loop DBS treatments for MDD.
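The model-predictive-control step can be sketched in its simplest form (a scalar linear surrogate with illustrative parameters, not the paper's identified multiband model): at each time step, solve a finite-horizon quadratic tracking problem for the input sequence and apply only the first input, then repeat from the new state.

```python
import numpy as np

def mpc_step(x0, a, b, target, horizon=10, r=0.01):
    """One receding-horizon step for the scalar linear model x[t+1] = a*x[t] + b*u[t].
    Minimizes sum_k (x[k]-target)^2 + r*sum_k u[k]^2 via regularized least squares."""
    # Free response: x[k] = a^k * x0 (before any input is applied)
    free = np.array([a ** k * x0 for k in range(1, horizon + 1)])
    # Forced response: x[k] depends linearly on past inputs u[0..k-1]
    A = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            A[k, j] = a ** (k - j) * b
    M = np.vstack([A, np.sqrt(r) * np.eye(horizon)])        # input-effort penalty rows
    rhs = np.concatenate([target - free, np.zeros(horizon)])
    u = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return u[0]   # receding horizon: apply only the first input

# Closed-loop simulation: drive a slow brain-state surrogate to a therapeutic target
a, b, target = 0.95, 0.5, 1.0
x = 0.0
for _ in range(50):
    u = mpc_step(x, a, b, target)
    x = a * x + b * u
```

The effort weight `r` plays the role of the stimulation-power penalty: raising it trades tracking accuracy for lower input energy, which is the mechanism behind the battery-consumption savings reported above.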
Affiliation(s)
- Hao Fang
- Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, United States
- Yuxiao Yang
- Ministry of Education (MOE) Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University, Hangzhou, Zhejiang, China
- State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, Zhejiang, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China
- Department of Neurosurgery, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Correspondence: Yuxiao Yang
6. Song CY, Hsieh HL, Pesaran B, Shanechi MM. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. J Neural Eng 2022; 19. PMID: 36261030. DOI: 10.1088/1741-2552/ac9b94. Received 05/12/2022; accepted 10/19/2022.
Abstract
Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime detection methods are developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching and address only stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared to prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
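The regime-tracking component can be illustrated with a discrete forward filter that fuses a Gaussian field observation and a binary spike observation at every step (a heavy simplification of the paper's switching multiscale model, which also carries a continuous latent brain state; all rates, means, and transition probabilities below are illustrative):

```python
import numpy as np

def regime_filter(field, spikes, trans, field_means, field_var, rates, dt=0.01):
    """Forward (filtering) recursion over a discrete regime state, fusing a
    Gaussian field sample and a Bernoulli spike indicator at each time step."""
    K = len(field_means)
    p = np.full(K, 1.0 / K)
    path = []
    for y, nspk in zip(field, spikes):
        p = trans.T @ p                                           # regime prediction
        lik_field = np.exp(-0.5 * (y - field_means) ** 2 / field_var)
        lam = rates * dt
        lik_spike = lam ** nspk * (1 - lam) ** (1 - nspk)
        p = p * lik_field * lik_spike                             # multiscale fusion
        p = p / p.sum()
        path.append(p.copy())
    return np.array(path)

# Toy demo: regime 0 (quiet field, 5 Hz) for 100 steps, then regime 1 (elevated field, 50 Hz)
rng = np.random.default_rng(2)
field = np.concatenate([rng.normal(0.0, 0.7, 100), rng.normal(2.0, 0.7, 100)])
spikes = np.concatenate([rng.random(100) < 0.05, rng.random(100) < 0.5]).astype(int)
trans = np.array([[0.98, 0.02], [0.02, 0.98]])
path = regime_filter(field, spikes, trans, np.array([0.0, 2.0]), 0.49, np.array([5.0, 50.0]))
```

Because the two likelihood terms multiply, evidence from either modality alone can flip the regime posterior, and together they flip it faster, which mirrors the multiscale gain reported above.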
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Han-Lin Hsieh
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
7. Wang C, Pesaran B, Shanechi MM. Modeling multiscale causal interactions between spiking and field potential signals during behavior. J Neural Eng 2022; 19. DOI: 10.1088/1741-2552/ac4e1c. Received 04/07/2021; accepted 01/24/2022.
Abstract
Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHPs) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
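The Granger-like test in the field-to-spike direction can be sketched with nested point-process GLMs (a minimal single-lag version; the paper uses richer history terms, directed information, and also models the spike-to-field direction): compare the maximized log-likelihood of a spike model with and without field-potential history. Simulation parameters are illustrative.

```python
import numpy as np

def fit_bernoulli_glm(X, y, iters=25):
    """Newton-Raphson fit of a Bernoulli (logistic-link) GLM; returns the
    weights and the maximized log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        H = (X * (p * (1 - p))[:, None]).T @ X + 1e-6 * np.eye(X.shape[1])
        w = w + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return w, np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Simulate a field signal that truly drives spiking with a one-step lag
rng = np.random.default_rng(3)
T = 5000
field = np.zeros(T)
for t in range(1, T):
    field[t] = 0.9 * field[t - 1] + rng.normal(0, 0.5)
p_spk = 1.0 / (1.0 + np.exp(-(-3.0 + 1.5 * np.roll(field, 1))))
spikes = (rng.random(T) < p_spk).astype(float)
spikes[0] = 0.0

# Nested models: spike history only vs. spike history + field history
ones, y = np.ones(T - 1), spikes[1:]
_, ll_self = fit_bernoulli_glm(np.column_stack([ones, spikes[:-1]]), y)
w_full, ll_full = fit_bernoulli_glm(np.column_stack([ones, spikes[:-1], field[:-1]]), y)
granger_score = ll_full - ll_self   # large positive values suggest field -> spike influence
```

In practice the score would be compared against a null distribution (e.g. from shuffled field histories) rather than against zero, since the richer model never fits worse on training data.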
8. Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021; 18. PMID: 33254159. DOI: 10.1088/1741-2552/abcefd. Received 05/07/2020; accepted 11/30/2020.
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary, for example, due to learning, plasticity, or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and to enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
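The role of the adaptation learning rate is visible even in a bare LMS-style tracker of a single drifting coefficient (a drastic simplification; the RO-adaptive LSSM adapts full state-space parameters with the state latent, and optimizes the rate online rather than by the offline grid search shown here):

```python
import numpy as np

def lms_track(y, x, rate):
    """Recursively track a drifting gain w in y[t] ~ w[t]*x[t]; returns the
    estimate trajectory and the summed squared one-step prediction error."""
    w, est, sse = 0.0, [], 0.0
    for yt, xt in zip(y, x):
        err = yt - w * xt
        sse += err ** 2
        w += rate * err * xt   # learning-rate-scaled stochastic gradient step
        est.append(w)
    return np.array(est), sse

# Non-stationarity: the true gain drifts linearly from 0 to 1
rng = np.random.default_rng(4)
T = 2000
w_true = np.linspace(0.0, 1.0, T)
x = rng.normal(0.0, 1.0, T)
y = w_true * x + 0.1 * rng.normal(size=T)

# Offline stand-in for learning-rate optimization: pick the rate with the
# smallest one-step prediction error
rates = [0.001, 0.01, 0.05]
sses = [lms_track(y, x, r)[1] for r in rates]
best_rate = rates[int(np.argmin(sses))]
est, _ = lms_track(y, x, 0.05)
```

Too small a rate lags the drift; too large a rate amplifies observation noise. The prediction-error criterion used here is the same trade-off signal a real-time rate optimizer would monitor.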
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America (these authors contributed equally to this work)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America (these authors contributed equally to this work)
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
9. Nesse WH, Maler L, Longtin A. Enhanced Signal Detection by Adaptive Decorrelation of Interspike Intervals. Neural Comput 2020; 33:341-375. PMID: 33253034. DOI: 10.1162/neco_a_01347.
Abstract
Spike trains with negative interspike interval (ISI) correlations, in which long/short ISIs are more likely followed by short/long ISIs, are common in many neurons. They can be described by stochastic models with a spike-triggered adaptation variable. We analyze a phenomenon in these models where such statistically dependent ISI sequences arise in tandem with quasi-statistically independent and identically distributed (quasi-IID) adaptation variable sequences. The sequences of adaptation states and resulting ISIs are linked by a nonlinear decorrelating transformation. We establish general conditions on a family of stochastic spiking models that guarantee this quasi-IID property and establish bounds on the resulting baseline ISI correlations. Inputs that elicit weak firing rate changes in samples with many spikes are known to be more detectable when negative ISI correlations are present because they reduce spike count variance; this defines a variance-reduced firing rate coding benchmark. We performed a Fisher information analysis on these adapting models exhibiting ISI correlations to show that a spike pattern code based on the quasi-IID property achieves the upper bound of detection performance, surpassing rate codes with the same mean rate (including the variance-reduced rate code benchmark) by 20% to 30%. The information loss in rate codes arises because the benefits of reduced spike count variance cannot compensate for the lower firing rate gain due to adaptation. Since adaptation states have similar dynamics to synaptic responses, the quasi-IID decorrelation transformation of the spike train is plausibly implemented by downstream neurons through matched postsynaptic kinetics. This provides an explanation for observed coding performance in sensory systems that cannot be accounted for by rate coding, for example, at the detection threshold where rate changes can be insignificant.
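A minimal spike-triggered-adaptation simulation (a generic toy, not the authors' specific model family; all constants are illustrative) reproduces the negative serial ISI correlation described here: each spike increments an adaptation variable that lengthens the next interval, and the variable decays more during long intervals.

```python
import numpy as np

rng = np.random.default_rng(1)
n_isi, tau, t0, delta, sigma = 20000, 0.1, 0.02, 0.5, 0.005
a = 1.0                     # adaptation state
isis = []
for _ in range(n_isi):
    # higher adaptation -> longer interval (plus noise, floored at a small positive value)
    isi = max(t0 * (1 + a) + sigma * rng.normal(), 1e-4)
    # spike-triggered increment, then exponential decay over the elapsed interval:
    # a long ISI leaves less adaptation, so the next ISI tends to be short
    a = (a + delta) * np.exp(-isi / tau)
    isis.append(isi)

isis = np.array(isis)
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]   # lag-1 serial ISI correlation
```

The decorrelating transformation discussed in the abstract corresponds to reading out the adaptation sequence `a` instead of the ISIs themselves: the ISIs are serially dependent, while the adaptation states are approximately IID.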
Affiliation(s)
- William H Nesse
- Department of Mathematics, University of Utah, Salt Lake City, UT 84112, U.S.A.
- Leonard Maler
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- André Longtin
- Department of Physics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
10. Latimer KW, Rieke F, Pillow JW. Inferring synaptic inputs from spikes with a conductance-based neural encoding model. eLife 2019; 8:47012. PMID: 31850846. PMCID: PMC6989090. DOI: 10.7554/elife.47012. Received 03/22/2019; accepted 12/17/2019. Open access.
Abstract
Descriptive statistical models of neural responses generally aim to characterize the mapping from stimuli to spike responses while ignoring biophysical details of the encoding process. Here, we introduce an alternative approach, the conductance-based encoding model (CBEM), which describes a mapping from stimuli to excitatory and inhibitory synaptic conductances governing the dynamics of sub-threshold membrane potential. Remarkably, we show that the CBEM can be fit to extracellular spike train data and then used to predict excitatory and inhibitory synaptic currents. We validate these predictions with intracellular recordings from macaque retinal ganglion cells. Moreover, we offer a novel quasi-biophysical interpretation of the Poisson generalized linear model (GLM) as a special case of the CBEM in which excitation and inhibition are perfectly balanced. This work forges a new link between statistical and biophysical models of neural encoding and sheds new light on the biophysical variables that underlie spiking in the early visual pathway.
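The model's core pipeline (stimulus → excitatory/inhibitory conductances → subthreshold voltage → spike rate) can be caricatured in a few lines; the kernel shapes, conductance scales, and soft-threshold nonlinearity below are illustrative choices, not fitted CBEM parameters.

```python
import numpy as np

def cbem_rate(stim, k_e, k_i, dt=0.001, g_l=10.0, E_l=-70.0, E_e=0.0, E_i=-80.0):
    """Conductance-based encoding sketch: rectified linear filters of the
    stimulus set excitatory/inhibitory conductances, which drive a leaky
    membrane voltage; an exponential soft threshold maps voltage to spike rate."""
    g_e = np.maximum(np.convolve(stim, k_e)[:len(stim)], 0.0)
    g_i = np.maximum(np.convolve(stim, k_i)[:len(stim)], 0.0)
    V, rates = E_l, []
    for t in range(len(stim)):
        # each conductance pulls V toward its own reversal potential
        dV = -g_l * (V - E_l) - g_e[t] * (V - E_e) - g_i[t] * (V - E_i)
        V += dt * dV
        rates.append(np.exp((V + 55.0) / 5.0))  # soft threshold near -55 mV
    return np.array(rates)

stim = np.zeros(500)
stim[100:200] = 1.0            # 100 ms stimulus pulse at 1 ms resolution
k_e = np.full(5, 4.0)          # fast, strong excitatory kernel (illustrative)
k_i = np.full(20, 0.25)        # slower, weaker inhibitory kernel (illustrative)
rates = cbem_rate(stim, k_e, k_i)
```

The GLM special case mentioned in the abstract corresponds to excitation and inhibition entering with balanced, opposite-sign effects so that the voltage becomes a single linear filter of the stimulus.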
Affiliation(s)
- Kenneth W Latimer
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Jonathan W Pillow
- Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, United States
11. Sadras N, Pesaran B, Shanechi MM. A point-process matched filter for event detection and decoding from population spike trains. J Neural Eng 2019; 16:066016. PMID: 31437831. DOI: 10.1088/1741-2552/ab3dbc.
Abstract
OBJECTIVE Information encoding in neurons can be described through their response fields. The spatial response field of a neuron is the region of space in which a sensory stimulus or a behavioral event causes that neuron to fire. Neurons can also exhibit temporal response fields (TRFs), which characterize a transient response to stimulus or behavioral event onsets. These neurons can thus be described by a spatio-temporal response field (STRF). The activity of neurons with STRFs can be well-described with point process models that characterize binary spike trains with an instantaneous firing rate that is a function of both time and space. However, developing decoders for point process models of neurons that exhibit TRFs is challenging because it requires prior knowledge of event onset times, which are unknown. Indeed, point process filters (PPF) to date have largely focused on decoding neuronal activity without considering TRFs. Also, neural classifiers have required data to be behavior- or stimulus-aligned, i.e. event times to be known, which is often not possible in real-world applications. Our objective in this work is to develop a viable decoder for neurons with STRFs when event times are unknown. APPROACH To enable decoding of neurons with STRFs, we develop a novel point-process matched filter (PPMF) that can detect events and estimate their onset times from population spike trains. We also devise a PPF for neurons with transient responses as characterized by STRFs. When neurons exhibit STRFs and event times are unknown, the PPMF can be combined with the PPF or with discrete classifiers for continuous and discrete brain state decoding, respectively. MAIN RESULTS We validate our algorithm on two datasets: simulated spikes from neurons that encode visual saliency in response to stimuli, and prefrontal spikes recorded in a monkey performing a delayed-saccade task. We show that the PPMF can estimate the stimulus times and saccade times accurately. 
Further, the PPMF combined with the PPF can decode visual saliency maps without knowing the stimulus times. Similarly, the PPMF combined with a point process classifier can decode the saccade direction without knowing the saccade times. SIGNIFICANCE These event detection and decoding algorithms can help develop neurotechnologies to decode cognitive states from neural responses that exhibit STRFs.
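The core of such a detector can be sketched as a sliding Poisson log-likelihood-ratio scan over candidate onset times. This is only an illustrative analogue of the PPMF idea, not the paper's implementation; the rates, bin width, and function name are assumptions:

```python
import numpy as np

def pp_matched_filter(spike_counts, template_rate, baseline_rate, dt):
    """Sliding log-likelihood-ratio detector for a point-process template.

    spike_counts: binned spike counts of one neuron (length-T array)
    template_rate: hypothesized transient firing rate (Hz) over L bins
    baseline_rate: constant background firing rate (Hz)
    dt: bin width in seconds
    Returns the log-likelihood ratio of "event onset at bin tau" versus
    "baseline only" for every candidate onset tau.
    """
    T, L = len(spike_counts), len(template_rate)
    log_ratio = np.log(template_rate / baseline_rate)
    # The integrated-intensity penalty is identical for every shift
    penalty = np.sum((template_rate - baseline_rate) * dt)
    llr = np.array([
        spike_counts[tau:tau + L] @ log_ratio - penalty
        for tau in range(T - L + 1)
    ])
    return llr
```

Peaks of `llr` above a threshold mark detected events, and the arg-max gives the estimated onset time, mirroring how a classical matched filter output is thresholded.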
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
12
Zhang P, Ma X, Chen L, Zhou J, Wang C, Li W, He J. Decoder calibration with ultra small current sample set for intracortical brain-machine interface. J Neural Eng 2019; 15:026019. [PMID: 29343650 DOI: 10.1088/1741-2552/aaa8a4] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
OBJECTIVE Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that achieves good performance while minimizing recalibration time. APPROACH Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with that of three other calibration methods for evaluation. MAIN RESULTS The PDA method closed the gap between historical and current data effectively, making it possible to exploit large historical data when decoding current data. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated with the PDA method achieved much better and more robust performance in all sessions than the three other calibration methods in both monkeys. SIGNIFICANCE (1) This study brought transfer learning theory into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set and were transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement and sensory paradigms, indicating viable generalization.
By reducing the demand for large current training data, this new method may facilitate the application of intracortical brain-machine interfaces in clinical practice.
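The abstract does not give the PDA equations, but the general idea, projecting both datasets onto the principal subspace of the large historical dataset and aligning the small current sample to it, can be sketched as follows. The function name and the simple mean-shift alignment are illustrative assumptions, not the paper's method:

```python
import numpy as np

def pca_domain_align(historical, current, n_components=3):
    """Project trials-by-features data onto the historical principal
    subspace, then mean-shift the historical scores toward the
    (ultra-small) current sample so a decoder trained on historical
    data can transfer to current data.
    """
    mu = historical.mean(axis=0)
    # Principal axes of the large historical dataset via SVD
    _, _, Vt = np.linalg.svd(historical - mu, full_matrices=False)
    W = Vt[:n_components].T                  # features x components
    hist_scores = (historical - mu) @ W
    curr_scores = (current - mu) @ W
    # Crude domain adaptation: match first moments in the subspace
    shift = curr_scores.mean(axis=0) - hist_scores.mean(axis=0)
    return hist_scores + shift, curr_scores
```

A decoder (e.g. a linear classifier) would then be trained on the aligned historical scores and applied to the current scores, so only the few current trials are needed to estimate the shift.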
Affiliation(s)
- Peng Zhang
- Neural Interface and Rehabilitation Technology Research Center, School of Automation, Huazhong University of Science and Technology, Wuhan, People's Republic of China
13
Bighamian R, Wong YT, Pesaran B, Shanechi MM. Sparse model-based estimation of functional dependence in high-dimensional field and spike multiscale networks. J Neural Eng 2019; 16:056022. [DOI: 10.1088/1741-2552/ab225b] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
14
Abbaspourazad H, Hsieh HL, Shanechi MM. A Multiscale Dynamical Modeling and Identification Framework for Spike-Field Activity. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1128-1138. [DOI: 10.1109/tnsre.2019.2913218] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
15
Wang C, Shanechi MM. Estimating Multiscale Direct Causality Graphs in Neural Spike-Field Networks. IEEE Trans Neural Syst Rehabil Eng 2019; 27:857-866. [DOI: 10.1109/tnsre.2019.2908156] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
16
Yang Y, Lee JT, Guidera JA, Vlasov KY, Pei J, Brown EN, Solt K, Shanechi MM. Developing a personalized closed-loop controller of medically-induced coma in a rodent model. J Neural Eng 2019; 16:036022. [PMID: 30856619 DOI: 10.1088/1741-2552/ab0ea4] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
OBJECTIVE Personalized automatic control of medically-induced coma, a critical multi-day therapy in the intensive care unit, could greatly benefit clinical care and further provide a novel scientific tool for investigating how the brain response to anesthetic infusion rate changes during therapy. Personalized control would require real-time tracking of inter- and intra-subject variabilities in the brain response to anesthetic infusion rate while simultaneously delivering the therapy, which has not been achieved. Current control systems for medically-induced coma require a separate offline model-fitting experiment to deal with inter-subject variabilities, which would lead to therapy interruption. Removing the need for these offline interruptions could help facilitate clinical feasibility. In addition, current systems do not track intra-subject variabilities. Tracking intra-subject variabilities is essential for studying whether or how the brain response to anesthetic infusion rate changes during therapy. Further, such tracking could enhance control precision and thus help facilitate clinical feasibility. APPROACH Here we develop a personalized closed-loop anesthetic delivery (CLAD) system in a rodent model that tracks both inter- and intra-subject variabilities in real time while simultaneously controlling the anesthetic in closed loop. We tested the CLAD in rats by administering propofol to control electroencephalogram (EEG) burst suppression. We first examined whether the CLAD can remove the need for offline model-fitting interruptions. We then used the CLAD as a tool to study whether and how the brain response to anesthetic infusion rate changes as a function of changes in the depth of medically-induced coma. Finally, we studied whether the CLAD can enhance control compared with prior systems by tracking intra-subject variabilities.
MAIN RESULTS The CLAD precisely controlled the EEG burst suppression in each rat without performing offline model fitting experiments. Further, using the CLAD, we discovered that the brain response to anesthetic infusion rate varied during control, and that these variations correlated with the depth of medically-induced coma in a consistent manner across individual rats. Finally, tracking these variations reduced control bias and error by more than 70% compared with prior systems. SIGNIFICANCE This personalized CLAD provides a new tool to study the dynamics of brain response to anesthetic infusion rate and has significant implications for enabling clinically-feasible automatic control of medically-induced coma.
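As a toy illustration of the closed-loop idea only (not the paper's controller, plant model, or rodent data), a proportional-integral controller can drive a first-order model of burst-suppression probability (BSP) to a target level; every constant below is an arbitrary assumption:

```python
import numpy as np

def simulate_clad(target=0.7, kp=2.0, ki=1.0, tau=2.0, gain=1.0,
                  dt=0.1, n_steps=500):
    """Euler simulation of a PI controller acting on a first-order
    BSP plant: x' = -x/tau + gain*u, with infusion rate u >= 0."""
    x, integral = 0.0, 0.0
    trace = np.empty(n_steps)
    for t in range(n_steps):
        error = target - x
        integral += error * dt
        u = max(kp * error + ki * integral, 0.0)   # infusion rate >= 0
        x += dt * (-x / tau + gain * u)
        trace[t] = x
    return trace
```

In the real system the plant parameters drift across and within subjects; tracking that drift online while the loop runs is precisely what the CLAD adds over a fixed controller like this one.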
Affiliation(s)
- Yuxiao Yang
- Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089, United States of America
17
Hsieh HL, Wong YT, Pesaran B, Shanechi MM. Multiscale modeling and decoding algorithms for spike-field activity. J Neural Eng 2018; 16:016018. [DOI: 10.1088/1741-2552/aaeb1a] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
18
Orcioni S, Paffi A, Camera F, Apollonio F, Liberti M. Automatic decoding of input sinusoidal signal in a neuron model: High pass homomorphic filtering. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.03.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
19
Hsieh HL, Shanechi MM. Optimizing the learning rate for adaptive estimation of neural encoding models. PLoS Comput Biol 2018; 14:e1006168. [PMID: 29813069 PMCID: PMC5993334 DOI: 10.1371/journal.pcbi.1006168] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2017] [Revised: 06/08/2018] [Accepted: 05/02/2018] [Indexed: 01/05/2023] Open
Abstract
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, an analytical approach for its selection is largely lacking, and existing signal-processing methods instead tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that the learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, overly large learning rates result in inaccurate encoding models and decoders, while overly small learning rates delay their convergence.
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
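The trade-off the paper formalizes already appears in the simplest adaptive estimator: a constant-gain scalar update converges faster but settles noisier as the learning rate grows. This sketch is not the paper's Bayesian filter; the parameter value and noise level are arbitrary assumptions:

```python
import numpy as np

def adapt(learning_rate, w_true=2.0, noise_sd=1.0, n_steps=5000, seed=0):
    """Track a constant parameter from noisy observations with a
    constant-gain (LMS-style) update; return the estimate trajectory."""
    rng = np.random.default_rng(seed)
    w_hat = 0.0
    traj = np.empty(n_steps)
    for t in range(n_steps):
        y = w_true + rng.normal(scale=noise_sd)   # noisy observation
        w_hat += learning_rate * (y - w_hat)      # adaptive update
        traj[t] = w_hat
    return traj

slow, fast = adapt(0.01), adapt(0.3)
```

For this update the steady-state error variance is lr/(2-lr)·σ² while the initial error decays as (1-lr)^t, so choosing lr means trading one against the other, which is the trade-off the paper resolves analytically for point-process and Gaussian observations.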
Affiliation(s)
- Han-Lin Hsieh
- Ming Hsieh Department of Electrical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, California, United States of America
20
Matsuda T, Kitajo K, Yamaguchi Y, Komaki F. A point process modeling approach for investigating the effect of online brain activity on perceptual switching. Neuroimage 2017; 152:50-59. [PMID: 28242318 DOI: 10.1016/j.neuroimage.2017.02.068] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Revised: 01/30/2017] [Accepted: 02/23/2017] [Indexed: 11/19/2022] Open
Abstract
When watching an ambiguous figure that allows for multiple interpretations, our interpretation spontaneously switches between the possible options. Such spontaneous switching is called perceptual switching and it is modulated by top-down selective attention. In this study, we propose a point process modeling approach for investigating the effects of online brain activity on perceptual switching, where we define online activity as continuous brain activity including spontaneous background and induced activities. Specifically, we modeled perceptual switching during Necker cube perception using electroencephalography (EEG) data. Our method is based on the framework of point process models, which are statistical models of series of events. We regard the perceptual switching phenomenon as a stochastic process and construct its model in a data-driven manner. We develop a model called the online activity regression model, which enables us to determine whether online brain activity has excitatory or inhibitory effects on perceptual switching. By fitting online activity regression models to experimental data and applying likelihood ratio tests with correction for multiple comparisons, we explore the brain regions and frequency bands with significant effects on perceptual switching. The results demonstrate that the modulation of online occipital alpha activity mediates the suppression of perceptual switching to the non-attended interpretation. Thus, our method provides a dynamic description of the attentional process by naturally accounting for the entire time course of brain activity, which is difficult to resolve by focusing only on the brain activity around the time of perceptual switching.
Affiliation(s)
- Takeru Matsuda
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan.
- Keiichi Kitajo
- RIKEN BSI-Toyota Collaboration Center, RIKEN Brain Science Institute, Wako, Saitama, Japan; RIKEN Brain Science Institute, Wako, Saitama, Japan
- Fumiyasu Komaki
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; RIKEN Brain Science Institute, Wako, Saitama, Japan
21
Lebedev MA, Nicolelis MAL. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol Rev 2017; 97:767-837. [PMID: 28275048 DOI: 10.1152/physrev.00027.2016] [Citation(s) in RCA: 235] [Impact Index Per Article: 33.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Brain-machine interfaces (BMIs) combine methods, approaches, and concepts derived from neurophysiology, computer science, and engineering in an effort to establish real-time bidirectional links between living brains and artificial actuators. Although theoretical propositions and some proof of concept experiments on directly linking the brains with machines date back to the early 1960s, BMI research only took off in earnest at the end of the 1990s, when this approach became intimately linked to new neurophysiological methods for sampling large-scale brain activity. The classic goals of BMIs are 1) to unveil and utilize principles of operation and plastic properties of the distributed and dynamic circuits of the brain and 2) to create new therapies to restore mobility and sensations to severely disabled patients. Over the past decade, a wide range of BMI applications have emerged, which considerably expanded these original goals. BMI studies have shown neural control over the movements of robotic and virtual actuators that enact both upper and lower limb functions. Furthermore, BMIs have also incorporated ways to deliver sensory feedback, generated from external actuators, back to the brain. BMI research has been at the forefront of many neurophysiological discoveries, including the demonstration that, through continuous use, artificial tools can be assimilated by the primate brain's body schema. Work on BMIs has also led to the introduction of novel neurorehabilitation strategies. As a result of these efforts, long-term continuous BMI use has been recently implicated with the induction of partial neurological recovery in spinal cord injury patients.
22
Leiva V, Tejo M, Guiraud P, Schmachtenberg O, Orio P, Marmolejo-Ramos F. Modeling neural activity with cumulative damage distributions. BIOLOGICAL CYBERNETICS 2015; 109:421-433. [PMID: 25998210 DOI: 10.1007/s00422-015-0651-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2014] [Accepted: 04/20/2015] [Indexed: 06/04/2023]
Abstract
Neurons transmit information as action potentials or spikes. Due to the inherent randomness of the inter-spike intervals (ISIs), probabilistic models are often used for their description. Cumulative damage (CD) distributions are a family of probabilistic models that has been widely considered for describing time-related cumulative processes. This family allows us to consider certain deterministic principles for modeling ISIs from a probabilistic viewpoint and to link its parameters to values with biological interpretation. The CD family includes the Birnbaum-Saunders and inverse Gaussian distributions, which possess distinctive properties and theoretical arguments useful for ISI description. We expand the use of CD distributions to the modeling of neural spiking behavior, mainly by testing the suitability of the Birnbaum-Saunders distribution, which has not been studied in the setting of neural activity. We validate this expansion with original experimental and simulated electrophysiological data.
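To make the link concrete: a Birnbaum-Saunders inter-spike interval with shape α and scale β can be sampled through its defining transformation of a standard normal, and its mean has the closed form β(1 + α²/2). This numpy sketch (with illustrative parameter values) verifies that relationship by simulation:

```python
import numpy as np

def sample_bs_isis(alpha, beta, n, seed=0):
    """Draw n Birnbaum-Saunders inter-spike intervals.

    Uses the defining relation: if Z ~ N(0,1), then
    T = beta * ((alpha/2)*Z + sqrt(((alpha/2)*Z)**2 + 1))**2
    follows a Birnbaum-Saunders(alpha, beta) law.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    h = 0.5 * alpha * z
    return beta * (h + np.sqrt(h**2 + 1.0))**2

isis = sample_bs_isis(alpha=0.5, beta=1.0, n=100_000)
```

Fitting α and β to recorded ISIs (e.g. by maximum likelihood) and comparing the fit against an inverse-Gaussian alternative is the kind of model comparison the paper carries out.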
Affiliation(s)
- Víctor Leiva
- Faculty of Engineering and Sciences, Universidad Adolfo Ibáñez, Viña del Mar, Chile.
- Institute of Statistics, Universidad de Valparaiso, Valparaiso, Chile.
- Mauricio Tejo
- Faculty of Natural and Exact Sciences, Universidad de Playa Ancha, Valparaiso, Chile
- Pierre Guiraud
- Centro de Investigación y Modelamiento de Fenómenos Aleatorios - Valparaíso, Faculty of Engineering, Universidad de Valparaíso, Valparaiso, Chile
- Oliver Schmachtenberg
- Centro Interdisciplinario de Neurociencia de Valparaíso and Institute of Neuroscience, Universidad de Valparaíso, Valparaiso, Chile
- Patricio Orio
- Centro Interdisciplinario de Neurociencia de Valparaíso and Institute of Neuroscience, Universidad de Valparaíso, Valparaiso, Chile
23
Zaytsev YV, Morrison A, Deger M. Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity. J Comput Neurosci 2015; 39:77-103. [PMID: 26041729 PMCID: PMC4493949 DOI: 10.1007/s10827-015-0565-5] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2014] [Revised: 04/18/2015] [Accepted: 04/22/2015] [Indexed: 10/30/2022]
Abstract
Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
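The concavity that makes this tractable is that of the GLM point-process likelihood itself. A minimal discrete-time analogue, a Poisson GLM for one neuron fit by plain gradient ascent, with simulated covariates standing in for presynaptic spike histories, looks like this (all sizes, rates, and step counts are illustrative):

```python
import numpy as np

def fit_poisson_glm(X, y, lr=1e-3, n_iter=2000):
    """Maximize the Poisson GLM log-likelihood sum(y*(X@b) - exp(X@b))
    by gradient ascent; concavity guarantees the global optimum."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        lam = np.exp(X @ beta)            # conditional intensity per bin
        beta += lr * (X.T @ (y - lam))    # gradient of the log-likelihood
    return beta
```

The paper's contribution is doing exactly this kind of optimization in continuous time, at network scale, with a parallelized scheme; the toy above only shows why a global optimum is reachable by gradient ascent at all.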
Affiliation(s)
- Yury V. Zaytsev
- Simulation Laboratory Neuroscience – Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany
- Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Forschungszentrum Jülich GmbH, Jülich Supercomputing Center (JSC), 52425 Jülich, Germany
- Abigail Morrison
- Simulation Laboratory Neuroscience – Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience & Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Center and JARA, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Moritz Deger
- School of Life Sciences, Brain Mind Institute and School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
24
Fast maximum likelihood estimation using continuous-time neural point process models. J Comput Neurosci 2015; 38:499-519. [PMID: 25788412 DOI: 10.1007/s10827-015-0551-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Revised: 01/05/2015] [Accepted: 03/02/2015] [Indexed: 10/23/2022]
Abstract
A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations is decreased from O(np²) to O(qp²). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and the numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a quadrature order q of 60 yields numerical error negligible with respect to parameter-estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
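The computational claim is easy to reproduce: the expensive term in the continuous-time log-likelihood is the integral of the intensity, ∫λ(t)dt, and Gauss-Legendre quadrature with a handful of nodes matches a fine time discretization. The intensity function below is an arbitrary smooth example, not one from the paper:

```python
import numpy as np

def intensity(t):
    """Example smooth conditional-intensity function (spikes/s)."""
    return np.exp(0.5 + 0.8 * np.sin(t))

def integral_quadrature(f, t0, t1, order):
    """Gauss-Legendre approximation of the integral of f on [t0, t1]."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    mid, half = 0.5 * (t0 + t1), 0.5 * (t1 - t0)
    return half * np.sum(weights * f(mid + half * nodes))

def integral_riemann(f, t0, t1, n_bins):
    """Discrete-time (left Riemann sum) approximation with n_bins bins."""
    dt = (t1 - t0) / n_bins
    t = t0 + dt * np.arange(n_bins)
    return np.sum(f(t)) * dt
```

With 20 nodes the quadrature agrees with a million-bin discretization to well within the discretization's own error, using 1/50,000th as many intensity evaluations, which is the O(np) to O(qp) saving the abstract describes.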
25
Mena G, Paninski L. On quadrature methods for refractory point process likelihoods. Neural Comput 2014; 26:2790-7. [PMID: 25248082 DOI: 10.1162/neco_a_00676] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Parametric models of the conditional intensity of a point process (e.g., generalized linear models) are popular in statistical neuroscience, as they allow us to characterize the variability in neural responses in terms of stimuli and spiking history. Parameter estimation in these models relies heavily on accurate evaluations of the log likelihood and its derivatives. Classical approaches use a discretized time version of the spiking process, and recent work has exploited the existence of a refractory period (during which the conditional intensity is zero following a spike) to obtain more accurate estimates of the likelihood. In this brief letter, we demonstrate that this method can be improved significantly by applying classical quadrature methods directly to the resulting continuous-time integral.
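Concretely, when the intensity is exactly zero for a refractory period Δ after each spike, the likelihood integral splits into smooth pieces running from (spike time + Δ) to the next spike, and quadrature can be applied piece by piece. A sketch under those assumptions (the function names are illustrative):

```python
import numpy as np

def integrated_intensity_refractory(intensity, spikes, t_end, refractory,
                                    order=10):
    """Integrate a conditional intensity that is clamped to zero for
    `refractory` seconds after each spike, using Gauss-Legendre
    quadrature on each smooth inter-spike segment."""
    nodes, weights = np.polynomial.legendre.leggauss(order)
    total = 0.0
    starts = [0.0] + [s + refractory for s in spikes]
    ends = list(spikes) + [t_end]
    for a, b in zip(starts, ends):
        if b <= a:      # next spike fell inside the refractory period
            continue
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        total += half * np.sum(weights * intensity(mid + half * nodes))
    return total
```

The returned value is the ∫λ(t)dt term of the point-process log-likelihood, evaluated without ever sampling inside the refractory windows where the integrand is identically zero.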
Affiliation(s)
- Gonzalo Mena
- Department of Statistics and Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, U.S.A.
26
Li Z. Decoding methods for neural prostheses: where have we reached? Front Syst Neurosci 2014; 8:129. [PMID: 25076875 PMCID: PMC4100531 DOI: 10.3389/fnsys.2014.00129] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2014] [Accepted: 06/29/2014] [Indexed: 11/22/2022] Open
Abstract
This article reviews advances in decoding methods for brain-machine interfaces (BMIs). Recent work has focused on practical considerations for future clinical deployment of prosthetics. This review is organized by open questions in the field such as what variables to decode, how to design neural tuning models, which neurons to select, how to design models of desired actions, how to learn decoder parameters during prosthetic operation, and how to adapt to changes in neural signals and neural tuning. The concluding discussion highlights the need to design and test decoders within the context of their expected use and the need to answer the question of how much control accuracy is good enough for a prosthetic.
Affiliation(s)
- Zheng Li
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University Beijing, China ; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University Beijing, China
27
Ba D, Temereanca S, Brown EN. Algorithms for the analysis of ensemble neural spiking activity using simultaneous-event multivariate point-process models. Front Comput Neurosci 2014; 8:6. [PMID: 24575001 PMCID: PMC3918645 DOI: 10.3389/fncom.2014.00006] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2013] [Accepted: 01/09/2014] [Indexed: 12/04/2022] Open
Abstract
Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.
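The modeling point can be illustrated with two simulated neurons sharing a common drive: treating each bin's joint outcome as one of four categories {00, 10, 01, 11}, the discrete-time view behind the SEMPP/mGLM, preserves simultaneous spikes that independent per-neuron models would assign near-zero probability. All rates below are arbitrary assumptions, and this empirical tally is only a stand-in for the fitted mGLM:

```python
import numpy as np

def joint_outcome_probs(n_bins=200_000, p_common=0.05, p_ind=0.02, seed=3):
    """Simulate two neurons with a shared drive and tally the four joint
    per-bin outcomes: (0,0), (1,0), (0,1), (1,1)."""
    rng = np.random.default_rng(seed)
    common = rng.random(n_bins) < p_common        # shared drive: both fire
    a = common | (rng.random(n_bins) < p_ind)     # neuron 1
    b = common | (rng.random(n_bins) < p_ind)     # neuron 2
    counts = np.zeros((2, 2), dtype=int)
    for i in range(2):
        for j in range(2):
            counts[i, j] = np.sum((a == i) & (b == j))
    return counts / n_bins                        # multinomial probabilities

probs = joint_outcome_probs()
```

An mGLM would regress these four category probabilities on covariates; the key check here is that the empirical simultaneous-spike probability far exceeds the independence prediction, which is exactly the propensity the SEMPP model is built to capture.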
Affiliation(s)
- Demba Ba
- Department of Anesthesia, Critical Care and Pain Medicine, Harvard Medical School, Massachusetts General Hospital Charlestown, MA, USA ; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology Cambridge, MA, USA
- Simona Temereanca
- Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Massachusetts General Hospital Charlestown, MA, USA
- Emery N Brown
- Department of Anesthesia, Critical Care and Pain Medicine, Harvard Medical School, Massachusetts General Hospital Charlestown, MA, USA ; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology Cambridge, MA, USA ; Institute for Medical Engineering and Science, Massachusetts Institute of Technology Cambridge, MA, USA