1. Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024. PMID: 39242944; DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
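The core idea of prioritized, dissociative modeling, learning the behavior-predictive neural dimensions first and only then modeling the residual neural dynamics, can be illustrated with a deliberately simplified linear two-stage sketch. DPAD itself uses a multisection recurrent neural network; everything below (data, dimensions, the reduced-rank regression stand-in) is hypothetical and only mirrors the two-stage logic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, ny, nz, n1 = 2000, 8, 2, 2  # time steps, neural dim, behavior dim, stage-1 latent dim

# Hypothetical data: neural activity y with a behavior-relevant subspace driving z
x = rng.standard_normal((T, n1))            # behavior-relevant latent
y = x @ rng.standard_normal((n1, ny)) + 0.5 * rng.standard_normal((T, ny))
z = x @ rng.standard_normal((n1, nz)) + 0.1 * rng.standard_normal((T, nz))

# Stage 1 (prioritized): reduced-rank regression y -> z keeps only the
# neural dimensions that matter for predicting behavior.
B, *_ = np.linalg.lstsq(y, z, rcond=None)   # full regression
U, s, Vt = np.linalg.svd(y @ B, full_matrices=False)
x1_hat = U[:, :n1] * s[:n1]                  # rank-n1 behavior-predictive latents

# Stage 2 (dissociative): model what stage 1 leaves unexplained in y.
W, *_ = np.linalg.lstsq(x1_hat, y, rcond=None)
resid = y - x1_hat @ W                       # residual neural variance for stage 2

explained = 1 - resid.var() / y.var()
print(round(explained, 2))
```

In the paper the two stages are nonlinear and dynamical; this sketch only conveys the prioritize-then-dissociate ordering.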
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
2. Fang H, Berman SA, Wang Y, Yang Y. Robust adaptive deep brain stimulation control of in-silico non-stationary Parkinsonian neural oscillatory dynamics. J Neural Eng 2024; 21:036043. PMID: 38834058; DOI: 10.1088/1741-2552/ad5406.
Abstract
Objective. Closed-loop deep brain stimulation (DBS) is a promising therapy for Parkinson's disease (PD) that adjusts DBS patterns in real time based on feedback neural activity. Current closed-loop DBS mainly uses threshold-crossing on-off controllers or linear time-invariant (LTI) controllers to regulate basal ganglia (BG) Parkinsonian beta-band oscillation power. However, the critical cortex-BG-thalamus network dynamics underlying PD are nonlinear, non-stationary, and noisy, hindering accurate and robust control of Parkinsonian neural oscillatory dynamics. Approach. Here, we develop a new robust adaptive closed-loop DBS method for regulating the Parkinsonian beta oscillatory dynamics of the cortex-BG-thalamus network. We first build an adaptive state-space model to quantify the dynamic, nonlinear, and non-stationary neural activity. We then construct an adaptive estimator to track the nonlinearity and non-stationarity in real time. We next design a robust controller that automatically determines the DBS frequency based on the estimated Parkinsonian neural state while reducing the system's sensitivity to high-frequency noise. We adopt and tune a biophysical cortex-BG-thalamus network model as an in-silico simulation testbed that generates nonlinear and non-stationary Parkinsonian neural dynamics for evaluating DBS methods. Main results. We find that, under different nonlinear and non-stationary neural dynamics, our robust adaptive DBS method achieved accurate regulation of BG Parkinsonian beta-band oscillation power with small control error, bias, and deviation. Moreover, the accurate regulation generalizes across different therapeutic targets and consistently outperforms current on-off and LTI DBS methods. Significance. These results have implications for future designs of closed-loop DBS systems to treat PD.
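As a toy illustration of the adaptive-estimation-plus-control loop described above, the sketch below tracks a drifting one-dimensional "beta power" gain with recursive least squares and compares the resulting controller against a fixed (LTI-like) one. The scalar plant, gains, and noise levels are invented for illustration and are far simpler than the biophysical cortex-BG-thalamus model used in the paper:

```python
import numpy as np

T, g, target = 500, 1.0, 0.5              # steps, stimulation gain, desired beta power

# Hypothetical non-stationary plant: beta power with a slowly drifting AR gain
a_true = 0.6 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, T))

def run(adaptive):
    rng = np.random.default_rng(1)        # same noise realization for both controllers
    b, a_hat, P = 1.0, 0.6, 1.0           # power, gain estimate, RLS "covariance"
    errs = []
    for t in range(T):
        u = (target - a_hat * b) / g      # drive the predicted next power to target
        b_next = a_true[t] * b + g * u + 0.02 * rng.standard_normal()
        if adaptive:                      # recursive least squares with forgetting
            k = P * b / (0.95 + P * b * b)
            a_hat += k * (b_next - g * u - a_hat * b)
            P = (P - k * b * P) / 0.95
        errs.append(abs(b_next - target))
        b = b_next
    return float(np.mean(errs))

err_fixed, err_adaptive = run(False), run(True)
print(err_fixed, err_adaptive)
```

The adaptive loop keeps the regulation error smaller because the estimator follows the drifting gain, which is the qualitative point the abstract makes against fixed LTI controllers.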
Affiliation(s)
- Hao Fang
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310058, People's Republic of China
- Nanhu Brain-computer Interface Institute, Hangzhou 311100, People's Republic of China
- Stephen A Berman
- College of Medicine, University of Central Florida, Orlando, FL 32816, United States of America
- Yueming Wang
- Nanhu Brain-computer Interface Institute, Hangzhou 311100, People's Republic of China
- Qiushi Academy for Advanced Studies, Hangzhou 310058, People's Republic of China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, People's Republic of China
- State Key Laboratory of Brain-machine Intelligence, Hangzhou 310058, People's Republic of China
- Yuxiao Yang
- MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou 310058, People's Republic of China
- Nanhu Brain-computer Interface Institute, Hangzhou 311100, People's Republic of China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, People's Republic of China
- State Key Laboratory of Brain-machine Intelligence, Hangzhou 310058, People's Republic of China
- Department of Neurosurgery, Second Affiliated Hospital, School of Medicine, Hangzhou 310058, People's Republic of China
- NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou 310058, People's Republic of China
3. Alasfour A, Gilja V. Consistent spectro-spatial features of human ECoG successfully decode naturalistic behavioral states. Front Hum Neurosci 2024; 18:1388267. PMID: 38873653; PMCID: PMC11169785; DOI: 10.3389/fnhum.2024.1388267.
Abstract
Objective. Understanding the neural correlates of naturalistic behavior is critical for extending and confirming the results obtained from trial-based experiments and for designing generalizable brain-computer interfaces that can operate outside laboratory environments. In this study, we aimed to pinpoint consistent spectro-spatial features of neural activity in humans that can discriminate between naturalistic behavioral states. Approach. We analyzed data from five participants using electrocorticography (ECoG) with broad spatial coverage. Spontaneous and naturalistic behaviors such as "Talking" and "Watching TV" were labeled from manually annotated videos. Linear discriminant analysis (LDA) was used to classify the two behavioral states. The parameters learned from the LDA were then used to determine whether the neural signatures driving classification performance are consistent across participants. Main results. Spectro-spatial feature values were consistently discriminative between the two labeled behavioral states across participants. Mainly, θ, α, and low and high γ in the postcentral gyrus, precentral gyrus, and temporal lobe showed significant classification performance and feature consistency across participants. Subject-specific performance exceeded 70%. Combining neural activity from multiple cortical regions generally did not improve decoding performance, suggesting that information regarding the behavioral state is non-additive across cortical regions. Significance. To the best of our knowledge, this is the first attempt to identify specific spectro-spatial neural correlates that consistently decode naturalistic and active behavioral states. This work serves as an initial starting point for developing brain-computer interfaces that can generalize to realistic settings and for furthering our understanding of the neural correlates of naturalistic behavior in humans.
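The classification pipeline, band-power features fed to a two-class linear discriminant, can be sketched with simulated features. The band means, noise level, and the two state labels below are invented stand-ins for the study's actual ECoG spectro-spatial features:

```python
import numpy as np

rng = np.random.default_rng(2)
n_per, n_feat = 200, 6   # trials per state; hypothetical band powers (e.g. theta..high gamma)

# Simulated log band powers for two behavioral states with a mean shift in a few bands
mu_talk = np.array([1.0, 0.8, 0.2, 0.1, 0.9, 0.7])
mu_tv   = np.array([0.6, 0.8, 0.2, 0.1, 0.4, 0.3])
X0 = mu_talk + 0.5 * rng.standard_normal((n_per, n_feat))
X1 = mu_tv   + 0.5 * rng.standard_normal((n_per, n_feat))

# Fisher linear discriminant: w = pooled-covariance^-1 (mu0 - mu1)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, X0.mean(0) - X1.mean(0))
c = w @ (X0.mean(0) + X1.mean(0)) / 2          # midpoint decision threshold

pred0, pred1 = X0 @ w > c, X1 @ w <= c         # classify each state's trials
accuracy = (pred0.sum() + pred1.sum()) / (2 * n_per)
print(accuracy)
```

Inspecting the learned weights `w` is the analogue of the paper's consistency analysis: the discriminative bands are the ones with large weight magnitude.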
Affiliation(s)
- Abdulwahab Alasfour
- Department of Electrical Engineering, College of Engineering and Petroleum, Kuwait University, Kuwait City, Kuwait
- Vikash Gilja
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, United States
4. Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. PMID: 38513289; DOI: 10.1088/1741-2552/ad3678.
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
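A minimal stand-in for the multimodal detection idea: a matched-filter (Gaussian) term and a point-process log-likelihood term are summed over candidate delays, and the joint score is maximized. The template, rates, and noise levels below are hypothetical, and this one-event sketch omits the MED's class estimation and cross-modal scaling parameter:

```python
import numpy as np

rng = np.random.default_rng(3)
T, L, true_delay = 300, 40, 117      # samples, template length, hypothetical event time

# Continuous (LFP-like) channel: known template at an unknown delay in Gaussian noise
template = np.hanning(L)
lfp = 0.8 * rng.standard_normal(T)
lfp[true_delay:true_delay + L] += 3.0 * template

# Point-process (spike-like) channel: Poisson rate steps up during the event
rate = np.full(T, 0.05)
rate[true_delay:true_delay + L] = 0.3
spikes = rng.poisson(rate)           # counts in small bins

def joint_loglik(d):
    # Gaussian term: matched-filter correlation of the LFP with the template at delay d
    g = lfp[d:d + L] @ (3.0 * template)
    # Poisson term: log-likelihood of the counts under the event-rate profile at delay d
    lam = np.full(T, 0.05)
    lam[d:d + L] = 0.3
    p = np.sum(spikes * np.log(lam) - lam)
    return g + p

scores = [joint_loglik(d) for d in range(T - L)]
d_hat = int(np.argmax(scores))
print(d_hat)
```

Because both modalities peak at the true delay, the joint score localizes the event more reliably than either term alone, which is the fusion behavior the abstract reports.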
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
5. Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450; PMCID: PMC10913727; DOI: 10.1088/1741-2552/ad1053.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
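The flavor of analytical subspace identification can be shown on a purely Gaussian toy system: the singular values of a Hankel matrix of future-past covariances reveal the latent dimensionality without iterative optimization. This sketch omits the paper's key contributions, the Poisson spike modality and the constrained noise-statistics optimization, and all system matrices are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
T, nx, ny, hz = 20000, 2, 10, 5       # samples, true latent dim, observation dim, horizon

# Simulate a linear-Gaussian latent dynamical system (a stand-in for field potentials)
A = np.array([[0.95, 0.2], [-0.2, 0.95]]) * 0.98   # stable damped rotation
C = rng.standard_normal((ny, nx))
x = np.zeros(nx)
Y = np.empty((T, ny))
for t in range(T):
    x = A @ x + rng.standard_normal(nx)
    Y[t] = C @ x + 0.5 * rng.standard_normal(ny)
Y -= Y.mean(0)

# Hankel matrix of future-past cross-covariances; its effective rank is the latent dim
def cov(lag):
    return Y[lag:].T @ Y[:T - lag] / (T - lag)

H = np.block([[cov(i + j + 1) for j in range(hz)] for i in range(hz)])
s = np.linalg.svd(H, compute_uv=False)
print(s[:4] / s[0])
```

The sharp drop after the second singular value recovers `nx = 2`; a full SID would go on to read the state-space matrices out of the SVD factors.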
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
6. Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612; DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
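The misattribution risk described above can be reproduced in a few lines: when a temporally structured input drives a scalar latent state, fitting the dynamics while ignoring the input absorbs the input's autocorrelation into the intrinsic-dynamics estimate. All parameters are illustrative, and the scalar regression is a stand-in for the paper's multi-dimensional analytical learning method:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 5000
A, B = 0.8, 1.5                 # true intrinsic dynamics and input gain (scalars for clarity)

# Temporally structured (autocorrelated) measured input, e.g. a sensory task signal
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.9 * u[t - 1] + rng.standard_normal()

# Latent state driven by both intrinsic dynamics and the measured input
x = np.zeros(T)
for t in range(1, T):
    x[t] = A * x[t - 1] + B * u[t - 1] + 0.5 * rng.standard_normal()

# Fit dynamics while ignoring the input: the input's persistence is absorbed into A
A_no_input = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

# Fit dynamics while accounting for the input: least squares on [x, u]
X = np.column_stack([x[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
A_with_input = coef[0]

print(A_no_input, A_with_input)
```

The input-ignoring estimate is biased toward the input's own autocorrelation (0.9 here), whereas the input-aware fit recovers the true intrinsic dynamics near 0.8, which is the failure mode the paper's method is designed to avoid.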
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
7. Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors, such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
8. Nozari E, Bertolero MA, Stiso J, Caciagli L, Cornblath EJ, He X, Mahadevan AS, Pappas GJ, Bassett DS. Macroscopic resting-state brain dynamics are best described by linear models. Nat Biomed Eng 2024; 8:68-84. PMID: 38082179; PMCID: PMC11357987; DOI: 10.1038/s41551-023-01117-y.
Abstract
It is typically assumed that large networks of neurons exhibit a large repertoire of nonlinear behaviours. Here we challenge this assumption by leveraging mathematical models derived from measurements of local field potentials via intracranial electroencephalography and of whole-brain blood-oxygen-level-dependent brain activity via functional magnetic resonance imaging. We used state-of-the-art linear and nonlinear families of models to describe spontaneous resting-state activity of 700 participants in the Human Connectome Project and 122 participants in the Restoring Active Memory project. We found that linear autoregressive models provide the best fit across both data types and three performance metrics: predictive power, computational complexity and the extent of the residual dynamics unexplained by the model. To explain this observation, we show that microscopic nonlinear dynamics can be counteracted or masked by four factors associated with macroscopic dynamics: averaging over space and over time, which are inherent to aggregated macroscopic brain activity, and observation noise and limited data samples, which stem from technological limitations. We therefore argue that easier-to-interpret linear models can faithfully describe macroscopic brain dynamics during resting-state conditions.
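A linear autoregressive model of the kind this comparison favors can be fit by ordinary least squares in a few lines. The AR(2) coefficients and noise level below are invented stand-ins for a macroscopic resting-state signal:

```python
import numpy as np

rng = np.random.default_rng(6)
T, p = 4000, 2

# Simulate a stable AR(2) process as a stand-in for a resting-state time series
y = np.zeros(T)
for t in range(p, T):
    y[t] = 1.2 * y[t - 1] - 0.4 * y[t - 2] + rng.standard_normal()

# Fit AR(p) by ordinary least squares on lagged values
X = np.column_stack([y[p - 1 - k:T - 1 - k] for k in range(p)])
a_hat, *_ = np.linalg.lstsq(X, y[p:], rcond=None)

# One-step-ahead predictive power (R^2), the first of the paper's comparison metrics
resid = y[p:] - X @ a_hat
r2 = 1 - resid.var() / y[p:].var()
print(a_hat, round(r2, 2))
```

The paper's argument is that, on real macroscopic recordings, models of exactly this simplicity match or beat nonlinear families on predictive power while remaining far cheaper and easier to interpret.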
Affiliation(s)
- Erfan Nozari
- Department of Mechanical Engineering, University of California, Riverside, CA, USA
- Department of Electrical and Computer Engineering, University of California, Riverside, CA, USA
- Department of Bioengineering, University of California, Riverside, CA, USA
- Maxwell A Bertolero
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer Stiso
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Lorenzo Caciagli
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Eli J Cornblath
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Xiaosong He
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Arun S Mahadevan
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- George J Pappas
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Dani S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Santa Fe Institute, Santa Fe, NM, USA
9. Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. PMID: 38083862; PMCID: PMC10714100; DOI: 10.1088/1741-2552/ad038d.
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states using the Laplace approximation, which may not capture the broader properties of the densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method for Poisson observations based on deterministic sampling, called the Poisson Cubature Filter (PCF), and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data, despite being unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes, unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
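The deterministic-sampling ingredient can be shown in isolation: a degree-3 spherical-radial cubature rule uses 2n sigma-like points to approximate expectations under a Gaussian, and is exact for polynomials up to degree 3. The mean, covariance, and test function below are arbitrary; the PCF embeds such a rule inside filtering for Poisson observations:

```python
import numpy as np

n = 2
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])

# Degree-3 spherical-radial cubature: 2n points mu +/- sqrt(n) * L e_i, equal weights,
# where Sigma = L L^T (Cholesky factor)
L = np.linalg.cholesky(Sigma)
pts = np.vstack([mu + np.sqrt(n) * L[:, i] for i in range(n)] +
                [mu - np.sqrt(n) * L[:, i] for i in range(n)])

def f(x):                        # any polynomial up to degree 3 is integrated exactly
    return x[0] ** 2 + 2 * x[0] * x[1]

approx = np.mean([f(p) for p in pts])
exact = mu[0] ** 2 + Sigma[0, 0] + 2 * (mu[0] * mu[1] + Sigma[0, 1])
print(approx, exact)
```

For the quadratic above, the cubature average matches the analytic Gaussian expectation to machine precision, which is the property that lets the PCF replace intractable Poisson-filtering integrals with a handful of deterministic samples.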
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
10. Sadras N, Sani OG, Ahmadipour P, Shanechi MM. Post-stimulus encoding of decision confidence in EEG: toward a brain-computer interface for decision making. J Neural Eng 2023; 20:056012. PMID: 37524073; DOI: 10.1088/1741-2552/acec14.
Abstract
Objective. When making decisions, humans can evaluate how likely they are to be correct. If this subjective confidence could be reliably decoded from brain activity, it would be possible to build a brain-computer interface (BCI) that improves decision performance by automatically providing more information to the user, if needed, based on their confidence. But this possibility depends on whether confidence can be decoded right after stimulus presentation and before the response, so that a corrective action can be taken in time. Although prior work has shown that decision confidence is represented in brain signals, it is unclear whether the representation is stimulus-locked or response-locked, and whether stimulus-locked pre-response decoding is sufficiently accurate to enable such a BCI. Approach. We investigate the neural correlates of confidence by collecting high-density electroencephalography (EEG) during a perceptual decision task with realistic stimuli. Importantly, we design our task to include a post-stimulus gap that prevents the confounding of stimulus-locked activity by response-locked activity and vice versa, and then compare with a task without this gap. Main results. We perform event-related potential and source-localization analyses. Our analyses suggest that the neural correlates of confidence are stimulus-locked, and that an absence of a post-stimulus gap could cause these correlates to incorrectly appear response-locked. By preventing response-locked activity from confounding stimulus-locked activity, we then show that confidence can be reliably decoded from single-trial stimulus-locked pre-response EEG alone. We also identify a high-performance classification algorithm by comparing a battery of algorithms. Lastly, we design a simulated BCI framework to show that the EEG classification is accurate enough to build a BCI and that the decoded confidence could be used to improve decision-making performance, particularly when the task difficulty and cost of errors are high. Significance. Our results show the feasibility of non-invasive EEG-based BCIs for improving human decision making.
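The simulated-BCI argument, that intervening on low-confidence trials can raise overall accuracy, reduces to a small Monte Carlo sketch. The confidence-noise model, threshold, and assistance benefit below are all invented numbers, not the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Hypothetical single-trial model: evidence quality varies across trials, and the
# decoded confidence is a noisy readout of the true probability of being correct
p_correct = rng.uniform(0.5, 1.0, n)          # per-trial probability the decision is right
correct = rng.random(n) < p_correct
conf = np.clip(p_correct + 0.1 * rng.standard_normal(n), 0, 1)

# BCI policy: when decoded confidence is low, supply extra information,
# which (by assumption) lifts that trial to a higher accuracy p_boost
threshold, p_boost = 0.75, 0.9
redo = conf < threshold
assisted = np.where(redo, rng.random(n) < p_boost, correct)

print(correct.mean(), assisted.mean())
```

Because the intervention is applied exactly where the decoder is unsure, the assisted accuracy exceeds the baseline, and the gain grows as the low-confidence trials get harder, mirroring the abstract's claim about high task difficulty.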
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
| |
11
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.26.542509. [PMID: 37398400 PMCID: PMC10312539 DOI: 10.1101/2023.05.26.542509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/04/2023]
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach and grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while being better in identifying the dynamical modes and having a better or similar accuracy in predicting neural activity. 
Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
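The combined Poisson-Gaussian observation model that multiscale SID assumes can be illustrated with a toy simulation. This sketch shows only the data model (one latent state driving a continuous field channel and a discrete spike-count channel), not the subspace identification algorithm itself, and all parameter values are invented.

```python
import math, random

random.seed(0)

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def poisson(rate):
    # Knuth's inversion sampler (the stdlib has no Poisson draw).
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_spike_field(T=2000, a=0.95, q=0.1):
    """One latent AR(1) state observed through a Gaussian 'field'
    channel and a Poisson 'spike count' channel, mirroring the
    discrete-continuous observation structure; parameters illustrative."""
    x, states, field, spikes = 0.0, [], [], []
    for _ in range(T):
        x = a * x + random.gauss(0.0, math.sqrt(q))      # latent dynamics
        field.append(x + random.gauss(0.0, 0.5))         # continuous field obs
        spikes.append(poisson(math.exp(0.5 * x - 1.0)))  # log-linear spike rate
        states.append(x)
    return states, field, spikes

s, f, n = simulate_spike_field()
print(f"corr(field, state)  = {corr(f, s):.2f}")
print(f"corr(spikes, state) = {corr(n, s):.2f}")
```

Both modalities carry information about the same latent state with different noise characteristics, which is the situation in which multimodal fusion pays off.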
12
Chang S, Wang J, Zhu Y, Wei X, Deng B, Li H, Liu C. Nonlinear dynamical modeling of neural activity using Volterra series with GA-enhanced particle swarm optimization algorithm. Cogn Neurodyn 2023; 17:467-476. [PMID: 37007203 PMCID: PMC10050660 DOI: 10.1007/s11571-022-09822-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 03/23/2022] [Accepted: 05/09/2022] [Indexed: 11/03/2022] Open
Abstract
To improve the performance of Volterra series models of nonlinear neural activity, this paper proposes a new optimization algorithm for identifying the Volterra series parameters. The algorithm combines the advantages of particle swarm optimization (PSO) and the genetic algorithm (GA) to improve both the speed and accuracy of nonlinear model parameter identification. In modeling experiments on neural signals generated by a computational neural model and on a clinical neural dataset, the proposed algorithm shows excellent potential for nonlinear neural activity modeling. Compared with PSO and GA alone, the algorithm achieves lower identification error and better balances convergence speed against identification error. Further, we explore the influence of the algorithm's parameters on identification efficiency, which provides guidance for parameter setting in practical applications of the algorithm.
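The hybrid optimizer can be sketched on a toy second-order Volterra identification problem. The hyperparameters and the particular crossover/mutation details below are assumptions for illustration, not the paper's exact scheme; the toy system and its true kernels are likewise invented.

```python
import random

random.seed(0)

# Toy identification target: a second-order Volterra-type model
#   y[n] = h1a*u[n] + h1b*u[n-1] + h2*u[n]*u[n-1]
# with invented true parameters; the optimizer must recover them.
TRUE = (0.5, 0.3, 0.2)
u = [random.gauss(0, 1) for _ in range(400)]
y = [TRUE[0]*u[n] + TRUE[1]*u[n-1] + TRUE[2]*u[n]*u[n-1] for n in range(1, 400)]

def mse(p):
    a, b, c = p
    err = [y[n-1] - (a*u[n] + b*u[n-1] + c*u[n]*u[n-1]) for n in range(1, 400)]
    return sum(e * e for e in err) / len(err)

def ga_pso(n_particles=30, iters=150, w=0.7, c1=1.5, c2=1.5, mut=0.1):
    """Minimal GA-enhanced PSO sketch: standard PSO velocity/position
    updates, plus GA-style uniform crossover with the global best and
    Gaussian mutation, accepted greedily to preserve progress."""
    pos = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [mse(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(3):  # PSO velocity and position update
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # GA step: uniform crossover with gbest, then mutation.
            child = [pos[i][d] if random.random() < 0.5 else gbest[d]
                     for d in range(3)]
            if random.random() < mut:
                child[random.randrange(3)] += random.gauss(0, 0.1)
            if mse(child) < mse(pos[i]):
                pos[i] = child
            cost = mse(pos[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], cost
                if cost < gcost:
                    gbest, gcost = pos[i][:], cost
    return gbest, gcost

params, err = ga_pso()
print("identified parameters:", [round(p, 3) for p in params], "mse:", round(err, 6))
```

Because the toy model is linear in its parameters, the loss surface is convex, so this sketch mainly demonstrates the mechanics of the hybrid update rather than a hard identification problem.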
Affiliation(s)
- Siyuan Chang, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
- Jiang Wang, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
- Yulin Zhu, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
- Xile Wei, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
- Bin Deng, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
- Huiyan Li, School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China
- Chen Liu, School of Electrical and Information Engineering, Tianjin University, Tianjin, 30072, China
13
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.13.532479. [PMID: 36993605 PMCID: PMC10054986 DOI: 10.1101/2023.03.13.532479] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
14
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.03.14.532554. [PMID: 36993213 PMCID: PMC10055042 DOI: 10.1101/2023.03.14.532554] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other regions. To avoid misinterpreting temporally-structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of a specific behavior. We first show how training dynamical models of neural activity while considering behavior but not input, or input but not behavior may lead to misinterpretations. We then develop a novel analytical learning method that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the new capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of task while other methods can be influenced by the change in task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the three subjects and two tasks whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
15
Fang H, Yang Y. Predictive neuromodulation of cingulo-frontal neural dynamics in major depressive disorder using a brain-computer interface system: A simulation study. Front Comput Neurosci 2023; 17:1119685. [PMID: 36950505 PMCID: PMC10025398 DOI: 10.3389/fncom.2023.1119685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Accepted: 02/15/2023] [Indexed: 03/08/2023] Open
Abstract
Introduction: Deep brain stimulation (DBS) is a promising therapy for treatment-resistant major depressive disorder (MDD). MDD involves the dysfunction of a brain network that can exhibit complex nonlinear neural dynamics in multiple frequency bands. However, current open-loop and responsive DBS methods cannot track the complex multiband neural dynamics in MDD, leading to imprecise regulation of symptoms, variable treatment effects among patients, and high battery power consumption. Methods: Here, we develop a closed-loop brain-computer interface (BCI) system of predictive neuromodulation for treating MDD. We first use a biophysically plausible ventral anterior cingulate cortex (vACC)-dorsolateral prefrontal cortex (dlPFC) neural mass model of MDD to simulate nonlinear and multiband neural dynamics in response to DBS. We then use offline system identification to build a dynamic model that predicts the DBS effect on neural activity. We next use the offline identified model to design an online BCI system of predictive neuromodulation. The online BCI system consists of a dynamic brain state estimator and a model predictive controller. The brain state estimator estimates the MDD brain state from the history of neural activity and previously delivered DBS patterns. The predictive controller takes the estimated MDD brain state as the feedback signal and optimally adjusts DBS to regulate the MDD neural dynamics to therapeutic targets. We use the vACC-dlPFC neural mass model as a simulation testbed to test the BCI system and compare it with state-of-the-art open-loop and responsive DBS treatments of MDD. Results: We demonstrate that our dynamic model accurately predicts nonlinear and multiband neural activity. Consequently, the predictive neuromodulation system accurately regulates the neural dynamics in MDD, resulting in significantly smaller control errors and lower DBS battery power consumption than open-loop and responsive DBS. Discussion: Our results have implications for developing future precisely-tailored clinical closed-loop DBS treatments for MDD.
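The model predictive control loop described in this abstract can be sketched in miniature for a scalar linear brain-state model. This is a hedged one-dimensional toy: the plant parameters, cost weight, and target below are hypothetical, and the paper's controller operates on an identified model of a neural mass simulation rather than this simple plant.

```python
# One-step model predictive controller for a toy scalar plant
#   x[t+1] = a*x[t] + b*u[t],
# minimizing (x_pred - target)^2 + lam*u^2 at each step, which has the
# closed-form optimal input below. All parameter values are invented.

def mpc_step(x, a, b, target, lam):
    # argmin_u of (a*x + b*u - target)^2 + lam*u^2
    return b * (target - a * x) / (b * b + lam)

a, b, lam, target = 0.9, 0.5, 0.01, 1.0
x, traj = 0.0, []
for _ in range(50):
    u = mpc_step(x, a, b, target, lam)
    x = a * x + b * u          # noiseless plant update, for clarity
    traj.append(x)

print(f"final state {traj[-1]:.3f} (target {target})")
```

The input penalty `lam` trades regulation accuracy against control effort; in the paper's setting this trade-off is what limits stimulation power.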
Affiliation(s)
- Hao Fang, Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL, United States
- Yuxiao Yang (corresponding author), Ministry of Education (MOE) Frontier Science Center for Brain Science and Brain-Machine Integration; State Key Laboratory of Brain-Machine Intelligence; College of Computer Science and Technology; and Department of Neurosurgery, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
16
Song CY, Hsieh HL, Pesaran B, Shanechi MM. Modeling and inference methods for switching regime-dependent dynamical systems with multiscale neural observations. J Neural Eng 2022; 19. [PMID: 36261030 DOI: 10.1088/1741-2552/ac9b94] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 10/19/2022] [Indexed: 01/11/2023]
Abstract
Objective. Realizing neurotechnologies that enable long-term neural recordings across multiple spatial-temporal scales during naturalistic behaviors requires new modeling and inference methods that can simultaneously address two challenges. First, the methods should aggregate information across all activity scales from multiple recording sources such as spiking and field potentials. Second, the methods should detect changes in the regimes of behavior and/or neural dynamics during naturalistic scenarios and long-term recordings. Prior regime detection methods are developed for a single scale of activity rather than multiscale activity, and prior multiscale methods have not considered regime switching and are for stationary cases. Approach. Here, we address both challenges by developing a switching multiscale dynamical system model and the associated filtering and smoothing methods. This model describes the encoding of an unobserved brain state in multiscale spike-field activity. It also allows for regime-switching dynamics using an unobserved regime state that dictates the dynamical and encoding parameters at every time-step. We also design the associated switching multiscale inference methods that estimate both the unobserved regime and brain states from simultaneous spike-field activity. Main results. We validate the methods in both extensive numerical simulations and prefrontal spike-field data recorded in a monkey performing saccades for fluid rewards. We show that these methods can successfully combine the spiking and field potential observations to simultaneously track the regime and brain states accurately. Doing so, these methods lead to better state estimation compared with single-scale switching methods or stationary multiscale methods. Also, for single-scale linear Gaussian observations, the new switching smoother can better generalize to diverse system settings compared to prior switching smoothers. Significance. These modeling and inference methods effectively incorporate both regime-detection and multiscale observations. As such, they could facilitate investigation of latent switching neural population dynamics and improve future brain-machine interfaces by enabling inference in naturalistic scenarios where regime-dependent multiscale activity and behavior arise.
Affiliation(s)
- Christian Y Song, Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Han-Lin Hsieh, Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran, Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi, Ming Hsieh Department of Electrical and Computer Engineering; Department of Biomedical Engineering; Department of Computer Science, Viterbi School of Engineering; and Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
17
Ebrahiminia F, Cichy RM, Khaligh-Razavi SM. A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans. Front Neurosci 2022; 16:983602. [PMID: 36330341 PMCID: PMC9624066 DOI: 10.3389/fnins.2022.983602] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/23/2022] [Indexed: 09/07/2024] Open
Abstract
Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG). However, how the data provided by fMRI and EEG relate exactly to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis. In particular, we assessed whether this relation is affected when we change stimuli or introduce identity-preserving variations to them. For this, we recorded EEG and fMRI data separately from 21 healthy participants while participants viewed everyday objects in different viewing conditions, and then related the data to electrocorticogram (ECoG) data recorded for the same stimulus set from epileptic patients. The comparison of EEG and ECoG data showed that object category signals emerge swiftly in the visual system and can be detected by both EEG and ECoG at similar temporal delays after stimulus onset. The correlation between EEG and ECoG was reduced when object representations tolerant to changes in scale and orientation were considered. The comparison of fMRI and ECoG overall revealed a tighter relationship in occipital than in temporal regions, related to differences in fMRI signal-to-noise ratio. Together, our results reveal a complex relationship between fMRI, EEG, and ECoG signals at the level of population codes that critically depends on the time point after stimulus onset, the region investigated, and the visual contents used.
Affiliation(s)
- Fatemeh Ebrahiminia, Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran; School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi, Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
18
Alasfour A, Gabriel P, Jiang X, Shamie I, Melloni L, Thesen T, Dugan P, Friedman D, Doyle W, Devinsky O, Gonda D, Sattar S, Wang S, Halgren E, Gilja V. Spatiotemporal dynamics of human high gamma discriminate naturalistic behavioral states. PLoS Comput Biol 2022; 18:e1010401. [PMID: 35939509 PMCID: PMC9387937 DOI: 10.1371/journal.pcbi.1010401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 08/18/2022] [Accepted: 07/18/2022] [Indexed: 11/18/2022] Open
Abstract
In analyzing the neural correlates of naturalistic and unstructured behaviors, features of neural activity that are ignored in a trial-based experimental paradigm can be more fully studied and investigated. Here, we analyze neural activity from two patients using electrocorticography (ECoG) and stereo-electroencephalography (sEEG) recordings, and reveal that multiple neural signal characteristics exist that discriminate between unstructured and naturalistic behavioral states such as “engaging in dialogue” and “using electronics”. Using the high gamma amplitude as an estimate of neuronal firing rate, we demonstrate that behavioral states in a naturalistic setting are discriminable based on long-term mean shifts, variance shifts, and differences in the specific neural activity’s covariance structure. Both the rapid and slow changes in high gamma band activity separate unstructured behavioral states. We also use Gaussian process factor analysis (GPFA) to show the existence of salient spatiotemporal features with variable smoothness in time. Further, we demonstrate that both temporally smooth and stochastic spatiotemporal activity can be used to differentiate unstructured behavioral states. This is the first attempt to elucidate how different neural signal features contain information about behavioral states collected outside the conventional experimental paradigm.
Affiliation(s)
- Abdulwahab Alasfour, Department of Electrical Engineering, Kuwait University, Kuwait City, Kuwait; Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
- Paolo Gabriel, Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
- Xi Jiang, Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Isaac Shamie, Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Lucia Melloni, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Thomas Thesen, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America; Department of Biomedical Sciences, College of Medicine, University of Houston, Houston, Texas, United States of America
- Patricia Dugan, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Daniel Friedman, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Werner Doyle, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Orin Devinsky, Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- David Gonda, Department of Neurosciences, UC San Diego, San Diego, California, United States of America; Rady Children's Hospital San Diego, San Diego, California, United States of America
- Shifteh Sattar, Department of Neurosciences, UC San Diego, San Diego, California, United States of America; Rady Children's Hospital San Diego, San Diego, California, United States of America
- Sonya Wang, Rady Children's Hospital San Diego, San Diego, California, United States of America; Department of Neurology, University of Minnesota Medical School, Minneapolis, Minnesota, United States of America
- Eric Halgren, Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Vikash Gilja, Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
19
Fang H, Yang Y. Designing and Validating a Robust Adaptive Neuromodulation Algorithm for Closed-Loop Control of Brain States. J Neural Eng 2022; 19. [PMID: 35576912 DOI: 10.1088/1741-2552/ac7005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/16/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE: Neuromodulation systems that use closed-loop brain stimulation to control brain states can provide new therapies for brain disorders. To date, closed-loop brain stimulation has largely used linear time-invariant controllers. However, nonlinear time-varying brain network dynamics and external disturbances can appear during real-time stimulation, collectively leading to real-time model uncertainty. Real-time model uncertainty can degrade the performance or even cause instability of time-invariant controllers. Three problems need to be resolved to enable accurate and stable control under model uncertainty. First, an adaptive controller is needed to track the model uncertainty. Second, the adaptive controller additionally needs to be robust to noise and disturbances. Third, theoretical analyses of stability and robustness are needed as prerequisites for stable operation of the controller in practical applications. APPROACH: We develop a robust adaptive neuromodulation algorithm that solves the above three problems. First, we develop a state-space brain network model that explicitly includes nonlinear terms of real-time model uncertainty and design an adaptive controller to track and cancel the model uncertainty. Second, to improve the robustness of the adaptive controller, we design two linear filters to increase steady-state control accuracy and reduce sensitivity to high-frequency noise and disturbances. Third, we conduct theoretical analyses to prove the stability of the neuromodulation algorithm and establish a trade-off between stability and robustness, which we further use to optimize the algorithm design. Finally, we validate the algorithm using comprehensive Monte Carlo simulations that span a broad range of model nonlinearity, uncertainty, and complexity. MAIN RESULTS: The robust adaptive neuromodulation algorithm accurately tracks various types of target brain state trajectories, enables stable and robust control, and significantly outperforms state-of-the-art neuromodulation algorithms. SIGNIFICANCE: Our algorithm has implications for future designs of precise, stable, and robust closed-loop brain stimulation systems to treat brain disorders and facilitate brain functions.
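The track-and-cancel idea at the heart of adaptive control can be sketched with a minimal gradient-adaptation toy. This is an illustration only: the plant, gains, and disturbance profile below are invented, and the sketch omits the paper's robustness filters and stability analysis.

```python
# Toy adaptive cancellation of unknown "model uncertainty": plant
#   x[t+1] = a*x[t] + u[t] + d[t]
# with unknown disturbance d[t]. The controller keeps an estimate d_hat,
# cancels it in the control law, and adapts d_hat from the one-step
# prediction error, so tracking recovers even when d jumps mid-run.
# All numerical values are invented for illustration.

a, r, gamma = 0.8, 1.0, 0.5      # plant pole, reference, adaptation gain
x, d_hat, errors = 0.0, 0.0, []
for t in range(100):
    d = 0.5 if t < 50 else -0.7          # disturbance switches at t = 50
    u = r - a * x - d_hat                # cancel dynamics and estimated d
    x_pred = a * x + u + d_hat           # model's one-step prediction
    x = a * x + u + d                    # true plant update
    d_hat += gamma * (x - x_pred)        # gradient-style adaptation
    errors.append(abs(x - r))

print(f"error before switch: {errors[49]:.2e}, at switch: {errors[50]:.2f}, "
      f"after re-adaptation: {errors[-1]:.2e}")
```

The tracking error decays geometrically at rate `1 - gamma`, spikes when the disturbance changes, and then decays again; choosing `gamma` is a simple instance of the stability-versus-robustness trade-off the abstract analyzes formally.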
Affiliation(s)
- Hao Fang, Department of Electrical and Computer Engineering, University of Central Florida, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States
- Yuxiao Yang, Department of Electrical and Computer Engineering, University of Central Florida, 4353 Scorpius St., Orlando, Florida, 32816-2368, United States
20
Wang C, Pesaran B, Shanechi MM. Modeling multiscale causal interactions between spiking and field potential signals during behavior. J Neural Eng 2022; 19. [DOI: 10.1088/1741-2552/ac4e1c] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 01/24/2022] [Indexed: 11/12/2022]
Abstract
Objective. Brain recordings exhibit dynamics at multiple spatiotemporal scales, which are measured with spike trains and larger-scale field potential signals. To study neural processes, it is important to identify and model causal interactions not only at a single scale of activity, but also across multiple scales, i.e. between spike trains and field potential signals. Standard causality measures are not directly applicable here because spike trains are binary-valued but field potentials are continuous-valued. It is thus important to develop computational tools to recover multiscale neural causality during behavior, assess their performance on neural datasets, and study whether modeling multiscale causalities can improve the prediction of neural signals beyond what is possible with single-scale causality. Approach. We design a multiscale model-based Granger-like causality method based on directed information and evaluate its success both in realistic biophysical spike-field simulations and in motor cortical datasets from two non-human primates (NHP) performing a motor behavior. To compute multiscale causality, we learn point-process generalized linear models that predict the spike events at a given time based on the history of both spike trains and field potential signals. We also learn linear Gaussian models that predict the field potential signals at a given time based on their own history as well as either the history of binary spike events or that of latent firing rates. Main results. We find that our method reveals the true multiscale causality network structure in biophysical simulations despite the presence of model mismatch. Further, models with the identified multiscale causalities in the NHP neural datasets lead to better prediction of both spike trains and field potential signals compared to just modeling single-scale causalities. 
Finally, we find that latent firing rates are better predictors of field potential signals compared with the binary spike events in the NHP datasets. Significance. This multiscale causality method can reveal the directed functional interactions across spatiotemporal scales of brain activity to inform basic science investigations and neurotechnologies.
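The core Granger-like logic (spike history improves prediction of the field beyond the field's own history) can be sketched with ordinary least squares on synthetic data. This is a deliberately simplified stand-in: the coupling coefficients and noise levels are invented, and the paper itself uses point-process GLMs for the spike direction rather than this linear toy.

```python
import random

random.seed(0)

# Synthetic spike-to-field coupling (invented values): field y depends
# on past spikes s, so adding spike history to a least-squares predictor
# of y should lower in-sample error, a Granger-style signature.
T = 4000
s = [1 if random.random() < 0.3 else 0 for _ in range(T)]
y = [0.0]
for t in range(1, T):
    y.append(0.8 * y[t - 1] + 0.5 * s[t - 1] + random.gauss(0, 0.3))

def fit_mse(features, target):
    """Exact least squares with intercept (normal equations solved by
    Gaussian elimination); returns in-sample mean squared error."""
    n, d = len(target), len(features) + 1
    X = [[1.0] + [f[t] for f in features] for t in range(n)]
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(d)]
         for i in range(d)]
    b = [sum(X[t][i] * target[t] for t in range(n)) for i in range(d)]
    for col in range(d):                       # forward elimination
        piv = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            m = A[r][col] / A[col][col]
            for c in range(col, d):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    w = [0.0] * d
    for r in range(d - 1, -1, -1):             # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, d))) / A[r][r]
    resid = [target[t] - sum(w[j] * X[t][j] for j in range(d)) for t in range(n)]
    return sum(e * e for e in resid) / n

y_hist = [y[t - 1] for t in range(1, T)]
s_hist = [float(s[t - 1]) for t in range(1, T)]
y_now = [y[t] for t in range(1, T)]

mse_self = fit_mse([y_hist], y_now)            # field history only
mse_multi = fit_mse([y_hist, s_hist], y_now)   # + spike history
print(f"field-only MSE {mse_self:.4f}  vs  spike+field MSE {mse_multi:.4f}")
```

A substantial drop in prediction error when spike history is added is the multiscale-causality signature; with no true coupling, the two errors would be nearly equal.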
21
Salimpour Y, Mills KA, Hwang BY, Anderson WS. Phase-targeted stimulation modulates phase-amplitude coupling in the motor cortex of the human brain. Brain Stimul 2021; 15:152-163. [PMID: 34856396 DOI: 10.1016/j.brs.2021.11.019] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 10/10/2021] [Accepted: 11/28/2021] [Indexed: 11/02/2022] Open
Abstract
BACKGROUND: Phase-amplitude coupling (PAC), in which the amplitude of a faster field potential oscillation is coupled to the phase of a slower rhythm, is one of the most well-studied interactions between oscillations at different frequency bands. In a healthy brain, PAC accompanies cognitive functions such as learning and memory, and changes in PAC have been associated with neurological diseases including Parkinson's disease (PD), schizophrenia, obsessive-compulsive disorder, Alzheimer's disease, and epilepsy. OBJECTIVE/HYPOTHESIS: In PD, normalization of PAC in the motor cortex has been reported in the context of effective treatments such as dopamine replacement therapy and deep brain stimulation (DBS), but the possibility of normalizing PAC through intervention at the cortex has not been shown in humans. Phase-targeted stimulation (PTS) has strong potential to modulate PAC levels and potentially normalize them. METHODS: We applied stimulation pulses triggered by specific phases of the beta oscillations, the low-frequency oscillations whose phase defines the gamma amplitude in beta-gamma PAC, to the motor cortex of seven PD patients at rest during DBS lead placement surgery. We measured the effect on PAC modulation in the motor cortex relative to stimulation-free periods. RESULTS: We describe a system for phase-targeted stimulation locked to specific phases of a continuously updated prediction of a slow local field potential oscillation (in this case, beta band oscillations). Stimulation locked to the phase of the peak of beta oscillations increased beta-gamma coupling both during and after stimulation in the motor cortex, and opposite-phase (trough) stimulation reduced the magnitude of coupling after stimulation. CONCLUSION: These results demonstrate the capacity of cortical phase-targeted stimulation to modulate PAC without evoking motor activation, which could allow applications in the treatment of neurological disorders associated with abnormal PAC, such as PD.
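The beta-gamma PAC being modulated here is commonly quantified with a mean-vector-length index. Below is a minimal sketch on synthetic phase/amplitude series; the signal parameters are illustrative, and in practice the phase and amplitude would first be extracted from recordings via band-pass filtering and Hilbert transforms.

```python
import cmath, math, random

random.seed(0)

def pac_mvl(phase, amp):
    """Mean-vector-length PAC index (Canolty-style): magnitude of the
    amplitude-weighted mean phase vector, normalized by mean amplitude."""
    v = sum(a * cmath.exp(1j * p) for p, a in zip(phase, amp))
    return abs(v) / sum(amp)

N = 10000
beta_phase = [(2 * math.pi * 0.073 * k) % (2 * math.pi) for k in range(N)]
# Coupled: gamma amplitude peaks at the beta peak (phase 0).
# Uncoupled: amplitude fluctuates independently of phase.
amp_coupled = [max(0.0, 1.0 + 0.5 * math.cos(p) + random.gauss(0, 0.1))
               for p in beta_phase]
amp_uncoupled = [max(0.0, 1.0 + random.gauss(0, 0.1)) for _ in range(N)]

print(f"PAC (coupled)   = {pac_mvl(beta_phase, amp_coupled):.3f}")
print(f"PAC (uncoupled) = {pac_mvl(beta_phase, amp_uncoupled):.3f}")
```

Peak-locked versus trough-locked stimulation, as in the study, would shift where the amplitude maximum sits relative to phase 0 and thereby raise or lower such an index.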
Collapse
Affiliation(s)
- Yousef Salimpour
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, MD, USA.
| | - Kelly A Mills
- Neuromodulation and Advanced Therapies Clinic, Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, USA
| | - Brian Y Hwang
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, MD, USA
| | - William S Anderson
- Functional Neurosurgery Laboratory, Department of Neurosurgery, Johns Hopkins School of Medicine, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD, USA
| |
Collapse
|
22
|
Ravishankar S, Toneva M, Wehbe L. Single-Trial MEG Data Can Be Denoised Through Cross-Subject Predictive Modeling. Front Comput Neurosci 2021; 15:737324. [PMID: 34858157 PMCID: PMC8632362 DOI: 10.3389/fncom.2021.737324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 08/23/2021] [Indexed: 11/13/2022] Open
Abstract
A pervasive challenge in brain imaging is the presence of noise that hinders investigation of underlying neural processes, with Magnetoencephalography (MEG) in particular having very low Signal-to-Noise Ratio (SNR). The established strategy to increase MEG's SNR involves averaging multiple repetitions of data corresponding to the same stimulus. However, repetition of stimulus can be undesirable, because underlying neural activity has been shown to change across trials, and repeating stimuli limits the breadth of the stimulus space experienced by subjects. In particular, the rising popularity of naturalistic studies with a single viewing of a movie or story necessitates the discovery of new approaches to increase SNR. We introduce a simple framework to reduce noise in single-trial MEG data by leveraging correlations in neural responses across subjects as they experience the same stimulus. We demonstrate its use in a naturalistic reading comprehension task with 8 subjects, with MEG data collected while they read the same story a single time. We find that our procedure results in data with reduced noise and allows for better discovery of neural phenomena. As proof-of-concept, we show that the N400m's correlation with word surprisal, an established finding in the literature, is far more clearly observed in the denoised data than in the original data. The denoised data also shows higher decoding and encoding accuracy than the original data, indicating that the neural signals associated with reading are either preserved or enhanced after the denoising procedure.
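The core idea, predicting one subject's single-trial response from other subjects' time-aligned responses to the same stimulus, can be sketched with plain ridge regression. This is a deliberately simplified in-sample illustration under assumed array shapes, not the paper's trained cross-subject model:

```python
import numpy as np

def cross_subject_denoise(data, target, lam=100.0):
    """Denoise one subject's recording by predicting it from all other
    subjects' time-aligned recordings of the same stimulus via ridge
    regression; the prediction retains mainly the stimulus-driven signal
    that is correlated across subjects.
    data: array of shape (n_subjects, n_timepoints, n_sensors)."""
    others = [data[s] for s in range(data.shape[0]) if s != target]
    X = np.concatenate(others, axis=1)  # (time, sensors * (n_subjects - 1))
    Y = data[target]                    # (time, sensors)
    # Closed-form ridge solution: W = (X'X + lam*I)^-1 X'Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return X @ W

# Shared stimulus-driven signal plus independent per-subject noise
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 6))  # "true" stimulus-locked response
data = shared[None] + rng.normal(scale=1.0, size=(4, 500, 6))
denoised = cross_subject_denoise(data, target=0)
```

Because the per-subject noise is uncorrelated across subjects, the prediction lands closer to the shared signal than the raw single-trial data does, which is the SNR gain the paper exploits.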
Collapse
Affiliation(s)
| | - Mariya Toneva
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Leila Wehbe
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, United States
| |
Collapse
|
23
|
Lu HY, Lorenc ES, Zhu H, Kilmarx J, Sulzer J, Xie C, Tobler PN, Watrous AJ, Orsborn AL, Lewis-Peacock J, Santacruz SR. Multi-scale neural decoding and analysis. J Neural Eng 2021; 18. [PMID: 34284369 PMCID: PMC8840800 DOI: 10.1088/1741-2552/ac160f] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
Objective. Complex spatiotemporal neural activity encodes rich information related to behavior and cognition. Conventional research has focused on neural activity acquired using one of many different measurement modalities, each of which provides useful but incomplete assessment of the neural code. Multi-modal techniques can overcome tradeoffs in the spatial and temporal resolution of a single modality to reveal deeper and more comprehensive understanding of system-level neural mechanisms. Uncovering multi-scale dynamics is essential for a mechanistic understanding of brain function and for harnessing neuroscientific insights to develop more effective clinical treatment. Approach. We discuss conventional methodologies used for characterizing neural activity at different scales and review contemporary examples of how these approaches have been combined. Then we present our case for integrating activity across multiple scales to benefit from the combined strengths of each approach and elucidate a more holistic understanding of neural processes. Main results. We examine various combinations of neural activity at different scales and analytical techniques that can be used to integrate or illuminate information across scales, as well as the technologies that enable such exciting studies. We conclude with challenges facing future multi-scale studies, and a discussion of the power and potential of these approaches. Significance. This roadmap will lead the readers toward a broad range of multi-scale neural decoding techniques and their benefits over single-modality analyses. This Review article highlights the importance of multi-scale analyses for systematically interrogating complex spatiotemporal mechanisms underlying cognition and behavior.
Collapse
Affiliation(s)
- Hung-Yun Lu
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America
| | - Elizabeth S Lorenc
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
| | - Hanlin Zhu
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
| | - Justin Kilmarx
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America
| | - James Sulzer
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
| | - Chong Xie
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
| | - Philippe N Tobler
- University of Zurich, Neuroeconomics and Social Neuroscience, Zurich, Switzerland
| | - Andrew J Watrous
- The University of Texas at Austin, Neurology, Austin, TX, United States of America
| | - Amy L Orsborn
- University of Washington, Electrical and Computer Engineering, Seattle, WA, United States of America; University of Washington, Bioengineering, Seattle, WA, United States of America; Washington National Primate Research Center, Seattle, WA, United States of America
| | - Jarrod Lewis-Peacock
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
| | - Samantha R Santacruz
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
| |
Collapse
|
24
|
Ozmen GC, Safaei M, Semiz B, Whittingslow DC, Hunnicutt JL, Prahalad S, Hash R, Xerogeanes JW, Inan OT. Detection of Meniscal Tear Effects on Tibial Vibration Using Passive Knee Sound Measurements. IEEE Trans Biomed Eng 2021; 68:2241-2250. [PMID: 33400643 PMCID: PMC8284919 DOI: 10.1109/tbme.2020.3048930] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
OBJECTIVE To evaluate whether non-invasive knee sound measurements can provide information related to the underlying structural changes in the knee following meniscal tear. These changes are explained using an equivalent vibrational model of the knee-tibia structure. METHODS First, we formed an analytical model by modeling the tibia as a cantilever beam with the fixed end being the knee. The knee end was supported by three lumped components with parameters corresponding to tibial stiffnesses and the meniscal damping effect. Second, we recorded knee sounds from 46 healthy legs and 9 legs with acute meniscal tears (n = 34 subjects). We developed an acoustic event ("click") detection algorithm to find patterns in the recordings, and used the instrumental variable continuous-time transfer function estimation algorithm to model them. RESULTS The knee sound measurements yielded a consistently lower fundamental mode decay rate in legs with meniscal tears (16 ± 13 s⁻¹) compared to healthy legs (182 ± 128 s⁻¹), p < 0.05. When we performed an intra-subject analysis of the injured versus contralateral legs for the 9 subjects with meniscus tears, we observed significantly lower natural frequency and damping ratio (first mode results for healthy: [Formula: see text]; injured: [Formula: see text]) for the first three vibration modes (p < 0.05). These results agreed with the theoretical expectations gleaned from the vibrational model. SIGNIFICANCE This combined analytical and experimental method improves our understanding of how vibrations can describe the underlying structural changes in the knee following meniscal tear, and supports their use as a tool for future efforts in non-invasively diagnosing meniscal tear injuries.
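The fundamental-mode decay rate compared above can be made concrete on a synthetic transient. A minimal sketch (an assumed damped-sinusoid "click" and a log-envelope line fit; this is not the paper's instrumental-variable transfer-function estimator):

```python
import numpy as np
from scipy.signal import hilbert

def decay_rate(click, fs):
    """Estimate a transient's fundamental-mode decay rate (s^-1) from a
    straight-line fit to the log of its Hilbert envelope."""
    env = np.abs(hilbert(click))
    t = np.arange(len(click)) / fs
    slope, _ = np.polyfit(t, np.log(env + 1e-12), 1)
    return -slope  # positive for a decaying transient

# Synthetic "click": damped sinusoid with decay rate sigma = 150 s^-1
fs, sigma, f0 = 10_000, 150.0, 400.0
t = np.arange(0, 0.03, 1 / fs)
click = np.exp(-sigma * t) * np.sin(2 * np.pi * f0 * t)
```

A lower recovered decay rate means the transient rings longer, which is the direction of the meniscal-tear effect reported in the abstract.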
Collapse
Affiliation(s)
- Goktug C. Ozmen
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Mohsen Safaei
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Beren Semiz
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Daniel C. Whittingslow
- Emory University School of Medicine and Georgia Institute of Technology Coulter Department of Biomedical Engineering under the MD/PhD program
| | | | | | - Regina Hash
- Emory University School of Medicine, Atlanta, GA 30329, USA
| | | | - Omer T. Inan
- School of Electrical and Computer Engineering and, by courtesy, the Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| |
Collapse
|
25
|
Singh SH, Peterson SM, Rao RPN, Brunton BW. Mining naturalistic human behaviors in long-term video and neural recordings. J Neurosci Methods 2021; 358:109199. [PMID: 33910024 DOI: 10.1016/j.jneumeth.2021.109199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 04/07/2021] [Accepted: 04/19/2021] [Indexed: 10/21/2022]
Abstract
BACKGROUND Recent technological advances in brain recording and machine learning algorithms are enabling the study of neural activity underlying spontaneous human behaviors, beyond the confines of cued, repeated trials. However, analyzing such unstructured data lacking a priori experimental design remains a significant challenge, especially when the data is multi-modal and long-term. NEW METHOD Here we describe an automated, behavior-first approach for analyzing simultaneously recorded long-term, naturalistic electrocorticography (ECoG) and behavior video data. We identify and characterize spontaneous human upper-limb movements by combining computer vision, discrete latent-variable modeling, and string pattern-matching on the video. RESULTS Our pipeline discovers and annotates over 40,000 instances of naturalistic arm movements in long-term (7-9 day) behavioral videos, across 12 subjects. Analysis of the simultaneously recorded brain data reveals neural signatures of movement that corroborate previous findings. Our pipeline produces large training datasets for brain-computer interfacing applications, and we show decoding results from a movement initiation detection task. COMPARISON WITH EXISTING METHODS Spontaneous movements capture real-world neural and behavior variability that is missing from traditional cued tasks. Building beyond window-based movement detection metrics, our unsupervised discretization scheme produces a queryable pose representation, allowing localization of movements with finer temporal resolution. CONCLUSIONS Our work addresses the unique analytic challenges of studying naturalistic human behaviors and contributes methods that may generalize to other neural recording modalities beyond ECoG. We publish our curated dataset and believe that it will be a valuable resource for future studies of naturalistic movements.
Collapse
Affiliation(s)
- Satpreet H Singh
- Department of Electrical and Computer Engineering, University of Washington, Seattle, USA
| | - Steven M Peterson
- Department of Biology, University of Washington, Seattle, USA; eScience Institute, University of Washington, Seattle, USA
| | - Rajesh P N Rao
- Department of Electrical and Computer Engineering, University of Washington, Seattle, USA; Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, USA; Center for Neurotechnology, University of Washington, Seattle, USA; University of Washington Institute for Neuroengineering, Seattle, USA
| | - Bingni W Brunton
- Department of Biology, University of Washington, Seattle, USA; eScience Institute, University of Washington, Seattle, USA; University of Washington Institute for Neuroengineering, Seattle, USA.
| |
Collapse
|
26
|
Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021; 18. [PMID: 33254159 DOI: 10.1088/1741-2552/abcefd] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 11/30/2020] [Indexed: 12/29/2022]
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary for example due to learning, plasticity or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
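The abstract's central point, that the adaptation learning rate governs how fast new observations update a model and how well non-stationarities are tracked, can be illustrated with the simplest possible adaptive model. This is an LMS-style scalar sketch on a drifting AR(1) system, not the RO-adaptive LSSM algorithm itself:

```python
import numpy as np

def adaptive_ar_fit(x, eta):
    """LMS-style recursive tracking of the transition parameter a_k in
    x[k+1] = a_k * x[k] + noise; eta is the adaptation learning rate."""
    a_hat, history = 0.0, []
    for k in range(len(x) - 1):
        err = x[k + 1] - a_hat * x[k]  # innovation (prediction error)
        a_hat += eta * err * x[k]      # gradient step on the squared error
        history.append(a_hat)
    return np.array(history)

# Non-stationary system: true transition parameter drifts from 0.5 to 0.9
rng = np.random.default_rng(0)
n = 5000
a_true = np.linspace(0.5, 0.9, n)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = a_true[k] * x[k] + rng.normal(scale=0.5)

slow = adaptive_ar_fit(x, eta=1e-4)  # rate too small: estimate lags the drift
fast = adaptive_ar_fit(x, eta=1e-2)  # better-chosen rate: tracks the drift
```

With the larger learning rate the final estimate sits near the true value of 0.9, while the smaller rate lags far behind, which is exactly why optimizing the rate in real time matters.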
Collapse
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; These authors contributed equally to this work
| | - Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; These authors contributed equally to this work
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
| |
Collapse
|
27
|
Modelling and prediction of the dynamic responses of large-scale brain networks during direct electrical stimulation. Nat Biomed Eng 2021; 5:324-345. [PMID: 33526909 DOI: 10.1038/s41551-020-00666-w] [Citation(s) in RCA: 56] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2019] [Accepted: 11/24/2020] [Indexed: 01/19/2023]
Abstract
Direct electrical stimulation can modulate the activity of brain networks for the treatment of several neurological and neuropsychiatric disorders and for restoring lost function. However, precise neuromodulation in an individual requires the accurate modelling and prediction of the effects of stimulation on the activity of their large-scale brain networks. Here, we report the development of dynamic input-output models that predict multiregional dynamics of brain networks in response to temporally varying patterns of ongoing microstimulation. In experiments with two awake rhesus macaques, we show that the activities of brain networks are modulated by changes in both stimulation amplitude and frequency, that they exhibit damping and oscillatory response dynamics, and that variabilities in prediction accuracy and in estimated response strength across brain regions can be explained by an at-rest functional connectivity measure computed without stimulation. Input-output models of brain dynamics may enable precise neuromodulation for the treatment of disease and facilitate the investigation of the functional organization of large-scale brain networks.
Collapse
|
28
|
Abbaspourazad H, Choudhury M, Wong YT, Pesaran B, Shanechi MM. Multiscale low-dimensional motor cortical state dynamics predict naturalistic reach-and-grasp behavior. Nat Commun 2021; 12:607. [PMID: 33504797 PMCID: PMC7840738 DOI: 10.1038/s41467-020-20197-x] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 11/18/2020] [Indexed: 01/30/2023] Open
Abstract
Motor function depends on neural dynamics spanning multiple spatiotemporal scales of population activity, from spiking of neurons to larger-scale local field potentials (LFP). How multiple scales of low-dimensional population dynamics are related in control of movements remains unknown. Multiscale neural dynamics are especially important to study in naturalistic reach-and-grasp movements, which are relatively under-explored. We learn novel multiscale dynamical models for spike-LFP network activity in monkeys performing naturalistic reach-and-grasps. We show that the low-dimensional dynamics of spiking and LFP activity exhibited several principal modes, each with a unique decay-frequency characteristic. One principal mode dominantly predicted movements. Despite distinct principal modes existing at the two scales, this predictive mode was multiscale and shared between scales, and was shared across sessions and monkeys, yet did not simply replicate behavioral modes. Further, this multiscale mode's decay-frequency explained behavior. We propose that multiscale, low-dimensional motor cortical state dynamics reflect the neural control of naturalistic reach-and-grasp behaviors.
Collapse
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, 90089, USA
| | - Mahdi Choudhury
- Center for Neural Science, New York University, New York City, NY, 10003, USA
| | - Yan T Wong
- Center for Neural Science, New York University, New York City, NY, 10003, USA
- Department of Physiology, and Electrical and Computer Systems Engineering, Monash University, Melbourne, VIC, 3800, Australia
| | - Bijan Pesaran
- Center for Neural Science, New York University, New York City, NY, 10003, USA
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, 90089, USA.
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, 90089, USA.
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA.
| |
Collapse
|
29
|
Lydon-Staley DM, Cornblath EJ, Blevins AS, Bassett DS. Modeling brain, symptom, and behavior in the winds of change. Neuropsychopharmacology 2021; 46:20-32. [PMID: 32859996 PMCID: PMC7689481 DOI: 10.1038/s41386-020-00805-6] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Revised: 07/19/2020] [Accepted: 07/22/2020] [Indexed: 02/08/2023]
Abstract
Neuropsychopharmacology addresses pressing questions in the study of three intertwined complex systems: the brain, human behavior, and symptoms of illness. The field seeks to understand the perturbations that impinge upon those systems, either driving greater health or illness. In the pursuit of this aim, investigators often perform analyses that make certain assumptions about the nature of the systems that are being perturbed. Those assumptions can be encoded in powerful computational models that serve to bridge the wide gulf between a descriptive analysis and a formal theory of a system's response. Here we review a set of three such models along a continuum of complexity, moving from a local treatment to a network treatment: one commonly applied form of the general linear model, impulse response models, and network control models. For each, we describe the model's basic form, review its use in the field, and provide a frank assessment of its relative strengths and weaknesses. The discussion naturally motivates future efforts to interlink data analysis, computational modeling, and formal theory. Our goal is to inspire practitioners to consider the assumptions implicit in their analytical approach, align those assumptions to the complexity of the systems under study, and take advantage of exciting recent advances in modeling the relations between perturbations and system function.
Collapse
Affiliation(s)
- David M Lydon-Staley
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Annenberg School for Communication, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Eli J Cornblath
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Ann Sizemore Blevins
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Department of Electrical & Systems Engineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Department of Physics & Astronomy, College of Arts & Sciences, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- The Santa Fe Institute, Santa Fe, NM, 87501, USA.
| |
Collapse
|
30
|
Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification. Nat Neurosci 2020; 24:140-149. [PMID: 33169030 DOI: 10.1038/s41593-020-00733-0] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2019] [Accepted: 10/02/2020] [Indexed: 11/09/2022]
Abstract
Neural activity exhibits complex dynamics related to various brain functions, internal states and behaviors. Understanding how neural dynamics explain specific measured behaviors requires dissociating behaviorally relevant and irrelevant dynamics, which is not achieved with current neural dynamic models as they are learned without considering behavior. We develop preferential subspace identification (PSID), which is an algorithm that models neural activity while dissociating and prioritizing its behaviorally relevant dynamics. Modeling data in two monkeys performing three-dimensional reach and grasp tasks, PSID revealed that the behaviorally relevant dynamics are significantly lower-dimensional than otherwise implied. Moreover, PSID discovered distinct rotational dynamics that were more predictive of behavior. Furthermore, PSID more accurately learned behaviorally relevant dynamics for each joint and recording channel. Finally, modeling data in two monkeys performing saccades demonstrated the generalization of PSID across behaviors, brain regions and neural signal types. PSID provides a general new tool to reveal behaviorally relevant neural dynamics that can otherwise go unnoticed.
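The dissociation idea above, extracting only those neural dimensions that matter for a measured behavior, can be conveyed with a deliberately simplified toy: reduced-rank regression on static data. This illustrates prioritized, behavior-predictive dimensionality reduction only; it is not the PSID algorithm, and all shapes and names here are assumptions:

```python
import numpy as np

def reduced_rank_map(Y, Z, rank):
    """Rank-constrained linear map from neural activity Y (time x neurons)
    to behavior Z (time x behavior dims), via truncated SVD of the OLS
    solution. A toy for prioritizing behavior-predictive dimensions."""
    B = np.linalg.lstsq(Y, Z, rcond=None)[0]
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Neural data driven by 10 latents, of which only 2 drive behavior
rng = np.random.default_rng(0)
T, n_neurons = 2000, 30
latents = rng.normal(size=(T, 10))
mixing = rng.normal(size=(10, n_neurons))
Y = latents @ mixing + 0.5 * rng.normal(size=(T, n_neurons))
Z = latents[:, :2] + 0.3 * rng.normal(size=(T, 2))  # behaviorally relevant part

def prediction_error(rank):
    return np.linalg.norm(Z - Y @ reduced_rank_map(Y, Z, rank))
```

Even though the neural data is 10-dimensional, a rank of 2 suffices to predict behavior well, echoing the abstract's finding that behaviorally relevant dynamics are lower-dimensional than the full neural dynamics imply.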
Collapse
|
31
|
Srivastava P, Nozari E, Kim JZ, Ju H, Zhou D, Becker C, Pasqualetti F, Pappas GJ, Bassett DS. Models of communication and control for brain networks: distinctions, convergence, and future outlook. Netw Neurosci 2020; 4:1122-1159. [PMID: 33195951 PMCID: PMC7655113 DOI: 10.1162/netn_a_00158] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 07/21/2020] [Indexed: 12/13/2022] Open
Abstract
Recent advances in computational models of signal propagation and routing in the human brain have underscored the critical role of white-matter structure. A complementary approach has utilized the framework of network control theory to better understand how white matter constrains the manner in which a region or set of regions can direct or control the activity of other regions. Despite the potential for both of these approaches to enhance our understanding of the role of network structure in brain function, little work has sought to understand the relations between them. Here, we seek to explicitly bridge computational models of communication and principles of network control in a conceptual review of the current literature. By drawing comparisons between communication and control models in terms of the level of abstraction, the dynamical complexity, the dependence on network attributes, and the interplay of multiple spatiotemporal scales, we highlight the convergence of and distinctions between the two frameworks. Based on the understanding of the intertwined nature of communication and control in human brain networks, this work provides an integrative perspective for the field and outlines exciting directions for future work.
Collapse
Affiliation(s)
- Pragya Srivastava
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Erfan Nozari
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Jason Z. Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Harang Ju
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Dale Zhou
- Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA USA
| | - Cassiano Becker
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
| | - Fabio Pasqualetti
- Department of Mechanical Engineering, University of California, Riverside, CA USA
| | - George J. Pappas
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
| | - Danielle S. Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA USA
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA USA
- Department of Neurology, University of Pennsylvania, Philadelphia, PA USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA USA
- Santa Fe Institute, Santa Fe, NM USA
| |
Collapse
|
32
|
Gazi AH, Gurel NZ, Richardson KLS, Wittbrodt MT, Shah AJ, Vaccarino V, Bremner JD, Inan OT. Digital Cardiovascular Biomarker Responses to Transcutaneous Cervical Vagus Nerve Stimulation: State-Space Modeling, Prediction, and Simulation. JMIR Mhealth Uhealth 2020; 8:e20488. [PMID: 32960179 PMCID: PMC7539162 DOI: 10.2196/20488] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Revised: 06/27/2020] [Accepted: 07/26/2020] [Indexed: 12/11/2022] Open
Abstract
Background: Transcutaneous cervical vagus nerve stimulation (tcVNS) is a promising alternative to implantable stimulation of the vagus nerve. With demonstrated potential in myriad applications, ranging from systemic inflammation reduction to traumatic stress attenuation, closed-loop tcVNS during periods of risk could improve treatment efficacy and reduce ineffective delivery. However, achieving this requires a deeper understanding of biomarker changes over time.
Objective: The aim of the present study was to reveal the dynamics of relevant cardiovascular biomarkers, extracted from wearable sensing modalities, in response to tcVNS.
Methods: Twenty-four human subjects were recruited for a randomized double-blind clinical trial, for whom electrocardiography and photoplethysmography were used to measure heart rate and photoplethysmogram amplitude responses to tcVNS, respectively. Modeling these responses in state-space, we (1) compared the biomarkers in terms of their predictability and active vs sham differentiation, (2) studied the latency between stimulation onset and measurable effects, and (3) visualized the true and model-simulated biomarker responses to tcVNS.
Results: The models accurately predicted future heart rate and photoplethysmogram amplitude values with root mean square errors of approximately one-fifth the standard deviations of the data. Moreover, (1) the photoplethysmogram amplitude showed superior predictability (P=.03) and active vs sham separation compared to heart rate; (2) a consistent delay of greater than 5 seconds was found between tcVNS onset and cardiovascular effects; and (3) dynamic characteristics differentiated responses to tcVNS from the sham stimulation.
Conclusions: This work furthers the state of the art by modeling pertinent biomarker responses to tcVNS. Through subsequent analysis, we discovered three key findings with implications related to (1) wearable sensing devices for bioelectronic medicine, (2) the dominant mechanism of action for tcVNS-induced effects on cardiovascular physiology, and (3) the existence of dynamic biomarker signatures that can be leveraged when titrating therapy in closed loop.
Trial Registration: ClinicalTrials.gov NCT02992899; https://clinicaltrials.gov/ct2/show/NCT02992899
International Registered Report Identifier (IRRID): RR2-10.1016/j.brs.2019.08.002
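The one-step biomarker prediction idea above can be sketched with a toy example. Everything here is our own illustrative assumption, not the paper's method or data: a synthetic heart-rate trace with a delayed post-stimulation dip, and a simple least-squares AR fit (an AR model is the observable companion form of a linear state-space model). The abstract's reported accuracy (RMSE ≈ one-fifth of the data's standard deviation) motivates the RMSE-vs-SD comparison at the end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heart-rate-like response: baseline, then a dip starting ~5 s after
# a stimulation onset at t=50 (echoing the >5 s latency reported), plus noise.
t = np.arange(200)
hr = 70.0 - 3.0 * np.exp(-np.clip(t - 55, 0, None) / 20.0) * (t >= 55)
hr = hr + rng.normal(0.0, 0.3, t.size)

def fit_ar(y, order=3):
    """Least-squares fit of y[t] = a1*y[t-1] + ... + ap*y[t-p] + b."""
    X = np.column_stack(
        [y[order - k - 1 : len(y) - k - 1] for k in range(order)]
        + [np.ones(len(y) - order)]
    )
    theta, *_ = np.linalg.lstsq(X, y[order:], rcond=None)
    return theta

def one_step_predictions(y, theta):
    """Predict y[t] from the preceding `order` samples, for t = order..N-1."""
    order = len(theta) - 1
    X = np.column_stack(
        [y[order - k - 1 : len(y) - k - 1] for k in range(order)]
        + [np.ones(len(y) - order)]
    )
    return X @ theta

theta = fit_ar(hr)
pred = one_step_predictions(hr, theta)
rmse = np.sqrt(np.mean((pred - hr[3:]) ** 2))
# The paper reports one-step RMSEs of roughly one-fifth the data's SD;
# here we simply check the fitted model beats the trivial (SD-level) error.
print(rmse, np.std(hr))
```

The same fitted model can be iterated forward (feeding predictions back in) to simulate biomarker trajectories, which is the spirit of the "model-simulated responses" the study visualizes.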
Collapse
Affiliation(s)
- Asim H Gazi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Nil Z Gurel
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Kristine L S Richardson
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
| | - Matthew T Wittbrodt
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, GA, United States
| | - Amit J Shah
- Department of Epidemiology, Rollins School of Public Health, Atlanta, GA, United States; Department of Medicine, Division of Cardiology, Emory University School of Medicine, Atlanta, GA, United States; Atlanta VA Medical Center, Emory University, Atlanta, GA, United States
| | - Viola Vaccarino
- Department of Epidemiology, Rollins School of Public Health, Atlanta, GA, United States; Department of Medicine, Division of Cardiology, Emory University School of Medicine, Atlanta, GA, United States
| | - J Douglas Bremner
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, GA, United States; Atlanta VA Medical Center, Emory University, Atlanta, GA, United States; Department of Radiology, Emory University School of Medicine, Atlanta, GA, United States
| | - Omer T Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States; Coulter Department of Bioengineering, Georgia Institute of Technology, Atlanta, GA, United States
| |
Collapse
|
33
|
Chang S, Wei X, Su F, Liu C, Yi G, Wang J, Han C, Che Y. Model Predictive Control for Seizure Suppression Based on Nonlinear Auto-Regressive Moving-Average Volterra Model. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2173-2183. [PMID: 32763855 DOI: 10.1109/tnsre.2020.3014927] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This article investigates a closed-loop brain stimulation method based on a model predictive control strategy to suppress epileptic seizures. A neural mass model (NMM), which exhibits normal activity and various epileptic seizures as physiologically meaningful parameters are varied, is used as a black-box model of the brain. Based on system identification, an auto-regressive moving-average Volterra model is established to reveal the relationship between stimulation and neuronal responses. The model predictive control strategy is then implemented based on the Volterra model, generating an optimal stimulation waveform to eliminate epileptiform waves. The computational simulation results indicate that the proposed closed-loop control strategy can optimize the stimulation waveform without particular knowledge of the physiological properties of the system. The robustness of the proposed control strategy to system disturbances makes it more appropriate for future clinical application.
Collapse
|
34
|
Qiao S, Sedillo JI, Brown KA, Ferrentino B, Pesaran B. A Causal Network Analysis of Neuromodulation in the Mood Processing Network. Neuron 2020; 107:972-985.e6. [PMID: 32645299 DOI: 10.1016/j.neuron.2020.06.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 04/21/2020] [Accepted: 06/09/2020] [Indexed: 12/19/2022]
Abstract
Neural decoding and neuromodulation technologies hold great promise for treating mood and other brain disorders in next-generation therapies that manipulate functional brain networks. Here we perform a novel causal network analysis to decode multiregional communication in the primate mood processing network and determine how neuromodulation, short-burst tetanic microstimulation (sbTetMS), alters multiregional network communication. The causal network analysis revealed a mechanism of network excitability that regulates when a sender stimulation site communicates with receiver sites. Decoding network excitability from neural activity at modulator sites predicted sender-receiver communication, whereas sbTetMS neuromodulation temporarily disrupted sender-receiver communication. These results reveal specific network mechanisms of multiregional communication and suggest a new generation of brain therapies that combine neural decoding to predict multiregional communication with neuromodulation to disrupt multiregional communication.
Collapse
Affiliation(s)
- Shaoyu Qiao
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - J Isaac Sedillo
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Kevin A Brown
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Bijan Pesaran
- Center for Neural Science, New York University, New York, NY 10003, USA; Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA; Department of Neurology, New York University Langone Health, New York, NY 10016, USA.
| |
Collapse
|
35
|
Shanechi MM. Brain–machine interfaces from motor to mood. Nat Neurosci 2019; 22:1554-1564. [DOI: 10.1038/s41593-019-0488-y] [Citation(s) in RCA: 82] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Accepted: 08/06/2019] [Indexed: 12/22/2022]
|