1
Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024; 27:2033-2045. PMID: 39242944; PMCID: PMC11452342; DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA.
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA.
2
Yang SH, Huang CJ, Huang JS. Increasing Robustness of Intracortical Brain-Computer Interfaces for Recording Condition Changes via Data Augmentation. Comput Methods Programs Biomed 2024; 251:108208. PMID: 38754326; DOI: 10.1016/j.cmpb.2024.108208.
Abstract
BACKGROUND AND OBJECTIVE: Intracortical brain-computer interfaces (iBCIs) aim to help paralyzed individuals restore their motor functions by decoding neural activity into intended movement. However, changes in neural recording conditions hinder the decoding performance of iBCIs, mainly because the neural-to-kinematic mappings shift. Conventional approaches involve either training the neural decoders using large datasets before deploying the iBCI or conducting frequent calibrations during its operation. However, collecting data for extended periods can cause user fatigue, negatively impacting the quality and consistency of neural signals. Furthermore, frequent calibration imposes a substantial computational load. METHODS: This study proposes a novel approach to increase iBCIs' robustness against changing recording conditions. The approach uses three neural augmentation operators to generate augmented neural activity that mimics common recording conditions. Then, contrastive learning is used to learn latent factors by maximizing the similarity between the augmented neural activities. The learned factors are expected to remain stable despite varying recording conditions and maintain a consistent correlation with the intended movement. RESULTS: Experimental results demonstrate that the proposed iBCI outperformed the state-of-the-art iBCIs and was robust to changing recording conditions across days for long-term use on one publicly available nonhuman primate dataset. It achieved satisfactory offline decoding performance, even when a large training dataset was unavailable. CONCLUSIONS: This study paves the way for reducing the need for frequent calibration of iBCIs and collecting a large amount of annotated training data. Potential future works aim to improve offline decoding performance with an ultra-small training dataset and improve the iBCIs' robustness to severely disabled electrodes.
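The augment-then-contrast recipe this abstract describes can be sketched as follows; the encoder, augmentation operators, and all parameters here are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    """Mimic recording-condition changes: additive noise plus random
    channel dropout (both augmentation choices are illustrative)."""
    noisy = x + 0.05 * rng.standard_normal(x.shape)
    keep = rng.random(x.shape[1]) > 0.1          # drop ~10% of channels
    return noisy * keep

def embed(x, w):
    """Stand-in encoder: linear projection + L2 normalization."""
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(za, zb, temperature=0.1):
    """InfoNCE-style loss: each trial's two augmented views form the
    positive pair; all other trials in the batch serve as negatives."""
    sim = za @ zb.T / temperature                # (n, n) similarity matrix
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

n_trials, n_neurons, n_latent = 32, 20, 8
X = rng.standard_normal((n_trials, n_neurons))   # toy firing rates
W = rng.standard_normal((n_neurons, n_latent))   # untrained encoder weights

za = embed(augment(X, rng), W)
zb = embed(augment(X, rng), W)
loss = contrastive_loss(za, zb)
```

In the paper, the encoder is trained by minimizing such a loss so that augmented views of the same trial map to nearby latent factors that stay stable across recording conditions.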
Affiliation(s)
- Shih-Hung Yang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan.
- Chun-Jui Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
- Jhih-Siang Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
3
Zhang C, Wang H, Tang S, Li Z. Rhesus monkeys learn to control a directional-key inspired brain machine interface via bio-feedback. PLoS One 2024; 19:e0286742. PMID: 38232123; DOI: 10.1371/journal.pone.0286742.
Abstract
Brain machine interfaces (BMI) connect brains directly to the outside world, bypassing natural neural systems and actuators. Neuronal-activity-to-motion transformation algorithms allow applications such as control of prosthetics or computer cursors. These algorithms lie within a spectrum between bio-mimetic control and bio-feedback control. The bio-mimetic approach relies on increasingly complex algorithms to decode neural activity by mimicking the natural neural system and actuator relationship while focusing on machine learning: the supervised fitting of decoder parameters. On the other hand, the bio-feedback approach uses simple algorithms and relies primarily on user learning, which may take some time, but can facilitate control of novel, non-biological appendages. An increasing amount of work has focused on the arguably more successful bio-mimetic approach. However, as chronic recordings have become more accessible and utilization of novel appendages such as computer cursors has become more universal, users can more easily spend time learning in a bio-feedback control paradigm. We believe a simple approach which leverages user learning and few assumptions will provide users with good control ability. To test the feasibility of this idea, we implemented a simple firing-rate-to-motion correspondence rule, assigning groups of neurons to virtual "directional keys" for control of a 2D cursor. Though not strictly required, to facilitate initial control, we selected neurons with similar preferred directions for each group. The groups of neurons were kept the same across multiple recording sessions to allow learning. Two Rhesus monkeys used this BMI to perform a center-out cursor movement task. After about a week of training, monkeys performed the task better and neuronal signal patterns changed on a group basis, indicating learning. While our experiments did not compare this bio-feedback BMI to bio-mimetic BMIs, the results demonstrate the feasibility of our control paradigm and pave the way for further research in multi-dimensional bio-feedback BMIs.
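The firing-rate-to-motion correspondence rule lends itself to a short sketch; the group assignments, firing rates, and threshold below are hypothetical (the actual experiment assigned groups by preferred direction):

```python
import numpy as np

# Four hypothetical neuron groups act as virtual directional keys.
KEY_DIRECTIONS = {                       # unit steps for a 2D cursor
    "up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0),
}

def decode_step(firing_rates, groups, threshold=10.0):
    """Firing-rate-to-motion rule: the cursor steps in the direction of
    the most active group, provided that group exceeds a threshold
    (group indices and threshold are illustrative, not from the paper)."""
    mean_rates = {k: float(np.mean(firing_rates[idx])) for k, idx in groups.items()}
    key = max(mean_rates, key=mean_rates.get)
    if mean_rates[key] < threshold:
        return (0, 0)                    # no key "pressed"
    return KEY_DIRECTIONS[key]

groups = {"up": [0, 1], "down": [2, 3], "left": [4, 5], "right": [6, 7]}
rates = np.array([40.0, 35.0, 5.0, 8.0, 12.0, 10.0, 6.0, 9.0])  # spikes/s
step = decode_step(rates, groups)        # the "up" group fires most
```

Because the rule is fixed and transparent, improvement over sessions must come from the monkey's own modulation of the grouped neurons, which is the bio-feedback premise of the study.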
Affiliation(s)
- Chenguang Zhang
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai, Zhuhai, People's Republic of China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, People's Republic of China
- Hao Wang
- Institute of Big Data and Artificial Intelligence, China Telecom Corporation Limited Beijing Research Institute, Beijing, China
- Shaohua Tang
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai, Zhuhai, People's Republic of China
- School of Systems Science, Beijing Normal University, Beijing, China
- International Academic Center of Complex Systems, Beijing Normal University, Zhuhai, China
- Zheng Li
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University at Zhuhai, Zhuhai, People's Republic of China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, People's Republic of China
4
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181; DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
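The separation the abstract describes, linear dynamics on a nonlinear manifold, can be seen in a toy generative sketch; in DFINE both the dynamics and the manifold map are learned neural networks, whereas here they are hard-coded purely for illustration:

```python
import numpy as np

# Toy latent dynamics: a stable 2-D linear rotation (all numbers invented).
A = np.array([[0.95, -0.10],
              [0.10,  0.95]])

def manifold(z):
    """Fixed nonlinear 'manifold' map from 2-D latent to 3-D observations
    (DFINE learns this map; tanh weights here are arbitrary)."""
    return np.tanh(z @ np.array([[1.0, 0.5, -0.3],
                                 [0.2, -1.0, 0.8]]))

z = np.array([1.0, 0.0])
trajectory = []
for _ in range(20):
    z = A @ z                      # tractable linear step in latent space
    trajectory.append(manifold(z)) # nonlinear embedding into observations
Y = np.stack(trajectory)           # (20, 3) simulated observations
```

Because the dynamics are linear in the latent space, Kalman-style prediction and smoothing remain tractable there, which is what permits causal, non-causal, and missing-data inference through the same model.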
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA.
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA.
5
Song CY, Shanechi MM. Unsupervised learning of stationary and switching dynamical system models from Poisson observations. J Neural Eng 2023; 20:066029. PMID: 38083862; PMCID: PMC10714100; DOI: 10.1088/1741-2552/ad038d.
Abstract
Objective. Investigating neural population dynamics underlying behavior requires learning accurate models of the recorded spiking activity, which can be modeled with a Poisson observation distribution. Switching dynamical system models can offer both explanatory power and interpretability by piecing together successive regimes of simpler dynamics to capture more complex ones. However, in many cases, reliable regime labels are not available, thus demanding accurate unsupervised learning methods for Poisson observations. Existing learning methods, however, rely on inference of latent states in neural activity using the Laplace approximation, which may not capture the broader properties of densities and may lead to inaccurate learning. Thus, there is a need for new inference methods that can enable accurate model learning. Approach. To achieve accurate model learning, we derive a novel inference method based on deterministic sampling for Poisson observations called the Poisson Cubature Filter (PCF) and embed it in an unsupervised learning framework. This method takes a minimum mean squared error approach to estimation. Terms that are difficult to find analytically for Poisson observations are approximated in a novel way with deterministic sampling based on numerical integration and cubature rules. Main results. PCF enabled accurate unsupervised learning in both stationary and switching dynamical systems and largely outperformed prior Laplace approximation-based learning methods in both simulations and motor cortical spiking data recorded during a reaching task. These improvements were larger for smaller data sizes, showing that PCF-based learning was more data efficient and enabled more reliable regime identification. In experimental data and unsupervised with respect to behavior, PCF-based learning uncovered interpretable behavior-relevant regimes unlike prior learning methods. Significance. The developed unsupervised learning methods for switching dynamical systems can accurately uncover latent regimes and states in population spiking activity, with important applications in both basic neuroscience and neurotechnology.
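The deterministic-sampling ingredient the abstract refers to is a cubature rule, which replaces an intractable Gaussian expectation with an equal-weight average over 2n sigma points. A minimal sketch of the generic spherical-radial rule (not the full Poisson filter) might look like:

```python
import numpy as np

def cubature_points(mean, cov):
    """Spherical-radial cubature rule: 2n deterministic sigma points,
    each with weight 1/(2n), for x ~ N(mean, cov)."""
    n = mean.size
    L = np.linalg.cholesky(cov)                        # cov = L @ L.T
    units = np.concatenate([np.eye(n), -np.eye(n)])    # (2n, n) unit directions
    return mean + np.sqrt(n) * units @ L.T             # rows are sigma points

def gaussian_expectation(f, mean, cov):
    """Approximate E[f(x)] by averaging f over the cubature points,
    i.e. deterministic sampling instead of Monte Carlo."""
    pts = cubature_points(mean, cov)
    return np.mean([f(p) for p in pts], axis=0)

mu = np.array([1.0, 0.0])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
m1 = gaussian_expectation(lambda x: x, mu, P)               # ~ mean
m2 = gaussian_expectation(lambda x: np.outer(x, x), mu, P)  # ~ cov + mu mu^T
```

The rule is exact for polynomials up to third order, which is why the first and second moments above are recovered exactly; PCF applies this machinery to the terms of a minimum mean squared error update under Poisson observations.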
Affiliation(s)
- Christian Y Song
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
6
Meghanath G, Jimenez B, Makin JG. Inferring population dynamics in macaque cortex. J Neural Eng 2023; 20:056041. PMID: 37875104; DOI: 10.1088/1741-2552/ad0651.
Abstract
Objective. The proliferation of multi-unit cortical recordings over the last two decades, especially in macaques and during motor-control tasks, has generated interest in neural 'population dynamics': the time evolution of neural activity across a group of neurons working together. A good model of these dynamics should be able to infer the activity of unobserved neurons within the same population and of the observed neurons at future times. Accordingly, Pandarinath and colleagues have introduced a benchmark to evaluate models on these two (and related) criteria: four data sets, each consisting of firing rates from a population of neurons, recorded from macaque cortex during movement-related tasks. Approach. Since this is a discriminative-learning task, we hypothesize that general-purpose architectures based on recurrent neural networks (RNNs) trained with masking can outperform more 'bespoke' models. To capture long-distance dependencies without sacrificing the autoregressive bias of recurrent networks, we also propose a novel, hybrid architecture ('TERN') that augments the RNN with self-attention, as in transformer networks. Main results. Our RNNs outperform all published models on all four data sets in the benchmark. The hybrid architecture improves performance further still. Pure transformer models fail to achieve this level of performance, either in our work or that of other groups. Significance. We argue that the autoregressive bias imposed by RNNs is critical for achieving the highest levels of performance, and establish the state of the art on the neural latents benchmark. We conclude, however, by proposing that the benchmark be augmented with an alternative evaluation of latent dynamics that favors generative over discriminative models like the ones we propose in this report.
Affiliation(s)
- Ganga Meghanath
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
- Bryan Jimenez
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
- Joseph G Makin
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
7
Ye J, Collinger JL, Wehbe L, Gaunt R. Neural Data Transformer 2: Multi-context Pretraining for Neural Spiking Activity. bioRxiv 2023:2023.09.18.558113. Preprint. PMID: 37781630; PMCID: PMC10541112; DOI: 10.1101/2023.09.18.558113.
Abstract
The neural population spiking activity recorded by intracortical brain-computer interfaces (iBCIs) contains rich structure. Current models of such spiking activity are largely prepared for individual experimental contexts, restricting data volume to that collectable within a single session and limiting the effectiveness of deep neural networks (DNNs). The purported challenge in aggregating neural spiking data is the pervasiveness of context-dependent shifts in the neural data distributions. However, large-scale unsupervised pretraining by nature spans heterogeneous data, and has proven to be a fundamental recipe for successful representation learning across deep learning. We thus develop Neural Data Transformer 2 (NDT2), a spatiotemporal Transformer for neural spiking activity, and demonstrate that pretraining can leverage motor BCI datasets that span sessions, subjects, and experimental tasks. NDT2 enables rapid adaptation to novel contexts in downstream decoding tasks and opens the path to deployment of pretrained DNNs for iBCI control. Code: https://github.com/joel99/context_general_bci.
Affiliation(s)
- Joel Ye
- Rehab Neural Engineering Labs, University of Pittsburgh
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Jennifer L. Collinger
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University
- Leila Wehbe
- Neuroscience Institute, Carnegie Mellon University
- Center for the Neural Basis of Cognition, Pittsburgh
- Machine Learning Department, Carnegie Mellon University
- Robert Gaunt
- Rehab Neural Engineering Labs, University of Pittsburgh
- Center for the Neural Basis of Cognition, Pittsburgh
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh
- Department of Bioengineering, University of Pittsburgh
- Department of Biomedical Engineering, Carnegie Mellon University
8
Deo DR, Willett FR, Avansino DT, Hochberg LR, Henderson JM, Shenoy KV. Translating deep learning to neuroprosthetic control. bioRxiv 2023:2023.04.21.537581. Preprint. PMID: 37131830; PMCID: PMC10153231; DOI: 10.1101/2023.04.21.537581.
Abstract
Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.
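The abstract's anti-overfitting fix, dilating/compressing the training data in time and re-ordering it, can be sketched with simple resampling; the dilation factor and segment count below are arbitrary choices for illustration, not the authors' settings:

```python
import numpy as np

def time_dilate(x, factor):
    """Resample a (time, channels) array to round(len * factor) samples
    via linear interpolation, dilating (>1) or compressing (<1) it."""
    t_old = np.linspace(0.0, 1.0, len(x))
    t_new = np.linspace(0.0, 1.0, max(2, int(round(len(x) * factor))))
    return np.stack([np.interp(t_new, t_old, x[:, c])
                     for c in range(x.shape[1])], axis=1)

def reorder_segments(x, n_segments, rng):
    """Cut the series into contiguous segments and shuffle their order,
    breaking the long-range temporal structure an RNN could overfit to."""
    segments = np.array_split(x, n_segments)
    rng.shuffle(segments)
    return np.concatenate(segments)

rng = np.random.default_rng(0)
trial = rng.standard_normal((100, 4))          # toy (time, channels) data
augmented = reorder_segments(time_dilate(trial, 1.3), 5, rng)
```

Training on many such perturbed copies discourages the decoder from memorizing session-specific temporal sequences, which is the failure mode the paper identifies in offline-trained RNNs.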
9
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent structures in neural population activity. bioRxiv 2023:2023.03.13.532479. Preprint. PMID: 36993605; PMCID: PMC10054986; DOI: 10.1101/2023.03.13.532479.
Abstract
Inferring complex spatiotemporal dynamics in neural population activity is critical for investigating neural mechanisms and developing neurotechnology. These activity patterns are noisy observations of lower-dimensional latent factors and their nonlinear dynamical structure. A major unaddressed challenge is to model this nonlinear structure, but in a manner that allows for flexible inference, whether causally, non-causally, or in the presence of missing neural observations. We address this challenge by developing DFINE, a new neural network that separates the model into dynamic and manifold latent factors, such that the dynamics can be modeled in tractable form. We show that DFINE achieves flexible nonlinear inference across diverse behaviors and brain regions. Further, despite enabling flexible inference unlike prior neural network models of population activity, DFINE also better predicts the behavior and neural activity, and better captures the latent neural manifold structure. DFINE can both enhance future neurotechnology and facilitate investigations across diverse domains of neuroscience.
10
Pandarinath C, Bensmaia SJ. The science and engineering behind sensitized brain-controlled bionic hands. Physiol Rev 2022; 102:551-604. PMID: 34541898; PMCID: PMC8742729; DOI: 10.1152/physrev.00034.2020.
Abstract
Advances in our understanding of brain function, along with the development of neural interfaces that allow for the monitoring and activation of neurons, have paved the way for brain-machine interfaces (BMIs), which harness neural signals to reanimate the limbs via electrical activation of the muscles or to control extracorporeal devices, thereby bypassing the muscles and senses altogether. BMIs consist of reading out motor intent from the neuronal responses monitored in motor regions of the brain and executing intended movements with bionic limbs, reanimated limbs, or exoskeletons. BMIs also allow for the restoration of the sense of touch by electrically activating neurons in somatosensory regions of the brain, thereby evoking vivid tactile sensations and conveying feedback about object interactions. In this review, we discuss the neural mechanisms of motor control and somatosensation in able-bodied individuals and describe approaches to use neuronal responses as control signals for movement restoration and to activate residual sensory pathways to restore touch. Although the focus of the review is on intracortical approaches, we also describe alternative signal sources for control and noninvasive strategies for sensory restoration.
Affiliation(s)
- Chethan Pandarinath
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia
- Department of Neurosurgery, Emory University, Atlanta, Georgia
- Sliman J Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, University of Chicago, Chicago, Illinois
11
Schroeder KE, Perkins SM, Wang Q, Churchland MM. Cortical Control of Virtual Self-Motion Using Task-Specific Subspaces. J Neurosci 2022; 42:220-239. PMID: 34716229; PMCID: PMC8802935; DOI: 10.1523/JNEUROSCI.2687-20.2021.
Abstract
Brain-machine interfaces (BMIs) for reaching have enjoyed continued performance improvements, yet there remains significant need for BMIs that control other movement classes. Recent scientific findings suggest that the intrinsic covariance structure of neural activity depends strongly on movement class, potentially necessitating different decode algorithms across classes. To address this possibility, we developed a self-motion BMI based on cortical activity as monkeys cycled a hand-held pedal to progress along a virtual track. Unlike during reaching, we found no high-variance dimensions that directly correlated with to-be-decoded variables, because no neurons had consistent correlations between their responses and kinematic variables. Yet we could decode a single variable, self-motion, by nonlinearly leveraging structure that spanned multiple high-variance neural dimensions. Resulting online BMI-control success rates approached those during manual control. These findings make two broad points regarding how to build decode algorithms that harmonize with the empirical structure of neural activity in motor cortex. First, even when decoding from the same cortical region (e.g., arm-related motor cortex), different movement classes may need to employ very different strategies. Although correlations between neural activity and hand velocity are prominent during reaching tasks, they are not a fundamental property of motor cortex and cannot be counted on to be present in general. Second, although one generally desires a low-dimensional readout, it can be beneficial to leverage a multidimensional high-variance subspace. Fully embracing this approach requires highly nonlinear approaches tailored to the task at hand, but can produce near-native levels of performance. SIGNIFICANCE STATEMENT: Many brain-machine interface decoders have been constructed for controlling movements normally performed with the arm. Yet it is unclear how these will function beyond the reach-like scenarios where they were developed. Existing decoders implicitly assume that neural covariance structure, and correlations with to-be-decoded kinematic variables, will be largely preserved across tasks. We find that the correlation between neural activity and hand kinematics, a feature typically exploited when decoding reach-like movements, is essentially absent during another task performed with the arm: cycling through a virtual environment. Nevertheless, the use of a different strategy, one focused on leveraging the highest-variance neural signals, supported high-performance real-time brain-machine interface control.
Affiliation(s)
- Karen E Schroeder
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, New York
- Department of Biomedical Engineering, Columbia University, New York, New York
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York
12
Savolainen OW. The significance of neural inter-frequency power correlations. Sci Rep 2021; 11:23190. PMID: 34848759; PMCID: PMC8633012; DOI: 10.1038/s41598-021-02277-0.
Abstract
It is of great interest in neuroscience to determine what frequency bands in the brain have covarying power. This would help us robustly identify the frequency signatures of neural processes. However, to date, to the best of the author's knowledge, a comprehensive statistical approach to this question that accounts for intra-frequency autocorrelation, frequency-domain oversampling, and multiple testing under dependency has not been undertaken. As such, this work presents a novel statistical significance test for correlated power across frequency bands for a broad class of non-stationary time series. It is validated on synthetic data. It is then used to test all of the inter-frequency power correlations between 0.2 and 8500 Hz in continuous intracortical extracellular neural recordings in Macaque M1, using a very large, publicly available dataset. The recordings were Current Source Density referenced and were recorded with a Utah array. The results support previous results in the literature that show that neural processes in M1 have power signatures across a very broad range of frequency bands. In particular, the power in LFP frequency bands as low as 20 Hz was found to almost always be statistically significantly correlated to the power in kHz frequency ranges. It is proposed that this test can also be used to discover the superimposed frequency domain signatures of all the neural processes in a neural signal, allowing us to identify every interesting neural frequency band.
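The quantity under test, correlation between band-power time series, can be computed with a plain windowed periodogram, sketched below on synthetic data with a shared amplitude envelope. The paper's contribution is the significance test around such correlations (handling autocorrelation, oversampling, and multiple comparisons), which this sketch omits entirely:

```python
import numpy as np

def band_power_series(x, fs, band, win=256, step=128):
    """Windowed FFT power in a frequency band, yielding one power value
    per window (a plain periodogram; no significance testing here)."""
    lo, hi = band
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    keep = (freqs >= lo) & (freqs < hi)
    powers = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win] * np.hanning(win)
        powers.append((np.abs(np.fft.rfft(seg)) ** 2)[keep].sum())
    return np.asarray(powers)

fs = 2000.0
t = np.arange(0, 10, 1.0 / fs)
am = 1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t)     # shared slow envelope
x = am * np.sin(2 * np.pi * 25 * t) + am * np.sin(2 * np.pi * 300 * t)

low = band_power_series(x, fs, (20, 30))          # LFP-range band
high = band_power_series(x, fs, (290, 310))       # higher-frequency band
r = np.corrcoef(low, high)[0, 1]                  # strongly positive here
```

Both bands inherit the same amplitude envelope, so their power series co-vary strongly; the hard statistical question, how large r must be before it is significant for autocorrelated, non-stationary data, is what the paper addresses.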
Affiliation(s)
- Oscar W Savolainen
- Centre for Bio-Inspired Technology, Imperial College London, London, UK.
13
Selection of Essential Neural Activity Timesteps for Intracortical Brain-Computer Interface Based on Recurrent Neural Network. SENSORS 2021; 21:s21196372. [PMID: 34640699 PMCID: PMC8512903 DOI: 10.3390/s21196372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Revised: 09/15/2021] [Accepted: 09/20/2021] [Indexed: 11/29/2022]
Abstract
Intracortical brain–computer interfaces (iBCIs) translate neural activity into control commands, thereby allowing paralyzed persons to control devices via their brain signals. Recurrent neural networks (RNNs) are widely used as neural decoders because they can learn neural response dynamics from continuous neural activity. Nevertheless, excessively long or short input neural activity for an RNN may decrease its decoding performance. Based on a temporal attention module that exploits relations in features over time, we propose a temporal attention-aware timestep selection (TTS) method that improves the interpretability of the salience of each timestep in an input neural activity. Furthermore, TTS determines the appropriate input neural activity length for accurate neural decoding. Experimental results show that the proposed TTS efficiently selects 28 essential timesteps for RNN-based neural decoders, outperforming state-of-the-art neural decoders on two nonhuman primate datasets (R² = 0.76 ± 0.05 for monkey Indy and CC = 0.91 ± 0.01 for monkey N). In addition, it reduces the computation time for offline training (by 5–12%) and online prediction (by 16–18%). When visualizing the attention mechanism in TTS, the preparatory neural activity is consecutively highlighted during arm movement, and the most recent neural activity is highlighted during the resting state in nonhuman primates. Selecting only a few essential timesteps for an RNN-based neural decoder provides sufficient decoding performance and requires only a short computation time.
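To make the notion of per-timestep salience concrete, here is a hypothetical sketch of softmax attention over timesteps. The scoring vector, dimensions, and the choice to keep the ten highest-weighted timesteps are illustrative assumptions, not the TTS module or its training procedure.

```python
import numpy as np

# Assign each timestep of a neural feature sequence a softmax salience weight;
# low-weight timesteps could then be dropped before feeding an RNN decoder.
def timestep_salience(X, w):
    """X: (timesteps, channels) neural features; w: (channels,) scoring vector."""
    scores = X @ w                      # one scalar score per timestep
    scores -= scores.max()              # subtract max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()      # softmax over the time axis

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8))        # 30 timesteps, 8 hypothetical channels
w = rng.standard_normal(8)              # in TTS this would be learned
a = timestep_salience(X, w)
keep = np.argsort(a)[-10:]              # indices of the 10 most salient timesteps
```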
14
Inferring entire spiking activity from local field potentials. Sci Rep 2021; 11:19045. [PMID: 34561480 PMCID: PMC8463692 DOI: 10.1038/s41598-021-98021-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2020] [Accepted: 09/01/2021] [Indexed: 11/29/2022] Open
Abstract
Extracellular recordings are typically analysed by separating them into two distinct signals: local field potentials (LFPs) and spikes. Previous studies have shown that spikes, in the form of single-unit activity (SUA) or multiunit activity (MUA), can be inferred solely from LFPs with moderately good accuracy. SUA and MUA are typically extracted via a threshold-based technique, which may not be reliable when the recordings exhibit a low signal-to-noise ratio (SNR). Another type of spiking activity, referred to as entire spiking activity (ESA), can be extracted by a threshold-less, fast, and automated technique, but its relationship with the LFPs has not been investigated. In this study, we address this issue by inferring ESA from LFPs intracortically recorded from the motor cortex of three monkeys performing different tasks. Results from long-term recording sessions and across subjects revealed that ESA can be inferred from LFPs with good accuracy. On average, the inference performance for ESA was consistently and significantly higher than that for SUA and MUA. In addition, the local motor potential (LMP) was found to be the most predictive feature. The overall results indicate that LFPs contain substantial information about spiking activity, particularly ESA. This could be useful for understanding the LFP–spike relationship and for the development of LFP-based BMIs.
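The threshold-less ESA extraction referred to above is commonly obtained by high-pass filtering the wideband recording, full-wave rectifying it, and low-pass filtering the result. A minimal sketch, with illustrative cutoff frequencies and filter orders (the paper's exact parameters are not reproduced here):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Entire spiking activity (ESA) as a smooth envelope of spike-band activity.
# Cutoffs and orders below are assumptions for illustration.
def extract_esa(wideband, fs, hp_cut=300.0, lp_cut=100.0, order=4):
    b_hp, a_hp = butter(order, hp_cut / (fs / 2), btype="high")
    spikes = filtfilt(b_hp, a_hp, wideband)     # isolate spike-band activity
    rectified = np.abs(spikes)                  # full-wave rectification
    b_lp, a_lp = butter(order, lp_cut / (fs / 2), btype="low")
    return filtfilt(b_lp, a_lp, rectified)      # smoothed envelope = ESA

rng = np.random.default_rng(2)
fs = 10000.0
wideband = rng.standard_normal(int(fs))         # 1 s of synthetic wideband noise
esa = extract_esa(wideband, fs)
```

Because no amplitude threshold is involved, the pipeline is automated and insensitive to the spike-detection failures that affect SUA/MUA at low SNR.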
15
Sachdeva PS, Livezey JA, Dougherty ME, Gu BM, Berke JD, Bouchard KE. Improved inference in coupling, encoding, and decoding models and its consequence for neuroscientific interpretation. J Neurosci Methods 2021; 358:109195. [PMID: 33905791 DOI: 10.1016/j.jneumeth.2021.109195] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 04/08/2021] [Accepted: 04/10/2021] [Indexed: 10/21/2022]
Abstract
BACKGROUND A central goal of systems neuroscience is to understand the relationships amongst constituent units in neural populations, and their modulation by external factors, using high-dimensional and stochastic neural recordings. Parametric statistical models (e.g., coupling, encoding, and decoding models) play an instrumental role in accomplishing this goal. However, extracting conclusions from a parametric model requires that it is fit using an inference algorithm capable of selecting the correct parameters and properly estimating their values. Traditional approaches to parameter inference have been shown to suffer from failures in both selection and estimation. The recent development of algorithms that ameliorate these deficiencies raises the question of whether past work relying on such inference procedures has produced inaccurate systems neuroscience models, thereby impairing their interpretation. NEW METHOD We used algorithms based on Union of Intersections (UoI), a statistical inference framework based on stability principles, capable of improved selection and estimation. COMPARISON We fit functional coupling, encoding, and decoding models across a battery of neural datasets using both UoI and baseline inference procedures (e.g., ℓ1-penalized GLMs), and compared the structure of their fitted parameters. RESULTS Across recording modality, brain region, and task, we found that UoI inferred models with increased sparsity, improved stability, and qualitatively different parameter distributions, while maintaining predictive performance. We obtained highly sparse functional coupling networks with substantially different community structure, more parsimonious encoding models, and decoding models that relied on fewer single units. CONCLUSIONS Together, these results demonstrate that improved parameter inference, achieved via UoI, reshapes interpretation in diverse neuroscience contexts.
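To convey the stability intuition behind UoI, here is a heavily simplified toy: a feature is selected only if it survives in every bootstrap fit. Real UoI intersects supports across a regularization path (e.g. a lasso path) and then takes a union over estimation bootstraps; the plain least-squares fit and magnitude threshold below are assumptions for illustration only.

```python
import numpy as np

# Toy "intersection of supports": keep a feature only if its coefficient
# exceeds a magnitude threshold in every bootstrap resample.
def stable_support(X, y, n_boot=20, threshold=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    support = np.ones(p, dtype=bool)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                    # bootstrap resample
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        support &= np.abs(beta) > threshold                 # intersect supports
    return support

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [1.5, -2.0, 1.0]                            # only 3 true features
y = X @ beta_true + 0.1 * rng.standard_normal(200)
support = stable_support(X, y)
```

Requiring survival across all resamples is what drives the increased sparsity and stability the abstract reports, compared with a single penalized fit.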
Affiliation(s)
- Pratik S Sachdeva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, 94720, CA, USA; Department of Physics, University of California, Berkeley, 94720, CA, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, 94720, CA, USA
- Jesse A Livezey
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Maximilian E Dougherty
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Bon-Mi Gu
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94143, USA
- Joshua D Berke
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94143, USA; Department of Psychiatry; Neuroscience Graduate Program; Kavli Institute for Fundamental Neuroscience; Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94143, USA
- Kristofer E Bouchard
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, USA; Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Computational Resources Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
16
Ahmadi N, Constandinou T, Bouganis CS. Robust and accurate decoding of hand kinematics from entire spiking activity using deep learning. J Neural Eng 2021; 18. [PMID: 33477128 DOI: 10.1088/1741-2552/abde8a] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 01/21/2021] [Indexed: 12/18/2022]
Abstract
OBJECTIVE Brain-machine interfaces (BMIs) seek to restore lost motor functions in individuals with neurological disorders by enabling them to control external devices directly with their thoughts. This work aims to improve robustness and decoding accuracy, which are currently major challenges in the clinical translation of intracortical BMIs. APPROACH We propose entire spiking activity (ESA), an envelope of spiking activity that can be extracted by a simple, threshold-less, and automated technique, as the input signal. We couple ESA with a deep learning-based decoding algorithm that uses a quasi-recurrent neural network (QRNN) architecture. We comprehensively evaluate the performance of the ESA-driven QRNN decoder for decoding hand kinematics from neural signals chronically recorded from the primary motor cortex of three non-human primates performing different tasks. MAIN RESULTS Our proposed method yields consistently higher decoding performance than any other previously reported combination of input signal and decoding algorithm across long-term recording sessions. It can sustain high decoding performance even when spikes are removed from the raw signals, when different numbers of channels are used, and when a smaller amount of training data is used. SIGNIFICANCE The overall results demonstrate exceptionally high decoding accuracy and chronic robustness, which is highly desirable given that this remains an unresolved challenge in BMIs.
Affiliation(s)
- Nur Ahmadi
- Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London SW7 2BT, UK
- Timothy Constandinou
- Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London SW7 2BT, UK
- Christos-Savvas Bouganis
- Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London SW7 2BT, UK
17
Livezey JA, Glaser JI. Deep learning approaches for neural decoding across architectures and recording modalities. Brief Bioinform 2020; 22:1577-1591. [PMID: 33372958 DOI: 10.1093/bib/bbaa355] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 10/31/2020] [Accepted: 11/04/2020] [Indexed: 12/19/2022] Open
Abstract
Decoding behavior, perception or cognitive state directly from neural signals is critical for brain-computer interface research and an important tool for systems neuroscience. In the last decade, deep learning has become the state-of-the-art method in many machine learning tasks ranging from speech recognition to image segmentation. The success of deep networks in other domains has led to a new wave of applications in neuroscience. In this article, we review deep learning approaches to neural decoding. We describe the architectures used for extracting useful features from neural recording modalities ranging from spikes to functional magnetic resonance imaging. Furthermore, we explore how deep learning has been leveraged to predict common outputs including movement, speech and vision, with a focus on how pretrained deep networks can be incorporated as priors for complex decoding targets like acoustic speech or images. Deep learning has been shown to be a useful tool for improving the accuracy and flexibility of neural decoding across a wide range of tasks, and we point out areas for future scientific development.
Affiliation(s)
- Jesse A Livezey
- Neural Systems and Data Science Laboratory at the Lawrence Berkeley National Laboratory. He obtained his PhD in Physics from the University of California, Berkeley
- Joshua I Glaser
- Center for Theoretical Neuroscience and Department of Statistics at Columbia University. He obtained his PhD in Neuroscience from Northwestern University
18
Ahmadi N, Constandinou T, Bouganis CS. Impact of referencing scheme on decoding performance of LFP-based brain-machine interface. J Neural Eng 2020; 18. [PMID: 33242850 DOI: 10.1088/1741-2552/abce3c] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Accepted: 11/26/2020] [Indexed: 12/19/2022]
Abstract
OBJECTIVE There has recently been increasing interest in the local field potential (LFP) for brain-machine interface (BMI) applications due to its desirable properties (signal stability and low bandwidth). LFP is typically recorded with respect to a single unipolar reference, which is susceptible to common noise. Several referencing schemes have been proposed to eliminate the common noise, such as bipolar reference, current source density (CSD), and common average reference (CAR). However, to date, no studies have investigated the impact of these referencing schemes on the decoding performance of LFP-based BMIs. APPROACH To address this issue, we comprehensively examined the impact of different referencing schemes and LFP features on the performance of hand kinematic decoding using a deep learning method. We used LFPs chronically recorded from the motor cortex of a monkey while performing reaching tasks. MAIN RESULTS Experimental results revealed that the local motor potential (LMP) emerged as the most informative feature regardless of the referencing scheme. Using LMP as the feature, CAR was found to yield consistently better decoding performance than the other referencing schemes over long-term recording sessions. SIGNIFICANCE Overall, our results suggest the potential use of LMP coupled with CAR for enhancing the decoding performance of LFP-based BMIs.
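Of the referencing schemes compared above, common average referencing (CAR) is the simplest to state: subtract the instantaneous mean across all channels from every channel, which cancels common-mode noise shared by the array. A minimal sketch:

```python
import numpy as np

# Common average reference: remove the cross-channel mean at each sample.
def common_average_reference(lfp):
    """lfp: (channels, samples) array of LFP recordings."""
    return lfp - lfp.mean(axis=0, keepdims=True)

rng = np.random.default_rng(4)
# A shared offset stands in for common-mode noise picked up by all channels.
lfp = rng.standard_normal((32, 1000)) + 5.0
car = common_average_reference(lfp)
```

After CAR the mean across channels is exactly zero at every sample, so any noise component common to all electrodes is removed.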
Affiliation(s)
- Nur Ahmadi
- Electrical and Electronic Engineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
- Timothy Constandinou
- Electrical and Electronic Engineering, Imperial College London, London, UK
- Christos-Savvas Bouganis
- Electrical and Electronic Engineering, Imperial College London, London, UK
19
Kobler RJ, Sburlea AI, Mondini V, Hirata M, Müller-Putz GR. Distance- and speed-informed kinematics decoding improves M/EEG based upper-limb movement decoder accuracy. J Neural Eng 2020; 17:056027. [PMID: 33146148 DOI: 10.1088/1741-2552/abb3b3] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE One of the main goals in brain-computer interface (BCI) research is the replacement or restoration of lost function in individuals with paralysis. One line of research investigates the inference of movement kinematics from brain activity during different volitional states. A growing number of electroencephalography (EEG) and magnetoencephalography (MEG) studies suggest that information about directional (e.g. velocity) and nondirectional (e.g. speed) movement kinematics is accessible noninvasively. We sought to assess whether the neural information associated with both types of kinematics can be combined to improve decoding accuracy. APPROACH In an offline analysis, we reanalyzed the data of two previous experiments containing the recordings of 34 healthy participants (15 EEG, 19 MEG). We decoded 2D movement trajectories from low-frequency M/EEG signals in executed and observed tracking movements, and compared the accuracy of an unscented Kalman filter (UKF) that explicitly modeled the nonlinear relation between directional and nondirectional kinematics to the accuracies of linear Kalman (KF) and Wiener filters, which did not combine both types of kinematics. MAIN RESULTS At the group level, posterior-parietal and parieto-occipital (executed and observed movements) and sensorimotor areas (executed movements) encoded kinematic information. Correlations between the recorded position and velocity trajectories and the UKF-decoded ones were on average 0.49 during executed and 0.36 during observed movements. Compared to the other filters, the UKF achieved the best trade-off between maximizing the signal-to-noise ratio and minimizing the amplitude mismatch between the recorded and decoded trajectories. SIGNIFICANCE We present direct evidence that directional and nondirectional kinematic information is simultaneously detectable in low-frequency M/EEG signals. Moreover, combining directional and nondirectional kinematic information significantly improves the decoding accuracy over a linear KF.
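The reason an *unscented* Kalman filter is needed here is that speed is a nonlinear function of the velocity state, so directional and nondirectional kinematics cannot share one linear observation model. The sketch below propagates a Gaussian velocity estimate through that nonlinearity with a basic unscented transform; the sigma-point scaling is a common textbook choice, not taken from the paper.

```python
import numpy as np

# Nonlinear observation: directional (vx, vy) plus nondirectional speed.
def speed_observation(v):
    return np.array([v[0], v[1], np.hypot(v[0], v[1])])

# Basic unscented transform of the mean: push sigma points through h and
# recombine with their weights.
def unscented_mean(mean, cov, h, kappa=1.0):
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return sum(wi * h(s) for wi, s in zip(w, sigma))

mean = np.array([3.0, 4.0])        # hypothetical velocity estimate
cov = 0.01 * np.eye(2)             # small uncertainty around it
z = unscented_mean(mean, cov, speed_observation)
```

For this estimate the predicted speed component of `z` is close to 5 (the true magnitude of the mean velocity), something a linear observation matrix cannot represent.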
Affiliation(s)
- Reinmar J Kobler
- Institute of Neural Engineering, Graz University of Technology, Graz 8010, Styria, Austria
20
Shaikh S, So R, Sibindi T, Libedinsky C, Basu A. Towards Intelligent Intracortical BMI (i²BMI): Low-Power Neuromorphic Decoders That Outperform Kalman Filters. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:1615-1624. [PMID: 31581098 DOI: 10.1109/tbcas.2019.2944486] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Fully implantable wireless intracortical brain-machine interfaces (iBMIs) are one of the most promising next frontiers in the nascent field of neurotechnology. However, scaling the number of channels in such systems by another 10× is difficult due to the power and bandwidth requirements of the wireless transmitter. One promising solution is to move more of the processing, up to the decoder, into the implant so that the transmission data rate is reduced drastically. Earlier work on neuromorphic decoder chips only showed classification of discrete states. We present results for continuous state decoding using a low-power neuromorphic decoder chip termed Spike-input Extreme Learning Machine (SELMA), which implements a nonlinear decoder without memory, and its memory-based version with time-delayed bins, SELMA-bins. We compared SELMA and SELMA-bins against the state-of-the-art steady-state Kalman filter (SSKF), a linear decoder with memory, across two datasets involving a total of 4 non-human primates (NHPs). Results show at least a 10% (20%) increase in the fraction of variance accounted for (FVAF) by SELMA (SELMA-bins) over SSKF across the datasets. An estimated energy consumption comparison shows SELMA (SELMA-bins) consuming ≈9 nJ/update (23 nJ/update) against SSKF's ≈7.4 nJ/update for an iBMI with 10 degrees of freedom of control. Thus, SELMA yields better performance than SSKF while consuming energy in the same range, whereas SELMA-bins performs best with moderately increased energy consumption, albeit far less than the energy required for raw data transmission. This paves the way for reducing transmission data rates in future scaled iBMI systems.
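SELMA belongs to the extreme learning machine family: a fixed random nonlinear hidden layer followed by a least-squares-trained linear readout. The sketch below shows that principle, along with the FVAF metric, on synthetic data; the sizes, tanh nonlinearity, linear target, and in-sample evaluation are illustrative assumptions, and the neuromorphic, spike-input hardware aspects of SELMA are of course not captured.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden, n_out, n_samples = 16, 100, 2, 500

# Synthetic "binned neural features" and a 2-D kinematic target.
X = rng.standard_normal((n_samples, n_in))
W_true = rng.standard_normal((n_in, n_out))
Y = X @ W_true + 0.05 * rng.standard_normal((n_samples, n_out))

# ELM: random hidden weights are fixed at initialization, never trained.
W_hidden = 0.3 * rng.standard_normal((n_in, n_hidden))
H = np.tanh(X @ W_hidden)                          # hidden-layer activations
W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)      # closed-form linear readout

# Fraction of variance accounted for (FVAF), here on the training data.
Y_hat = H @ W_out
fvaf = 1.0 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
```

Training only the readout is what keeps the decoder's update cheap enough for the nJ-scale energy budgets quoted in the abstract.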
21
Tseng PH, Urpi NA, Lebedev M, Nicolelis M. Decoding Movements from Cortical Ensemble Activity Using a Long Short-Term Memory Recurrent Network. Neural Comput 2019; 31:1085-1113. [DOI: 10.1162/neco_a_01189] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
Although many real-time neural decoding algorithms have been proposed for brain-machine interface (BMI) applications over the years, an optimal, consensual approach remains elusive. Recent advances in deep learning algorithms provide new opportunities for improving the design of BMI decoders, including the use of recurrent artificial neural networks to decode neuronal ensemble activity in real time. Here, we developed a long short-term memory (LSTM) decoder for extracting movement kinematics from the activity of large (N = 134–402) populations of neurons, sampled simultaneously from multiple cortical areas, in rhesus monkeys performing motor tasks. Recorded regions included primary motor, dorsal premotor, supplementary motor, and primary somatosensory cortical areas. The LSTM's capacity to retain information for extended periods of time enabled accurate decoding for tasks that required both movements and periods of immobility. Our LSTM algorithm significantly outperformed the state-of-the-art unscented Kalman filter when applied to three tasks: center-out arm reaching, bimanual reaching, and bipedal walking on a treadmill. Notably, LSTM units exhibited a variety of well-known physiological features of cortical neuronal activity, such as directional tuning and neuronal dynamics across task epochs. LSTM modeled several key physiological attributes of cortical circuits involved in motor tasks. These findings suggest that LSTM-based approaches could yield a better algorithm strategy for neuroprostheses that employ BMIs to restore movement in severely disabled patients.
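The gating mechanism that gives an LSTM its long retention is compact enough to sketch. Below is a minimal single-step LSTM cell with random placeholder weights; a real decoder such as the one above learns these weights from neuronal ensemble activity and typically stacks layers and uses a trained linear readout to kinematics.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM timestep. x: input (n_in,); h, c: previous hidden/cell states
    (n_hidden,); W: (4*n_hidden, n_in), U: (4*n_hidden, n_hidden), b: (4*n_hidden,)."""
    z = W @ x + U @ h + b
    n = h.size
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate: lets the cell retain information
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell update
    c_new = f * c + i * g         # cell state carries long-range memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(6)
n_in, n_hidden = 134, 64          # e.g. 134 simultaneously recorded neurons
W = 0.1 * rng.standard_normal((4 * n_hidden, n_in))
U = 0.1 * rng.standard_normal((4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
for x in rng.standard_normal((20, n_in)):   # 20 timesteps of binned spike counts
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate `f` is what allows the state to persist across periods of immobility, the property the abstract credits for accurate decoding in tasks mixing movement and rest.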
Affiliation(s)
- Po-He Tseng
- Department of Neurobiology and Duke University Center for Neuroengineering, Duke University, Durham, NC 27710, U.S.A
- Núria Armengol Urpi
- Departments of Information and Communication Technologies and Experimental and Health Sciences, Universitat Pompeu Fabra, Barcelona 08018, Spain; and Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, Switzerland
- Mikhail Lebedev
- Department of Neurobiology and Duke University Center for Neuroengineering, Duke University, Durham, NC 27710, U.S.A.; and Center for Bioelectric Interfaces of the Institute for Cognitive Neuroscience of the National Research University Higher School of Economics, Moscow, Russia; and Department of Information and Internet Technologies of Digital Health Institute, I.M. Sechenov First Moscow State Medical University, Moscow, Russia
- Miguel Nicolelis
- Departments of Neurobiology, Biomedical Engineering, Psychology and Neuroscience, Neurology, Neurosurgery, and Duke University Center for Neuroengineering, Duke University, Durham, NC 27710, U.S.A.; and Edmund and Lily Safra International Institute of Neuroscience, Natal, Brazil 59066060