1. Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv 2024:2024.04.18.589952. [PMID: 38712193; PMCID: PMC11071278; DOI: 10.1101/2024.04.18.589952]
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
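A toy linear sketch of the re-aiming strategy described above (the dimensions, matrices, and function names are illustrative assumptions, not the paper's model): upstream inputs drive the recorded population through a matrix B, a fixed decoder C reads out cursor velocity, and learning searches only a k-dimensional subspace S of the full input space.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_inputs, k = 20, 20, 2          # k = number of task-relevant input dims
B = rng.normal(size=(n_neurons, n_inputs))  # upstream inputs -> recorded population
C = rng.normal(size=(2, n_neurons))         # fixed BCI decoder: activity -> velocity
S = rng.normal(size=(n_inputs, k))          # low-dimensional command subspace

def reaim(target_velocity):
    """Choose the command inside the k-dim subspace whose decoded output
    best matches the target velocity (least squares through C @ B @ S)."""
    M = C @ B @ S                            # the 2 x k effective map learning sees
    command = np.linalg.pinv(M) @ target_velocity
    return S @ command                       # full input vector actually delivered

u = reaim(np.array([1.0, 0.5]))
decoded = C @ B @ u                          # velocity the BCI would produce
```

Because learning only has to adjust a k-dimensional command rather than all input dimensions, the search problem is far smaller, which matches the intuition for rapid learning under calibrated decoders.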
2. Losey DM, Hennig JA, Oby ER, Golub MD, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Learning leaves a memory trace in motor cortex. Curr Biol 2024; 34:1519-1531.e4. [PMID: 38531360; PMCID: PMC11097210; DOI: 10.1016/j.cub.2024.03.003]
Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map in a way that was specific to the learning experience. That is, learning left a "memory trace" in the primary motor cortex. This memory trace coexisted with proficient performance under the familiar map, primarily by altering neural activity in dimensions that did not impact behavior. Forming memory traces might be how the brain is able to provide for the joint learning of multiple behaviors without interference.
Affiliation(s)
- Darby M Losey: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jay A Hennig: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Emily R Oby: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Matthew D Golub: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Patrick T Sadtler: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Kristin M Quick: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Stephen I Ryu: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Elizabeth C Tyler-Kabara: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX 78712, USA
- Aaron P Batista: Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Byron M Yu: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Steven M Chase: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
3. Barradas VR, Koike Y, Schweighofer N. Theoretical limits on the speed of learning inverse models explain the rate of adaptation in arm reaching tasks. Neural Netw 2024; 170:376-389. [PMID: 38029719; DOI: 10.1016/j.neunet.2023.10.049]
Abstract
An essential aspect of human motor learning is the formation of inverse models, which map desired actions to motor commands. Inverse models can be learned by adjusting parameters in neural circuits to minimize errors in the performance of motor tasks through gradient descent. However, the theory of gradient descent establishes limits on the learning speed. Specifically, the eigenvalues of the Hessian of the error surface around a minimum determine the maximum speed of learning in a task. Here, we use this theoretical framework to analyze the speed of learning in different inverse model learning architectures in a set of isometric arm-reaching tasks. We show theoretically that, in these tasks, the error surface, and thus the speed of learning, are determined by the shape of the arm's force manipulability ellipsoid and by the distribution of targets in the task. In particular, rounder manipulability ellipsoids generate a rounder error surface, allowing for faster learning of the inverse model. Rounder target distributions have a similar effect. We tested these predictions experimentally in a quasi-isometric reaching task with a visuomotor transformation. The experimental results were consistent with our theoretical predictions. Furthermore, our analysis accounts for the speed of learning in previous experiments with incompatible and compatible virtual surgery tasks, and with visuomotor rotation tasks with different numbers of targets. By identifying aspects of a task that influence the speed of learning, our results provide theoretical principles for the design of motor tasks that allow for faster learning.
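The Hessian speed limit can be illustrated on a toy quadratic error surface (the numbers are invented for the example, not taken from the paper): gradient descent is stable only for learning rates below 2/λ_max, so an elongated surface, whose smallest eigenvalue is far below its largest, necessarily has a slowly converging mode, while a "round" surface lets every mode contract quickly.

```python
import numpy as np

def gd_convergence_factors(hessian, lr):
    """Per-mode error contraction factors |1 - lr * eigenvalue| for gradient
    descent on the quadratic surface 0.5 * x^T H x; stable iff all are < 1."""
    eigvals = np.linalg.eigvalsh(hessian)
    return np.abs(1.0 - lr * eigvals)

# "Round" error surface: equal curvature in every direction.
H_round = np.diag([1.0, 1.0])
# Elongated surface, as produced by a flat manipulability ellipsoid.
H_flat = np.diag([1.0, 0.05])

# With lr = 1 / lambda_max (stability requires lr < 2 / lambda_max), the
# slowest mode dominates the overall learning speed.
slowest_round = gd_convergence_factors(H_round, lr=1.0).max()  # 0.0: one step
slowest_flat = gd_convergence_factors(H_flat, lr=1.0).max()    # 0.95: slow axis
```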
Affiliation(s)
- Victor R Barradas: Institute of Innovative Research, Tokyo Institute of Technology, 4259 R2-16 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8503, Japan
- Yasuharu Koike: Institute of Innovative Research, Tokyo Institute of Technology, 4259 R2-16 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8503, Japan
- Nicolas Schweighofer: Biokinesiology and Physical Therapy, University of Southern California, 1540 Alcazar Street, CHP 155, Los Angeles, CA 90089-9006, USA
4. Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. [PMID: 38348287; PMCID: PMC10859875; DOI: 10.3389/fncom.2024.1273053]
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
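As a minimal, time-invariant illustration of the point-process GLM encoding framework the review builds on, here is a Poisson GLM fit by gradient ascent on simulated spike counts (the stimulus features and weights are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated encoding problem: spike counts driven by stimulus features through
# a log-linear link, y ~ Poisson(exp(X @ w_true)).
n_bins, n_features = 2000, 3
X = rng.normal(size=(n_bins, n_features))
w_true = np.array([0.8, -0.5, 0.2])
y = rng.poisson(np.exp(X @ w_true))

def fit_poisson_glm(X, y, lr=0.05, n_iter=500):
    """Maximum-likelihood weights via gradient ascent on the Poisson
    log-likelihood, whose gradient is X^T (y - exp(X w)) / n."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(X @ w)          # predicted firing rate per time bin
        w += lr * X.T @ (y - rate) / len(y)
    return w

w_hat = fit_poisson_glm(X, y)
```

Time-varying extensions of the kind the review surveys would let the weights themselves change across time bins rather than stay fixed as here.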
Affiliation(s)
- Geyu Weng: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States; Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh: Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States; Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
5. Zippi EL, Shvartsman GF, Vendrell-Llopis N, Wallis JD, Carmena JM. Distinct neural representations during a brain-machine interface and manual reaching task in motor cortex, prefrontal cortex, and striatum. Sci Rep 2023; 13:17810. [PMID: 37857827; PMCID: PMC10587077; DOI: 10.1038/s41598-023-44405-y]
Abstract
Although brain-machine interfaces (BMIs) are directly controlled by the modulation of a select local population of neurons, distributed networks consisting of cortical and subcortical areas have been implicated in learning and maintaining control. Previous work in rodents has demonstrated the involvement of the striatum in BMI learning. However, the prefrontal cortex has been largely ignored when studying motor BMI control despite its role in action planning, action selection, and learning abstract tasks. Here, we compare local field potentials simultaneously recorded from primary motor cortex (M1), dorsolateral prefrontal cortex (DLPFC), and the caudate nucleus of the striatum (Cd) while nonhuman primates perform a two-dimensional, self-initiated, center-out task under BMI control and manual control. Our results demonstrate the presence of distinct neural representations for BMI and manual control in M1, DLPFC, and Cd. We find that neural activity from DLPFC and M1 best distinguishes control types at the go cue and target acquisition, respectively, while M1 best predicts target-direction at both task events. We also find effective connectivity from DLPFC → M1 throughout both control types and Cd → M1 during BMI control. These results suggest distributed network activity between M1, DLPFC, and Cd during BMI control that is similar yet distinct from manual control.
Affiliation(s)
- Ellen L Zippi: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Gabrielle F Shvartsman: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
- Nuria Vendrell-Llopis: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
- Joni D Wallis: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Jose M Carmena: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA; Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
6.
Abstract
Brain-machine interfaces (BMIs) aim to treat sensorimotor neurological disorders by creating artificial motor and/or sensory pathways. Introducing artificial pathways creates new relationships between sensory input and motor output, which the brain must learn to gain dexterous control. This review highlights the role of learning in BMIs to restore movement and sensation, and discusses how BMI design may influence neural plasticity and performance. The close integration of plasticity in sensory and motor function influences the design of artificial pathways and will be an essential consideration for bidirectional devices that restore both sensory and motor function.
Affiliation(s)
- Maria C Dadarlat: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Ryan A Canfield: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Amy L Orsborn: Department of Bioengineering, University of Washington, Seattle, Washington, USA; Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA; Washington National Primate Research Center, Seattle, Washington, USA
7. Zippi EL, Shvartsman GF, Vendrell-Llopis N, Wallis JD, Carmena JM. Distinct neural representations during a brain-machine interface and manual reaching task in motor cortex, prefrontal cortex, and striatum. bioRxiv 2023:2023.05.31.542532. [PMID: 37398143; PMCID: PMC10312492; DOI: 10.1101/2023.05.31.542532]
Abstract
Although brain-machine interfaces (BMIs) are directly controlled by the modulation of a select local population of neurons, distributed networks consisting of cortical and subcortical areas have been implicated in learning and maintaining control. Previous work in rodent BMI has demonstrated the involvement of the striatum in BMI learning. However, the prefrontal cortex has been largely ignored when studying motor BMI control despite its role in action planning, action selection, and learning abstract tasks. Here, we compare local field potentials simultaneously recorded from the primary motor cortex (M1), dorsolateral prefrontal cortex (DLPFC), and the caudate nucleus of the striatum (Cd) while nonhuman primates perform a two-dimensional, self-initiated, center-out task under BMI control and manual control. Our results demonstrate the presence of distinct neural representations for BMI and manual control in M1, DLPFC, and Cd. We find that neural activity from DLPFC and M1 best distinguishes between control types at the go cue and target acquisition, respectively. We also find effective connectivity from DLPFC → M1 throughout trials across both control types and Cd → M1 during BMI control. These results suggest distributed network activity between M1, DLPFC, and Cd during BMI control that is similar yet distinct from manual control.
Affiliation(s)
- Ellen L. Zippi: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA
- Gabrielle F. Shvartsman: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
- Nuria Vendrell-Llopis: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA; Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
- Joni D. Wallis: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA; Department of Psychology, University of California, Berkeley, Berkeley, CA
- Jose M. Carmena: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA; Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
8. Smoulder AL, Marino PJ, Oby ER, Snyder SE, Miyata H, Pavlovsky NP, Bishop WE, Yu BM, Chase SM, Batista AP. A neural basis of choking under pressure. bioRxiv 2023:2023.04.16.537007. [PMID: 37090659; PMCID: PMC10120738; DOI: 10.1101/2023.04.16.537007]
Abstract
Incentives tend to drive improvements in performance. But when incentives get too high, we can "choke under pressure" and underperform when it matters most. What neural processes might lead to choking under pressure? We studied Rhesus monkeys performing a challenging reaching task in which they underperform when an unusually large "jackpot" reward is at stake. We observed a collapse in neural information about upcoming movements for jackpot rewards: in the motor cortex, neural planning signals became less distinguishable for different reach directions when a jackpot reward was made available. We conclude that neural signals of reward and motor planning interact in the motor cortex in a manner that can explain why we choke under pressure.
One-Sentence Summary: In response to exceptionally large reward cues, animals can "choke under pressure," and this corresponds to a collapse in the neural information about upcoming movements.
9. Govindarajan LN, Calvert JS, Parker SR, Jung M, Darie R, Miranda P, Shaaya E, Borton DA, Serre T. Fast inference of spinal neuromodulation for motor control using amortized neural networks. J Neural Eng 2022; 19:10.1088/1741-2552/ac9646. [PMID: 36174534; PMCID: PMC9668352; DOI: 10.1088/1741-2552/ac9646]
Abstract
Objective. Epidural electrical stimulation (EES) has emerged as an approach to restore motor function following spinal cord injury (SCI). However, identifying optimal EES parameters presents a significant challenge due to the complex and stochastic nature of muscle control and the combinatorial explosion of possible parameter configurations. Here, we describe a machine-learning approach that leverages modern deep neural networks to learn bidirectional mappings between the space of permissible EES parameters and target motor outputs. Approach. We collected data from four sheep implanted with two 24-contact EES electrode arrays on the lumbosacral spinal cord. Muscle activity was recorded from four bilateral hindlimb electromyography (EMG) sensors. We introduce a general learning framework to identify EES parameters capable of generating desired patterns of EMG activity. Specifically, we first amortize spinal sensorimotor computations in a forward neural network model that learns to predict motor outputs based on EES parameters. Then, we employ a second neural network as an inverse model, which reuses the amortized knowledge learned by the forward model to guide the selection of EES parameters. Main results. We found that neural networks can functionally approximate spinal sensorimotor computations by accurately predicting EMG outputs based on EES parameters. The generalization capability of the forward model critically benefited our inverse model. We successfully identified novel EES parameters, in under 20 min, capable of producing desired target EMG recruitment during in vivo testing. Furthermore, we discovered potential functional redundancies within the spinal sensorimotor networks by identifying unique EES parameters that result in similar motor outcomes. Together, these results suggest that our framework is well-suited to probe spinal circuitry and control muscle recruitment in a completely data-driven manner. Significance. We successfully identify novel EES parameters within minutes, capable of producing desired EMG recruitment. Our approach is data-driven, subject-agnostic, automated, and orders of magnitude faster than manual approaches.
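The forward/inverse scheme can be sketched with a linear toy system standing in for the paper's deep networks and sheep data (every matrix, dimension, and noise level below is an assumption for illustration): first amortize a forward map from stimulation parameters to EMG, then reuse it to solve for parameters expected to evoke a target EMG pattern.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear stand-in for the spinal parameter -> EMG mapping.
n_params, n_emg = 6, 3
W_true = rng.normal(size=(n_params, n_emg))

def stimulate(P):
    """Simulated in vivo trials: noisy EMG evoked by each parameter row of P."""
    return P @ W_true + 0.01 * rng.normal(size=(len(P), n_emg))

# 1) Forward model: amortize the stimulation -> EMG map from random trials.
P_train = rng.normal(size=(200, n_params))
W_fwd, *_ = np.linalg.lstsq(P_train, stimulate(P_train), rcond=None)

# 2) Inverse model: reuse the forward model to choose parameters expected
#    to evoke a desired EMG target (minimum-norm least-squares solution).
def inverse(target_emg):
    return target_emg @ np.linalg.pinv(W_fwd)

params = inverse(np.array([0.5, -0.2, 0.1]))
```

Because n_params exceeds n_emg here, many parameter vectors produce the same EMG, mirroring the functional redundancy the authors report.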
Affiliation(s)
- Lakshmi Narasimhan Govindarajan: Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA; Carney Institute for Brain Science, Brown University, Providence RI USA
- Minju Jung: Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA; Carney Institute for Brain Science, Brown University, Providence RI USA
- Radu Darie: School of Engineering, Brown University, Providence RI USA
- Elias Shaaya: Department of Neurosurgery, Brown University and Rhode Island Hospital, Providence RI USA
- David A. Borton: Carney Institute for Brain Science, Brown University, Providence RI USA; School of Engineering, Brown University, Providence RI USA; Center for Neurorestoration and Neurotechnology, Department of Veterans Affairs, Providence RI USA
- Thomas Serre: Cognitive, Linguistic & Psychological Sciences, Brown University, Providence RI USA; Carney Institute for Brain Science, Brown University, Providence RI USA
10. Maeda RS, Kersten R, Pruszynski JA. Shared internal models for feedforward and feedback control of arm dynamics in non-human primates. Eur J Neurosci 2020; 53:1605-1620. [PMID: 33222285; DOI: 10.1111/ejn.15056]
Abstract
Previous work has shown that humans account for and learn novel properties of the arm's dynamics, and that such learning causes changes in both the predictive (i.e., feedforward) control of reaching and reflex (i.e., feedback) responses to mechanical perturbations. Here we show that similar observations hold in old-world monkeys (Macaca fascicularis). Two monkeys were trained to use an exoskeleton to perform single-joint elbow reaches and to respond to mechanical perturbations that created pure elbow motion. Both of these tasks engaged robust shoulder muscle activity as required to account for the torques that typically arise at the shoulder when the forearm rotates around the elbow joint (i.e., intersegmental dynamics). We altered these intersegmental arm dynamics by having the monkeys generate the same elbow movements with the shoulder joint either free to rotate, as normal, or fixed by the robotic manipulandum, which eliminates the shoulder torques caused by forearm rotation. After fixing the shoulder joint, we found a systematic reduction in shoulder muscle activity. In addition, after releasing the shoulder joint again, we found evidence of kinematic aftereffects (i.e., reach errors) in the direction predicted if failing to compensate for normal arm dynamics. We also tested whether such learning transfers to feedback responses evoked by mechanical perturbations and found a reduction in shoulder feedback responses, as appropriate for these altered arm intersegmental dynamics. Demonstrating this learning and transfer in non-human primates will allow the investigation of the neural mechanisms involved in feedforward and feedback control of the arm's dynamics.
Affiliation(s)
- Rodrigo S Maeda: Brain and Mind Institute, Western University, London, ON, Canada; Robarts Research Institute, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada
- Rhonda Kersten: Robarts Research Institute, Western University, London, ON, Canada; Department of Physiology and Pharmacology, Western University, London, ON, Canada
- J Andrew Pruszynski: Brain and Mind Institute, Western University, London, ON, Canada; Robarts Research Institute, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada; Department of Physiology and Pharmacology, Western University, London, ON, Canada
11. Vyas S, O'Shea DJ, Ryu SI, Shenoy KV. Causal Role of Motor Preparation during Error-Driven Learning. Neuron 2020; 106:329-339.e4. [PMID: 32053768; PMCID: PMC7185427; DOI: 10.1016/j.neuron.2020.01.019]
Abstract
Current theories suggest that an error-driven learning process updates trial-by-trial to facilitate motor adaptation. How this process interacts with motor cortical preparatory activity, which current models suggest plays a critical role in movement initiation, remains unknown. Here, we evaluated the role of motor preparation during visuomotor adaptation. We found that preparation time was inversely correlated with the variance of errors on current trials and with the mean error on subsequent trials. We also found causal evidence that intracortical microstimulation during motor preparation was sufficient to disrupt learning. Surprisingly, stimulation did not affect current trials, but instead disrupted the update computation of a learning process, thereby affecting subsequent trials. This is consistent with a Bayesian estimation framework where the motor system reduces its learning rate by virtue of lowering error sensitivity when faced with uncertainty. This interaction between motor preparation and the error-driven learning system may facilitate new probes into mechanisms underlying trial-by-trial adaptation.
Affiliation(s)
- Saurabh Vyas: Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Daniel J O'Shea: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA
- Stephen I Ryu: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Krishna V Shenoy: Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
12. Stabilization of a brain-computer interface via the alignment of low-dimensional spaces of neural activity. Nat Biomed Eng 2020; 4:672-685. [PMID: 32313100; DOI: 10.1038/s41551-020-0542-9]
Abstract
The instability of neural recordings can render clinical brain-computer interfaces (BCIs) uncontrollable. Here, we show that the alignment of low-dimensional neural manifolds (low-dimensional spaces that describe specific correlation patterns between neurons) can be used to stabilize neural activity, thereby maintaining BCI performance in the presence of recording instabilities. We evaluated the stabilizer with non-human primates during online cursor control via intracortical BCIs in the presence of severe and abrupt recording instabilities. The stabilized BCIs recovered proficient control under different instability conditions and across multiple days. The stabilizer does not require knowledge of user intent and can outperform supervised recalibration. It stabilized BCIs even when neural activity contained little information about the direction of cursor movement. The stabilizer may be applicable to other neural interfaces and may improve the clinical viability of BCIs.
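The alignment step can be reduced to an orthogonal Procrustes problem for illustration (the published stabilizer involves additional machinery, such as estimating the manifolds themselves; all data below are synthetic): find the rotation mapping later latent activity back into the calibration-day space so the original decoder keeps working.

```python
import numpy as np

rng = np.random.default_rng(2)

def align_latents(day0, dayk):
    """Orthogonal Procrustes: the rotation R minimizing ||dayk @ R - day0||_F,
    mapping later latent activity back into the calibration-day space."""
    u, _, vt = np.linalg.svd(dayk.T @ day0)
    return u @ vt

# Latent trajectories (time x latent dims) on the calibration day...
latent0 = rng.normal(size=(500, 3))
# ...and the same underlying activity seen through a rotated mapping later,
# standing in for an abrupt recording instability.
R_instability, _ = np.linalg.qr(rng.normal(size=(3, 3)))
latentk = latent0 @ R_instability

R_hat = align_latents(latent0, latentk)
aligned = latentk @ R_hat   # recovered activity the original decoder can use
```

Note that this alignment uses no knowledge of user intent, which is the property that lets such a stabilizer run without supervised recalibration.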
13. Issar D, Williamson RC, Khanna SB, Smith MA. A neural network for online spike classification that improves decoding accuracy. J Neurophysiol 2020; 123:1472-1485. [PMID: 32101491; PMCID: PMC7191521; DOI: 10.1152/jn.00641.2019]
Abstract
Separating neural signals from noise can improve brain-computer interface performance and stability. However, most algorithms for separating neural action potentials from noise are not suitable for use in real time and have shown mixed effects on decoding performance. With the goal of removing noise that impedes online decoding, we sought to automate the intuition of human spike-sorters to operate in real time with an easily tunable parameter governing the stringency with which spike waveforms are classified. We trained an artificial neural network with one hidden layer on neural waveforms that were hand-labeled as either spikes or noise. The network output was a likelihood metric for each waveform it classified, and we tuned the network's stringency by varying the minimum likelihood value for a waveform to be considered a spike. Using the network's labels to exclude noise waveforms, we decoded remembered target location during a memory-guided saccade task from electrode arrays implanted in prefrontal cortex of rhesus macaque monkeys. The network classified waveforms in real time, and its classifications were qualitatively similar to those of a human spike-sorter. Compared with decoding with threshold crossings, in most sessions we improved decoding performance by removing waveforms with low spike likelihood values. Furthermore, decoding with our network's classifications became more beneficial as time since array implantation increased. Our classifier serves as a feasible preprocessing step, with little risk of harm, that could be applied to both off-line neural data analyses and online decoding.NEW & NOTEWORTHY Although there are many spike-sorting methods that isolate well-defined single units, these methods typically involve human intervention and have inconsistent effects on decoding. We used human classified neural waveforms as training data to create an artificial neural network that could be tuned to separate spikes from noise that impaired decoding. 
We found that this network operated in real time and was suitable for both off-line data processing and online decoding.
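The classification scheme described in this abstract (a one-hidden-layer network producing a per-waveform spike likelihood, thresholded by a tunable stringency parameter) can be sketched as follows. This is an illustrative sketch only: the function name, layer sizes, and activation functions are assumptions, not details from the paper.

```python
import numpy as np

def classify_waveforms(waveforms, W1, b1, W2, b2, min_likelihood=0.5):
    """Label each waveform as spike (True) or noise (False).

    A one-hidden-layer network produces a spike likelihood per
    waveform; raising `min_likelihood` makes classification more
    stringent, mirroring the tunable parameter described above.
    (Hypothetical sketch; weights would come from training on
    hand-labeled waveforms.)
    """
    h = np.tanh(waveforms @ W1 + b1)                     # hidden layer
    likelihood = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output in [0, 1]
    return likelihood >= min_likelihood, likelihood
```

Because the likelihood is computed once per waveform, sweeping `min_likelihood` trades off how many borderline waveforms are retained without re-running the network.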
Collapse
Affiliation(s)
- Deepa Issar
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- University of Pittsburgh School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Ryan C Williamson
- University of Pittsburgh School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Department of Machine Learning, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Carnegie Mellon Neuroscience Institute, Pittsburgh, Pennsylvania
| | - Sanjeev B Khanna
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Matthew A Smith
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Carnegie Mellon Neuroscience Institute, Pittsburgh, Pennsylvania
- Department of Ophthalmology, University of Pittsburgh School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
| |
Collapse
|
14
|
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics. Curr Opin Neurobiol 2019; 58:122-129. [DOI: 10.1016/j.conb.2019.09.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 07/16/2019] [Accepted: 09/03/2019] [Indexed: 12/19/2022]
|
15
|
Shanechi MM. Brain–machine interfaces from motor to mood. Nat Neurosci 2019; 22:1554-1564. [DOI: 10.1038/s41593-019-0488-y] [Citation(s) in RCA: 82] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Accepted: 08/06/2019] [Indexed: 12/22/2022]
|
16
|
Slutzky MW. Brain-Machine Interfaces: Powerful Tools for Clinical Treatment and Neuroscientific Investigations. Neuroscientist 2019; 25:139-154. [PMID: 29772957 PMCID: PMC6611552 DOI: 10.1177/1073858418775355] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Brain-machine interfaces (BMIs) have exploded in popularity in the past decade. BMIs, also called brain-computer interfaces, provide a direct link between the brain and a computer, usually to control an external device. BMIs have a wide array of potential clinical applications, ranging from restoring communication to people unable to speak due to amyotrophic lateral sclerosis or a stroke, to restoring movement to people with paralysis from spinal cord injury or motor neuron disease, to restoring memory to people with cognitive impairment. Because BMIs are controlled directly by the activity of prespecified neurons or cortical areas, they also provide a powerful paradigm with which to investigate fundamental questions about brain physiology, including neuronal behavior, learning, and the role of oscillations. This article reviews the clinical and neuroscientific applications of BMIs, with a primary focus on motor BMIs.
Collapse
Affiliation(s)
- Marc W Slutzky
- Departments of Neurology, Physiology, and Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, USA
| |
Collapse
|
17
|
Hsieh HL, Wong YT, Pesaran B, Shanechi MM. Multiscale modeling and decoding algorithms for spike-field activity. J Neural Eng 2018; 16:016018. [DOI: 10.1088/1741-2552/aaeb1a] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
|
18
|
Quick KM, Mischel JL, Loughlin PJ, Batista AP. The critical stability task: quantifying sensory-motor control during ongoing movement in nonhuman primates. J Neurophysiol 2018; 120:2164-2181. [PMID: 29947593 DOI: 10.1152/jn.00300.2017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Everyday behaviors require that we interact with the environment, using sensory information in an ongoing manner to guide our actions. Yet, by design, many of the tasks used in primate neurophysiology laboratories can be performed with limited sensory guidance. As a consequence, our knowledge about the neural mechanisms of motor control is largely limited to the feedforward aspects of the motor command. To study the feedback aspects of volitional motor control, we adapted the critical stability task (CST) from the human performance literature (Jex H, McDonnell J, Phatak A. IEEE Trans Hum Factors Electron 7: 138-145, 1966). In the CST, our monkey subjects interact with an inherently unstable (i.e., divergent) virtual system and must generate sensory-guided actions to stabilize it about an equilibrium point. The difficulty of the CST is determined by a single parameter, which allows us to quantitatively establish the limits of performance in the task for different sensory feedback conditions. Two monkeys learned to perform the CST with visual or vibrotactile feedback. Performance was better under visual feedback, as expected, but both monkeys were able to utilize vibrotactile feedback alone to successfully perform the CST. We also observed changes in behavioral strategy as the task became more challenging. The CST will have value for basic science investigations of the neural basis of sensory-motor integration during ongoing actions, and it may also provide value for the design and testing of bidirectional brain computer interface systems. NEW & NOTEWORTHY Currently, most behavioral tasks used in motor neurophysiology studies require primates to make short-duration, stereotyped movements that do not necessitate sensory feedback. To improve our understanding of sensorimotor integration, and to engineer meaningful artificial sensory feedback systems for brain-computer interfaces, it is crucial to have a task that requires sensory feedback for good control. 
The critical stability task demands that sensory information be used to guide long-duration movements.
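The CST's defining feature, per the abstract above, is an inherently unstable plant whose difficulty is set by a single parameter. A minimal simulation sketch, assuming a common first-order unstable plant form x' = lam*(x - u) (the exact plant dynamics and controller are assumptions here, not taken from the paper):

```python
import numpy as np

def simulate_cst(lam, gain=1.5, noise=0.01, dt=0.01, t_max=10.0, bound=1.0):
    """Simulate a first-order divergent plant x' = lam*(x - u).

    `lam` is the single instability parameter that sets task
    difficulty; the trial fails when |x| exceeds `bound`.
    A proportional controller u = gain*x stands in for the
    subject's sensory-guided corrections (illustrative only).
    """
    rng = np.random.default_rng(0)
    x, t = 0.05, 0.0
    while t < t_max:
        u = gain * x                                      # feedback correction
        x += dt * lam * (x - u) + noise * rng.standard_normal()
        if abs(x) > bound:
            return t                                      # time of loss of control
        t += dt
    return t_max                                          # stabilized for full trial
```

With gain above 1 the closed loop is stabilizing and the trial survives; with weak feedback the state diverges, which is why performance limits under different feedback conditions can be read off from the largest `lam` a subject can stabilize.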
Collapse
Affiliation(s)
- Kristin M Quick
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
| | - Jessica L Mischel
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
| | - Patrick J Loughlin
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
| | - Aaron P Batista
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
| |
Collapse
|
19
|
Pandarinath C, Ames KC, Russo AA, Farshchian A, Miller LE, Dyer EL, Kao JC. Latent Factors and Dynamics in Motor Cortex and Their Application to Brain-Machine Interfaces. J Neurosci 2018; 38:9390-9401. [PMID: 30381431 PMCID: PMC6209846 DOI: 10.1523/jneurosci.1669-18.2018] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Revised: 09/24/2018] [Accepted: 09/25/2018] [Indexed: 01/07/2023] Open
Abstract
In the 1960s, Evarts first recorded the activity of single neurons in motor cortex of behaving monkeys (Evarts, 1968). In the 50 years since, great effort has been devoted to understanding how single neuron activity relates to movement. Yet these single neurons exist within a vast network, the nature of which has been largely inaccessible. With advances in recording technologies, algorithms, and computational power, the ability to study these networks is increasing exponentially. Recent experimental results suggest that the dynamical properties of these networks are critical to movement planning and execution. Here we discuss this dynamical systems perspective and how it is reshaping our understanding of the motor cortices. Following an overview of key studies in motor cortex, we discuss techniques to uncover the "latent factors" underlying observed neural population activity. Finally, we discuss efforts to use these factors to improve the performance of brain-machine interfaces, promising to make these findings broadly relevant to neuroengineering as well as systems neuroscience.
Collapse
Affiliation(s)
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
- Department of Neurosurgery, Emory University, Atlanta, Georgia 30322
| | - K Cora Ames
- Department of Neuroscience
- Center for Theoretical Neuroscience
- Grossman Center for the Statistics of Mind
- Zuckerman Institute, Columbia University, New York, New York 10027
| | - Abigail A Russo
- Department of Neuroscience
- Grossman Center for the Statistics of Mind
- Zuckerman Institute, Columbia University, New York, New York 10027
| | - Ali Farshchian
- Department of Physiology, Northwestern University, Chicago, Illinois 60611
| | - Lee E Miller
- Department of Physiology, Northwestern University, Chicago, Illinois 60611
| | - Eva L Dyer
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332
| | - Jonathan C Kao
- Department of Electrical and Computer Engineering, and
- Neurosciences Program, University of California, Los Angeles, California 90095
| |
Collapse
|
20
|
Ambron E, Jax S, Schettino LF, Coslett HB. Magnifying vision improves motor performance in individuals with stroke. Neuropsychologia 2018; 119:373-381. [PMID: 30172830 DOI: 10.1016/j.neuropsychologia.2018.08.029] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2018] [Revised: 08/25/2018] [Accepted: 08/29/2018] [Indexed: 10/28/2022]
Abstract
Increasing perceived hand size using magnifying lenses improves tactile discrimination and motor performance in neurologically-intact individuals. We tested whether magnification of the hand can improve motor function in individuals with chronic stroke. Twenty-five individuals with a history of stroke more than 6 months prior to testing underwent a series of tasks exploring different aspects of motor performance (grip force, finger tapping, reaching and grasping, and finger matching) under two visual conditions: magnified or normal vision. Performance was also assessed shortly after visual manipulation to test if these effects persisted. Twenty-eight percent of individuals showed an immediate significant improvement averaged across all tasks with magnification; similar beneficial responses were also observed in 32% of individuals after a short delay. These results suggest that magnification of the image of the hand may be of utility in rehabilitation of individuals with stroke.
Collapse
Affiliation(s)
- Elisabetta Ambron
- Laboratory for Cognition and Neural Stimulation, Dept. of Neurology, Perelman School of Medicine at the University of Pennsylvania, United States.
| | - Steven Jax
- Perceptual-Motor Control Laboratory, Moss Rehabilitation Research Institute (MRRI), United States
| | | | - H Branch Coslett
- Laboratory for Cognition and Neural Stimulation, Dept. of Neurology, Perelman School of Medicine at the University of Pennsylvania, United States.
| |
Collapse
|
21
|
Hennig JA, Golub MD, Lund PJ, Sadtler PT, Oby ER, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Constraints on neural redundancy. eLife 2018; 7:36774. [PMID: 30109848 PMCID: PMC6130976 DOI: 10.7554/elife.36774] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2018] [Accepted: 08/06/2018] [Indexed: 12/24/2022] Open
Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e. behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation. When you swing a tennis racket, muscles in your arm contract in a specific sequence. For this to happen, millions of neurons in your brain and spinal cord must fire to make those muscles contract. If you swing the racket a second time, the same muscles in your arm will contract again. But the firing pattern of the underlying neurons will probably be different. This phenomenon, in which different patterns of neural activity generate the same outcome, is called neural redundancy. Neural redundancy allows a set of neurons to perform multiple tasks at once. For example, the same neurons may drive an arm movement while simultaneously planning the next activity. But does performing a given task constrain how often different patterns of neural activity can be produced? If so, this would limit whether other tasks could be carried out at the same time. To address this, Hennig et al. 
trained macaque monkeys to use a brain-computer interface (BCI). This is a device that reads out electrical brain activity and converts it into signals that can be used to control another device. The key advantage of a BCI is that the redundant activity patterns are precisely known. The monkeys learned to use their brain activity, via the BCI, to move a cursor on a computer screen in different directions. The results revealed that monkeys could only produce a limited number of different patterns of brain activity for a given BCI cursor movement. This suggests that the ability of a group of neurons to multitask is restricted. For example, if the same set of neurons is involved in both planning and performing movements, then an animal’s ability to plan a future movement will depend on the one it is currently performing. BCIs can help patients who have suffered stroke or paralysis. They enable patients to use their brain activity to control a computer or even robotic limbs. Understanding how the brain controls BCIs will help us improve their performance and deepen our knowledge of how the brain plans and performs movements. This might include designing BCIs that allow users to multitask more effectively.
Collapse
Affiliation(s)
- Jay A Hennig
- Program in Neural Computation, Carnegie Mellon University, Pittsburgh, United States.,Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States
| | - Matthew D Golub
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States
| | - Peter J Lund
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States.,Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
| | - Patrick T Sadtler
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
| | - Emily R Oby
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
| | - Kristin M Quick
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
| | - Stephen I Ryu
- Department of Neurosurgery, Palo Alto Medical Foundation, California, United States.,Department of Electrical Engineering, Stanford University, California, United States
| | - Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, United States.,Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, United States
| | - Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
| | - Byron M Yu
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States.,Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
| | - Steven M Chase
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States.,Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
| |
Collapse
|
22
|
Abstract
Brain-computer interfaces are in the process of moving from the laboratory to the clinic. These devices act by reading neural activity and using it to directly control a device, such as a cursor on a computer screen. An open question in the field is how to map neural activity to device movement in order to achieve the most proficient control. This question is complicated by the fact that learning, especially the long-term skill learning that accompanies weeks of practice, can allow subjects to improve performance over time. Typical approaches to this problem attempt to maximize the biomimetic properties of the device in order to limit the need for extensive training. However, it is unclear if this approach would ultimately be superior to performance that might be achieved with a nonbiomimetic device once the subject has engaged in extended practice and learned how to use it. Here we approach this problem using ideas from optimal control theory. Under the assumption that the brain acts as an optimal controller, we present a formal definition of the usability of a device and show that the optimal postlearning mapping can be written as the solution of a constrained optimization problem. We then derive the optimal mappings for particular cases common to most brain-computer interfaces. Our results suggest that the common approach of creating biomimetic interfaces may not be optimal when learning is taken into account. More broadly, our method provides a blueprint for optimal device design in general control-theoretic contexts.
Collapse
Affiliation(s)
- Yin Zhang
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A
| | - Steve M. Chase
- Biomedical Engineering Department and Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A
| |
Collapse
|
23
|
Hsieh HL, Shanechi MM. Optimizing the learning rate for adaptive estimation of neural encoding models. PLoS Comput Biol 2018; 14:e1006168. [PMID: 29813069 PMCID: PMC5993334 DOI: 10.1371/journal.pcbi.1006168] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2017] [Revised: 06/08/2018] [Accepted: 05/02/2018] [Indexed: 01/05/2023] Open
Abstract
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
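The steady-state-error versus convergence-time trade-off at the center of this abstract can be illustrated with a constant-gain scalar estimator (this is a deliberately simplified stand-in, not the paper's adaptive Bayesian filter; the function name and parameters are assumptions):

```python
import numpy as np

def adaptive_estimate(true_w, learning_rate, n_steps=2000, noise=0.5, seed=0):
    """Track a scalar encoding parameter with a constant-gain update.

    A larger `learning_rate` speeds convergence of w_hat toward
    `true_w` but raises the steady-state error variance -- the
    trade-off the calibration algorithm above is designed to
    balance analytically (illustrative constant-gain sketch).
    """
    rng = np.random.default_rng(seed)
    w_hat = 0.0
    history = np.empty(n_steps)
    for k in range(n_steps):
        y = true_w + noise * rng.standard_normal()        # noisy observation
        w_hat += learning_rate * (y - w_hat)              # gain-weighted update
        history[k] = w_hat
    return history
```

Running this with a small and a large gain shows the trade-off directly: the large gain converges within a few steps but jitters around the true value, while the small gain converges slowly but settles tightly.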
Collapse
Affiliation(s)
- Han-Lin Hsieh
- Ming Hsieh Department of Electrical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
| | - Maryam M. Shanechi
- Ming Hsieh Department of Electrical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
- Neuroscience Graduate Program, University of Southern California, Los Angeles, California, United States of America
| |
Collapse
|
24
|
Golub MD, Sadtler PT, Oby ER, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Chase SM, Yu BM. Learning by neural reassociation. Nat Neurosci 2018. [PMID: 29531364 PMCID: PMC5876156 DOI: 10.1038/s41593-018-0095-3] [Citation(s) in RCA: 114] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Behavior is driven by coordinated activity across a population of neurons. Learning requires the brain to change the neural population activity produced to achieve a given behavioral goal. How does population activity reorganize during learning? We studied intracortical population activity in the primary motor cortex of rhesus macaques during short-term learning in a brain-computer interface (BCI) task. In a BCI, the mapping between neural activity and behavior is exactly known, enabling us to rigorously define hypotheses about neural reorganization during learning. We found that changes in population activity followed a suboptimal neural strategy of Reassociation: animals relied on a fixed repertoire of activity patterns and associated those patterns with different movements after learning. These results indicate that the activity patterns that a neural population can generate are even more constrained than previously thought and might explain why it is often difficult to quickly learn to a high level of proficiency.
Collapse
Affiliation(s)
- Matthew D Golub
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.,Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA.,Department of Electrical Engineering, Stanford University, Stanford, CA, USA
| | - Patrick T Sadtler
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.,Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
| | - Emily R Oby
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.,Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
| | - Kristin M Quick
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.,Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
| | - Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.,Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, USA
| | - Elizabeth C Tyler-Kabara
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.,Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA.,Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
| | - Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA.,Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA.,Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
| | - Steven M Chase
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA. .,Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
| | - Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA. .,Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA. .,Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
| |
Collapse
|
25
|
Vyas S, Even-Chen N, Stavisky SD, Ryu SI, Nuyujukian P, Shenoy KV. Neural Population Dynamics Underlying Motor Learning Transfer. Neuron 2018; 97:1177-1186.e3. [PMID: 29456026 DOI: 10.1016/j.neuron.2018.01.040] [Citation(s) in RCA: 75] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2017] [Revised: 11/21/2017] [Accepted: 01/20/2018] [Indexed: 12/22/2022]
Abstract
Covert motor learning can sometimes transfer to overt behavior. We investigated the neural mechanism underlying transfer by constructing a two-context paradigm. Subjects performed cursor movements either overtly using arm movements, or covertly via a brain-machine interface that moves the cursor based on motor cortical activity (in lieu of arm movement). These tasks helped evaluate whether and how cortical changes resulting from "covert rehearsal" affect overt performance. We found that covert learning indeed transfers to overt performance and is accompanied by systematic population-level changes in motor preparatory activity. Current models of motor cortical function ascribe motor preparation to achieving initial conditions favorable for subsequent movement-period neural dynamics. We found that covert and overt contexts share these initial conditions, and covert rehearsal manipulates them in a manner that persists across context changes, thus facilitating overt motor learning. This transfer learning mechanism might provide new insights into other covert processes like mental rehearsal.
Collapse
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA.
| | - Nir Even-Chen
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA
| | - Sergey D Stavisky
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
| | - Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
| | - Paul Nuyujukian
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA
| | - Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
| |
Collapse
|
26
|
|
27
|
Orsborn AL, Pesaran B. Parsing learning in networks using brain-machine interfaces. Curr Opin Neurobiol 2017; 46:76-83. [PMID: 28843838 DOI: 10.1016/j.conb.2017.08.002] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 07/31/2017] [Accepted: 08/03/2017] [Indexed: 12/30/2022]
Abstract
Brain-machine interfaces (BMIs) define new ways to interact with our environment and hold great promise for clinical therapies. Motor BMIs, for instance, re-route neural activity to control movements of a new effector and could restore movement to people with paralysis. Increasing experience shows that interfacing with the brain inevitably changes the brain. BMIs engage and depend on a wide array of innate learning mechanisms to produce meaningful behavior. BMIs precisely define the information streams into and out of the brain, but engage wide-spread learning. We take a network perspective and review existing observations of learning in motor BMIs to show that BMIs engage multiple learning mechanisms distributed across neural networks. Recent studies demonstrate the advantages of BMI for parsing this learning and its underlying neural mechanisms. BMIs therefore provide a powerful tool for studying the neural mechanisms of learning that highlights the critical role of learning in engineered neural therapies.
Collapse
Affiliation(s)
- Amy L Orsborn
- Center for Neural Science, New York University, New York, NY 10003, USA.
- Bijan Pesaran
- Center for Neural Science, New York University, New York, NY 10003, USA
28
Stavisky SD, Kao JC, Ryu SI, Shenoy KV. Motor Cortical Visuomotor Feedback Activity Is Initially Isolated from Downstream Targets in Output-Null Neural State Space Dimensions. Neuron 2017. [PMID: 28625485] [DOI: 10.1016/j.neuron.2017.05.023]
Abstract
Neural circuits must transform new inputs into outputs without prematurely affecting downstream circuits while still maintaining other ongoing communication with these targets. We investigated how this isolation is achieved in the motor cortex when macaques received visual feedback signaling a movement perturbation. To overcome limitations in estimating the mapping from cortex to arm movements, we also conducted brain-machine interface (BMI) experiments where we could definitively identify neural firing patterns as output-null or output-potent. This revealed that perturbation-evoked responses were initially restricted to output-null patterns that cancelled out at the neural population code readout and only later entered output-potent neural dimensions. This mechanism was facilitated by the circuit's large null space and its ability to strongly modulate output-potent dimensions when generating corrective movements. These results show that the nervous system can temporarily isolate portions of a circuit's activity from its downstream targets by restricting this activity to the circuit's output-null neural dimensions.
Affiliation(s)
- Sergey D Stavisky
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA.
- Jonathan C Kao
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA
- Stephen I Ryu
- Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Krishna V Shenoy
- Neurosciences Graduate Program, Stanford University, Stanford, CA 94305, USA; Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA; Neurobiology and Bioengineering Departments, Stanford University, Stanford, CA 94305, USA; Bio-X Program, Stanford University, Stanford, CA 94305, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
29
Lebedev MA, Nicolelis MAL. Brain-Machine Interfaces: From Basic Science to Neuroprostheses and Neurorehabilitation. Physiol Rev 2017; 97:767-837. [PMID: 28275048] [DOI: 10.1152/physrev.00027.2016]
Abstract
Brain-machine interfaces (BMIs) combine methods, approaches, and concepts derived from neurophysiology, computer science, and engineering in an effort to establish real-time bidirectional links between living brains and artificial actuators. Although theoretical propositions and some proof-of-concept experiments on directly linking brains with machines date back to the early 1960s, BMI research only took off in earnest at the end of the 1990s, when this approach became intimately linked to new neurophysiological methods for sampling large-scale brain activity. The classic goals of BMIs are 1) to unveil and utilize principles of operation and plastic properties of the distributed and dynamic circuits of the brain and 2) to create new therapies to restore mobility and sensations to severely disabled patients. Over the past decade, a wide range of BMI applications have emerged, which considerably expanded these original goals. BMI studies have shown neural control over the movements of robotic and virtual actuators that enact both upper and lower limb functions. Furthermore, BMIs have also incorporated ways to deliver sensory feedback, generated from external actuators, back to the brain. BMI research has been at the forefront of many neurophysiological discoveries, including the demonstration that, through continuous use, artificial tools can be assimilated by the primate brain's body schema. Work on BMIs has also led to the introduction of novel neurorehabilitation strategies. As a result of these efforts, long-term continuous BMI use has recently been implicated in the induction of partial neurological recovery in spinal cord injury patients.
30
Athalye VR, Ganguly K, Costa RM, Carmena JM. Emergence of Coordinated Neural Dynamics Underlies Neuroprosthetic Learning and Skillful Control. Neuron 2017; 93:955-970.e5. [PMID: 28190641] [DOI: 10.1016/j.neuron.2017.01.016]
Abstract
During motor learning, movements and underlying neural activity initially exhibit large trial-to-trial variability that decreases over learning. However, it is unclear how task-relevant neural populations coordinate to explore and consolidate activity patterns. Exploration and consolidation could happen for each neuron independently, across the population jointly, or both. We disambiguated among these possibilities by investigating how subjects learned de novo to control a brain-machine interface using neurons from motor cortex. We decomposed population activity into the sum of private and shared signals, which produce uncorrelated and correlated neural variance, respectively, and examined how these signals' evolution causally shapes behavior. We found that initially large trial-to-trial movement and private neural variability reduce over learning. Concomitantly, task-relevant shared variance increases, consolidating a manifold containing consistent neural trajectories that generate refined control. These results suggest that motor cortex acquires skillful control by leveraging both independent and coordinated variance to explore and consolidate neural patterns.
Affiliation(s)
- Vivek R Athalye
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Avenida de Brasília, Doca de Pedrouços, Lisbon 1400-038, Portugal
- Karunesh Ganguly
- Neurology and Rehabilitation Services, San Francisco VA Medical Center, San Francisco, CA 94121, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA 94143, USA
- Rui M Costa
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Avenida de Brasília, Doca de Pedrouços, Lisbon 1400-038, Portugal; Department of Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10032, USA.
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley/UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA.
31
Willett FR, Murphy BA, Memberg WD, Blabe CH, Pandarinath C, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Hochberg LR, Kirsch RF, Ajiboye AB. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law. J Neural Eng 2017; 14:026010. [PMID: 28177925] [DOI: 10.1088/1741-2552/aa5990]
Abstract
OBJECTIVE Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: MT = a + b·log2(D/R + 1) (where MT is movement time, D is target distance, R is target radius, and a and b are parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). APPROACH Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. MAIN RESULTS We found that movement times were better described by the equation MT = a + b·D + c/R, which captures how movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder.
SIGNIFICANCE The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
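The scale-invariance contrast between the two models can be illustrated with a short numerical sketch; the parameter values a, b, and c below are hypothetical, chosen only for illustration, not taken from the study:

```python
import numpy as np

def fitts_mt(D, R, a=0.2, b=0.3):
    """Fitts' law: movement time depends only on the ratio D/R."""
    return a + b * np.log2(D / R + 1)

def ibci_mt(D, R, a=0.2, b=0.02, c=0.5):
    """Alternative model: the c/R term makes small targets slow
    regardless of distance, breaking scale invariance."""
    return a + b * D + c / R

# Doubling both distance and radius preserves D/R, so Fitts' law
# predicts identical movement times; the iBCI model does not.
print(fitts_mt(10.0, 1.0), fitts_mt(20.0, 2.0))  # equal
print(ibci_mt(10.0, 1.0), ibci_mt(20.0, 2.0))    # unequal
```

In this toy comparison only the second model predicts that shrinking the target radius alone drives movement time up without bound, matching the behavior described in the abstract.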
Affiliation(s)
- Francis R Willett
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America. Louis Stokes Cleveland Department of Veterans Affairs Medical Center, FES Center of Excellence, Rehab. R&D Service, Cleveland, OH, United States of America
32
Trial-by-Trial Motor Cortical Correlates of a Rapidly Adapting Visuomotor Internal Model. J Neurosci 2017; 37:1721-1732. [PMID: 28087767] [DOI: 10.1523/jneurosci.1091-16.2016]
Abstract
Accurate motor control is mediated by internal models of how neural activity generates movement. We examined neural correlates of an adapting internal model of visuomotor gain in motor cortex while two macaques performed a reaching task in which the gain scaling between the hand and a presented cursor was varied. Previous studies of cortical changes during visuomotor adaptation focused on preparatory and perimovement epochs and analyzed trial-averaged neural data. Here, we recorded simultaneous neural population activity using multielectrode arrays and focused our analysis on neural differences in the period before the target appeared. We found that we could estimate the monkey's internal model of the gain using the neural population state during this pretarget epoch. This neural correlate depended on the gain experienced during recent trials and it predicted the speed of the subsequent reach. To explore the utility of this internal model estimate for brain-machine interfaces, we performed an offline analysis showing that it can be used to compensate for upcoming reach extent errors. Together, these results demonstrate that pretarget neural activity in motor cortex reflects the monkey's internal model of visuomotor gain on single trials and can potentially be used to improve neural prostheses. SIGNIFICANCE STATEMENT When generating movement commands, the brain is believed to use internal models of the relationship between neural activity and the body's movement. Visuomotor adaptation tasks have revealed neural correlates of these computations in multiple brain areas during movement preparation and execution. Here, we describe motor cortical changes in a visuomotor gain change task even before a specific movement is cued. We were able to estimate the gain internal model from these pretarget neural correlates and relate it to single-trial behavior. This is an important step toward understanding the sensorimotor system's algorithms for updating its internal models after specific movements and errors. Furthermore, the ability to estimate the internal model before movement could improve motor neural prostheses being developed for people with paralysis.
33
Willett FR, Pandarinath C, Jarosiewicz B, Murphy BA, Memberg WD, Blabe CH, Saab J, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Simeral JD, Hochberg LR, Kirsch RF, Ajiboye AB. Feedback control policies employed by people using intracortical brain-computer interfaces. J Neural Eng 2016; 14:016001. [PMID: 27900953] [DOI: 10.1088/1741-2560/14/1/016001]
Abstract
OBJECTIVE When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a 'feedback control policy'. A better understanding of these policies may inform the design of higher-performing neural decoders. APPROACH We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. MAIN RESULTS We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. SIGNIFICANCE Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.
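The decoder dynamics described in the APPROACH can be sketched as a simple recursion; the decoding matrix `W`, the smoothing constant `alpha`, the `gain`, and the simulated firing rates below are arbitrary stand-ins for illustration, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 20
W = rng.normal(scale=0.1, size=(2, n_neurons))  # maps firing rates to 2D velocity

def decode_step(v_prev, rates, alpha=0.9, gain=1.0):
    """One update: exponentially smooth the previous velocity and blend
    in the newly decoded velocity, scaled by the cursor gain."""
    return alpha * v_prev + (1 - alpha) * gain * (W @ rates)

v, pos = np.zeros(2), np.zeros(2)
for _ in range(50):
    rates = rng.poisson(5.0, size=n_neurons).astype(float)
    v = decode_step(v, rates)
    pos = pos + v * 0.02  # integrate velocity over a 20 ms time step
```

In this sketch, raising `alpha` damps the cursor velocity more strongly and raising `gain` scales its speed, the two decoder parameters the study varied to test whether users could adapt their control policy.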
Affiliation(s)
- Francis R Willett
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA. Louis Stokes Cleveland Department of Veterans Affairs Medical Center, FES Center of Excellence, Rehab. R&D Service, Cleveland, OH, USA
34
Abstract
Voluntary movement is a result of signals transmitted through a communication channel that links the internal world in our minds to the physical world around us. Intention can be considered the desire to effect change on our environment, and this is contained in the signals from the brain, passed through the nervous system to converge on muscles that generate displacements and forces on our surroundings. The resulting changes in the world act to generate sensations that feed back to the nervous system, closing the control loop. This Perspective discusses the experimental and theoretical underpinnings of current models of movement generation and the way they are modulated by external information. Movement systems embody intentionality and prediction, two factors that are propelling a revolution in engineering. Development of movement models that include the complexities of the external world may allow a better understanding of the neuronal populations regulating these processes, as well as the development of solutions for autonomous vehicles and robots, and neural prostheses for those who are motor impaired.
Affiliation(s)
- Andrew B Schwartz
- Department of Neurobiology, School of Medicine, University of Pittsburgh, E1440 BSTWR, 200 Lothrop Street, Pittsburgh, PA 15213, USA.
35
Merel J, Carlson D, Paninski L, Cunningham JP. Neuroprosthetic Decoder Training as Imitation Learning. PLoS Comput Biol 2016; 12:e1004948. [PMID: 27191387] [PMCID: PMC4871564] [DOI: 10.1371/journal.pcbi.1004948]
Abstract
Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. Here we show that training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
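A DAgger-style decoder-training loop of the kind the abstract describes can be sketched in miniature. Everything below (the simulated encoding model, the straight-to-target "expert", the least-squares refit, and all parameter values) is an illustrative assumption of this sketch, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 30
target = np.array([5.0, 0.0])
true_map = rng.normal(size=(n_neurons, 2))       # how neurons encode intended velocity

def neural_activity(intended_v):
    """Noisy population response to an intended 2D velocity."""
    return true_map @ intended_v + rng.normal(scale=0.5, size=n_neurons)

W = rng.normal(scale=0.01, size=(2, n_neurons))  # initial (poor) decoder
X, Y = [], []                                    # aggregated dataset across iterations
for _ in range(5):                               # DAgger iterations
    pos = np.zeros(2)
    for _ in range(40):                          # roll out the *current* decoder
        intended = target - pos
        intended = intended / (np.linalg.norm(intended) + 1e-9)
        z = neural_activity(intended)
        pos = pos + 0.1 * (W @ z)                # decoder, not expert, drives the cursor
        X.append(z)                              # expert labels the states the
        Y.append(intended)                       # learner actually visited
    # refit the decoder on all aggregated (activity, expert-velocity) pairs
    W = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)[0].T
```

The key DAgger property this toy preserves is that training data come from states visited under the learner's own policy, with labels supplied by a surrogate expert, rather than from the expert's trajectories.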
Affiliation(s)
- Josh Merel
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- David Carlson
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Liam Paninski
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- John P. Cunningham
- Neurobiology and Behavior program, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Statistics, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
36
Golub MD, Chase SM, Batista AP, Yu BM. Brain-computer interfaces for dissecting cognitive processes underlying sensorimotor control. Curr Opin Neurobiol 2016; 37:53-58. [PMID: 26796293] [DOI: 10.1016/j.conb.2015.12.005]
Abstract
Sensorimotor control engages cognitive processes such as prediction, learning, and multisensory integration. Understanding the neural mechanisms underlying these cognitive processes with arm reaching is challenging because we currently record only a fraction of the relevant neurons, the arm has nonlinear dynamics, and multiple modalities of sensory feedback contribute to control. A brain-computer interface (BCI) is a well-defined sensorimotor loop with key simplifying advantages that address each of these challenges, while engaging similar cognitive processes. As a result, BCI is becoming recognized as a powerful tool for basic scientific studies of sensorimotor control. Here, we describe the benefits of BCI for basic scientific inquiries and review recent BCI studies that have uncovered new insights into the neural mechanisms underlying sensorimotor control.
Affiliation(s)
- Matthew D Golub
- Department of Electrical and Computer Engineering, Carnegie Mellon University, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University & University of Pittsburgh, United States
- Steven M Chase
- Department of Biomedical Engineering, Carnegie Mellon University, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University & University of Pittsburgh, United States
- Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University & University of Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, United States; Systems Neuroscience Institute, University of Pittsburgh, United States
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, United States; Department of Biomedical Engineering, Carnegie Mellon University, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University & University of Pittsburgh, United States