1
Neuronal Activity Distributed in Multiple Cortical Areas during Voluntary Control of the Native Arm or a Brain-Computer Interface. eNeuro 2020; 7:ENEURO.0376-20.2020. [PMID: 33060178] [PMCID: PMC7598906] [DOI: 10.1523/eneuro.0376-20.2020]
Abstract
Voluntary control of visually-guided upper extremity movements involves neuronal activity in multiple areas of the cerebral cortex. Studies of brain-computer interfaces (BCIs) that use spike recordings for input, however, have focused largely on activity in the region from which those neurons that directly control the BCI, which we call BCI units, are recorded. We hypothesized that just as voluntary control of the arm and hand involves activity in multiple cortical areas, so does voluntary control of a BCI. In two subjects (Macaca mulatta) performing a center-out task both with a hand-held joystick and with a BCI directly controlled by four primary motor cortex (M1) BCI units, we recorded the activity of other, non-BCI units in M1, dorsal premotor cortex (PMd) and ventral premotor cortex (PMv), primary somatosensory cortex (S1), dorsal posterior parietal cortex (dPPC), and the anterior intraparietal area (AIP). In most of these areas, non-BCI units were active in similar percentages and at similar modulation depths during both joystick and BCI trials. Both BCI and non-BCI units showed changes in preferred direction (PD). Additionally, the prevalence of effective connectivity between BCI and non-BCI units was similar during both tasks. The subject with better BCI performance showed increased percentages of modulated non-BCI units with increased modulation depth and increased effective connectivity during BCI as compared with joystick trials; such increases were not found in the subject with poorer BCI performance. During voluntary, closed-loop control, non-BCI units in a given cortical area may function similarly whether the effector is the native upper extremity or a BCI-controlled device.
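The preferred-direction (PD) changes reported above are conventionally quantified by fitting a cosine-tuning model to each unit's firing rate across reach directions. A minimal sketch of that standard fit on synthetic data (the abstract does not specify the authors' exact procedure, so the least-squares formulation below is an assumption):

```python
import numpy as np

def preferred_direction(angles, rates):
    """Fit the cosine-tuning model  f(theta) = b0 + b1*cos(theta) + b2*sin(theta)
    by least squares and return (PD, modulation depth, baseline rate)."""
    X = np.column_stack([np.ones_like(angles), np.cos(angles), np.sin(angles)])
    b0, b1, b2 = np.linalg.lstsq(X, rates, rcond=None)[0]
    pd = np.arctan2(b2, b1)      # direction of peak firing
    depth = np.hypot(b1, b2)     # modulation depth
    return pd, depth, b0

# Example: a synthetic unit tuned to 45 degrees on an 8-target center-out task
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
true_pd = np.pi / 4
rates = 20 + 10 * np.cos(angles - true_pd)
pd, depth, base = preferred_direction(angles, rates)
```

Comparing PDs fitted separately from joystick and BCI trials then gives the per-unit PD shift.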
2
Kuo CH, Blakely TM, Wander JD, Sarma D, Wu J, Casimo K, Weaver KE, Ojemann JG. Context-dependent relationship in high-resolution micro-ECoG studies during finger movements. J Neurosurg 2020; 132:1358-1366. [DOI: 10.3171/2019.1.jns181840]
Abstract
OBJECTIVE The activation of the sensorimotor cortex as measured by electrocorticographic (ECoG) signals has been correlated with contralateral hand movements in humans, as precisely as the level of individual digits. However, the relationship between individual and multiple synergistic finger movements and the neural signal as detected by ECoG has not been fully explored. The authors used intraoperative high-resolution micro-ECoG (µECoG) on the sensorimotor cortex to link neural signals to finger movements across several context-specific motor tasks. METHODS Three neurosurgical patients with cortical lesions over eloquent regions participated. During awake craniotomy, a sensorimotor cortex area of hand movement was localized by high-frequency responses measured by an 8 × 8 µECoG grid of 3-mm interelectrode spacing. Patients performed a flexion movement of the thumb or index finger, or a pinch movement of both, based on a visual cue. High-gamma (HG; 70–230 Hz) filtered µECoG was used to identify dominant electrodes associated with thumb and index movement. Hand movements were recorded by a dataglove simultaneously with µECoG recording. RESULTS In all 3 patients, the electrodes controlling thumb and index finger movements were identifiable approximately 3–6 mm apart by the HG-filtered µECoG signal. For HG power of cortical activation measured with µECoG, the thumb and index signals in the pinch movement were similar to those observed during thumb-only and index-only movement, respectively (all p > 0.05). Index finger movements, measured by the dataglove joint angles, were similar in both the index-only and pinch movements (p > 0.05). However, despite similar activation across the conditions, markedly decreased thumb movement was observed in pinch relative to independent thumb-only movement (all p < 0.05). CONCLUSIONS HG-filtered µECoG signals effectively identify dominant regions associated with thumb and index finger movement.
For pinch, the µECoG signal comprises a combination of the signals from individual thumb and index movements. However, while the relationship between the index finger joint angle and HG-filtered signal remains consistent between conditions, there is not a fixed relationship for thumb movement. Although the HG-filtered µECoG signal is similar in both thumb-only and pinch conditions, the actual thumb movement is markedly smaller in the pinch condition than in the thumb-only condition. This implies a nonlinear relationship between the cortical signal and the motor output for some, but importantly not all, movement types. This analysis provides insight into the tuning of the motor cortex toward specific types of motor behaviors.
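The high-gamma (70–230 Hz) filtering step described above can be sketched with a standard zero-phase band-pass filter followed by a Hilbert power envelope. The sampling rate, filter order, and test signal below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_power(x, fs, band=(70.0, 230.0), order=4):
    """Band-pass a signal into the high-gamma range and return its
    instantaneous power envelope (squared analytic amplitude)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, x)           # zero-phase band-pass
    return np.abs(hilbert(filtered)) ** 2  # instantaneous power

fs = 1000.0                                # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# 100 Hz component falls inside the HG band; the 10 Hz component is rejected
x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
hg = high_gamma_power(x, fs)
```

Averaging such an envelope over movement epochs, per electrode, is one common way to identify the "dominant" HG electrodes the study refers to.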
Affiliation(s)
- Chao-Hung Kuo
- 1Department of Neurological Surgery, University of Washington, Seattle, Washington
- 2Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- 3School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Devapratim Sarma
- 7Department of Physical Medicine & Rehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania; and
- Jing Wu
- 4Department of Bioengineering,
- 6Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington
- Kaitlyn Casimo
- 5Graduate Program in Neuroscience, and
- 6Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington
- Kurt E. Weaver
- 5Graduate Program in Neuroscience, and
- 6Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington
- 8Department of Radiology, University of Washington, Seattle, Washington
- Jeffrey G. Ojemann
- 1Department of Neurological Surgery, University of Washington, Seattle, Washington
- 5Graduate Program in Neuroscience, and
- 6Center for Sensorimotor Neural Engineering, University of Washington, Seattle, Washington
3
Vaskov AK, Irwin ZT, Nason SR, Vu PP, Nu CS, Bullard AJ, Hill M, North N, Patil PG, Chestek CA. Cortical Decoding of Individual Finger Group Motions Using ReFIT Kalman Filter. Front Neurosci 2018; 12:751. [PMID: 30455621] [PMCID: PMC6231049] [DOI: 10.3389/fnins.2018.00751]
Abstract
Objective: To date, many brain-machine interface (BMI) studies have developed decoding algorithms for neuroprostheses that provide users with precise control of upper arm reaches with some limited grasping capabilities. However, comparatively few have focused on quantifying the performance of precise finger control. Here we expand upon this work by investigating online control of individual finger groups. Approach: We have developed a novel training manipulandum for non-human primate (NHP) studies to isolate the movements of two specific finger groups: index and middle-ring-pinkie (MRP) fingers. We use this device in combination with the ReFIT (Recalibrated Feedback Intention-Trained) Kalman filter to decode the position of each finger group during a single degree of freedom task in two rhesus macaques with Utah arrays in motor cortex. The ReFIT Kalman filter uses a two-stage training approach that improves online control of upper arm tasks with substantial reductions in orbiting time, thus making it a logical first choice for precise finger control. Results: Both animals were able to reliably acquire fingertip targets with both index and MRP fingers, which they did in blocks of finger group specific trials. Decoding from motor signals online, the ReFIT Kalman filter reliably outperformed the standard Kalman filter, measured by bit rate, across all tested finger groups and movements by 31.0 and 35.2%. These decoders were robust when the manipulandum was removed during online control. While index finger movements and middle-ring-pinkie finger movements could be differentiated from each other with 81.7% accuracy across both subjects, the linear Kalman filter was not sufficient for decoding both finger groups together due to significant unwanted movement in the stationary finger, potentially due to co-contraction. 
Significance: To our knowledge, this is the first systematic and biomimetic separation of digits for continuous online decoding in a NHP as well as the first demonstration of the ReFIT Kalman filter improving the performance of precise finger decoding. These results suggest that novel nonlinear approaches, apparently not necessary for center out reaches or gross hand motions, may be necessary to achieve independent and precise control of individual fingers.
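The baseline decoder compared against here is a standard Kalman filter over neural features; ReFIT adds a second, intention-based retraining stage on top of it. A self-contained sketch of the plain Kalman decode on simulated data (the 1-D state and all matrices below are illustrative, not the study's parameters):

```python
import numpy as np

def kalman_decode(Y, A, C, W, Q, x0, P0):
    """Standard Kalman filter over neural observations Y (T x n_channels);
    returns the decoded state sequence (T x n_states)."""
    x, P = x0, P0
    out = []
    for y in Y:
        x = A @ x                      # predict the state forward
        P = A @ P @ A.T + W
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)   # Kalman gain
        x = x + K @ (y - C @ x)        # correct with the new observation
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

# Simulated 1-D finger position read out by three noisy channels
rng = np.random.default_rng(0)
T = 200
truth = np.cumsum(rng.normal(0.0, 0.1, T))   # random-walk position
C = np.array([[1.0], [0.5], [-0.8]])         # channel tuning weights
Y = truth[:, None] * C.ravel() + rng.normal(0.0, 0.05, (T, 3))
decoded = kalman_decode(Y, A=np.array([[1.0]]), C=C,
                        W=np.array([[0.01]]), Q=0.0025 * np.eye(3),
                        x0=np.zeros(1), P0=np.eye(1))
```

ReFIT's contribution is not a change to these equations: after an initial closed-loop block, the training velocities are rotated toward the instantaneous target and the matrices refit, which is the step the paper credits for the bit-rate gains.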
Affiliation(s)
- Alex K Vaskov
- Robotics Graduate Program, University of Michigan, Ann Arbor, MI, United States
- Zachary T Irwin
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States; Department of Neurology, University of Alabama, Birmingham, AL, United States
- Samuel R Nason
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
- Philip P Vu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
- Chrono S Nu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
- Autumn J Bullard
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
- Mackenna Hill
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States; Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Naia North
- Mechanical Engineering Department, University of Michigan, Ann Arbor, MI, United States
- Parag G Patil
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States; Department of Neurosurgery, University of Michigan, Ann Arbor, MI, United States; Department of Neurology, University of Michigan, Ann Arbor, MI, United States; Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States
- Cynthia A Chestek
- Robotics Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States; Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States; Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, United States
4
Quick KM, Mischel JL, Loughlin PJ, Batista AP. The critical stability task: quantifying sensory-motor control during ongoing movement in nonhuman primates. J Neurophysiol 2018; 120:2164-2181. [PMID: 29947593] [DOI: 10.1152/jn.00300.2017]
Abstract
Everyday behaviors require that we interact with the environment, using sensory information in an ongoing manner to guide our actions. Yet, by design, many of the tasks used in primate neurophysiology laboratories can be performed with limited sensory guidance. As a consequence, our knowledge about the neural mechanisms of motor control is largely limited to the feedforward aspects of the motor command. To study the feedback aspects of volitional motor control, we adapted the critical stability task (CST) from the human performance literature (Jex H, McDonnell J, Phatak A. IEEE Trans Hum Factors Electron 7: 138-145, 1966). In the CST, our monkey subjects interact with an inherently unstable (i.e., divergent) virtual system and must generate sensory-guided actions to stabilize it about an equilibrium point. The difficulty of the CST is determined by a single parameter, which allows us to quantitatively establish the limits of performance in the task for different sensory feedback conditions. Two monkeys learned to perform the CST with visual or vibrotactile feedback. Performance was better under visual feedback, as expected, but both monkeys were able to utilize vibrotactile feedback alone to successfully perform the CST. We also observed changes in behavioral strategy as the task became more challenging. The CST will have value for basic science investigations of the neural basis of sensory-motor integration during ongoing actions, and it may also provide value for the design and testing of bidirectional brain computer interface systems. NEW & NOTEWORTHY Currently, most behavioral tasks used in motor neurophysiology studies require primates to make short-duration, stereotyped movements that do not necessitate sensory feedback. To improve our understanding of sensorimotor integration, and to engineer meaningful artificial sensory feedback systems for brain-computer interfaces, it is crucial to have a task that requires sensory feedback for good control. 
The critical stability task demands that sensory information be used to guide long-duration movements.
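The CST's single difficulty parameter is the divergence rate of an unstable first-order plant, following Jex et al. (1966). A toy simulation under delayed proportional feedback illustrates why control is lost once the instability outpaces the feedback loop; the exact plant form, gain, delay, and bounds below are illustrative assumptions, not the task's published parameters:

```python
import numpy as np

def simulate_cst(lam, delay_steps, dt=0.01, T=10.0, gain=1.0,
                 noise=0.01, seed=0):
    """Euler-simulate an unstable first-order plant  dx/dt = lam * (x - u)
    under delayed proportional feedback u = gain * x(t - delay).
    Returns True if the state stays within bounds for the whole trial."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.zeros(n)
    x[0] = 0.01                                   # small initial offset
    for t in range(1, n):
        u = gain * x[max(0, t - delay_steps)]     # delayed corrective action
        dx = lam * (x[t - 1] - u) + noise * rng.normal()
        x[t] = x[t - 1] + dt * dx
        if abs(x[t]) > 10.0:                      # loss of control
            return False
    return True
```

Raising `lam` until the (simulated) controller can no longer stabilize the plant mirrors how the task quantifies a performance limit for each feedback condition.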
Affiliation(s)
- Kristin M Quick
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
- Jessica L Mischel
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
- Patrick J Loughlin
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
- Aaron P Batista
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania
5
Williams JJ, Tien RN, Inoue Y, Schwartz AB. Idle state classification using spiking activity and local field potentials in a brain computer interface. Annu Int Conf IEEE Eng Med Biol Soc 2016:1572-1575. [PMID: 28268628] [DOI: 10.1109/embc.2016.7591012]
Abstract
Previous studies of intracortical brain-computer interfaces (BCIs) have often focused on or compared the use of spiking activity and local field potentials (LFPs) for decoding kinematic movement parameters. Conversely, using these signals to detect the initial intention to use a neuroprosthetic device or not has remained a relatively understudied problem. In this study, we examined the relative performance of spiking activity and LFP signals in detecting discrete state changes in attention regarding a user's desire to actively control a BCI device. Preliminary offline results suggest that the beta and high gamma frequency bands of LFP activity demonstrated a capacity for discriminating idle/active BCI control states equal to or greater than firing rate activity on the same channel. Population classifier models using either signal modality demonstrated an indistinguishably high degree of accuracy in decoding rest periods from active BCI reach periods as well as other portions of active BCI task trials. These results suggest that either signal modality may be used to reliably detect discrete state changes on a fine time scale for the purpose of gating neural prosthetic movements.
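A common baseline for this kind of idle/active state detection is a linear discriminant on band-power features (e.g., beta and high-gamma power per window). The sketch below uses synthetic features and a plain Fisher LDA; it illustrates the general approach, not the specific classifier used in the study:

```python
import numpy as np

def fit_lda(X0, X1):
    """Fisher linear discriminant for two classes (idle vs. active).
    Returns weights w and bias b; classify as 'active' when x @ w + b > 0."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

rng = np.random.default_rng(1)
# Synthetic 2-D features per window: [beta power, high-gamma power]
idle = rng.normal([1.0, 0.2], 0.2, size=(200, 2))    # high beta, low gamma at rest
active = rng.normal([0.4, 0.9], 0.2, size=(200, 2))  # beta drops, gamma rises
w, b = fit_lda(idle, active)
acc = np.mean(np.concatenate([idle @ w + b < 0, active @ w + b > 0]))
```

The same discriminant applies unchanged whether the per-channel features are LFP band powers or binned firing rates, which is what makes the two modalities directly comparable.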
6
Spatiotemporal Distribution of Location and Object Effects in Primary Motor Cortex Neurons during Reach-to-Grasp. J Neurosci 2016; 36:10640-10653. [PMID: 27733614] [DOI: 10.1523/jneurosci.1716-16.2016]
Abstract
Reaching and grasping typically are considered to be spatially separate processes that proceed concurrently in the arm and the hand, respectively. The proximal representation in the primary motor cortex (M1) controls the arm for reaching, while the distal representation controls the hand for grasping. Many studies of M1 activity therefore have focused either on reaching to various locations without grasping different objects, or else on grasping different objects all at the same location. Here, we recorded M1 neurons in the anterior bank and lip of the central sulcus as monkeys performed more naturalistic movements, reaching toward, grasping, and manipulating four different objects in up to eight different locations. We quantified the extent to which variation in firing rates depended on location, on object, and on their interaction, all as a function of time. Activity proceeded largely in two sequential phases: the first related predominantly to the location to which the upper extremity reached, and the second related to the object about to be grasped. Both phases involved activity distributed widely throughout the sampled territory, spanning both the proximal and the distal upper extremity representation in caudal M1. Our findings indicate that naturalistic reaching and grasping, rather than being spatially segregated processes that proceed concurrently, each are spatially distributed processes controlled by caudal M1 in large part sequentially. Rather than neuromuscular processes separated in space but not time, reaching and grasping are separated more in time than in space. SIGNIFICANCE STATEMENT Reaching and grasping typically are viewed as processes that proceed concurrently in the arm and hand, respectively. The arm region in the primary motor cortex (M1) is assumed to control reaching, while the hand region controls grasping.
During naturalistic reach-grasp-manipulate movements, we found, however, that neuron activity proceeds largely in two sequential phases, each spanning both arm and hand representations in M1. The first phase is related predominantly to the reach location, and the second is related to the object about to be grasped. Our findings indicate that reaching and grasping are successive aspects of a single movement. Initially the arm and the hand both are projected toward the object's location, and later both are shaped to grasp and manipulate.
7
Willett FR, Murphy BA, Memberg WD, Blabe CH, Pandarinath C, Walter BL, Sweet JA, Miller JP, Henderson JM, Shenoy KV, Hochberg LR, Kirsch RF, Ajiboye AB. Signal-independent noise in intracortical brain-computer interfaces causes movement time properties inconsistent with Fitts' law. J Neural Eng 2017; 14:026010. [PMID: 28177925] [DOI: 10.1088/1741-2552/aa5990]
Abstract
OBJECTIVE Do movements made with an intracortical BCI (iBCI) have the same movement time properties as able-bodied movements? Able-bodied movement times typically obey Fitts' law: MT = a + b log2(D/R + 1) (where MT is movement time, D is target distance, R is target radius, and a and b are fitted parameters). Fitts' law expresses two properties of natural movement that would be ideal for iBCIs to restore: (1) that movement times are insensitive to the absolute scale of the task (since movement time depends only on the ratio D/R) and (2) that movements have a large dynamic range of accuracy (since movement time is logarithmically proportional to D/R). APPROACH Two participants in the BrainGate2 pilot clinical trial made cortically controlled cursor movements with a linear velocity decoder and acquired targets by dwelling on them. We investigated whether the movement times were well described by Fitts' law. MAIN RESULTS We found that movement times were better described by an equation in which movement time increases sharply as the target radius becomes smaller, independently of distance. In contrast to able-bodied movements, the iBCI movements we studied had a low dynamic range of accuracy (absence of logarithmic proportionality) and were sensitive to the absolute scale of the task (small targets had long movement times regardless of the D/R ratio). We argue that this relationship emerges due to noise in the decoder output whose magnitude is largely independent of the user's motor command (signal-independent noise). Signal-independent noise creates a baseline level of variability that cannot be decreased by trying to move slowly or hold still, making targets below a certain size very hard to acquire with a standard decoder.
SIGNIFICANCE The results give new insight into how iBCI movements currently differ from able-bodied movements and suggest that restoring a Fitts' law-like relationship to iBCI movements may require non-linear decoding strategies.
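Fitts' law in its Shannon form makes the scale-invariance property discussed above explicit: movement time depends on distance D and radius R only through their ratio. A small sketch with illustrative (not fitted) parameters a and b:

```python
import numpy as np

def fitts_mt(D, R, a=0.2, b=0.3):
    """Fitts' law, Shannon formulation: MT = a + b * log2(D/R + 1).
    a and b are illustrative parameters (seconds), not values from the paper."""
    return a + b * np.log2(D / R + 1)

# Scale invariance: doubling both distance and radius leaves MT unchanged.
# This is the property the iBCI movements in the study failed to show.
mt_small = fitts_mt(10.0, 1.0)
mt_large = fitts_mt(20.0, 2.0)
```

For the iBCI movements studied, MT instead grew with shrinking R at fixed D/R, consistent with a decoder-noise floor that small targets cannot escape.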
Affiliation(s)
- Francis R Willett
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States of America; Louis Stokes Cleveland Department of Veterans Affairs Medical Center, FES Center of Excellence, Rehab. R&D Service, Cleveland, OH, United States of America
8
Rouse AG. A four-dimensional virtual hand brain-machine interface using active dimension selection. J Neural Eng 2016; 13:036021. [PMID: 27171896] [DOI: 10.1088/1741-2560/13/3/036021]
Abstract
OBJECTIVE Brain-machine interfaces (BMI) traditionally rely on a fixed, linear transformation from neural signals to an output state-space. In this study, the assumption that a BMI must control a fixed, orthogonal basis set was challenged and a novel active dimension selection (ADS) decoder was explored. APPROACH ADS utilizes a two-stage decoder by using neural signals to both (i) select an active dimension being controlled and (ii) control the velocity along the selected dimension. ADS decoding was tested in a monkey using 16 single units from premotor and primary motor cortex to successfully control a virtual hand avatar to move to eight different postures. MAIN RESULTS Following training with the ADS decoder to control 2, 3, and then 4 dimensions, each emulating a grasp shape of the hand, performance reached 93% correct with a bit rate of 2.4 bits/s for eight targets. Selection of eight targets using ADS control was more efficient, as measured by bit rate, than either full four-dimensional control or computer-assisted one-dimensional control. SIGNIFICANCE ADS decoding allows a user to quickly and efficiently select different hand postures. This novel decoding scheme represents a potential method to reduce the complexity of high-dimension BMI control of the hand.
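Bit rate for a discrete N-target task is commonly computed with Wolpaw's information-transfer formula. The abstract does not state which metric the paper used, so treat the sketch below as the generic calculation rather than the paper's method:

```python
import numpy as np

def wolpaw_bits_per_selection(n_targets, p_correct):
    """Wolpaw ITR per selection: log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1)).
    Multiply by selections per second to get bits/s."""
    n, p = n_targets, p_correct
    if p >= 1.0:
        return float(np.log2(n))
    return float(np.log2(n) + p * np.log2(p)
                 + (1 - p) * np.log2((1 - p) / (n - 1)))

# Eight targets at 93% correct, the accuracy reported for the ADS decoder
bits = wolpaw_bits_per_selection(8, 0.93)
```

At roughly one selection per second this lands in the same range as the reported 2.4 bits/s, though the paper's actual trial timing is not given in the abstract.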
Affiliation(s)
- Adam G Rouse
- Department of Neuroscience, University of Rochester Medical Center, 601 Elmwood Avenue, Box 603, Rochester, NY 14642, USA; Department of Neurology, University of Rochester, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
9
Rouse AG, Schieber MH. Spatiotemporal distribution of location and object effects in the electromyographic activity of upper extremity muscles during reach-to-grasp. J Neurophysiol 2016; 115:3238-3248. [PMID: 27009156] [DOI: 10.1152/jn.00008.2016]
Abstract
In reaching to grasp an object, proximal muscles that act on the shoulder and elbow classically have been viewed as transporting the hand to the intended location, while distal muscles that act on the fingers simultaneously shape the hand to grasp the object. Prior studies of electromyographic (EMG) activity in upper extremity muscles therefore have focused, by and large, either on proximal muscle activity during reaching to different locations or on distal muscle activity as the subject grasps various objects. Here, we examined the EMG activity of muscles from the shoulder to the hand, as monkeys reached and grasped in a task that dissociated location and object. We quantified the extent to which variation in the EMG activity of each muscle depended on location, on object, and on their interaction, all as a function of time. Although EMG variation depended on both location and object beginning early in the movement, an early phase of substantial location effects in muscles from proximal to distal was followed by a later phase in which object effects predominated throughout the extremity. Interaction effects remained relatively small. Our findings indicate that neural control of reach-to-grasp may occur largely in two sequential phases: the first, serving to project the entire upper extremity toward the intended location, and the second, acting predominantly to shape the entire extremity for grasping the object.
Affiliation(s)
- Adam G Rouse
- Departments of Neurology, Neuroscience, and Biomedical Engineering, University of Rochester, Rochester, New York
- Marc H Schieber
- Departments of Neurology, Neuroscience, and Biomedical Engineering, University of Rochester, Rochester, New York