1. Kim JH, Daie K, Li N. A combinatorial neural code for long-term motor memory. bioRxiv 2024:2024.06.05.597627. PMID: 38895416; PMCID: PMC11185691; DOI: 10.1101/2024.06.05.597627. Indexed 06/21/2024.
Abstract
Motor skill repertoire can be stably retained over long periods, but the neural mechanism underlying stable memory storage remains poorly understood. Moreover, it is unknown how existing motor memories are maintained as new motor skills are continuously acquired. Here we tracked neural representation of learned actions throughout a significant portion of a mouse's lifespan, and we show that learned actions are stably retained in motor memory in combination with context, which protects existing memories from erasure during new motor learning. We used automated home-cage training to establish a continual learning paradigm in which mice learned to perform directional licking in different task contexts. We combined this paradigm with chronic two-photon imaging of motor cortex activity for up to 6 months. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories while retaining the previous memories. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. At the same time, continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning. Learning in new contexts produces parallel new representations instead of modifying existing representations, thus protecting existing motor repertoire from erasure.
2. Sadras N, Pesaran B, Shanechi MM. Event detection and classification from multimodal time series with application to neural data. J Neural Eng 2024; 21:026049. PMID: 38513289; DOI: 10.1088/1741-2552/ad3678. Received 11/15/2023; accepted 03/21/2024; indexed 03/23/2024.
Abstract
The detection of events in time-series data is a common signal-processing problem. When the data can be modeled as a known template signal with an unknown delay in Gaussian noise, detection of the template signal can be done with a traditional matched filter. However, in many applications, the event of interest is represented in multimodal data consisting of both Gaussian and point-process time series. Neuroscience experiments, for example, can simultaneously record multimodal neural signals such as local field potentials (LFPs), which can be modeled as Gaussian, and neuronal spikes, which can be modeled as point processes. Currently, no method exists for event detection from such multimodal data, and as such our objective in this work is to develop a method to meet this need. Here we address this challenge by developing the multimodal event detector (MED) algorithm which simultaneously estimates event times and classes. To do this, we write a multimodal likelihood function for Gaussian and point-process observations and derive the associated maximum likelihood estimator of simultaneous event times and classes. We additionally introduce a cross-modal scaling parameter to account for model mismatch in real datasets. We validate this method in extensive simulations as well as in a neural spike-LFP dataset recorded during an eye-movement task, where the events of interest are eye movements with unknown times and directions. We show that the MED can successfully detect eye movement onset and classify eye movement direction. Further, the MED successfully combines information across data modalities, with multimodal performance exceeding unimodal performance. This method can facilitate applications such as the discovery of latent events in multimodal neural population activity and the development of brain-computer interfaces for naturalistic settings without constrained tasks or prior knowledge of event times.
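The classical baseline the abstract contrasts against, detecting a known template at an unknown delay in Gaussian noise with a matched filter, can be sketched as follows. This is a minimal illustration of that baseline only, not the authors' MED algorithm; the template shape, noise level, and embedded delay are made-up values.

```python
import numpy as np

def matched_filter_detect(signal, template):
    """Return the delay maximizing the matched-filter statistic.

    Classical detection of a known template at an unknown delay in
    Gaussian noise: correlate the signal with the template at every
    lag and pick the lag with the largest correlation.
    """
    n, m = len(signal), len(template)
    stats = np.array([signal[k:k + m] @ template for k in range(n - m + 1)])
    return int(np.argmax(stats)), stats

# Toy example: a sinusoidal template hidden at delay 40 in white noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0.0, np.pi, 25))   # assumed event shape
signal = rng.normal(0.0, 0.2, 200)               # Gaussian background noise
signal[40:65] += template                        # embed the event
delay, stats = matched_filter_detect(signal, template)
```

The MED setting replaces the single Gaussian stream with a joint likelihood over Gaussian and point-process observations, but the slide-and-maximize structure of the estimator is analogous.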
Affiliation(s)
- Nitin Sadras
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
3. Shang CF, Wang YF, Zhao MT, Fan QX, Zhao S, Qian Y, Xu SJ, Mu Y, Hao J, Du JL. Real-time analysis of large-scale neuronal imaging enables closed-loop investigation of neural dynamics. Nat Neurosci 2024; 27:1014-1018. PMID: 38467902; DOI: 10.1038/s41593-024-01595-6. Received 08/25/2022; accepted 02/07/2024; indexed 03/13/2024.
Abstract
Large-scale imaging of neuronal activities is crucial for understanding brain functions. However, it is challenging to analyze large-scale imaging data in real time, preventing closed-loop investigation of neural circuitry. Here we develop a real-time analysis system with a field programmable gate array-graphics processing unit design for an up to 500-megabyte-per-second image stream. Adapted to whole-brain imaging of awake larval zebrafish, the system timely extracts activity from up to 100,000 neurons and enables closed-loop perturbations of neural dynamics.
Affiliation(s)
- Chun-Feng Shang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Guangdong-Hongkong-Macau Institute of CNS Regeneration, Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China
- Shenzhen Institute of Neuroscience, Shenzhen, China
- Yu-Fan Wang
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Mei-Ting Zhao
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Guangdong Institute of Artificial Intelligence and Advanced Computing, Guangzhou, China
- Qiu-Xiang Fan
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Guangdong Institute of Artificial Intelligence and Advanced Computing, Guangzhou, China
- Shan Zhao
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yu Qian
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Sheng-Jin Xu
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yu Mu
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jie Hao
- University of Chinese Academy of Sciences, Beijing, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Guangdong Institute of Artificial Intelligence and Advanced Computing, Guangzhou, China
- Jiu-Lin Du
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- School of Life Science and Technology, ShanghaiTech University, Shanghai, China
4. Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv 2024:2024.04.18.589952. PMID: 38712193; PMCID: PMC11071278; DOI: 10.1101/2024.04.18.589952. Indexed 05/08/2024.
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
5. Losey DM, Hennig JA, Oby ER, Golub MD, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Learning leaves a memory trace in motor cortex. Curr Biol 2024; 34:1519-1531.e4. PMID: 38531360; PMCID: PMC11097210; DOI: 10.1016/j.cub.2024.03.003. Received 01/21/2023; revised 12/06/2023; accepted 03/04/2024; indexed 03/28/2024.
Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map in a way that was specific to the learning experience. That is, learning left a "memory trace" in the primary motor cortex. This memory trace coexisted with proficient performance under the familiar map, primarily by altering neural activity in dimensions that did not impact behavior. Forming memory traces might be how the brain is able to provide for the joint learning of multiple behaviors without interference.
Affiliation(s)
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Matthew D Golub
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Kristin M Quick
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX 78712, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
6. Rajeswaran P, Payeur A, Lajoie G, Orsborn AL. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv 2024:2024.03.20.585972. PMID: 38562772; PMCID: PMC10983972; DOI: 10.1101/2024.03.20.585972. Indexed 04/04/2024.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
Affiliation(s)
- Alexandre Payeur
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Guillaume Lajoie
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Amy L. Orsborn
- University of Washington, Bioengineering, Seattle, 98115, USA
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
7. Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. PMID: 38016450; PMCID: PMC10913727; DOI: 10.1088/1741-2552/ad1053. Received 06/02/2023; revised 10/23/2023; accepted 11/28/2023; indexed 11/30/2023.
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction of multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs that track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
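The discrete-continuous observation model the abstract describes, Poisson spike counts and Gaussian field features driven by a shared latent linear dynamical state, can be simulated with a short generative sketch. The dimensions, matrices, and noise levels below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared latent linear dynamical state: x[t+1] = A @ x[t] + w[t]
A = 0.98 * np.array([[0.95, 0.20], [-0.20, 0.95]])  # stable damped rotation
Q = 0.01 * np.eye(2)                                # state noise covariance

# Two modalities observe the same latent state:
C_spk = rng.normal(0.0, 0.5, (8, 2))   # state -> log spike rate, 8 units
C_lfp = rng.normal(0.0, 1.0, (4, 2))   # state -> field features, 4 channels
R = 0.1 * np.eye(4)                    # Gaussian field-observation noise

T = 500
x = np.zeros(2)
spikes, fields = [], []
for _ in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    rate = np.exp(C_spk @ x - 1.0)          # per-bin Poisson rates
    spikes.append(rng.poisson(rate))        # discrete spike counts
    fields.append(C_lfp @ x + rng.multivariate_normal(np.zeros(4), R))
spikes = np.array(spikes)   # (T, 8) integer counts
fields = np.array(fields)   # (T, 4) continuous features
```

Fitting `A`, `C_spk`, `C_lfp`, and the noise statistics from `(spikes, fields)` alone is the unsupervised learning problem that multiscale SID addresses analytically.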
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
8. Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. PMID: 38260549; PMCID: PMC10802336; DOI: 10.1101/2024.01.03.573543. Indexed 01/24/2024.
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
9. Mirfathollahi A, Ghodrati MT, Shalchyan V, Zarrindast MR, Daliri MR. Decoding hand kinetics and kinematics using somatosensory cortex activity in active and passive movement. iScience 2023; 26:107808. PMID: 37736040; PMCID: PMC10509302; DOI: 10.1016/j.isci.2023.107808. Received 05/05/2023; revised 07/20/2023; accepted 08/30/2023; indexed 09/23/2023. Open access.
Abstract
Area 2 of the primary somatosensory cortex (S1) encodes proprioceptive information from the limbs. Several studies have investigated the encoding of movement parameters in this area. However, single-trial decoding of these parameters, which can reveal how much information sub-regions of this area carry about instantaneous limb movement, has not been well investigated. We decoded kinematic and kinetic parameters of active and passive hand movements during a center-out task using conventional and state-based decoders. Our results show that this area can be used to accurately decode the position, velocity, force, moment, and joint angles of the hand. Kinematics were decoded more accurately than kinetics, and active trials were decoded more accurately than passive trials. Although the state-based decoder outperformed the conventional decoder in the active task, the opposite held in the passive task. These results can inform intracortical micro-stimulation procedures that provide proprioceptive feedback to BCI subjects.
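Conventional single-trial decoders of the kind contrasted in this abstract are often regularized linear regressions from binned firing rates to kinematic variables. The sketch below illustrates that generic approach on synthetic data; the dimensions, ridge penalty, and linear-encoding assumption are illustrative and not taken from the study.

```python
import numpy as np

def fit_linear_decoder(rates, kinematics, lam=1.0):
    """Ridge regression from binned firing rates to kinematic variables.

    Closed form with a bias column: W = (X'X + lam*I)^(-1) X'Y.
    """
    X = np.column_stack([rates, np.ones(len(rates))])
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ kinematics)
    return W

def decode(rates, W):
    X = np.column_stack([rates, np.ones(len(rates))])
    return X @ W

# Synthetic data: 2-D kinematics linearly encoded in 20 units' rates.
rng = np.random.default_rng(2)
true_map = rng.normal(0.0, 1.0, (20, 2))
rates = rng.normal(0.0, 1.0, (300, 20))
kin = rates @ true_map + rng.normal(0.0, 0.1, (300, 2))

W = fit_linear_decoder(rates[:200], kin[:200])          # train
pred = decode(rates[200:], W)                           # test
ss_res = ((pred - kin[200:]) ** 2).sum()
ss_tot = ((kin[200:] - kin[200:].mean(axis=0)) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                              # decoding accuracy
```

A state-based decoder would additionally model temporal dynamics of the kinematic state rather than mapping each bin independently.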
Affiliation(s)
- Alavie Mirfathollahi
- Institute for Cognitive Science Studies (ICSS), Pardis 16583-44575, Tehran, Iran
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
- Mohammad Taghi Ghodrati
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
- Vahid Shalchyan
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
- Mohammad Reza Zarrindast
- Institute for Cognitive Science Studies (ICSS), Pardis 16583-44575, Tehran, Iran
- Department of Pharmacology, School of Medicine, Tehran University of Medical Sciences, Tehran 14166-34793, Iran
- Mohammad Reza Daliri
- Institute for Cognitive Science Studies (ICSS), Pardis 16583-44575, Tehran, Iran
- Neuroscience & Neuroengineering Research Lab, Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology (IUST), Narmak, Tehran 16846-13114, Iran
10. Zippi EL, Shvartsman GF, Vendrell-Llopis N, Wallis JD, Carmena JM. Distinct neural representations during a brain-machine interface and manual reaching task in motor cortex, prefrontal cortex, and striatum. Sci Rep 2023; 13:17810. PMID: 37857827; PMCID: PMC10587077; DOI: 10.1038/s41598-023-44405-y. Received 05/31/2023; accepted 10/07/2023; indexed 10/21/2023. Open access.
Abstract
Although brain-machine interfaces (BMIs) are directly controlled by the modulation of a select local population of neurons, distributed networks consisting of cortical and subcortical areas have been implicated in learning and maintaining control. Previous work in rodents has demonstrated the involvement of the striatum in BMI learning. However, the prefrontal cortex has been largely ignored when studying motor BMI control despite its role in action planning, action selection, and learning abstract tasks. Here, we compare local field potentials simultaneously recorded from primary motor cortex (M1), dorsolateral prefrontal cortex (DLPFC), and the caudate nucleus of the striatum (Cd) while nonhuman primates perform a two-dimensional, self-initiated, center-out task under BMI control and manual control. Our results demonstrate the presence of distinct neural representations for BMI and manual control in M1, DLPFC, and Cd. We find that neural activity from DLPFC and M1 best distinguishes control types at the go cue and target acquisition, respectively, while M1 best predicts target-direction at both task events. We also find effective connectivity from DLPFC → M1 throughout both control types and Cd → M1 during BMI control. These results suggest distributed network activity between M1, DLPFC, and Cd during BMI control that is similar yet distinct from manual control.
Affiliation(s)
- Ellen L Zippi
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Gabrielle F Shvartsman
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
- Nuria Vendrell-Llopis
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
- Joni D Wallis
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Jose M Carmena
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA, USA
11. Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. PMID: 37402376; PMCID: PMC10527529; DOI: 10.1016/j.cub.2023.06.027. Received 02/22/2022; revised 04/24/2023; accepted 06/09/2023; indexed 07/06/2023.
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
- Departments of Bioengineering and Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
12. Zippi EL, Shvartsman GF, Vendrell-Llopis N, Wallis JD, Carmena JM. Distinct neural representations during a brain-machine interface and manual reaching task in motor cortex, prefrontal cortex, and striatum. bioRxiv 2023:2023.05.31.542532. PMID: 37398143; PMCID: PMC10312492; DOI: 10.1101/2023.05.31.542532. Indexed 07/04/2023.
Abstract
Although brain-machine interfaces (BMIs) are directly controlled by the modulation of a select local population of neurons, distributed networks consisting of cortical and subcortical areas have been implicated in learning and maintaining control. Previous work in rodent BMI has demonstrated the involvement of the striatum in BMI learning. However, the prefrontal cortex has been largely ignored when studying motor BMI control despite its role in action planning, action selection, and learning abstract tasks. Here, we compare local field potentials simultaneously recorded from the primary motor cortex (M1), dorsolateral prefrontal cortex (DLPFC), and the caudate nucleus of the striatum (Cd) while nonhuman primates perform a two-dimensional, self-initiated, center-out task under BMI control and manual control. Our results demonstrate the presence of distinct neural representations for BMI and manual control in M1, DLPFC, and Cd. We find that neural activity from DLPFC and M1 best distinguish between control types at the go cue and target acquisition, respectively. We also found effective connectivity from DLPFC→M1 throughout trials across both control types and Cd→M1 during BMI control. These results suggest distributed network activity between M1, DLPFC, and Cd during BMI control that is similar yet distinct from manual control.
Affiliation(s)
- Ellen L. Zippi: Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA
- Gabrielle F. Shvartsman: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
- Nuria Vendrell-Llopis: Helen Wills Neuroscience Institute and Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
- Joni D. Wallis: Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, Berkeley, CA
- Jose M. Carmena: Helen Wills Neuroscience Institute and Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, CA
13
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. bioRxiv 2023:2023.05.26.542509. [PMID: 37398400] [PMCID: PMC10312539] [DOI: 10.1101/2023.05.26.542509]
Abstract
Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical subspace identification method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and spike-LFP population activity recorded during a naturalistic reach-and-grasp behavior. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus identifying the dynamical modes and predicting behavior better than either modality alone. Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower computational cost while identifying the dynamical modes better and predicting neural activity with similar or better accuracy. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest.
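As an illustrative aside, the combined Poisson-Gaussian observation model described in this abstract can be sketched in a few lines of numpy. This is a minimal simulation of the generative model only, not the authors' identification algorithm; all dimensions, matrices, and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent linear dynamics: x[t+1] = A x[t] + w[t]  (2-D latent state)
A = np.array([[0.95, -0.10],
              [0.10,  0.95]])
Q = 0.01 * np.eye(2)                     # state noise covariance

# Continuous (field) observations: y[t] = C x[t] + v[t]  -> Gaussian
C = rng.standard_normal((4, 2))          # 4 LFP channels
R = 0.05 * np.eye(4)

# Discrete (spike) observations: n_i[t] ~ Poisson(exp(d_i + G_i x[t]))
G = 0.5 * rng.standard_normal((10, 2))   # 10 neurons
d = np.full(10, -1.0)                    # baseline log firing rates

T = 500
x = np.zeros((T, 2))
lfp = np.zeros((T, 4))
spikes = np.zeros((T, 10), dtype=int)
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
    lfp[t] = C @ x[t] + rng.multivariate_normal(np.zeros(4), R)
    spikes[t] = rng.poisson(np.exp(d + G @ x[t]))   # discrete counts

print(lfp.shape, spikes.shape)           # (500, 4) (500, 10)
```

The point of the sketch is the heterogeneity the abstract refers to: both modalities are driven by the same latent state x, but one observation stream is continuous and Gaussian while the other is discrete counts, which is what makes joint efficient learning nontrivial.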
14
Khademi Z, Ebrahimi F, Kordy HM. A review of critical challenges in MI-BCI: From conventional to deep learning methods. J Neurosci Methods 2023;383:109736. [PMID: 36349568] [DOI: 10.1016/j.jneumeth.2022.109736]
Abstract
Brain-computer interfaces (BCIs) have achieved significant success in controlling external devices through electroencephalogram (EEG) signal processing. Motor imagery (MI) BCI systems bridge the brain and external devices, serving as communication tools to control, for example, wheelchairs for people with disabilities, robots, and exoskeletons. This success largely depends on machine learning (ML) approaches such as deep learning (DL) models. DL algorithms provide effective and powerful models for optimally analyzing compact and complex EEG data in MI-BCI applications. DL models with convolutional neural networks (CNNs) have revolutionized computer vision through end-to-end learning from raw data, while recurrent neural networks (RNNs) can decode EEG signals by processing sequences of time-series data. However, many challenges in the MI-BCI field affect the performance of DL models. A major challenge is individual differences in the EEG signals of different subjects: the model must be retrained from scratch for each new subject, which incurs computational costs. Analyzing EEG signals is also challenging due to their low signal-to-noise ratio and non-stationary nature. Additionally, the limited size of existing datasets can lead to overfitting, which can be mitigated by transfer learning (TL) approaches. The main contributions of this study are identifying major challenges in the MI-BCI field by reviewing state-of-the-art machine learning models and suggesting solutions to address these challenges, focusing on feature selection, feature extraction, and classification methods.
Affiliation(s)
- Zahra Khademi: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Farideh Ebrahimi: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Hussain Montazery Kordy: Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
15
Chinchani AM, Paliwal S, Ganesh S, Chandrasekhar V, Yu BM, Sridharan D. Tracking momentary fluctuations in human attention with a cognitive brain-machine interface. Commun Biol 2022;5:1346. [PMID: 36481698] [PMCID: PMC9732358] [DOI: 10.1038/s42003-022-04231-w]
Abstract
Selective attention produces systematic effects on neural states. It is unclear whether, conversely, momentary fluctuations in neural states have behavioral significance for attention. We investigated this question in the human brain with a cognitive brain-machine interface (cBMI) for tracking electrophysiological steady-state visually evoked potentials (SSVEPs) in real time. Discrimination accuracy (d') was significantly higher when target stimuli were triggered at high, versus low, SSVEP power states. Target and distractor SSVEP power was uncorrelated across the hemifields, and target d' was unaffected by distractor SSVEP power states. Next, we trained participants on an auditory neurofeedback paradigm to generate biased, cross-hemispheric competitive interactions between target and distractor SSVEPs. The strongest behavioral effects emerged when competitive SSVEP dynamics unfolded at a timescale corresponding to the deployment of endogenous attention. In sum, SSVEP power dynamics provide a reliable readout of attentional state, a result with critical implications for tracking and training human attention.
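The core mechanic of this cBMI, triggering stimuli when narrowband SSVEP power is momentarily high or low, can be sketched as follows. This is a toy illustration, not the authors' pipeline; the sampling rate, tag frequency, window length, and quartile thresholds are all invented, and the "EEG" is a synthetic sinusoid with drifting amplitude.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_tag = 250.0, 15.0            # sampling rate and SSVEP tag frequency (Hz)
t = np.arange(0, 60, 1 / fs)
# Toy EEG: a 15 Hz SSVEP with slowly drifting amplitude plus broadband noise
amp = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
eeg = amp * np.sin(2 * np.pi * f_tag * t) + 0.5 * rng.standard_normal(t.size)

win = int(fs)                      # 1-second sliding window
probe = np.exp(-2j * np.pi * f_tag * np.arange(win) / fs)

def ssvep_power(segment):
    """Power at the tag frequency (single-bin Fourier projection)."""
    return np.abs(segment @ probe) ** 2 / win

# Estimate momentary SSVEP power in overlapping windows
powers = np.array([ssvep_power(eeg[i:i + win])
                   for i in range(0, eeg.size - win, win // 4)])
hi, lo = np.quantile(powers, [0.75, 0.25])
# "Trigger" targets only when momentary SSVEP power is in the top quartile
high_state = powers >= hi
print(f"fraction of high-power trigger states: {high_state.mean():.2f}")
```

In the actual experiment, comparing behavioral d' between stimuli delivered in the high-power versus low-power states is what links momentary neural state to attention.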
Affiliation(s)
- Abhijit M. Chinchani: Centre for Neuroscience, Indian Institute of Science, Bangalore, KA, India; present address: University of British Columbia, 2329 West Mall, Vancouver, BC, Canada
- Siddharth Paliwal: Centre for Neuroscience, Indian Institute of Science, Bangalore, KA, India; present address: Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, USA
- Suhas Ganesh: Centre for Neuroscience, Indian Institute of Science, Bangalore, KA, India; present address: Verily Life Sciences, 269 E Grand Ave, South San Francisco, CA, USA
- Vishnu Chandrasekhar: Centre for Neuroscience, Indian Institute of Science, Bangalore, KA, India; present address: Carnegie Mellon University, 319 Morewood Avenue, Pittsburgh, PA, USA
- Byron M. Yu: Department of Biomedical Engineering and Department of Electrical & Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Devarajan Sridharan: Centre for Neuroscience and Department of Computer Science and Automation, Indian Institute of Science, Bangalore, KA, India
16
Existing function in primary visual cortex is not perturbed by new skill acquisition of a non-matched sensory task. Nat Commun 2022;13:3638. [PMID: 35752622] [PMCID: PMC9233699] [DOI: 10.1038/s41467-022-31440-y]
Abstract
Acquisition of new skills has the potential to disturb existing network function. To directly assess whether previously acquired cortical function is altered during learning, mice were trained in an abstract task in which selected activity patterns were rewarded using an optical brain-computer interface device coupled to primary visual cortex (V1) neurons. Excitatory neurons were longitudinally recorded using 2-photon calcium imaging. Despite significant changes in local neural activity during task performance, tuning properties and stimulus encoding assessed outside of the trained context were not perturbed. Similarly, stimulus tuning was stable in neurons that remained responsive following a different, visual discrimination training task. However, visual discrimination training increased the rate of representational drift. Our results indicate that while some forms of perceptual learning may modify the contribution of individual neurons to stimulus encoding, new skill learning is not inherently disruptive to the quality of stimulus representation in adult V1.
17
18
Using EEG to study sensorimotor adaptation. Neurosci Biobehav Rev 2022;134:104520. [PMID: 35016897] [DOI: 10.1016/j.neubiorev.2021.104520]
Abstract
Sensorimotor adaptation, or the capacity to flexibly adapt movements to changes in the body or the environment, is crucial to our ability to move efficiently in a dynamic world. The field of sensorimotor adaptation is replete with rigorous behavioural and computational methods, which support strong conceptual frameworks. An increasing number of studies have combined these methods with electroencephalography (EEG) to unveil insights into the neural mechanisms of adaptation. We review these studies, discussing EEG markers of adaptation in the frequency and temporal domains, EEG predictors of successful adaptation, and how EEG can be used to unmask latent processes resulting from adaptation, such as the modulation of spatial attention. With its high temporal resolution, EEG can be further exploited to deepen our understanding of sensorimotor adaptation.
19
Going beyond primary motor cortex to improve brain-computer interfaces. Trends Neurosci 2022;45:176-183. [DOI: 10.1016/j.tins.2021.12.006]
20
Srinath R, Ruff DA, Cohen MR. Attention improves information flow between neuronal populations without changing the communication subspace. Curr Biol 2021;31:5299-5313.e4. [PMID: 34699782] [PMCID: PMC8665027] [DOI: 10.1016/j.cub.2021.09.076]
Abstract
Visual attention allows observers to change the influence of different parts of a visual scene on their behavior, suggesting that information can be flexibly shared between visual cortex and neurons involved in decision making. We investigated the neural substrate of flexible information routing by analyzing the activity of populations of visual neurons in the middle temporal area (MT) and oculomotor neurons in the superior colliculus (SC) while rhesus monkeys switched spatial attention. We demonstrated that attention increases the efficacy of visuomotor communication: trial-to-trial variability in SC population activity could be better predicted by the activity of the MT population (and vice versa) when attention was directed toward their joint receptive fields. Surprisingly, this improvement in prediction was not explained by changes in the dimensionality of the shared subspace or in the magnitude of local or shared pairwise noise correlations. These results lay a foundation for future theoretical and experimental studies into how visual attention can flexibly change information flow between sensory and decision neurons.
Affiliation(s)
- Ramanujan Srinath: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Douglas A Ruff: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Marlene R Cohen: Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
21
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021;109:3720-3735. [PMID: 34648749] [PMCID: PMC8639641] [DOI: 10.1016/j.neuron.2021.09.005]
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
Affiliation(s)
- Jay A Hennig: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering and Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
22
Mridha MF, Das SC, Kabir MM, Lima AA, Islam MR, Watanobe Y. Brain-Computer Interface: Advancement and Challenges. Sensors 2021;21:5746. [PMID: 34502636] [PMCID: PMC8433803] [DOI: 10.3390/s21175746]
Abstract
Brain-Computer Interface (BCI) is an advanced, multidisciplinary, and active research domain drawing on neuroscience, signal processing, biomedical sensors, and hardware. Over the last decades, substantial groundbreaking research has been conducted in this domain, yet no comprehensive review covering the BCI domain completely has been published. Hence, a comprehensive overview of the BCI domain is presented in this study. The study covers several applications of BCI and upholds the significance of the domain. Each element of BCI systems, including techniques, datasets, feature extraction methods, evaluation metrics, existing BCI algorithms, and classifiers, is then explained concisely. In addition, a brief overview of the technologies and hardware, mostly sensors, used in BCI is appended. Finally, the paper investigates several unsolved challenges of BCI and discusses possible solutions.
Affiliation(s)
- M. F. Mridha: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Sujoy Chandra Das: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Aklima Akter Lima: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam: Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh (correspondence)
- Yutaka Watanobe: Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan
23
Vyas S, Golub MD, Sussillo D, Shenoy KV. Computation through neural population dynamics. Annu Rev Neurosci 2020;43:249-275.
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Affiliation(s)
- Saurabh Vyas: Department of Bioengineering and Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub: Department of Electrical Engineering and Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo: Department of Electrical Engineering and Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California, USA
- Krishna V Shenoy: Department of Bioengineering, Department of Electrical Engineering, Wu Tsai Neurosciences Institute, Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
24
Singh HP, Kumar P. Developments in the human machine interface technologies and their applications: a review. J Med Eng Technol 2021;45:552-573. [PMID: 34184601] [DOI: 10.1080/03091902.2021.1936237]
Abstract
Human-machine interface (HMI) techniques use bioelectrical signals to achieve real-time, synchronized communication between the human body and a machine. HMI technology not only provides real-time control access but can also control multiple functions at a single instant with modest human input and increased efficiency. HMI technologies yield advanced control access in numerous applications such as health monitoring, medical diagnostics, the development of prosthetic and assistive devices, the automotive and aerospace industries, and robotic controls. In this paper, various physiological signals, their acquisition and processing techniques, and their respective applications in different HMI technologies are discussed.
Affiliation(s)
- Harpreet Pal Singh: Department of Mechanical Engineering, Punjabi University, Patiala, India
- Parlad Kumar: Department of Mechanical Engineering, Punjabi University, Patiala, India
25
Chen ZS, Pesaran B. Improving scalability in systems neuroscience. Neuron 2021;109:1776-1790. [PMID: 33831347] [PMCID: PMC8178195] [DOI: 10.1016/j.neuron.2021.03.025]
Abstract
Emerging technologies that acquire data at increasingly greater scales promise to transform discovery in systems neuroscience. However, the current exponential growth in the scale of data acquisition is a double-edged sword: scaling up data acquisition can speed up the cycle of discovery, but it can also lead to misinterpreted results, or even slow the cycle down, because of the challenges presented by the curse of high-dimensional data. Active, adaptive, closed-loop experimental paradigms use hardware and algorithms optimized for time-critical computation to provide feedback that interprets the observations and tests hypotheses, actively updating the stimulus or stimulation parameters. In this perspective, we review important concepts of active and adaptive experiments and discuss how selectively constraining dimensionality and optimizing strategies at different stages of the discovery loop can help mitigate the curse of high-dimensional data. Active and adaptive closed-loop experimental paradigms can speed up discovery despite exponentially increasing data scales, offering a road map to timely and iterative hypothesis revision and discovery in an era of exponential growth in neuroscience.
Affiliation(s)
- Zhe Sage Chen: Department of Psychiatry and Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY 10016, USA; Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA
- Bijan Pesaran: Neuroscience Institute, NYU School of Medicine, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA; Department of Neurology, New York University School of Medicine, New York, NY 10016, USA
26
Hennig JA, Oby ER, Golub MD, Bahureksa LA, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Chase SM, Yu BM. Learning is shaped by abrupt changes in neural engagement. Nat Neurosci 2021;24:727-736. [PMID: 33782622] [DOI: 10.1038/s41593-021-00822-8]
Abstract
Internal states such as arousal, attention and motivation modulate brain-wide neural activity, but how these processes interact with learning is not well understood. During learning, the brain modifies its neural activity to improve behavior. How do internal states affect this process? Using a brain-computer interface learning paradigm in monkeys, we identified large, abrupt fluctuations in neural population activity in motor cortex indicative of arousal-like internal state changes, which we term 'neural engagement.' In a brain-computer interface, the causal relationship between neural activity and behavior is known, allowing us to understand how neural engagement impacted behavioral performance for different task goals. We observed stereotyped changes in neural engagement that occurred regardless of how they impacted performance. This allowed us to predict how quickly different task goals were learned. These results suggest that changes in internal states, even those seemingly unrelated to goal-seeking behavior, can systematically influence how behavior improves with learning.
Affiliation(s)
- Jay A Hennig: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neurobiology and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew D Golub: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lindsay A Bahureksa: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Patrick T Sadtler: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Kristin M Quick: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Stephen I Ryu: Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, USA
- Elizabeth C Tyler-Kabara: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Physical Medicine and Rehabilitation and Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX, USA
- Aaron P Batista: Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Steven M Chase: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Byron M Yu: Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering and Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
27
Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021;18. [PMID: 33254159] [DOI: 10.1088/1741-2552/abcefd]
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary, for example due to learning, plasticity, or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult due to the state being latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy.
Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM.
Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities.
Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America. These authors contributed equally to this work.
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America. These authors contributed equally to this work.
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America. Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America.
28
Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700 PMCID: PMC7864452 DOI: 10.1371/journal.pcbi.1008621] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
29
Peixoto D, Verhein JR, Kiani R, Kao JC, Nuyujukian P, Chandrasekaran C, Brown J, Fong S, Ryu SI, Shenoy KV, Newsome WT. Decoding and perturbing decision states in real time. Nature 2021; 591:604-609. [PMID: 33473215 DOI: 10.1038/s41586-020-03181-9] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Accepted: 12/09/2020] [Indexed: 01/01/2023]
Abstract
In dynamic environments, subjects often integrate multiple samples of a signal and combine them to reach a categorical judgment [1]. The process of deliberation can be described by a time-varying decision variable (DV), decoded from neural population activity, that predicts a subject's upcoming decision [2]. Within single trials, however, there are large moment-to-moment fluctuations in the DV, the behavioural significance of which is unclear. Here, using real-time, neural feedback control of stimulus duration, we show that within-trial DV fluctuations, decoded from motor cortex, are tightly linked to decision state in macaques, predicting behavioural choices substantially better than the condition-averaged DV or the visual stimulus alone. Furthermore, robust changes in DV sign have the statistical regularities expected from behavioural studies of changes of mind [3]. Probing the decision process on single trials with weak stimulus pulses, we find evidence for time-varying absorbing decision bounds, enabling us to distinguish between specific models of decision making.
Affiliation(s)
- Diogo Peixoto
- Neurobiology Department, Stanford University, Stanford, CA, USA. Champalimaud Neuroscience Programme, Lisbon, Portugal. Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA.
- Jessica R Verhein
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Neurosciences Graduate Program, Stanford University, Stanford, CA, USA. Medical Scientist Training Program, Stanford University School of Medicine, Stanford, CA, USA.
- Roozbeh Kiani
- Center for Neural Science, New York University, New York, NY, USA
- Jonathan C Kao
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Electrical Engineering Department, Stanford University, Stanford, CA, USA. Department of Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, USA. Neurosciences Program, University of California, Los Angeles, Los Angeles, CA, USA
- Paul Nuyujukian
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Electrical Engineering Department, Stanford University, Stanford, CA, USA. Bioengineering Department, Stanford University, Stanford, CA, USA. Neurosurgery Department, Stanford University, Stanford, CA, USA. Bio-X Institute, Stanford University, Stanford, CA, USA
- Chandramouli Chandrasekaran
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Electrical Engineering Department, Stanford University, Stanford, CA, USA. Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA. Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA. Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, USA
- Julian Brown
- Neurobiology Department, Stanford University, Stanford, CA, USA. Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Sania Fong
- Neurobiology Department, Stanford University, Stanford, CA, USA. Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Stephen I Ryu
- Electrical Engineering Department, Stanford University, Stanford, CA, USA. Neurosurgery Department, Palo Alto Medical Foundation, Palo Alto, CA, USA
- Krishna V Shenoy
- Neurobiology Department, Stanford University, Stanford, CA, USA. Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Electrical Engineering Department, Stanford University, Stanford, CA, USA. Bioengineering Department, Stanford University, Stanford, CA, USA. Bio-X Institute, Stanford University, Stanford, CA, USA. Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
- William T Newsome
- Neurobiology Department, Stanford University, Stanford, CA, USA. Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA. Bio-X Institute, Stanford University, Stanford, CA, USA.
30
Gonzalez-Astudillo J, Cattai T, Bassignana G, Corsi MC, De Vico Fallani F. Network-based brain computer interfaces: principles and applications. J Neural Eng 2020; 18. [PMID: 33147577 DOI: 10.1088/1741-2552/abc760] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Accepted: 11/04/2020] [Indexed: 12/17/2022]
Abstract
Brain-computer interfaces (BCIs) make it possible to interact with the external environment by decoding the mental intention of individuals. BCIs can therefore be used to address basic neuroscience questions but also to unlock a variety of applications, from exoskeleton control to neurofeedback (NFB) rehabilitation. In general, BCI usability critically depends on the ability to comprehensively characterize brain functioning and correctly identify the user's mental state. To this end, much of the effort has focused on improving classification algorithms that take localized brain activities as input features. Despite considerable improvement, BCI performance is still unstable and, as a matter of fact, current features represent oversimplified descriptors of brain functioning. In the last decade, growing evidence has shown that the brain works as a networked system composed of multiple specialized and spatially distributed areas that dynamically integrate information. While more complex, looking at how remote brain regions functionally interact represents a grounded alternative to better describe brain functioning. Thanks to recent advances in network science, i.e. a modern field that draws on graph theory, statistical mechanics, data mining and inferential modelling, scientists now have powerful means to characterize complex brain networks derived from neuroimaging data. Notably, summary features can be extracted from these networks to quantitatively measure specific organizational properties across a variety of topological scales. In this topical review, we aim to provide the state of the art supporting the development of a network-theoretic approach as a promising tool for understanding BCIs and improving usability.
31
Zheng Q, Zhang Y, Wan Z, Malik WQ, Chen W, Zhang S. Orthogonalizing the Activity of Two Neural Units for 2D Cursor Movement Control. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:3046-3049. [PMID: 33018647 DOI: 10.1109/embc44109.2020.9175931] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In the design of brain-machine interfaces (BMIs), because the number of electrodes that can reliably collect neural spike signals slowly declines over time, it is important to be able to decode with fewer units. We trained a monkey to control a cursor to perform a two-dimensional (2D) center-out task smoothly with spiking activities from only two units (direct units). At the same time, we studied how the direct units changed their preferred-direction tuning during BMI training and explored the underlying mechanism of how the monkey learned to control the cursor with its neural signals. In this study, we observed that both direct units slowly changed their preferred directions during BMI learning. Although the initial angles between the preferred directions of the 3 pairs of units differed, the angle between their preferred directions approached 90 degrees by the end of training. Our results imply that BMI learning made the two units independent of each other. To our knowledge, this is the first demonstration that only two units can be used to control 2D cursor movements. Moreover, the orthogonalization of the two units' activities driven by BMI learning implies that the plasticity of the motor cortex can provide an efficient strategy for motor control.
32
A comprehensive assessment of Brain Computer Interfaces: Recent trends and challenges. J Neurosci Methods 2020; 346:108918. [PMID: 32853592 DOI: 10.1016/j.jneumeth.2020.108918] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 07/15/2020] [Accepted: 08/19/2020] [Indexed: 12/23/2022]
Abstract
BACKGROUND An uninterrupted channel of communication and control between the human brain and electronic processing units has led to increased use of Brain Computer Interfaces (BCIs). This article presents an all-encompassing review of BCI and the scientific advancements associated with it. The ultimate goal of this review is to provide a general overview of BCI technology and to shed light on different aspects of BCIs. This review also underscores the applications, practical challenges and opportunities associated with BCI technology, which can be used to accelerate future developments in this field. METHODS This review is based on a systematic literature search for relevant research articles and proceedings. Using a methodical search strategy, the search was carried out across major technical databases. The retrieved records were screened for relevance, and a total of 369 studies were included in this review based on the inclusion criteria. RESULTS This review describes the present scenario and recent advancements in BCI technology. It also identifies several application areas of BCI technology. This comprehensive review provides evidence that, while we are getting ever closer, significant challenges still exist for the development of BCIs that can seamlessly integrate with the user's biological system. CONCLUSION The findings of this review confirm the importance of BCI technology in various applications. It is concluded that BCI technology, still in its sprouting phase, requires significant exploration for further development.
33
Functional Electrical Stimulation Controlled by Motor Imagery Brain-Computer Interface for Rehabilitation. Brain Sci 2020; 10:brainsci10080512. [PMID: 32748888 PMCID: PMC7465702 DOI: 10.3390/brainsci10080512] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 07/23/2020] [Accepted: 07/31/2020] [Indexed: 11/17/2022] Open
Abstract
Sensorimotor rhythm (SMR)-based brain–computer interface (BCI) controlled Functional Electrical Stimulation (FES) has gained importance in recent years for the rehabilitation of motor deficits. However, there still remain many research questions to be addressed, such as unstructured Motor Imagery (MI) training procedures; a lack of methods to classify different MI tasks in a single hand, such as grasping and opening; and difficulty in decoding voluntary MI-evoked SMRs compared to FES-driven passive-movement-evoked SMRs. To address these issues, a study that is composed of two phases was conducted to develop and validate an SMR-based BCI-FES system with 2-class MI tasks in a single hand (Phase 1), and investigate the feasibility of the system with stroke and traumatic brain injury (TBI) patients (Phase 2). The results of Phase 1 showed that the accuracy of classifying 2-class MIs (approximately 71.25%) was significantly higher than the true chance level, while that of distinguishing voluntary and passive SMRs was not. In Phase 2, where the patients performed goal-oriented tasks in a semi-asynchronous mode, the effects of the FES existence type and adaptive learning on task performance were evaluated. The results showed that adaptive learning significantly increased the accuracy, and the accuracy after applying adaptive learning under the No-FES condition (61.9%) was significantly higher than the true chance level. The outcomes of the present research would provide insight into SMR-based BCI-controlled FES systems that can connect those with motor disabilities (e.g., stroke and TBI patients) to other people by greatly improving their quality of life. Recommendations for future work with a larger sample size and kinesthetic MI were also presented.
34
Lansdell B, Milovanovic I, Mellema C, Fetz EE, Fairhall AL, Moritz CT. Reconfiguring Motor Circuits for a Joint Manual and BCI Task. IEEE Trans Neural Syst Rehabil Eng 2020; 28:248-257. [PMID: 31567096 PMCID: PMC7117797 DOI: 10.1109/tnsre.2019.2944347] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Designing brain-computer interfaces (BCIs) that can be used in conjunction with ongoing motor behavior requires an understanding of how neural activity co-opted for brain control interacts with existing neural circuits. For example, BCIs may be used to regain lost motor function after stroke. This requires that neural activity controlling unaffected limbs be dissociated from activity controlling the BCI. In this study we investigated how primary motor cortex accomplishes simultaneous BCI control and motor control in a task that explicitly required both activities to be driven from the same brain region (i.e. a dual-control task). Single-unit activity was recorded from intracortical, multi-electrode arrays while a non-human primate performed this dual-control task. Compared to activity observed during naturalistic motor control, we found that both units used to drive the BCI directly (control units) and units that did not directly control the BCI (non-control units) significantly changed their tuning to wrist torque. Using a measure of effective connectivity, we observed that control units decreased their connectivity. Through an analysis of variance we found that the intrinsic variability of the control units has a significant effect on task proficiency. When this variance is accounted for, motor cortical activity is flexible enough to perform novel BCI tasks that require active decoupling of natural associations to wrist motion. This study provides insight into the neural activity that enables a dual-control brain-computer interface.
35
Shanechi MM. Brain–machine interfaces from motor to mood. Nat Neurosci 2019; 22:1554-1564. [DOI: 10.1038/s41593-019-0488-y] [Citation(s) in RCA: 82] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Accepted: 08/06/2019] [Indexed: 12/22/2022]
36
Learning active sensing strategies using a sensory brain-machine interface. Proc Natl Acad Sci U S A 2019; 116:17509-17514. [PMID: 31409713 DOI: 10.1073/pnas.1909953116] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Diverse organisms, from insects to humans, actively seek out sensory information that best informs goal-directed actions. Efficient active sensing requires congruity between sensor properties and motor strategies, as typically honed through evolution. However, it has been difficult to study whether active sensing strategies are also modified with experience. Here, we used a sensory brain-machine interface paradigm, permitting both free behavior and experimental manipulation of sensory feedback, to study learning of active sensing strategies. Rats performed a searching task in a water maze in which the only task-relevant sensory feedback was provided by intracortical microstimulation (ICMS) encoding egocentric bearing to the hidden goal location. The rats learned to use the artificial goal direction sense to find the platform with the same proficiency as natural vision. Manipulation of the acuity of the ICMS feedback revealed distinct search strategy adaptations. Using an optimization model, the different strategies were found to minimize the effort required to extract the most salient task-relevant information. The results demonstrate that animals can adjust motor strategies to match novel sensor properties for efficient goal-directed behavior.
37
Oby ER, Golub MD, Hennig JA, Degenhart AD, Tyler-Kabara EC, Yu BM, Chase SM, Batista AP. New neural activity patterns emerge with long-term learning. Proc Natl Acad Sci U S A 2019; 116:15210-15215. [PMID: 31182595 PMCID: PMC6660765 DOI: 10.1073/pnas.1820296116] [Citation(s) in RCA: 95] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Learning has been associated with changes in the brain at every level of organization. However, it remains difficult to establish a causal link between specific changes in the brain and new behavioral abilities. We establish that new neural activity patterns emerge with learning. We demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation of new patterns of neural population activity can underlie the learning of new skills.
Affiliation(s)
- Emily R Oby
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Matthew D Golub
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Jay A Hennig
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213
- Alan D Degenhart
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
- Elizabeth C Tyler-Kabara
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213
- McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA 15213
- Byron M Yu
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Steven M Chase
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Aaron P Batista
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
38
Slutzky MW. Brain-Machine Interfaces: Powerful Tools for Clinical Treatment and Neuroscientific Investigations. Neuroscientist 2019; 25:139-154. [PMID: 29772957 PMCID: PMC6611552 DOI: 10.1177/1073858418775355] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Brain-machine interfaces (BMIs) have exploded in popularity in the past decade. BMIs, also called brain-computer interfaces, provide a direct link between the brain and a computer, usually to control an external device. BMIs have a wide array of potential clinical applications, ranging from restoring communication to people unable to speak due to amyotrophic lateral sclerosis or a stroke, to restoring movement to people with paralysis from spinal cord injury or motor neuron disease, to restoring memory to people with cognitive impairment. Because BMIs are controlled directly by the activity of prespecified neurons or cortical areas, they also provide a powerful paradigm with which to investigate fundamental questions about brain physiology, including neuronal behavior, learning, and the role of oscillations. This article reviews the clinical and neuroscientific applications of BMIs, with a primary focus on motor BMIs.
Affiliation(s)
- Marc W Slutzky
- Departments of Neurology, Physiology, and Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, USA
39
40
Intrinsic Variable Learning for Brain-Machine Interface Control by Human Anterior Intraparietal Cortex. Neuron 2019; 102:694-705.e3. [PMID: 30853300 DOI: 10.1016/j.neuron.2019.02.012] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Revised: 11/05/2018] [Accepted: 02/06/2019] [Indexed: 11/22/2022]
Abstract
Although animal studies provided significant insights in understanding the neural basis of learning and adaptation, they often cannot dissociate between different learning mechanisms due to the lack of verbal communication. To overcome this limitation, we examined the mechanisms of learning and its limits in a human intracortical brain-machine interface (BMI) paradigm. A tetraplegic participant controlled a 2D computer cursor by modulating single-neuron activity in the anterior intraparietal area (AIP). By perturbing the neuron-to-movement mapping, the participant learned to modulate the activity of the recorded neurons to solve the perturbations by adopting a target re-aiming strategy. However, when no cognitive strategies were adequate to produce correct responses, AIP failed to adapt to perturbations. These findings suggest that learning is constrained by the pre-existing neuronal structure, although it is possible that AIP needs more training time to learn to generate novel activity patterns when cognitive re-adaptation fails to solve the perturbations.
41
Zhou X, Tien RN, Ravikumar S, Chase SM. Distinct types of neural reorganization during long-term learning. J Neurophysiol 2019; 121:1329-1341. [PMID: 30726164 DOI: 10.1152/jn.00466.2018] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
What are the neural mechanisms of skill acquisition? Many studies find that long-term practice is associated with a functional reorganization of cortical neural activity. However, the link between these changes in neural activity and the behavioral improvements that occur is not well understood, especially for long-term learning that takes place over several weeks. To probe this link in detail, we leveraged a brain-computer interface (BCI) paradigm in which rhesus monkeys learned to master nonintuitive mappings between neural spiking in primary motor cortex and computer cursor movement. Critically, these BCI mappings were designed to disambiguate several different possible types of neural reorganization. We found that during the initial phase of learning, lasting minutes to hours, rapid changes in neural activity common to all neurons led to a fast suppression of motor error. In parallel, local changes to individual neurons gradually accrued over several weeks of training. This slower timescale cortical reorganization persisted long after the movement errors had decreased to asymptote and was associated with more efficient control of movement. We conclude that long-term practice evokes two distinct neural reorganization processes with vastly different timescales, leading to different aspects of improvement in motor behavior. NEW & NOTEWORTHY We leveraged a brain-computer interface learning paradigm to track the neural reorganization occurring throughout the full time course of motor skill learning lasting several weeks. We report on two distinct types of neural reorganization that mirror distinct phases of behavioral improvement: a fast phase, in which global reorganization of neural recruitment leads to a quick suppression of motor error, and a slow phase, in which local changes in individual tuning lead to improvements in movement efficiency.
Affiliation(s)
- Xiao Zhou
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania. Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Rex N Tien
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania. Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Sadhana Ravikumar
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Steven M Chase
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania. Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania
42
Zou Y, Zhao X, Chu Y, Zhao Y, Xu W, Han J. An inter-subject model to reduce the calibration time for motion imagination-based brain-computer interface. Med Biol Eng Comput 2018; 57:939-952. [PMID: 30498878 DOI: 10.1007/s11517-018-1917-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2017] [Accepted: 10/19/2018] [Indexed: 11/28/2022]
Abstract
A major factor blocking the practical application of brain-computer interfaces (BCIs) is the long calibration time. To obtain enough training trials, participants must spend a long time in the calibration stage. In this paper, we propose a new framework to reduce the calibration time through knowledge transferred from the electroencephalogram (EEG) of other subjects. We trained the motor recognition model for the target subject using both the target's EEG signal and the EEG signals of other subjects. To reduce the individual variation of different datasets, we proposed two data mapping methods. These two methods separately diminished the variation caused by dissimilarities in the brain activation region and in the strength of the brain activation in different subjects. After these data mapping stages, we adopted an ensemble method to aggregate the EEG signals from all subjects into a final model. We compared our method with other methods that reduce the calibration time. The results showed that our method achieves a satisfactory recognition accuracy using very few training trials (32 samples). Compared with existing methods using few training trials, our method achieved much greater accuracy. Graphical abstract: The framework of the proposed method. The workflow of the framework has three steps: (1) process each subject's EEG signals according to the target subject's EEG signal; (2) generate models from each subject's processed signals; (3) ensemble these models into a final model, which serves as the model for the target subject.
Affiliation(s)
- Yijun Zou
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Xingang Zhao
- Key Laboratory of Networked Control System, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
- Yaqi Chu
- University of Chinese Academy of Sciences, Beijing, 100049, China; Key Laboratory of Networked Control System, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
- Yiwen Zhao
- Key Laboratory of Networked Control System, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
- Weiliang Xu
- Department of Mechanical Engineering, University of Auckland, Auckland, New Zealand
- Jianda Han
- Key Laboratory of Networked Control System, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110016, China
43
Hennig JA, Golub MD, Lund PJ, Sadtler PT, Oby ER, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Constraints on neural redundancy. eLife 2018; 7:36774. [PMID: 30109848 PMCID: PMC6130976 DOI: 10.7554/elife.36774] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2018] [Accepted: 08/06/2018] [Indexed: 12/24/2022] Open
Abstract
Millions of neurons drive the activity of hundreds of muscles, meaning many different neural population activity patterns could generate the same movement. Studies have suggested that these redundant (i.e. behaviorally equivalent) activity patterns may be beneficial for neural computation. However, it is unknown what constraints may limit the selection of different redundant activity patterns. We leveraged a brain-computer interface, allowing us to define precisely which neural activity patterns were redundant. Rhesus monkeys made cursor movements by modulating neural activity in primary motor cortex. We attempted to predict the observed distribution of redundant neural activity. Principles inspired by work on muscular redundancy did not accurately predict these distributions. Surprisingly, the distributions of redundant neural activity and task-relevant activity were coupled, which enabled accurate predictions of the distributions of redundant activity. This suggests limits on the extent to which redundancy may be exploited by the brain for computation. When you swing a tennis racket, muscles in your arm contract in a specific sequence. For this to happen, millions of neurons in your brain and spinal cord must fire to make those muscles contract. If you swing the racket a second time, the same muscles in your arm will contract again. But the firing pattern of the underlying neurons will probably be different. This phenomenon, in which different patterns of neural activity generate the same outcome, is called neural redundancy. Neural redundancy allows a set of neurons to perform multiple tasks at once. For example, the same neurons may drive an arm movement while simultaneously planning the next activity. But does performing a given task constrain how often different patterns of neural activity can be produced? If so, this would limit whether other tasks could be carried out at the same time. To address this, Hennig et al. 
trained macaque monkeys to use a brain-computer interface (BCI). This is a device that reads out electrical brain activity and converts it into signals that can be used to control another device. The key advantage of a BCI is that the redundant activity patterns are precisely known. The monkeys learned to use their brain activity, via the BCI, to move a cursor on a computer screen in different directions. The results revealed that monkeys could only produce a limited number of different patterns of brain activity for a given BCI cursor movement. This suggests that the ability of a group of neurons to multitask is restricted. For example, if the same set of neurons is involved in both planning and performing movements, then an animal’s ability to plan a future movement will depend on the one it is currently performing. BCIs can help patients who have suffered stroke or paralysis. They enable patients to use their brain activity to control a computer or even robotic limbs. Understanding how the brain controls BCIs will help us improve their performance and deepen our knowledge of how the brain plans and performs movements. This might include designing BCIs that allow users to multitask more effectively.
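The claim that a BCI makes redundancy "precisely known" can be made concrete with a toy linear decoder: any activity component lying in the decoder's null space changes the neural pattern without changing the cursor output. The decoder matrix and dimensions below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear BCI decoder: 10 recorded units drive a 2-D cursor velocity.
W = rng.normal(0, 1, (2, 10))

# The redundant (behaviorally equivalent) directions are the null space of W:
# the right singular vectors with zero singular value.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[2:]          # rows spanning the 8-D null space of W

activity = rng.normal(0, 1, 10)
redundant_shift = null_basis.T @ rng.normal(0, 1, 8)
alternative = activity + redundant_shift

v1 = W @ activity
v2 = W @ alternative
print(np.allclose(v1, v2))  # -> True: same cursor velocity, different pattern
```

The study's question is then which of these behaviorally equivalent patterns the animal actually produces, and with what distribution.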
Affiliation(s)
- Jay A Hennig
- Program in Neural Computation, Carnegie Mellon University, Pittsburgh, United States; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States
- Matthew D Golub
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States
- Peter J Lund
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Machine Learning Department, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Emily R Oby
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Kristin M Quick
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Stephen I Ryu
- Department of Neurosurgery, Palo Alto Medical Foundation, California, United States; Department of Electrical Engineering, Stanford University, California, United States
- Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, United States; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, United States
- Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Byron M Yu
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
- Steven M Chase
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, United States; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, United States
44
Bedell HW, Capadona JR. Anti-inflammatory Approaches to Mitigate the Neuroinflammatory Response to Brain-Dwelling Intracortical Microelectrodes. JOURNAL OF IMMUNOLOGICAL SCIENCES 2018; 2:15-21. [PMID: 30854523 PMCID: PMC6404754 DOI: 10.29245/2578-3009/2018/4.1157] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Intracortical microelectrodes are used both in basic research, to increase our understanding of the nervous system, and for rehabilitation through brain-computer interfaces. Yet challenges prevent the widespread clinical use of this technology, chief among them the neuroinflammatory response to intracortical microelectrodes. This mini-review details immunomodulatory strategies employed to decrease the inflammatory response to these devices. Over time, broad-spectrum anti-inflammatory approaches, such as dexamethasone and minocycline, have evolved into more targeted treatments as the underlying biology of the neuroinflammation has been elucidated. This review also presents studies that examine novel prospective targets for future immunomodulatory intervention.
Affiliation(s)
- Hillary W. Bedell
- Department of Biomedical Engineering, Case Western Reserve University, School of Engineering, 2071 MLK Jr. Drive, Wickenden Bldg, Cleveland OH 44106, USA
- Advanced Platform Technology Center, L. Stokes Cleveland VA Medical Center, Rehab. R&D, 10701 East Blvd. Mail Stop 151 AW/APT, Cleveland OH 44106, USA
- Jeffrey R. Capadona
- Department of Biomedical Engineering, Case Western Reserve University, School of Engineering, 2071 MLK Jr. Drive, Wickenden Bldg, Cleveland OH 44106, USA
- Advanced Platform Technology Center, L. Stokes Cleveland VA Medical Center, Rehab. R&D, 10701 East Blvd. Mail Stop 151 AW/APT, Cleveland OH 44106, USA
45
Abstract
Understanding how cognitive processes affect the responses of sensory neurons may clarify the relationship between neuronal population activity and behavior. However, tools for analyzing neuronal activity have not kept up with technological advances in recording from large neuronal populations. Here, we describe prevalent hypotheses of how cognitive processes affect sensory neurons, driven largely by a model based on the activity of single neurons or pools of neurons as the units of computation. We then use simple simulations to expand this model to a new conceptual framework that focuses on subspaces of population activity as the relevant units of computation, uses comparisons between brain areas or to behavior to guide analyses of these subspaces, and suggests that population activity is optimized to decode the large variety of stimuli and tasks that animals encounter in natural behavior. This framework provides new ways of understanding the ever-growing quantity of recorded population activity data.
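One way to make "subspaces of population activity as the unit of computation" concrete is to extract a principal subspace from simulated activity of the same population in two sessions and compare the subspaces via principal angles. The toy below assumes, purely for illustration, three latent signals driving 30 units; none of these numbers come from the review.

```python
import numpy as np

rng = np.random.default_rng(3)

def top_subspace(X, k):
    # Orthonormal basis for the top-k principal subspace of a
    # trials-by-units activity matrix.
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T                      # (units, k)

mix = rng.normal(0, 1, (3, 30))          # 3 latent signals drive 30 units
A = rng.normal(0, 1, (500, 3)) @ mix + 0.1 * rng.normal(0, 1, (500, 30))
B = rng.normal(0, 1, (500, 3)) @ mix + 0.1 * rng.normal(0, 1, (500, 30))

Qa, Qb = top_subspace(A, 3), top_subspace(B, 3)

# Cosines of the principal angles between the two 3-D subspaces: values
# near 1 mean both sessions occupy the same subspace, even though no
# single trial is shared between them.
cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
print(cosines)
```

The same machinery supports the comparisons the framework proposes: subspaces estimated in different task conditions, or aligned against behavior, rather than single-neuron tuning curves.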
Affiliation(s)
- Douglas A Ruff
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
- Amy M Ni
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
- Marlene R Cohen
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15213, USA
46
Ekanayake J, Hutton C, Ridgway G, Scharnowski F, Weiskopf N, Rees G. Real-time decoding of covert attention in higher-order visual areas. Neuroimage 2018; 169:462-472. [PMID: 29247807 PMCID: PMC5864512 DOI: 10.1016/j.neuroimage.2017.12.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2017] [Revised: 12/06/2017] [Accepted: 12/09/2017] [Indexed: 12/21/2022] Open
Abstract
Brain-computer interfaces (BCIs) provide a means of using human brain activations to control devices for communication. Until now this has only been demonstrated in primary motor and sensory brain regions, using surgical implants or non-invasive neuroimaging techniques. Here, we provide proof of principle for the use of higher-order brain regions involved in complex cognitive processes such as attention. Using real-time fMRI, we implemented an online 'winner-takes-all' approach with quadrant-specific parameter estimates to achieve single-block classification of brain activations, linked to the covert allocation of attention to real-world images presented at four quadrant locations. Accuracies in three target regions were significantly above chance, with individual decoding accuracies reaching up to 70%. By utilising higher-order mental processes, 'cognitive BCIs' access varied and therefore more versatile information, potentially providing a platform for communication in patients who are unable to speak or move due to brain injury.
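A winner-takes-all read-out over quadrant-specific parameter estimates reduces to an argmax: within a region of interest, the quadrant whose regressor has the largest estimate for the block is the decoded locus of attention. The beta values and labels below are made up for illustration.

```python
# Hypothetical single-block GLM parameter estimates (betas), one per
# quadrant regressor, for one region of interest. Values are invented.
betas = {"upper_left": 0.42, "upper_right": 1.37,
         "lower_left": 0.11, "lower_right": 0.58}

# Winner-takes-all: the decoded quadrant is the regressor with the
# largest parameter estimate.
decoded = max(betas, key=betas.get)
print(decoded)  # -> upper_right
```

In the actual online setting this comparison would be repeated per block and per target region, which is what yields a per-region decoding accuracy.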
Affiliation(s)
- Jinendra Ekanayake
- Wellcome Trust Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom; Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Chloe Hutton
- Siemens Molecular Imaging, Oxford, United Kingdom
- Frank Scharnowski
- Psychiatric University Hospital, University of Zürich, Lenggstrasse 31, 8032 Zürich, Switzerland; Neuroscience Center Zürich, University of Zürich and Swiss Federal Institute of Technology, Winterthurerstr. 190, 8057 Zürich, Switzerland; Zürich Center for Integrative Human Physiology (ZIHP), University of Zürich, Winterthurerstr. 190, 8057 Zürich, Switzerland
- Nikolaus Weiskopf
- Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Geraint Rees
- Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom; Institute of Cognitive Neuroscience, University College London, London, United Kingdom
47
Golub MD, Sadtler PT, Oby ER, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Chase SM, Yu BM. Learning by neural reassociation. Nat Neurosci 2018. [PMID: 29531364 PMCID: PMC5876156 DOI: 10.1038/s41593-018-0095-3] [Citation(s) in RCA: 121] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Behavior is driven by coordinated activity across a population of neurons. Learning requires the brain to change the neural population activity produced to achieve a given behavioral goal. How does population activity reorganize during learning? We studied intracortical population activity in the primary motor cortex of rhesus macaques during short-term learning in a brain-computer interface (BCI) task. In a BCI, the mapping between neural activity and behavior is exactly known, enabling us to rigorously define hypotheses about neural reorganization during learning. We found that changes in population activity followed a suboptimal neural strategy of Reassociation: animals relied on a fixed repertoire of activity patterns and associated those patterns with different movements after learning. These results indicate that the activity patterns that a neural population can generate are even more constrained than previously thought and might explain why it is often difficult to quickly learn to a high level of proficiency.
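The Reassociation strategy can be caricatured in a few lines: keep a fixed repertoire of activity patterns and, when the decoder changes, re-assign existing patterns to targets rather than synthesizing new patterns. All sizes, mappings, and the nearest-output selection rule below are illustrative stand-ins, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

repertoire = rng.normal(0, 1, (8, 20))   # 8 fixed patterns, 20 units
W_old = rng.normal(0, 1, (2, 20))        # original BCI decoder
W_new = rng.normal(0, 1, (2, 20))        # perturbed BCI decoder
targets = rng.normal(0, 1, (4, 2))       # 4 desired cursor velocities

def best_pattern(W, target):
    # Reassociation caricature: pick the existing repertoire pattern
    # whose decoded output is closest to the target. No new pattern
    # is ever created.
    errs = np.linalg.norm(repertoire @ W.T - target, axis=1)
    return int(np.argmin(errs))

before = [best_pattern(W_old, t) for t in targets]
after = [best_pattern(W_new, t) for t in targets]
print(before, after)   # same repertoire, generally different assignments
```

The contrast the paper draws is with strategies that would instead generate genuinely new patterns outside the pre-learning repertoire.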
Affiliation(s)
- Matthew D Golub
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Kristin M Quick
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, USA
- Elizabeth C Tyler-Kabara
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Systems Neuroscience Institute, University of Pittsburgh, Pittsburgh, PA, USA
- Steven M Chase
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
48
Blumberg MS, Dooley JC. Phantom Limbs, Neuroprosthetics, and the Developmental Origins of Embodiment. Trends Neurosci 2018; 40:603-612. [PMID: 28843655 DOI: 10.1016/j.tins.2017.07.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Revised: 07/12/2017] [Accepted: 07/25/2017] [Indexed: 01/11/2023]
Abstract
Amputees who wish to rid themselves of a phantom limb must weaken the neural representation of the absent limb. Conversely, amputees who wish to replace a lost limb must assimilate a neuroprosthetic with the existing neural representation. Whether we wish to remove a phantom limb or assimilate a synthetic one, we will benefit from knowing more about the developmental process that enables embodiment. A potentially critical contributor to that process is the spontaneous activity - in the form of limb twitches - that occurs exclusively and abundantly during active (REM) sleep, a particularly prominent state in early development. The sensorimotor circuits activated by twitching limbs, and the developmental context in which activation occurs, could provide a roadmap for creating neuroprosthetics that feel as if they are part of the body.
Affiliation(s)
- Mark S Blumberg
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa 52242, USA; Department of Biology, University of Iowa, Iowa City, Iowa 52242, USA; DeLTA Center, University of Iowa, Iowa City, Iowa 52242, USA; Iowa Neuroscience Institute, University of Iowa, Iowa City, Iowa 52242, USA
- James C Dooley
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa 52242, USA; DeLTA Center, University of Iowa, Iowa City, Iowa 52242, USA
49
Orsborn AL, Pesaran B. Parsing learning in networks using brain-machine interfaces. Curr Opin Neurobiol 2017; 46:76-83. [PMID: 28843838 DOI: 10.1016/j.conb.2017.08.002] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 07/31/2017] [Accepted: 08/03/2017] [Indexed: 12/30/2022]
Abstract
Brain-machine interfaces (BMIs) define new ways to interact with our environment and hold great promise for clinical therapies. Motor BMIs, for instance, re-route neural activity to control movements of a new effector and could restore movement to people with paralysis. Increasing experience shows that interfacing with the brain inevitably changes the brain: BMIs engage and depend on a wide array of innate learning mechanisms to produce meaningful behavior. Although BMIs precisely define the information streams into and out of the brain, they engage widespread learning. We take a network perspective and review existing observations of learning in motor BMIs to show that BMIs engage multiple learning mechanisms distributed across neural networks. Recent studies demonstrate the advantages of BMIs for parsing this learning and its underlying neural mechanisms. BMIs therefore provide a powerful tool for studying the neural mechanisms of learning, one that highlights the critical role of learning in engineered neural therapies.
Affiliation(s)
- Amy L Orsborn
- Center for Neural Science, New York University, New York, NY 10003, USA
- Bijan Pesaran
- Center for Neural Science, New York University, New York, NY 10003, USA
50
Marjanovic N, Kerr K, Aranda R, Hickey R, Esmailbeigi H. Wearable wireless User Interface Cursor-Controller (UIC-C). ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:3852-3855. [PMID: 29060738 DOI: 10.1109/embc.2017.8037697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Controlling a computer or a smartphone's cursor gives the user access to a world full of information. For millions of people with limited upper-extremity motor function, controlling the cursor becomes profoundly difficult. Our team has developed the User Interface Cursor-Controller (UIC-C) to assist impaired individuals in regaining control over the cursor. The UIC-C is a hands-free device that utilizes the tongue muscle to control cursor movements. The entire device is housed inside a subject-specific retainer. The user maneuvers the cursor by manipulating a joystick embedded inside the retainer with the tongue; the joystick's movement commands are sent to an electronic device via a Bluetooth connection, and the device is readily recognized as a cursor controller by any Bluetooth-enabled electronic device. Testing showed that the time it takes a user to control the cursor accurately via the UIC-C is about three times longer than with a standard hand-controlled computer mouse. The device does not require any permanent modifications to the body and could therefore be used during acute rehabilitation of the hands. With the development of modern smart homes and computer-controlled electronics, the UIC-C could also be integrated into a system that enables individuals with permanent impairment to control the cursor. In conclusion, the UIC-C is designed to allow the user to accurately control a cursor during periods of either acute or permanent upper-extremity impairment.
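The joystick-to-cursor mapping such a device might use can be sketched as a centered, dead-zoned, gain-scaled read-out of the joystick's analog axes; the ADC range, dead zone, and gain below are invented for illustration, not taken from the paper.

```python
# Hypothetical per-axis mapping: raw 10-bit ADC readings are centered,
# a dead zone suppresses resting-tongue jitter, and the remainder is
# scaled into a per-update cursor delta sent over Bluetooth HID.
ADC_CENTER, DEAD_ZONE, GAIN = 512, 40, 0.05

def axis_to_delta(raw):
    offset = raw - ADC_CENTER
    if abs(offset) < DEAD_ZONE:
        return 0                 # inside the dead zone: no movement
    return int(offset * GAIN)    # scaled cursor delta in pixels

# e.g. a gentle push right with the tongue at rest vertically:
dx, dy = axis_to_delta(700), axis_to_delta(520)
print(dx, dy)  # -> 9 0
```

A dead zone of this kind is one plausible reason tongue control is slower but still accurate: small involuntary movements are filtered out at the cost of responsiveness.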