1
Yang SH, Huang CJ, Huang JS. Increasing Robustness of Intracortical Brain-Computer Interfaces for Recording Condition Changes via Data Augmentation. Comput Methods Programs Biomed 2024; 251:108208. [PMID: 38754326] [DOI: 10.1016/j.cmpb.2024.108208]
Abstract
BACKGROUND AND OBJECTIVE Intracortical brain-computer interfaces (iBCIs) aim to help paralyzed individuals restore their motor functions by decoding neural activity into intended movement. However, changes in neural recording conditions hinder the decoding performance of iBCIs, mainly because the neural-to-kinematic mappings shift. Conventional approaches involve either training the neural decoders using large datasets before deploying the iBCI or conducting frequent calibrations during its operation. However, collecting data for extended periods can cause user fatigue, negatively impacting the quality and consistency of neural signals. Furthermore, frequent calibration imposes a substantial computational load. METHODS This study proposes a novel approach to increase iBCIs' robustness against changing recording conditions. The approach uses three neural augmentation operators to generate augmented neural activity that mimics common recording conditions. Then, contrastive learning is used to learn latent factors by maximizing the similarity between the augmented neural activities. The learned factors are expected to remain stable despite varying recording conditions and maintain a consistent correlation with the intended movement. RESULTS Experimental results demonstrate that the proposed iBCI outperformed the state-of-the-art iBCIs and was robust to changing recording conditions across days for long-term use on one publicly available nonhuman primate dataset. It achieved satisfactory offline decoding performance, even when a large training dataset was unavailable. CONCLUSIONS This study paves the way for reducing the need for frequent calibration of iBCIs and collecting a large amount of annotated training data. Potential future works aim to improve offline decoding performance with an ultra-small training dataset and improve the iBCIs' robustness to severely disabled electrodes.
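The abstract does not specify the three augmentation operators, so the following is a minimal sketch of the general idea, assuming three operators commonly used for neural data (channel dropout, Gaussian jitter, and per-channel gain scaling) and using cosine similarity as the quantity a contrastive loss would maximize between augmented views of the same trial:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_dropout(rates, p=0.2):
    """Zero out a random subset of channels, mimicking electrode dropout."""
    mask = rng.random(rates.shape[1]) > p
    return rates * mask

def gaussian_jitter(rates, sigma=0.1):
    """Add noise, mimicking day-to-day variability in recorded rates."""
    return rates + rng.normal(0.0, sigma, rates.shape)

def gain_scale(rates, lo=0.8, hi=1.2):
    """Rescale each channel, mimicking impedance/gain drift."""
    return rates * rng.uniform(lo, hi, rates.shape[1])

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy "neural activity": 50 time bins x 96 channels of firing rates.
x = rng.poisson(5.0, size=(50, 96)).astype(float)
view1 = gaussian_jitter(channel_dropout(x))
view2 = gain_scale(x)

# A contrastive objective would push this similarity toward 1 for views of
# the same trial and toward 0 for views of different trials.
print(round(cosine_similarity(view1, view2), 3))
```

In the full method, an encoder network would map each augmented view to latent factors, and a contrastive objective (e.g., an InfoNCE-style loss) would maximize agreement between views of the same trial in that latent space rather than in raw rate space.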
Affiliation(s)
- Shih-Hung Yang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
- Chun-Jui Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
- Jhih-Siang Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
2
Menéndez JA, Hennig JA, Golub MD, Oby ER, Sadtler PT, Batista AP, Chase SM, Yu BM, Latham PE. A theory of brain-computer interface learning via low-dimensional control. bioRxiv 2024:2024.04.18.589952. [PMID: 38712193] [PMCID: PMC11071278] [DOI: 10.1101/2024.04.18.589952]
Abstract
A remarkable demonstration of the flexibility of mammalian motor systems is primates' ability to learn to control brain-computer interfaces (BCIs). This constitutes a completely novel motor behavior, yet primates are capable of learning to control BCIs under a wide range of conditions. BCIs with carefully calibrated decoders, for example, can be learned with only minutes to hours of practice. With a few weeks of practice, even BCIs with randomly constructed decoders can be learned. What are the biological substrates of this learning process? Here, we develop a theory based on a re-aiming strategy, whereby learning operates within a low-dimensional subspace of task-relevant inputs driving the local population of recorded neurons. Through comprehensive numerical and formal analysis, we demonstrate that this theory can provide a unifying explanation for disparate phenomena previously reported in three different BCI learning tasks, and we derive a novel experimental prediction that we verify with previously published data. By explicitly modeling the underlying neural circuitry, the theory reveals an interpretation of these phenomena in terms of biological constraints on neural activity.
3
Losey DM, Hennig JA, Oby ER, Golub MD, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Yu BM, Chase SM. Learning leaves a memory trace in motor cortex. Curr Biol 2024; 34:1519-1531.e4. [PMID: 38531360] [PMCID: PMC11097210] [DOI: 10.1016/j.cub.2024.03.003]
Abstract
How are we able to learn new behaviors without disrupting previously learned ones? To understand how the brain achieves this, we used a brain-computer interface (BCI) learning paradigm, which enables us to detect the presence of a memory of one behavior while performing another. We found that learning to use a new BCI map altered the neural activity that monkeys produced when they returned to using a familiar BCI map in a way that was specific to the learning experience. That is, learning left a "memory trace" in the primary motor cortex. This memory trace coexisted with proficient performance under the familiar map, primarily by altering neural activity in dimensions that did not impact behavior. Forming memory traces might be how the brain is able to provide for the joint learning of multiple behaviors without interference.
Affiliation(s)
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Matthew D Golub
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Kristin M Quick
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA 94301, USA
- Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX 78712, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. bioRxiv 2024:2024.01.03.573543. [PMID: 38260549] [PMCID: PMC10802336] [DOI: 10.1101/2024.01.03.573543]
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
5
Love K, Cao D, Chang JC, Dal'Bello LR, Ma X, O'Shea DJ, Schone HR, Shahbazi M, Smoulder A. Highlights from the 32nd Annual Meeting of the Society for the Neural Control of Movement. J Neurophysiol 2024; 131:75-87. [PMID: 38057264] [DOI: 10.1152/jn.00428.2023]
Affiliation(s)
- Kassia Love
- Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Di Cao
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Lucas R Dal'Bello
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States
- Daniel J O'Shea
- Department of Bioengineering, Stanford University, Stanford, California, United States
- Hunter R Schone
- Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Mahdiyar Shahbazi
- Western Institute for Neuroscience, Western University, London, Ontario, Canada
- Adam Smoulder
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States
6
Athalye VR, Khanna P, Gowda S, Orsborn AL, Costa RM, Carmena JM. Invariant neural dynamics drive commands to control different movements. Curr Biol 2023; 33:2962-2976.e15. [PMID: 37402376] [PMCID: PMC10527529] [DOI: 10.1016/j.cub.2023.06.027]
Abstract
It has been proposed that the nervous system has the capacity to generate a wide variety of movements because it reuses some invariant code. Previous work has identified that dynamics of neural population activity are similar during different movements, where dynamics refer to how the instantaneous spatial pattern of population activity changes in time. Here, we test whether invariant dynamics of neural populations are actually used to issue the commands that direct movement. Using a brain-machine interface (BMI) that transforms rhesus macaques' motor-cortex activity into commands for a neuroprosthetic cursor, we discovered that the same command is issued with different neural-activity patterns in different movements. However, these different patterns were predictable, as we found that the transitions between activity patterns are governed by the same dynamics across movements. These invariant dynamics are low dimensional, and critically, they align with the BMI, so that they predict the specific component of neural activity that actually issues the next command. We introduce a model of optimal feedback control (OFC) that shows that invariant dynamics can help transform movement feedback into commands, reducing the input that the neural population needs to control movement. Altogether our results demonstrate that invariant dynamics drive commands to control a variety of movements and show how feedback can be integrated with invariant dynamics to issue generalizable commands.
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Preeya Khanna
- Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Suraj Gowda
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA
- Amy L Orsborn
- Departments of Bioengineering and Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY 10027, USA
- Jose M Carmena
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; UC Berkeley-UCSF Joint Graduate Program in Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
7
Kuwabara T, Kohno H, Hatakeyama M, Kubo T. Evolutionary dynamics of mushroom body Kenyon cell types in hymenopteran brains from multifunctional type to functionally specialized types. Sci Adv 2023; 9:eadd4201. [PMID: 37146148] [PMCID: PMC10162674] [DOI: 10.1126/sciadv.add4201]
Abstract
Evolutionary dynamics of diversification of brain neuronal cell types that have underlain behavioral evolution remain largely unknown. Here, we compared transcriptomes and functions of Kenyon cell (KC) types that compose the mushroom bodies between the honey bee and sawfly, a primitive hymenopteran insect whose KCs likely have the ancestral properties. Transcriptome analyses show that the sawfly KC type shares some of the gene expression profile with each honey bee KC type, although unique gene expression profiles have also been acquired in each honey bee KC type. In addition, functional analysis of two sawfly genes suggested that the functions in learning and memory of the ancestral KC type were heterogeneously inherited among the KC types in the honey bee. Our findings strongly suggest that the functional evolution of KCs in Hymenoptera involved two previously hypothesized processes for evolution of cell function: functional segregation and divergence.
Affiliation(s)
- Takayoshi Kuwabara
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- Hiroki Kohno
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- Masatsugu Hatakeyama
- Division of Insect Advanced Technology, Institute of Agrobiological Sciences, NARO, Owashi, Tsukuba 305-8634, Japan
- Takeo Kubo
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
8
Busch EL, Huang J, Benz A, Wallenstein T, Lajoie G, Wolf G, Krishnaswamy S, Turk-Browne NB. Multi-view manifold learning of human brain-state trajectories. Nat Comput Sci 2023; 3:240-253. [PMID: 37693659] [PMCID: PMC10487346] [DOI: 10.1038/s43588-023-00419-0]
Abstract
The complexity of the human brain gives the illusion that brain activity is intrinsically high-dimensional. Nonlinear dimensionality-reduction methods such as uniform manifold approximation and t-distributed stochastic neighbor embedding have been used for high-throughput biomedical data. However, they have not been used extensively for brain activity data such as those from functional magnetic resonance imaging (fMRI), primarily due to their inability to maintain dynamic structure. Here we introduce a nonlinear manifold learning method for time-series data-including those from fMRI-called temporal potential of heat-diffusion for affinity-based transition embedding (T-PHATE). In addition to recovering a low-dimensional intrinsic manifold geometry from time-series data, T-PHATE exploits the data's autocorrelative structure to faithfully denoise and unveil dynamic trajectories. We empirically validate T-PHATE on three fMRI datasets, showing that it greatly improves data visualization, classification, and segmentation of the data relative to several other state-of-the-art dimensionality-reduction benchmarks. These improvements suggest many potential applications of T-PHATE to other high-dimensional datasets of temporally diffuse processes.
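T-PHATE itself is not reproduced here; the sketch below shows only the diffusion-embedding core that PHATE-family methods build on (Gaussian affinities between time points, a row-normalized Markov operator raised to a power, then a spectral embedding). T-PHATE additionally mixes in an autocorrelation-based affinity view, which is omitted; the function name and parameters are chosen for illustration:

```python
import numpy as np

def diffusion_embedding(X, bandwidth=1.0, t=8, n_components=2):
    """Minimal diffusion-map embedding of time-series samples (rows of X)."""
    # Pairwise squared distances between time points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * bandwidth ** 2))       # Gaussian affinity matrix
    P = K / K.sum(1, keepdims=True)              # row-normalized Markov operator
    Pt = np.linalg.matrix_power(P, t)            # t-step diffusion (denoising)
    vals, vecs = np.linalg.eig(Pt)
    # Skip the trivial constant eigenvector, keep the next n_components.
    order = np.argsort(-vals.real)[1:n_components + 1]
    return (vecs[:, order] * vals[order]).real

# Noisy circle: an intrinsically 1D trajectory embedded in 2D.
rng = np.random.default_rng(1)
theta = np.linspace(0, 2 * np.pi, 100)
circle = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (100, 2))
emb = diffusion_embedding(circle)
print(emb.shape)  # (100, 2)
```

The powered operator `Pt` is what gives these methods their denoising behavior: short-time noise averages out over multiple diffusion steps while the slow trajectory structure survives.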
Affiliation(s)
- Erica L. Busch
- Department of Psychology, Yale University, New Haven, CT, USA
- Jessie Huang
- Department of Computer Science, Yale University, New Haven, CT, USA
- Andrew Benz
- Department of Mathematics, Yale University, New Haven, CT, USA
- Tom Wallenstein
- Department of Computer Science, Yale University, New Haven, CT, USA
- Guillaume Lajoie
- Department of Mathematics and Statistics, Université de Montréal, Montreal, Canada
- Mila—Quebec Artificial Intelligence Institute, Montreal, Canada
- Guy Wolf
- Department of Mathematics and Statistics, Université de Montréal, Montreal, Canada
- Mila—Quebec Artificial Intelligence Institute, Montreal, Canada
- Smita Krishnaswamy
- Department of Computer Science, Yale University, New Haven, CT, USA
- Department of Genetics, Yale University, New Haven, CT, USA
- Program in Applied Mathematics, Yale University, New Haven, CT, USA
- Wu Tsai Institute, Yale University, New Haven, CT, USA
- These authors contributed equally: Smita Krishnaswamy and Nicholas B. Turk-Browne
- Nicholas B. Turk-Browne
- Department of Psychology, Yale University, New Haven, CT, USA
- Wu Tsai Institute, Yale University, New Haven, CT, USA
9
Melbaum S, Russo E, Eriksson D, Schneider A, Durstewitz D, Brox T, Diester I. Conserved structures of neural activity in sensorimotor cortex of freely moving rats allow cross-subject decoding. Nat Commun 2022; 13:7420. [PMID: 36456557] [PMCID: PMC9715555] [DOI: 10.1038/s41467-022-35115-6]
Abstract
Our knowledge about neuronal activity in the sensorimotor cortex relies primarily on stereotyped movements that are strictly controlled in experimental settings. It remains unclear how results can be carried over to less constrained behavior like that of freely moving subjects. Toward this goal, we developed a self-paced behavioral paradigm that encouraged rats to engage in different movement types. We employed bilateral electrophysiological recordings across the entire sensorimotor cortex and simultaneous paw tracking. These techniques revealed behavioral coupling of neurons with lateralization and an anterior-posterior gradient from the premotor to the primary sensory cortex. The structure of population activity patterns was conserved across animals despite the severe under-sampling of the total number of neurons and variations in electrode positions across individuals. We demonstrated cross-subject and cross-session generalization in a decoding task through alignments of low-dimensional neural manifolds, providing evidence of a conserved neuronal code.
Affiliation(s)
- Svenja Melbaum
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany; IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Eleonora Russo
- Department of Psychiatry and Psychotherapy, University Medical Center, Johannes Gutenberg University, 55131 Mainz, Germany; Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159 Mannheim, Germany
- David Eriksson
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany; Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany
- Artur Schneider
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany; Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, 68159 Mannheim, Germany
- Thomas Brox
- Computer Vision Group, Department of Computer Science, University of Freiburg, 79110 Freiburg, Germany; IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany
- Ilka Diester
- IMBIT//BrainLinks-BrainTools, University of Freiburg, Georges-Köhler-Allee 201, 79110 Freiburg, Germany; Optophysiology Lab, Faculty of Biology, University of Freiburg, 79110 Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany
10
Fomins A, Sych Y, Helmchen F. Conservative significance testing of tripartite statistical relations in multivariate neural data. Netw Neurosci 2022; 6:1243-1274. [PMID: 38800452] [PMCID: PMC11117094] [DOI: 10.1162/netn_a_00259]
Abstract
An important goal in systems neuroscience is to understand the structure of neuronal interactions, frequently approached by studying functional relations between recorded neuronal signals. Commonly used pairwise measures (e.g., correlation coefficient) offer limited insight, neither addressing the specificity of estimated neuronal interactions nor potential synergistic coupling between neuronal signals. Tripartite measures, such as partial correlation, variance partitioning, and partial information decomposition, address these questions by disentangling functional relations into interpretable information atoms (unique, redundant, and synergistic). Here, we apply these tripartite measures to simulated neuronal recordings to investigate their sensitivity to noise. We find that the considered measures are mostly accurate and specific for signals with noiseless sources but experience significant bias for noisy sources. We show that permutation testing of such measures results in high false positive rates even for small noise fractions and large data sizes. We present a conservative null hypothesis for significance testing of tripartite measures, which significantly decreases the false positive rate at a tolerable expense of increasing the false negative rate. We hope our study raises awareness about the potential pitfalls of significance testing and of interpretation of functional relations, offering both conceptual and practical advice.
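As an illustration of the tripartite setting, the sketch below computes a partial correlation (one of the measures named above) together with a naive permutation test of the kind the abstract argues can be too liberal; the synthetic data and variable names are chosen for illustration, not taken from the paper:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after regressing z (plus intercept) out of both."""
    z1 = np.c_[np.ones_like(z), z]
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

def permutation_pvalue(x, y, z, n_perm=200, seed=0):
    """Naive permutation test: shuffle y and recompute the statistic each time."""
    rng = np.random.default_rng(seed)
    obs = abs(partial_corr(x, y, z))
    null = [abs(partial_corr(x, rng.permutation(y), z)) for _ in range(n_perm)]
    return (1 + sum(n >= obs for n in null)) / (1 + n_perm)

rng = np.random.default_rng(42)
z = rng.normal(size=300)               # common driver
x = z + 0.5 * rng.normal(size=300)     # x and y are related only through z
y = z + 0.5 * rng.normal(size=300)

# Pairwise correlation is large, but the partial correlation given z is near
# zero -- the specificity that pairwise measures alone cannot provide.
print(round(float(np.corrcoef(x, y)[0, 1]), 2), round(partial_corr(x, y, z), 2))
print(round(permutation_pvalue(x, y, z), 2))
```

The paper's point is that when the source signals themselves are noisy, shuffling-based nulls like the one above reject too often, motivating a more conservative null hypothesis.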
Affiliation(s)
- Aleksejs Fomins
- Brain Research Institute, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich, Switzerland
- Yaroslav Sych
- Brain Research Institute, University of Zurich, Zurich, Switzerland
- Experimental Neurology Center, Department of Neurology, Inselspital University Hospital Bern, Bern, Switzerland
- Present address: Institute of Cellular and Integrative Neurosciences, University of Strasbourg and CNRS, Strasbourg, France
- Fritjof Helmchen
- Brain Research Institute, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich, Switzerland
11
Bretton-Granatoor Z, Stealey H, Santacruz SR, Lewis-Peacock JA. Estimating Intrinsic Manifold Dimensionality to Classify Task-Related Information in Human and Non-Human Primate Data. IEEE Biomed Circuits Syst Conf (BioCAS) 2022; 2022:650-654. [PMID: 36820790] [PMCID: PMC9942267] [DOI: 10.1109/biocas54905.2022.9948604]
Abstract
Feature selection, or dimensionality reduction, has become a standard step in reducing large-scale neural datasets into usable signals for brain-machine interface and neurofeedback decoders. Current techniques in fMRI data reduce the number of voxels (features) by performing statistics on individual voxels or using traditional techniques that utilize linear combinations of features (e.g., principal component analysis (PCA)). However, these methods often do not account for the cross-correlations found across voxels and do not sufficiently reduce the feature space to support efficient real-time feedback. To overcome these limitations, we propose using factor analysis on fMRI data. This technique has become increasingly popular for extracting a minimal number of latent features to explain high-dimensional data in non-human primates (NHPs). Here, we demonstrate these methods in both NHP and human data. In NHP subjects (n=2), we reduced the number of features to an average of 26.86% and 14.86% of the total feature space to build our multinomial classifier. In one NHP subject, the average accuracy of classifying eight target locations over 64 sessions was 62.43% (+/-6.19%) compared to a PCA-based classifier with 60.26% (+/-6.02%). In healthy fMRI subjects, we reduced the feature space to an average of 0.33% of the initial space. Group average (n=5) accuracy of FA-based category classification was 74.33% (+/- 4.91%) compared to a PCA-based classifier with 68.42% (+/-4.79%). FA-based classifiers can maintain the performance fidelity observed with PCA-based decoders. Importantly, FA-based methods allow researchers to address specific hypotheses about how underlying neural activity relates to behavior.
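The paper's fMRI pipeline is not reproduced here; the following is a minimal sketch of the FA-versus-PCA comparison on synthetic latent-factor data, using scikit-learn, with the dimensions (3 factors, 200 features) and helper name `reduced_accuracy` chosen for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "voxel" data: 3 latent factors drive 200 observed features; the
# class label depends on the latents, not on the per-feature noise.
n, n_feat, n_lat = 600, 200, 3
Z = rng.normal(size=(n, n_lat))                    # latent factors
W = rng.normal(size=(n_lat, n_feat))               # loading matrix
X = Z @ W + 0.5 * rng.normal(size=(n, n_feat))     # observed features
y = (Z[:, 0] + Z[:, 1] > 0).astype(int)            # labels from the latents

train, test = slice(0, 400), slice(400, None)

def reduced_accuracy(reducer):
    """Fit the reducer on training data, then a classifier on the reduced features."""
    Ztr = reducer.fit_transform(X[train])
    Zte = reducer.transform(X[test])
    clf = LogisticRegression(max_iter=1000).fit(Ztr, y[train])
    return clf.score(Zte, y[test])

acc_fa = reduced_accuracy(FactorAnalysis(n_components=3, random_state=0))
acc_pca = reduced_accuracy(PCA(n_components=3, random_state=0))
print(f"FA: {acc_fa:.2f}  PCA: {acc_pca:.2f}")
```

On data with this latent structure the two reducers perform comparably, matching the abstract's observation that FA-based classifiers maintain the fidelity of PCA-based decoders while modeling per-feature noise explicitly.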
Affiliation(s)
- Samantha R. Santacruz
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
12
Ning Y, Wan G, Liu T, Zhang S. Volitional Generation of Reproducible, Efficient Temporal Patterns. Brain Sci 2022; 12:1269. [PMID: 36291203] [PMCID: PMC9599309] [DOI: 10.3390/brainsci12101269]
Abstract
One of the extraordinary characteristics of the biological brain is how little energy it requires to implement a variety of biological functions and intelligence compared to modern artificial intelligence (AI). Spike-based, energy-efficient temporal codes have long been suggested as one contributor to the brain's low energy expenditure. Although this code has been widely reported in the sensory cortex, whether it can be implemented in other brain areas to serve broader functions, and how it evolves throughout learning, have remained unaddressed. In this study, we designed a novel brain-machine interface (BMI) paradigm. Two macaques could volitionally generate reproducible, energy-efficient temporal patterns in the primary motor cortex (M1) by learning the BMI paradigm. Moreover, most neurons that were not directly assigned to control the BMI did not boost their excitability, and they demonstrated an overall energy-efficient manner in performing the task. Over the course of learning, we found that the firing rates and temporal precision of the selected neurons co-evolved to generate the energy-efficient temporal patterns, suggesting that a cohesive rather than dissociable process underlies the refinement of energy-efficient temporal patterns.
Affiliation(s)
- Yuxiao Ning
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
- Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China
- Guihua Wan
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
- Tengjun Liu
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
- Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China
- Shaomin Zhang
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
- Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310027, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou 310027, China
13
|
Computational role of exploration noise in error-based de novo motor learning. Neural Netw 2022; 153:349-372. [DOI: 10.1016/j.neunet.2022.06.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 04/23/2022] [Accepted: 06/09/2022] [Indexed: 11/23/2022]
14
Inagaki HK, Chen S, Ridder MC, Sah P, Li N, Yang Z, Hasanbegovic H, Gao Z, Gerfen CR, Svoboda K. A midbrain-thalamus-cortex circuit reorganizes cortical dynamics to initiate movement. Cell 2022; 185:1065-1081.e23. [PMID: 35245431 PMCID: PMC8990337 DOI: 10.1016/j.cell.2022.02.006] [Citation(s) in RCA: 73] [Impact Index Per Article: 36.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 11/15/2021] [Accepted: 02/03/2022] [Indexed: 01/06/2023]
Abstract
Motor behaviors are often planned long before execution but only released after specific sensory events. Planning and execution are each associated with distinct patterns of motor cortex activity. Key questions are how these dynamic activity patterns are generated and how they relate to behavior. Here, we investigate the multi-regional neural circuits that link an auditory "Go cue" and the transition from planning to execution of directional licking. Ascending glutamatergic neurons in the midbrain reticular and pedunculopontine nuclei show short latency and phasic changes in spike rate that are selective for the Go cue. This signal is transmitted via the thalamus to the motor cortex, where it triggers a rapid reorganization of motor cortex state from planning-related activity to a motor command, which in turn drives appropriate movement. Our studies show how midbrain can control cortical dynamics via the thalamus for rapid and precise motor behavior.
Affiliation(s)
- Hidehiko K Inagaki
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Max Planck Florida Institute for Neuroscience, Jupiter, FL 33458, USA.
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK
- Margreet C Ridder
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Pankaj Sah
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia; Joint Center for Neuroscience and Neural Engineering, and Department of Biology, Southern University of Science and Technology, Shenzhen, Guangdong Province 518055, China
- Nuo Li
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Zidan Yang
- Max Planck Florida Institute for Neuroscience, Jupiter, FL 33458, USA
- Hana Hasanbegovic
- Department of Neuroscience, Erasmus MC, Rotterdam, 3015GE, The Netherlands
- Zhenyu Gao
- Department of Neuroscience, Erasmus MC, Rotterdam, 3015GE, The Netherlands
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA; Allen Institute for Neural Dynamics, Seattle, WA 98109, USA.
15
Wang T, Chen Y, Cui H. From Parametric Representation to Dynamical System: Shifting Views of the Motor Cortex in Motor Control. Neurosci Bull 2022; 38:796-808. [PMID: 35298779 PMCID: PMC9276910 DOI: 10.1007/s12264-022-00832-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 11/29/2021] [Indexed: 11/01/2022] Open
Abstract
In contrast to traditional representational perspectives, in which the motor cortex is involved in motor control via neuronal preference for kinetics and kinematics, a dynamical system perspective emerging in the last decade views the motor cortex as a dynamical machine that generates motor commands by autonomous temporal evolution. In this review, we first look back at the history of the representational and dynamical perspectives and discuss their explanatory power and controversy from both empirical and computational points of view. We then aim to reconcile the two perspectives and evaluate their theoretical impact, future directions, and potential applications in brain-machine interfaces.
Affiliation(s)
- Tianwei Wang
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- Yun Chen
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
- He Cui
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China; Shanghai Center for Brain and Brain-inspired Intelligence Technology, Shanghai, 200031, China; University of Chinese Academy of Sciences, Beijing, 100049, China
16
Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:e2106692119. [PMID: 35145024 PMCID: PMC8851551 DOI: 10.1073/pnas.2106692119] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/29/2021] [Indexed: 12/19/2022] Open
Abstract
The brain is capable of adapting while maintaining stable long-term memories and learned skills. Recent experiments show that neural responses are highly plastic in some circuits, while other circuits maintain consistent responses over time, raising the question of how these circuits interact coherently. We show how simple, biologically motivated Hebbian and homeostatic mechanisms in single neurons can allow circuits with fixed responses to continuously track a plastic, changing representation without reference to an external learning signal. As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such “representational drift” raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
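The mechanism this abstract describes — a readout that stays locked to a drifting population code using only Hebbian and homeostatic plasticity, with no external error signal — can be illustrated with a toy simulation. This is not the authors' model; it is a minimal numpy sketch in which an Oja-style rule (Hebbian growth with built-in homeostatic normalization) lets a linear readout track a slowly drifting encoding direction, while a frozen readout degrades:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_days, trials_per_day = 50, 60, 200

# Encoding vector: each neuron's tuning to a latent behavioral variable.
e = rng.standard_normal(n_neurons)
e /= np.linalg.norm(e)

w_tracked = e.copy()   # readout updated by a local Hebbian (Oja-like) rule
w_frozen = e.copy()    # readout fixed after initial calibration
eta, noise_sd, drift_sd = 0.02, 0.3, 0.02

for day in range(n_days):
    # Representational drift: the encoding vector random-walks on the sphere.
    e = e + drift_sd * rng.standard_normal(n_neurons)
    e /= np.linalg.norm(e)
    for _ in range(trials_per_day):
        x = rng.standard_normal()                          # latent variable
        r = e * x + noise_sd * rng.standard_normal(n_neurons)
        y = w_tracked @ r
        # Oja's rule: Hebbian growth plus homeostatic normalization,
        # requiring no external error signal.
        w_tracked += eta * y * (r - y * w_tracked)

# Evaluate both readouts on fresh trials under the final encoding.
x_test = rng.standard_normal(2000)
R = np.outer(x_test, e) + noise_sd * rng.standard_normal((2000, n_neurons))
corr_tracked = np.corrcoef(R @ w_tracked, x_test)[0, 1]
corr_frozen = np.corrcoef(R @ w_frozen, x_test)[0, 1]
print(corr_tracked, corr_frozen)
```

With these illustrative drift and learning rates, the plastic readout stays near the noise ceiling while the fixed readout decays, mirroring the paper's contrast between compensated and uncompensated drift.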
17
Thivierge JP, Pilzak A. Estimating null and potent modes of feedforward communication in a computational model of cortical activity. Sci Rep 2022; 12:742. [PMID: 35031628 PMCID: PMC8760251 DOI: 10.1038/s41598-021-04684-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 12/15/2021] [Indexed: 11/08/2022] Open
Abstract
Communication across anatomical areas of the brain is key to both sensory and motor processes. Dimensionality reduction approaches have shown that the covariation of activity across cortical areas follows well-delimited patterns. Some of these patterns fall within the "potent space" of neural interactions and generate downstream responses; other patterns fall within the "null space" and prevent the feedforward propagation of synaptic inputs. Despite growing evidence for the role of null space activity in visual processing as well as preparatory motor control, a mechanistic understanding of its neural origins is lacking. Here, we developed a mean-rate model that allowed for the systematic control of feedforward propagation by potent and null modes of interaction. In this model, altering the number of null modes led to no systematic changes in firing rates, pairwise correlations, or mean synaptic strengths across areas, making it difficult to characterize feedforward communication with common measures of functional connectivity. A novel measure termed the null ratio captured the proportion of null modes relayed from one area to another. Applied to simultaneous recordings of primate cortical areas V1 and V2 during image viewing, the null ratio revealed that feedforward interactions have a broad null space that may reflect properties of visual stimuli.
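The potent/null distinction in this abstract can be made concrete with a small linear example. The sketch below is a simplified illustration, not the paper's mean-rate model, and the `null_fraction` variable is only a stand-in for the authors' null ratio. It uses the SVD of a feedforward weight matrix to split source-area activity into potent modes, which drive downstream responses, and null modes, which do not:

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_tgt, rank = 8, 3, 3

# Feedforward mapping from source to target area; its row space is the
# "potent space" and its kernel the "null space".
W = rng.standard_normal((n_tgt, n_src))

# Right singular vectors with nonzero singular values span the potent
# space; the remaining ones span the null space.
U, s, Vt = np.linalg.svd(W)
potent = Vt[:rank]                 # rank x n_src
null = Vt[rank:]                   # (n_src - rank) x n_src

# Random source-population activity, split into the two subspaces.
X = rng.standard_normal((1000, n_src))
var_potent = np.var(X @ potent.T, axis=0).sum()
var_null = np.var(X @ null.T, axis=0).sum()
null_fraction = var_null / (var_null + var_potent)

# A pattern confined to the null space evokes no downstream response.
x_null = null.T @ rng.standard_normal(n_src - rank)
print(null_fraction, np.abs(W @ x_null).max())
```

Because the potent and null bases together are orthonormal, the two variance terms sum to the total source variance, so `null_fraction` measures how much activity is "hidden" from the downstream area.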
Affiliation(s)
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, Ottawa, ON, Canada.
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada.
- Artem Pilzak
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
18
Hennig JA, Oby ER, Losey DM, Batista AP, Yu BM, Chase SM. How learning unfolds in the brain: toward an optimization view. Neuron 2021; 109:3720-3735. [PMID: 34648749 PMCID: PMC8639641 DOI: 10.1016/j.neuron.2021.09.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/25/2021] [Accepted: 09/02/2021] [Indexed: 12/17/2022]
Abstract
How do changes in the brain lead to learning? To answer this question, consider an artificial neural network (ANN), where learning proceeds by optimizing a given objective or cost function. This "optimization framework" may provide new insights into how the brain learns, as many idiosyncratic features of neural activity can be recapitulated by an ANN trained to perform the same task. Nevertheless, there are key features of how neural population activity changes throughout learning that cannot be readily explained in terms of optimization and are not typically features of ANNs. Here we detail three of these features: (1) the inflexibility of neural variability throughout learning, (2) the use of multiple learning processes even during simple tasks, and (3) the presence of large task-nonspecific activity changes. We propose that understanding the role of these features in the brain will be key to describing biological learning using an optimization framework.
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA.
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Darby M Losey
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
19
Umakantha A, Morina R, Cowley BR, Snyder AC, Smith MA, Yu BM. Bridging neuronal correlations and dimensionality reduction. Neuron 2021; 109:2740-2754.e12. [PMID: 34293295 PMCID: PMC8505167 DOI: 10.1016/j.neuron.2021.06.028] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 05/05/2021] [Accepted: 06/25/2021] [Indexed: 01/01/2023]
Abstract
Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.
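The bridge this paper builds between mean pairwise correlation and dimensionality reduction can be demonstrated on a one-factor model of population activity. In this hypothetical sketch (the loadings, private variance, and trial counts are illustrative choices), the mean spike-count correlation measured empirically matches the value predicted analytically from the factor loadings and private variances — a concrete instance of the kind of mathematical relationship the paper establishes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 30, 20000

# One-dimensional shared covariability: a single latent co-fluctuation
# with per-neuron loadings, plus independent ("private") noise.
loadings = rng.uniform(0.5, 1.0, n)
psi = 0.5                                   # private variance per neuron
z = rng.standard_normal(T)
X = np.outer(z, loadings) + np.sqrt(psi) * rng.standard_normal((T, n))

# Mean pairwise correlation measured directly from the simulated counts...
C = np.corrcoef(X.T)
iu = np.triu_indices(n, k=1)
rsc_empirical = C[iu].mean()

# ...and predicted from the factor-model parameters:
# corr(i, j) = l_i * l_j / sqrt((l_i^2 + psi) * (l_j^2 + psi))
sd = np.sqrt(loadings**2 + psi)
pred = np.outer(loadings, loadings) / np.outer(sd, sd)
rsc_predicted = pred[iu].mean()
print(rsc_empirical, rsc_predicted)
```

The same logic runs in reverse: changes in mean pairwise correlation can stem from distinct changes in loadings, dimensionality, or private variance, which is the paper's cautionary point about relying on a single statistic.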
Affiliation(s)
- Akash Umakantha
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Rudina Morina
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Benjamin R Cowley
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Adam C Snyder
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Matthew A Smith
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
- Byron M Yu
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
20
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
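The framework's starting point — modeling population activity as a dynamical system whose state evolves as x(t+1) = A x(t) — can be sketched in a few lines. The rotational dynamics matrix and the least-squares fit below are illustrative choices, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.2
# Ground-truth dynamics: a slowly decaying rotation in a 2-D state space,
# the kind of rotational structure often reported in motor cortex.
A = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Simulate noiseless trajectories from random initial conditions.
T, n_trials = 50, 20
X_prev, X_next = [], []
for _ in range(n_trials):
    x = rng.standard_normal(2)
    for _ in range(T):
        x_new = A @ x
        X_prev.append(x)
        X_next.append(x_new)
        x = x_new

X_prev, X_next = np.array(X_prev), np.array(X_next)
# Least-squares estimate of the dynamics matrix: X_next ≈ X_prev @ A.T
A_hat = np.linalg.lstsq(X_prev, X_next, rcond=None)[0].T
print(np.abs(np.linalg.eigvals(A_hat)))   # |eigenvalues| ≈ 0.98
```

With noiseless data the fit recovers A exactly; analyses of real recordings add latent-state estimation and noise terms on top of this same regression idea.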
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California 94305, USA
- Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
21
Hennig JA, Oby ER, Golub MD, Bahureksa LA, Sadtler PT, Quick KM, Ryu SI, Tyler-Kabara EC, Batista AP, Chase SM, Yu BM. Learning is shaped by abrupt changes in neural engagement. Nat Neurosci 2021; 24:727-736. [PMID: 33782622 DOI: 10.1038/s41593-021-00822-8] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2020] [Accepted: 02/22/2021] [Indexed: 01/30/2023]
Abstract
Internal states such as arousal, attention and motivation modulate brain-wide neural activity, but how these processes interact with learning is not well understood. During learning, the brain modifies its neural activity to improve behavior. How do internal states affect this process? Using a brain-computer interface learning paradigm in monkeys, we identified large, abrupt fluctuations in neural population activity in motor cortex indicative of arousal-like internal state changes, which we term 'neural engagement.' In a brain-computer interface, the causal relationship between neural activity and behavior is known, allowing us to understand how neural engagement impacted behavioral performance for different task goals. We observed stereotyped changes in neural engagement that occurred regardless of how they impacted performance. This allowed us to predict how quickly different task goals were learned. These results suggest that changes in internal states, even those seemingly unrelated to goal-seeking behavior, can systematically influence how behavior improves with learning.
Affiliation(s)
- Jay A Hennig
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew D Golub
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lindsay A Bahureksa
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Patrick T Sadtler
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Kristin M Quick
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Stephen I Ryu
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Neurosurgery, Palo Alto Medical Foundation, Palo Alto, CA, USA
- Elizabeth C Tyler-Kabara
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurosurgery, Dell Medical School, University of Texas at Austin, Austin, TX, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Steven M Chase
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
22
Semedo JD, Gokcen E, Machens CK, Kohn A, Yu BM. Statistical methods for dissecting interactions between brain areas. Curr Opin Neurobiol 2020; 65:59-69. [PMID: 33142111 PMCID: PMC7935404 DOI: 10.1016/j.conb.2020.09.009] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 09/23/2020] [Accepted: 09/24/2020] [Indexed: 12/12/2022]
Abstract
The brain is composed of many functionally distinct areas. This organization supports distributed processing, and requires the coordination of signals across areas. Our understanding of how populations of neurons in different areas interact with each other is still in its infancy. As the availability of recordings from large populations of neurons across multiple brain areas increases, so does the need for statistical methods that are well suited for dissecting and interrogating these recordings. Here we review multivariate statistical methods that have been, or could be, applied to this class of recordings. By leveraging population responses, these methods can provide a rich description of inter-areal interactions. At the same time, these methods can introduce interpretational challenges. We thus conclude by discussing how to interpret the outputs of these methods to further our understanding of inter-areal interactions.
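One of the multivariate methods in this family, canonical correlation analysis (CCA), can be sketched briefly. The simulation is hypothetical: two small "areas" share a single latent signal, and CCA recovers one strong inter-areal dimension while the remaining canonical correlations stay near chance:

```python
import numpy as np

rng = np.random.default_rng(4)
n_x, n_y, T = 10, 10, 5000

# Two "areas" driven by one shared latent plus private noise.
z = rng.standard_normal(T)
a = rng.standard_normal(n_x)
a /= np.linalg.norm(a)
b = rng.standard_normal(n_y)
b /= np.linalg.norm(b)
X = np.outer(z, a) + 0.5 * rng.standard_normal((T, n_x))
Y = np.outer(z, b) + 0.5 * rng.standard_normal((T, n_y))

def cca_corrs(X, Y):
    """Canonical correlations via whitening followed by an SVD."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx, Cyy = Xc.T @ Xc / len(X), Yc.T @ Yc / len(Y)
    Cxy = Xc.T @ Yc / len(X)
    def inv_sqrt(C):
        vals, vecs = np.linalg.eigh(C)
        return vecs @ np.diag(vals**-0.5) @ vecs.T
    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)

corrs = cca_corrs(X, Y)
print(corrs[0], corrs[1])   # one strong shared dimension, the rest weak
```

This also illustrates the interpretational caveat the review raises: with finite data, the trailing canonical correlations are biased above zero, so their magnitudes need a null comparison before being read as genuine inter-areal interactions.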
Affiliation(s)
- João D Semedo
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
- Evren Gokcen
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
- Christian K Machens
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Adam Kohn
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA; Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Byron M Yu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
23
Maeda RS, Kersten R, Pruszynski JA. Shared internal models for feedforward and feedback control of arm dynamics in non-human primates. Eur J Neurosci 2020; 53:1605-1620. [PMID: 33222285 DOI: 10.1111/ejn.15056] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 11/12/2020] [Accepted: 11/13/2020] [Indexed: 11/30/2022]
Abstract
Previous work has shown that humans account for and learn novel properties of the arm's dynamics, and that such learning causes changes in both the predictive (i.e., feedforward) control of reaching and reflex (i.e., feedback) responses to mechanical perturbations. Here we show that similar observations hold in old-world monkeys (Macaca fascicularis). Two monkeys were trained to use an exoskeleton to perform single-joint elbow reaches and to respond to mechanical perturbations that created pure elbow motion. Both of these tasks engaged robust shoulder muscle activity, as required to account for the torques that typically arise at the shoulder when the forearm rotates around the elbow joint (i.e., intersegmental dynamics). We altered these intersegmental arm dynamics by having the monkeys generate the same elbow movements with the shoulder joint either free to rotate, as normal, or fixed by the robotic manipulandum, which eliminates the shoulder torques caused by forearm rotation. After fixing the shoulder joint, we found a systematic reduction in shoulder muscle activity. In addition, after releasing the shoulder joint again, we found evidence of kinematic aftereffects (i.e., reach errors) in the direction predicted if the monkeys failed to compensate for normal arm dynamics. We also tested whether such learning transfers to feedback responses evoked by mechanical perturbations and found a reduction in shoulder feedback responses, as appropriate for the altered intersegmental arm dynamics. Demonstrating this learning and transfer in non-human primates will allow the investigation of the neural mechanisms involved in feedforward and feedback control of the arm's dynamics.
Affiliation(s)
- Rodrigo S Maeda
- Brain and Mind Institute, Western University, London, ON, Canada; Robarts Research Institute, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada
- Rhonda Kersten
- Robarts Research Institute, Western University, London, ON, Canada; Department of Physiology and Pharmacology, Western University, London, ON, Canada
- J Andrew Pruszynski
- Brain and Mind Institute, Western University, London, ON, Canada; Robarts Research Institute, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada; Department of Physiology and Pharmacology, Western University, London, ON, Canada
24
Cowley BR, Snyder AC, Acar K, Williamson RC, Yu BM, Smith MA. Slow Drift of Neural Activity as a Signature of Impulsivity in Macaque Visual and Prefrontal Cortex. Neuron 2020; 108:551-567.e8. [PMID: 32810433 PMCID: PMC7822647 DOI: 10.1016/j.neuron.2020.07.021] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 06/15/2020] [Accepted: 07/17/2020] [Indexed: 12/22/2022]
Abstract
An animal's decision depends not only on incoming sensory evidence but also on its fluctuating internal state. This state embodies multiple cognitive factors, such as arousal and fatigue, but it is unclear how these factors influence the neural processes that encode sensory stimuli and form a decision. We discovered that, unprompted by task conditions, animals slowly shifted their likelihood of detecting stimulus changes over the timescale of tens of minutes. Neural population activity from visual area V4, as well as from prefrontal cortex, slowly drifted together with these behavioral fluctuations. We found that this slow drift, rather than altering the encoding of the sensory stimulus, acted as an impulsivity signal, overriding sensory evidence to dictate the final decision. Overall, this work uncovers an internal state embedded in population activity across multiple brain areas and sheds further light on how internal states contribute to the decision-making process.
Affiliation(s)
- Benjamin R Cowley
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Adam C Snyder
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Katerina Acar
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for Neuroscience, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Ryan C Williamson
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA; University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Matthew A Smith
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA.
25
Rule ME, Loback AR, Raman DV, Driscoll LN, Harvey CD, O'Leary T. Stable task information from an unstable neural population. eLife 2020; 9:e51121. [PMID: 32660692 PMCID: PMC7392606 DOI: 10.7554/eLife.51121] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Accepted: 06/17/2020] [Indexed: 02/06/2023] Open
Abstract
Over days and weeks, neural activity representing an animal's position and movement in sensorimotor cortex has been found to continually reconfigure or 'drift' during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. Analyzing long-term calcium imaging recordings from posterior parietal cortex in mice (Mus musculus), we show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
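Two of this abstract's quantitative claims — that a fixed readout degrades under drift, and that a modest amount of ongoing plasticity suffices to compensate — can be illustrated with a toy model. This is a generic delta-rule sketch, not the authors' learning rule or their calcium-imaging data: a linear readout tracks a drifting encoding vector, and the relative weight change per simulated "day" serves as a crude plasticity budget:

```python
import numpy as np

rng = np.random.default_rng(5)
n, n_days, trials = 100, 40, 150
eta, noise_sd, drift_sd = 0.01, 0.3, 0.02

# Encoding vector for a scalar behavioral variable; drifts across days.
e = rng.standard_normal(n)
e /= np.linalg.norm(e)
w = e.copy()                       # calibrated linear readout
daily_change = []

for day in range(n_days):
    e = e + drift_sd * rng.standard_normal(n)    # representational drift
    e /= np.linalg.norm(e)
    w_start = w.copy()
    for _ in range(trials):
        x = rng.standard_normal()                # behavioral variable
        r = e * x + noise_sd * rng.standard_normal(n)
        w += eta * (x - w @ r) * r               # local delta (LMS) rule
    # Relative weight change per day: a crude "plasticity budget".
    daily_change.append(np.linalg.norm(w - w_start) / np.linalg.norm(w_start))

# Decoding accuracy on fresh trials under the final encoding.
x_test = rng.standard_normal(2000)
R = np.outer(x_test, e) + noise_sd * rng.standard_normal((2000, n))
corr = np.corrcoef(R @ w, x_test)[0, 1]
print(corr, np.mean(daily_change))
```

The readout stays accurate across all forty simulated days while the per-day weight change remains a modest fraction of the weight norm, echoing the paper's conclusion that compensating drift lies within physiologically achievable bounds.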
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Adrianna R Loback
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
26
De Santis D, Mussa-Ivaldi FA. Guiding functional reorganization of motor redundancy using a body-machine interface. J Neuroeng Rehabil 2020; 17:61. [PMID: 32393288] [PMCID: PMC7216597] [DOI: 10.1186/s12984-020-00681-7] [Citation(s) in RCA: 2]
Abstract
Background Body-machine interfaces map movements onto commands to external devices. Redundant motion signals derived from inertial sensors are mapped onto lower-dimensional device commands. Device users then face two problems: a) the structural problem of understanding the operation of the interface, and b) the performance problem of controlling the external device with high efficiency. We hypothesize that these problems, while distinct, are connected: aligning the space of body movements with the space encoded by the interface, i.e. solving the structural problem, facilitates redundancy resolution towards increasing efficiency, i.e. solving the performance problem. Methods Twenty unimpaired volunteers practiced controlling the movement of a computer cursor by moving their arms. Eight signals from four inertial sensors were mapped onto the cursor's two coordinates on a screen. The mapping matrix was initialized by asking each user to perform free-form spontaneous upper-limb motions and deriving the two main principal components of the motion signals. Participants engaged in a reaching task for 18 min, followed by a tracking task. One group of 10 participants practiced with the same mapping throughout the experiment, while the other 10 practiced with an adaptive mapping that was iteratively updated by recalculating the principal components from ongoing movements. Results Participants quickly reduced reaching time while also learning to distribute most movement variance over two dimensions. Participants with the fixed mapping distributed movement variance over a subspace that did not match the potent subspace defined by the interface map. In contrast, participants with the adaptive map reduced the difference between the two subspaces, resulting in less arm motion distributed over the null space of the interface map. This, in turn, enhanced movement efficiency without impairing generalization from reaching to tracking.
Conclusions Aligning the potent subspace encoded by the interface map to the user's movement subspace guides redundancy resolution towards increasing movement efficiency, with implications for controlling assistive devices. In contrast, in the pursuit of rehabilitative goals, the results suggest that the interface must change to drive the statistics of the user's motions away from the established pattern and toward the engagement of movements to be recovered. Trial registration: ClinicalTrials.gov, NCT01608438, registered 16 April 2012.
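The interface map this abstract describes, eight sensor signals projected onto two cursor coordinates via the leading principal components of calibration motions, can be sketched in a few lines (a hypothetical illustration, not the authors' code; NumPy, with invented function and variable names):

```python
import numpy as np

def fit_interface_map(calibration, n_dims=2):
    """Derive an (n_dims x n_channels) mapping matrix from the top
    principal components of free-form calibration motion signals
    (rows = samples, columns = sensor channels)."""
    mean = calibration.mean(axis=0)
    # Rows of vt are the principal directions of the centered signals
    _, _, vt = np.linalg.svd(calibration - mean, full_matrices=False)
    return vt[:n_dims], mean

def cursor_coords(map_matrix, mean, sample):
    """Project one raw sensor sample onto the 2-D cursor space."""
    return map_matrix @ (sample - mean)

rng = np.random.default_rng(0)
calibration = rng.normal(size=(500, 8))   # stand-in for 8 IMU channels
W, mu = fit_interface_map(calibration)    # W has shape (2, 8)
xy = cursor_coords(W, mu, calibration[0])
```

The adaptive condition in the study would correspond to periodically calling `fit_interface_map` again on a window of recent movements, so the potent subspace tracks the user's evolving movement statistics.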
Affiliation(s)
- Dalia De Santis
- Northwestern University and the Shirley Ryan AbilityLab, Chicago, IL, USA; Fondazione Istituto Italiano di Tecnologia, Genoa, Italy.
27
Yoo SBM, Hayden BY. The Transition from Evaluation to Selection Involves Neural Subspace Reorganization in Core Reward Regions. Neuron 2020; 105:712-724.e4. [PMID: 31836322] [PMCID: PMC7035164] [DOI: 10.1016/j.neuron.2019.11.013] [Citation(s) in RCA: 27]
Abstract
Economic choice proceeds from evaluation, in which we contemplate options, to selection, in which we weigh options and choose one. These stages must be differentiated so that decision makers do not proceed to selection before evaluation is complete. We examined responses of neurons in two core reward regions, orbitofrontal (OFC) and ventromedial prefrontal cortex (vmPFC), during two-option choice with asynchronous offer presentation. Our data suggest that neurons selective during the first (presumed evaluation) and second (presumed comparison and selection) offer epochs come from a single pool. Stage transition is accompanied by a shift toward orthogonality in the low-dimensional population response manifold. Nonetheless, the relative position of each option in driving responses in the population subspace is preserved. The orthogonalization we observe supports the hypothesis that the transition from evaluation to selection leads to reorganization of response subspace and suggests a mechanism by which value-related signals are prevented from prematurely driving choice.
Affiliation(s)
- Seng Bum Michael Yoo
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, University of Minnesota, Minneapolis, MN 55455, USA.
- Benjamin Y Hayden
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, University of Minnesota, Minneapolis, MN 55455, USA.
28
Athalye VR, Carmena JM, Costa RM. Neural reinforcement: re-entering and refining neural dynamics leading to desirable outcomes. Curr Opin Neurobiol 2019; 60:145-154. [PMID: 31877493] [DOI: 10.1016/j.conb.2019.11.023] [Citation(s) in RCA: 26]
Abstract
How do organisms learn to do again, on demand, a behavior that led to a desirable outcome? Dopamine-dependent cortico-striatal plasticity provides a framework for learning a behavior's value, but it is less clear how it enables the brain to re-enter desired behaviors and refine them over time. Reinforcing a behavior is achieved by re-entering and refining the neural patterns that produce it. We review studies using brain-machine interfaces which reveal that reinforcing cortical population activity requires cortico-basal ganglia circuits. We then propose a formal framework for how reinforcement in cortico-basal ganglia circuits acts on the neural dynamics of cortical populations. We propose two parallel mechanisms: i) fast reinforcement, which selects the inputs that permit re-entrance of the particular cortical population dynamics that naturally produced the desired behavior, and ii) slower reinforcement, which refines cortical population dynamics and leads to more reliable production of the neural trajectories driving skillful behavior on demand.
Affiliation(s)
- Vivek R Athalye
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY, USA
- Jose M Carmena
- Helen Wills Neuroscience Institute, Department of Electrical Engineering and Computer Sciences, University of California-Berkeley, Berkeley, CA, USA
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute, Departments of Neuroscience and Neurology, Columbia University, New York, NY, USA.
29
Neuroscience out of control: control-theoretic perspectives on neural circuit dynamics. Curr Opin Neurobiol 2019; 58:122-129. [DOI: 10.1016/j.conb.2019.09.001] [Citation(s) in RCA: 14]
30
Oby ER, Golub MD, Hennig JA, Degenhart AD, Tyler-Kabara EC, Yu BM, Chase SM, Batista AP. New neural activity patterns emerge with long-term learning. Proc Natl Acad Sci U S A 2019; 116:15210-15215. [PMID: 31182595] [PMCID: PMC6660765] [DOI: 10.1073/pnas.1820296116] [Citation(s) in RCA: 88]
Abstract
Learning has been associated with changes in the brain at every level of organization. However, it remains difficult to establish a causal link between specific changes in the brain and new behavioral abilities. We establish that new neural activity patterns emerge with learning. We demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation of new patterns of neural population activity can underlie the learning of new skills.
Affiliation(s)
- Emily R Oby
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Matthew D Golub
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Jay A Hennig
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213
- Alan D Degenhart
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
- Elizabeth C Tyler-Kabara
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15213
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, PA 15213
- McGowan Institute for Regenerative Medicine, University of Pittsburgh, Pittsburgh, PA 15213
- Byron M Yu
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Steven M Chase
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213
- Aaron P Batista
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA 15213
- University of Pittsburgh Brain Institute, Pittsburgh, PA 15213
- Systems Neuroscience Center, University of Pittsburgh, Pittsburgh, PA 15213
31
Kalaska JF. Emerging ideas and tools to study the emergent properties of the cortical neural circuits for voluntary motor control in non-human primates. F1000Res 2019; 8. [PMID: 31275561] [PMCID: PMC6544130] [DOI: 10.12688/f1000research.17161.1] [Citation(s) in RCA: 11]
Abstract
For years, neurophysiological studies of the cerebral cortical mechanisms of voluntary motor control were limited to single-electrode recordings of the activity of one or a few neurons at a time. This approach was supported by the widely accepted belief that single neurons were the fundamental computational units of the brain (the “neuron doctrine”). Experiments were guided by motor-control models that proposed that the motor system attempted to plan and control specific parameters of a desired action, such as the direction, speed or causal forces of a reaching movement in specific coordinate frameworks, and that assumed that the controlled parameters would be expressed in the task-related activity of single neurons. The advent of chronically implanted multi-electrode arrays about 20 years ago permitted the simultaneous recording of the activity of many neurons. This greatly enhanced the ability to study neural control mechanisms at the population level. It has also shifted the focus of the analysis of neural activity from quantifying single-neuron correlates with different movement parameters to probing the structure of multi-neuron activity patterns to identify the emergent computational properties of cortical neural circuits. In particular, recent advances in “dimension reduction” algorithms have attempted to identify specific covariance patterns in multi-neuron activity which are presumed to reflect the underlying computational processes by which neural circuits convert the intention to perform a particular movement into the required causal descending motor commands. These analyses have led to many new perspectives and insights on how cortical motor circuits covertly plan and prepare to initiate a movement without causing muscle contractions, transition from preparation to overt execution of the desired movement, generate muscle-centered motor output commands, and learn new motor skills. Progress is also being made to import optical-imaging and optogenetic toolboxes from rodents to non-human primates to overcome some technical limitations of multi-electrode recording technology.
Affiliation(s)
- John F Kalaska
- Groupe de recherche sur le système nerveux central (GRSNC), Département de Neurosciences, Faculté de Médecine, Université de Montréal, C.P. 6128, Succ. Centre-ville, Montréal (Québec), H3C 3J7, Canada
32
Zhou X, Tien RN, Ravikumar S, Chase SM. Distinct types of neural reorganization during long-term learning. J Neurophysiol 2019; 121:1329-1341. [PMID: 30726164] [DOI: 10.1152/jn.00466.2018] [Citation(s) in RCA: 29]
Abstract
What are the neural mechanisms of skill acquisition? Many studies find that long-term practice is associated with a functional reorganization of cortical neural activity. However, the link between these changes in neural activity and the behavioral improvements that occur is not well understood, especially for long-term learning that takes place over several weeks. To probe this link in detail, we leveraged a brain-computer interface (BCI) paradigm in which rhesus monkeys learned to master nonintuitive mappings between neural spiking in primary motor cortex and computer cursor movement. Critically, these BCI mappings were designed to disambiguate several different possible types of neural reorganization. We found that during the initial phase of learning, lasting minutes to hours, rapid changes in neural activity common to all neurons led to a fast suppression of motor error. In parallel, local changes to individual neurons gradually accrued over several weeks of training. This slower timescale cortical reorganization persisted long after the movement errors had decreased to asymptote and was associated with more efficient control of movement. We conclude that long-term practice evokes two distinct neural reorganization processes with vastly different timescales, leading to different aspects of improvement in motor behavior. NEW & NOTEWORTHY We leveraged a brain-computer interface learning paradigm to track the neural reorganization occurring throughout the full time course of motor skill learning lasting several weeks. We report on two distinct types of neural reorganization that mirror distinct phases of behavioral improvement: a fast phase, in which global reorganization of neural recruitment leads to a quick suppression of motor error, and a slow phase, in which local changes in individual tuning lead to improvements in movement efficiency.
Affiliation(s)
- Xiao Zhou
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Rex N Tien
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Sadhana Ravikumar
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania
- Steven M Chase
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, Pennsylvania
33
Williamson RC, Doiron B, Smith MA, Yu BM. Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr Opin Neurobiol 2019; 55:40-47. [PMID: 30677702] [DOI: 10.1016/j.conb.2018.12.009] [Citation(s) in RCA: 33]
Abstract
A long-standing goal in neuroscience has been to bring together neuronal recordings and neural network modeling to understand brain function. Neuronal recordings can inform the development of network models, and network models can in turn provide predictions for subsequent experiments. Traditionally, neuronal recordings and network models have been related using single-neuron and pairwise spike train statistics. We review here recent studies that have begun to relate neuronal recordings and network models based on the multi-dimensional structure of neuronal population activity, as identified using dimensionality reduction. This approach has been used to study working memory, decision making, motor control, and more. Dimensionality reduction has provided common ground for incisive comparisons and tight interplay between neuronal recordings and network models.
Affiliation(s)
- Ryan C Williamson
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA; School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Brent Doiron
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Matthew A Smith
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
34
Null Ain’t Dull: New Perspectives on Motor Cortex. Trends Cogn Sci 2018; 22:1069-1071. [DOI: 10.1016/j.tics.2018.09.005] [Citation(s) in RCA: 1]
35
Kaufman MT. Adapting Fine with a Little Help from the Null Space. Neuron 2018; 100:771-773. [DOI: 10.1016/j.neuron.2018.11.007] [Citation(s) in RCA: 0]