101
Zhu F, Grier HA, Tandon R, Cai C, Agarwal A, Giovannucci A, Kaufman MT, Pandarinath C. A deep learning framework for inference of single-trial neural population dynamics from calcium imaging with subframe temporal resolution. Nat Neurosci 2022; 25:1724-1734. [PMID: 36424431 PMCID: PMC9825112 DOI: 10.1038/s41593-022-01189-0]
Abstract
In many areas of the brain, neural populations act as a coordinated network whose state is tied to behavior on a millisecond timescale. Two-photon (2p) calcium imaging is a powerful tool to probe such network-scale phenomena. However, estimating the network state and dynamics from 2p measurements has proven challenging because of noise, inherent nonlinearities and limitations on temporal resolution. Here we describe Recurrent Autoencoder for Discovering Imaged Calcium Latents (RADICaL), a deep learning method to overcome these limitations at the population level. RADICaL extends methods that exploit dynamics in spiking activity for application to deconvolved calcium signals, whose statistics and temporal dynamics are quite distinct from electrophysiologically recorded spikes. It incorporates a new network training strategy that capitalizes on the timing of 2p sampling to recover network dynamics with high temporal precision. In synthetic tests, RADICaL infers the network state more accurately than previous methods, particularly for high-frequency components. In 2p recordings from sensorimotor areas in mice performing a forelimb reach task, RADICaL infers network state with close correspondence to single-trial variations in behavior and maintains high-quality inference even when neuronal populations are substantially reduced.
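The statistical gap the abstract highlights, deconvolved calcium signals versus electrophysiologically recorded spikes, can be made concrete with a toy comparison. This is an illustrative sketch only: the distributions and parameters below are invented stand-ins, not RADICaL's actual emission model.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000  # number of time bins (arbitrary)

# Electrophysiology-style spikes: small integer counts per bin (Poisson stand-in).
spikes = rng.poisson(0.5, size=n)

# Deconvolved calcium "events": mostly zeros, with graded positive magnitudes,
# i.e., the zero-inflated, continuous-valued statistics calcium methods must model.
nonzero = rng.random(n) < 0.1
events = np.where(nonzero, rng.gamma(2.0, 0.5, size=n), 0.0)

frac_zero_spk = float(np.mean(spikes == 0))
frac_zero_evt = float(np.mean(events == 0))
print(frac_zero_spk, frac_zero_evt)
```

Binned spike counts are small integers, while deconvolved event traces are mostly zero with graded positive magnitudes, which is why a spike-count likelihood transfers poorly to calcium data.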
Affiliation(s)
- Feng Zhu
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Neuroscience Graduate Program, Graduate Division of Biological and Biomedical Sciences, Emory University, Atlanta, GA, USA
- Harrison A Grier
- Committee on Computational Neuroscience, The University of Chicago, Chicago, IL, USA
- Raghav Tandon
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Changjia Cai
- Joint Biomedical Engineering Department, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA
- Andrea Giovannucci
- Joint Biomedical Engineering Department, University of North Carolina at Chapel Hill and North Carolina State University, Chapel Hill, NC, USA.
- Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA.
- Closed-Loop Engineering for Advanced Rehabilitation (CLEAR), North Carolina State University, Raleigh, NC, USA.
- Matthew T Kaufman
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL, USA.
- Neuroscience Institute, The University of Chicago, Chicago, IL, USA.
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA.
- Department of Neurosurgery, Emory University, Atlanta, GA, USA.
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA.
102
Christensen AJ, Ott T, Kepecs A. Cognition and the single neuron: How cell types construct the dynamic computations of frontal cortex. Curr Opin Neurobiol 2022; 77:102630. [PMID: 36209695 PMCID: PMC10375540 DOI: 10.1016/j.conb.2022.102630]
Abstract
Frontal cortex is thought to underlie many advanced cognitive capacities, from self-control to long-term planning. Reflecting these diverse demands, frontal neural activity is notoriously idiosyncratic, with tuning properties that are correlated with endless numbers of behavioral and task features. This menagerie of tuning has made it difficult to extract organizing principles that govern frontal neural activity. Here, we contrast two successful yet seemingly incompatible approaches that have begun to address this challenge. Inspired by the indecipherability of single-neuron tuning, the first approach casts frontal computations as dynamical trajectories traversed by arbitrary mixtures of neurons. The second approach, by contrast, attempts to explain the functional diversity of frontal activity with the biological diversity of cortical cell types. Motivated by the recent discovery of functional clusters in frontal neurons, we propose a consilience between these population and cell-type-specific approaches to neural computations, advancing the conjecture that evolutionarily inherited cell-type constraints create the scaffold within which frontal population dynamics must operate.
Affiliation(s)
- Amelia J Christensen
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
- Torben Ott
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA; Bernstein Center for Computational Neuroscience Berlin, Humboldt University of Berlin, Berlin, Germany.
- Adam Kepecs
- Department of Neuroscience and Department of Psychiatry, Washington University in St. Louis, St. Louis, MO 63110, USA.
103
Keshtkaran MR, Sedler AR, Chowdhury RH, Tandon R, Basrai D, Nguyen SL, Sohn H, Jazayeri M, Miller LE, Pandarinath C. A large-scale neural network training framework for generalized estimation of single-trial population dynamics. Nat Methods 2022; 19:1572-1577. [PMID: 36443486 PMCID: PMC9825111 DOI: 10.1038/s41592-022-01675-0]
Abstract
Achieving state-of-the-art performance with deep neural population dynamics models requires extensive hyperparameter tuning for each dataset. AutoLFADS is a model-tuning framework that automatically produces high-performing autoencoding models on data from a variety of brain areas and tasks, without behavioral or task information. We demonstrate its broad applicability on several rhesus macaque datasets: from motor cortex during free-paced reaching, somatosensory cortex during reaching with perturbations, and dorsomedial frontal cortex during a cognitive timing task.
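The automated model tuning described here is built on a strategy in the spirit of Population Based Training: many workers train in parallel, poor performers copy the weights and hyperparameters of good ones, and copied hyperparameters are perturbed. The toy loop below sketches only that search pattern on a stand-in problem (a quadratic loss with a single "learning rate" hyperparameter, both invented for illustration), not AutoLFADS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def val_loss(theta):
    # Stand-in for a model's validation loss.
    return float(np.sum(theta ** 2))

def train_step(theta, lr):
    # One gradient step on the surrogate loss.
    return theta - lr * 2 * theta

# Each worker holds "weights" plus one hyperparameter (its learning rate).
population = [{"theta": rng.normal(size=3), "lr": 10 ** rng.uniform(-3, -1)}
              for _ in range(8)]

for generation in range(20):
    for w in population:
        for _ in range(5):
            w["theta"] = train_step(w["theta"], w["lr"])
    population.sort(key=lambda w: val_loss(w["theta"]))  # best first
    # Exploit: bottom half copies the top half; explore: perturb the copied lr.
    for loser, winner in zip(population[4:], population[:4]):
        loser["theta"] = winner["theta"].copy()
        # Clip lr so the toy problem stays stable (contraction requires lr < 0.5).
        loser["lr"] = min(winner["lr"] * rng.choice([0.8, 1.25]), 0.45)

best = min(population, key=lambda w: val_loss(w["theta"]))
print(val_loss(best["theta"]), best["lr"])
```

The exploit/explore cycle lets the effective learning-rate schedule adapt during training, which is the point of population-based search over a fixed grid.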
Affiliation(s)
- Mohammad Reza Keshtkaran
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Andrew R Sedler
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Raeed H Chowdhury
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Raghav Tandon
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA
- Diya Basrai
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA
- Physiology and Neuroscience, University of California, San Diego, La Jolla, CA, USA
- Sarah L Nguyen
- College of Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Hansem Sohn
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mehrdad Jazayeri
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, USA
- Department of Neuroscience, Northwestern University, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
- Shirley Ryan AbilityLab, Chicago, IL, USA
- Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA.
- Center for Machine Learning, Georgia Institute of Technology, Atlanta, GA, USA.
- Department of Neurosurgery, Emory University, Atlanta, GA, USA.
104
Willsey MS, Nason-Tomaszewski SR, Ensel SR, Temmar H, Mender MJ, Costello JT, Patil PG, Chestek CA. Real-time brain-machine interface in non-human primates achieves high-velocity prosthetic finger movements using a shallow feedforward neural network decoder. Nat Commun 2022; 13:6899. [PMID: 36371498 PMCID: PMC9653378 DOI: 10.1038/s41467-022-34452-w]
Abstract
Despite the rapid progress and interest in brain-machine interfaces that restore motor function, the performance of prosthetic fingers and limbs has yet to mimic native function. The algorithm that converts brain signals to a control signal for the prosthetic device is one of the limitations in achieving rapid and realistic finger movements. To achieve more realistic finger movements, we developed a shallow feedforward neural network to decode real-time two-degree-of-freedom finger movements in two adult male rhesus macaques. Using a two-step training method, a recalibrated feedback intention-trained (ReFIT) neural network is introduced to further improve performance. In seven days of testing across two animals, the neural network decoders achieved a 36% increase in throughput over the ReFIT Kalman filter, the current standard, while producing higher-velocity and more natural-appearing finger movements. The neural network decoders introduced herein demonstrate real-time decoding of continuous movements at a level superior to the current state of the art and could provide a starting point for using neural networks in the development of more naturalistic brain-controlled prostheses.
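A minimal sketch of the decoder family described here: a shallow feedforward network mapping binned firing rates to two-degree-of-freedom velocities. Everything below (the synthetic tuning model, layer sizes, learning rate) is an invented stand-in rather than the authors' architecture or training procedure, and the ReFIT recalibration step is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 40 channels of firing rates carrying a rectified-linear
# code for 2-degree-of-freedom finger velocity (all values invented).
n_ch, n_dof, n = 40, 2, 2000
v = rng.normal(size=(n, n_dof))                          # target velocities
tuning = rng.normal(size=(n_dof, n_ch))
rates = np.maximum(v @ tuning + 0.1 * rng.normal(size=(n, n_ch)), 0)

# Shallow feedforward decoder: one hidden ReLU layer, trained by plain
# full-batch gradient descent with hand-written backprop.
h = 32
W1 = 0.1 * rng.normal(size=(n_ch, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, n_dof)); b2 = np.zeros(n_dof)

def predict(x):
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2

mse0 = float(np.mean((predict(rates) - v) ** 2))         # error before training

lr = 0.02
for _ in range(2000):
    z = rates @ W1 + b1
    a = np.maximum(z, 0)
    err = (a @ W2 + b2) - v                              # gradient of MSE w.r.t. prediction
    gW2 = a.T @ err / n; gb2 = err.mean(0)
    da = (err @ W2.T) * (z > 0)                          # backprop through the ReLU
    gW1 = rates.T @ da / n; gb1 = da.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((predict(rates) - v) ** 2))
print(mse0, mse)
```

Even this small network learns the rate-to-velocity map on the toy data; the paper's contribution is showing that such shallow decoders run fast enough for closed-loop control while outperforming the Kalman-filter standard.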
Affiliation(s)
- Matthew S. Willsey
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Samuel R. Nason-Tomaszewski
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Scott R. Ensel
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Hisham Temmar
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Matthew J. Mender
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Joseph T. Costello
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Parag G. Patil
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Neuroscience Graduate Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
- Cynthia A. Chestek
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Neuroscience Graduate Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Robotics Graduate Program, University of Michigan, Ann Arbor, MI, USA
- Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
105
Linking global top-down views to first-person views in the brain. Proc Natl Acad Sci U S A 2022; 119:e2202024119. [PMID: 36322732 PMCID: PMC9659407 DOI: 10.1073/pnas.2202024119]
Abstract
Humans and other animals have a remarkable capacity to translate their position from one spatial frame of reference to another. The ability to seamlessly move between top-down and first-person views is important for navigation, memory formation, and other cognitive tasks. Evidence suggests that the medial temporal lobe and other cortical regions contribute to this function. To understand how a neural system might carry out these computations, we used variational autoencoders (VAEs) to reconstruct the first-person view from the top-down view of a robot simulation, and vice versa. Many latent variables in the VAEs had similar responses to those seen in neuron recordings, including location-specific activity, head direction tuning, and encoding of distance to local objects. Place-specific responses were prominent when reconstructing a first-person view from a top-down view, but head direction-specific responses were prominent when reconstructing a top-down view from a first-person view. In both cases, the model could recover from perturbations without retraining, but rather through remapping. These results could advance our understanding of how brain regions support viewpoint linkages and transformations.
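The frame-of-reference transformation that the VAEs learn implicitly from images can be written down directly for point landmarks: translate and rotate global top-down coordinates into egocentric distance and bearing. The landmark layout below is invented for illustration.

```python
import numpy as np

# Three landmarks at known top-down (world-frame) positions; values invented.
landmarks = np.array([[2.0, 0.0],
                      [0.0, 3.0],
                      [-1.0, -1.0]])

def first_person_view(pos, heading):
    """Map a global state (position, heading) to egocentric observations."""
    rel = landmarks - pos                                  # offsets in the world frame
    dist = np.linalg.norm(rel, axis=1)                     # view-invariant distances
    bearing = np.arctan2(rel[:, 1], rel[:, 0]) - heading   # rotate into the agent's frame
    # Wrap bearings into (-pi, pi].
    return dist, np.mod(bearing + np.pi, 2 * np.pi) - np.pi

dist, bearing = first_person_view(np.array([0.0, 0.0]), 0.0)
print(dist, bearing)
```

The study's interesting finding is which latent variables a network invents to carry this computation (place-like versus head-direction-like codes, depending on the direction of the mapping), not the geometry itself, which is fixed.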
106
Khona M, Fiete IR. Attractor and integrator networks in the brain. Nat Rev Neurosci 2022; 23:744-766. [DOI: 10.1038/s41583-022-00642-0]
107
Sawant Y, Kundu JN, Radhakrishnan VB, Sridharan D. A Midbrain Inspired Recurrent Neural Network Model for Robust Change Detection. J Neurosci 2022; 42:8262-8283. [PMID: 36123120 PMCID: PMC9653281 DOI: 10.1523/jneurosci.0164-22.2022]
Abstract
We present a biologically inspired recurrent neural network (RNN) that efficiently detects changes in natural images. The model features sparse, topographic connectivity (st-RNN), closely modeled on the circuit architecture of a "midbrain attention network." We deployed the st-RNN in a challenging change blindness task, in which changes must be detected in a discontinuous sequence of images. Compared with a conventional RNN, the st-RNN learned 9x faster and achieved state-of-the-art performance with 15x fewer connections. An analysis of low-dimensional dynamics revealed putative circuit mechanisms, including a critical role for a global inhibitory (GI) motif, for successful change detection. The model reproduced key experimental phenomena, including midbrain neurons' sensitivity to dynamic stimuli, neural signatures of stimulus competition, as well as hallmark behavioral effects of midbrain microstimulation. Finally, the model accurately predicted human gaze fixations in a change blindness experiment, surpassing state-of-the-art saliency-based methods. The st-RNN provides a novel deep learning model for linking neural computations underlying change detection with psychophysical mechanisms.
Significance Statement: For adaptive survival, our brains must be able to accurately and rapidly detect changing aspects of our visual world. We present a novel deep learning model, a sparse, topographic recurrent neural network (st-RNN), that mimics the neuroanatomy of an evolutionarily conserved "midbrain attention network." The st-RNN achieved robust change detection in challenging change blindness tasks, outperforming conventional RNN architectures. The model also reproduced hallmark experimental phenomena, both neural and behavioral, reported in seminal midbrain studies. Lastly, the st-RNN outperformed state-of-the-art models at predicting human gaze fixations in a laboratory change blindness experiment. Our deep learning model may provide important clues about key mechanisms by which the brain efficiently detects changes.
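A minimal sketch of the two architectural ingredients named above, sparse topographic connectivity and a global inhibitory (GI) motif, using an invented 1-D map and gains rather than the paper's trained network:

```python
import numpy as np

def topographic_mask(n, radius):
    # Units on a 1-D retinotopic "map": connections allowed only within a local radius.
    idx = np.arange(n)
    return (np.abs(idx[:, None] - idx[None, :]) <= radius).astype(float)

n, radius = 64, 3
mask = topographic_mask(n, radius)

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(n, n)) * mask       # sparse, topographic recurrence

def step(x, inp, gi_gain=0.2):
    # One recurrent update with a GI motif: a single pooled inhibitory
    # signal subtracted uniformly from every unit.
    r = np.maximum(x, 0)
    gi = gi_gain * r.sum()
    return np.tanh(W @ r - gi + inp)

x = step(np.zeros(n), 0.1 * np.ones(n))
dense_connections = n * n
sparse_connections = int(mask.sum())
print(sparse_connections, dense_connections)
```

Even in this toy version, the topographic mask cuts the recurrent connection count roughly tenfold relative to a dense RNN, which is the structural source of the parameter savings the abstract reports.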
Affiliation(s)
- Yash Sawant
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- Jogendra Nath Kundu
- Department of Computational and Data Sciences, Indian Institute of Science, Bangalore 560012, India
- Devarajan Sridharan
- Centre for Neuroscience, Indian Institute of Science, Bangalore 560012, India
- Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560012, India
108
Keogh C, FitzGerald JJ. Decomposition into dynamic features reveals a conserved temporal structure in hand kinematics. iScience 2022; 25:105428. [PMID: 36388974 PMCID: PMC9641230 DOI: 10.1016/j.isci.2022.105428]
Abstract
The human hand is a unique and highly complex effector. The ability to describe hand kinematics with a small number of features suggests that complex hand movements are composed of combinations of simpler movements. This would greatly simplify the neural control of hand movements. If such movement primitives exist, a dimensionality reduction approach designed to exploit these features should outperform existing methods. We developed a deep neural network to capture the temporal dynamics of movements and demonstrate that the features learned allow accurate representation of functional hand movements using lower-dimensional representations than previously reported. We show that these temporal features are highly conserved across individuals and can interpolate previously unseen movements, indicating that they capture the intrinsic structure of hand movements. These results indicate that functional hand movements are defined by a low-dimensional basis set of movement primitives with important temporal dynamics and that these features are common across individuals.
Highlights:
- Hand movements are composed of a low-dimensional set of movement primitives
- Primitive movements have an important temporal component
- Spatiotemporal movement primitives are conserved across individuals
- New complex movements can be flexibly reconstructed using these primitives
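As a linear stand-in for the paper's deep-network decomposition, the sketch below builds synthetic "movements" from a few spatiotemporal primitives and shows that an SVD basis of the same size reconstructs a held-out movement. All dimensions and data are invented; the paper's point is that a learned nonlinear temporal basis does this with even fewer dimensions than linear methods.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hand data: 20 joints x 50 time steps, movements built from 3 latent
# spatiotemporal primitives (the structure the paper argues exists).
n_joints, n_t, n_primitives, n_moves = 20, 50, 3, 100
primitives = rng.normal(size=(n_primitives, n_joints * n_t))
coeffs = rng.normal(size=(n_moves, n_primitives))
movements = coeffs @ primitives + 0.01 * rng.normal(size=(n_moves, n_joints * n_t))

# Linear decomposition: SVD of the flattened joint-angle trajectories yields
# spatiotemporal basis "features".
U, S, Vt = np.linalg.svd(movements - movements.mean(0), full_matrices=False)
explained = np.cumsum(S ** 2) / np.sum(S ** 2)

# A basis of the true size reconstructs a previously unseen movement.
basis = Vt[:n_primitives]
new_move = rng.normal(size=n_primitives) @ primitives
centered = new_move - movements.mean(0)
recon = (centered @ basis.T) @ basis + movements.mean(0)
err = float(np.linalg.norm(recon - new_move) / np.linalg.norm(new_move))
print(explained[n_primitives - 1], err)
```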
109
Machado TA, Kauvar IV, Deisseroth K. Multiregion neuronal activity: the forest and the trees. Nat Rev Neurosci 2022; 23:683-704. [PMID: 36192596 PMCID: PMC10327445 DOI: 10.1038/s41583-022-00634-0]
Abstract
The past decade has witnessed remarkable advances in the simultaneous measurement of neuronal activity across many brain regions, enabling fundamentally new explorations of the brain-spanning cellular dynamics that underlie sensation, cognition and action. These recently developed multiregion recording techniques have provided many experimental opportunities, but thoughtful consideration of methodological trade-offs is necessary, especially regarding field of view, temporal acquisition rate and ability to guarantee cellular resolution. When applied in concert with modern optogenetic and computational tools, multiregion recording has already made possible fundamental biological discoveries - in part via the unprecedented ability to perform unbiased neural activity screens for principles of brain function, spanning dozens of brain areas and from local to global scales.
Affiliation(s)
- Timothy A Machado
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Isaac V Kauvar
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Karl Deisseroth
- Department of Bioengineering, Stanford University, Stanford, CA, USA.
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA.
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
110
Valeriani D, Santoro F, Ienca M. The present and future of neural interfaces. Front Neurorobot 2022; 16:953968. [PMID: 36304780 PMCID: PMC9592849 DOI: 10.3389/fnbot.2022.953968]
Abstract
The 2020s will likely witness an unprecedented development and deployment of neurotechnologies for human rehabilitation, personalized use, and cognitive or other enhancement. New materials and algorithms are already enabling active brain monitoring and are allowing the development of biohybrid and neuromorphic systems that can adapt to the brain. Novel brain-computer interfaces (BCIs) have been proposed to tackle a variety of enhancement and therapeutic challenges, from improving decision-making to modulating mood disorders. While these BCIs have generally been developed in an open-loop modality to optimize their internal neural decoders, this decade will increasingly witness their validation in closed-loop systems that are able to continuously adapt to the user's mental states. Therefore, a proactive ethical approach is needed to ensure that these new technological developments go hand in hand with the development of a sound ethical framework. In this perspective article, we summarize recent developments in neural interfaces, ranging from neurohybrid synapses to closed-loop BCIs, and thereby identify the most promising macro-trends in BCI research, such as simulating vs. interfacing the brain, brain recording vs. brain stimulation, and hardware vs. software technology. Particular attention is devoted to central nervous system interfaces, especially those with application in healthcare and human enhancement. Finally, we critically assess the possible futures of neural interfacing and analyze the short- and long-term implications of such neurotechnologies.
Affiliation(s)
- Francesca Santoro
- Institute for Biological Information Processing - Bioelectronics, IBI-3, Forschungszentrum Juelich, Juelich, Germany
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Marcello Ienca
- College of Humanities, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Correspondence: Marcello Ienca
111
Zhang YJ, Yu ZF, Liu JK, Huang TJ. Neural Decoding of Visual Information Across Different Neural Recording Modalities and Approaches. Machine Intelligence Research 2022. [PMCID: PMC9283560 DOI: 10.1007/s11633-022-1335-2]
Abstract
Vision plays a peculiar role in intelligence. Visual information, forming a large part of the sensory information, is fed into the human brain to formulate various types of cognition and behaviours that make humans intelligent agents. Recent advances have led to the development of brain-inspired algorithms and models for machine vision. One of the key components of these methods is the utilization of the computational principles underlying biological neurons. Additionally, advanced experimental neuroscience techniques have generated different types of neural signals that carry essential visual information. Thus, there is a high demand for mapping out functional models for reading out visual information from neural signals. Here, we briefly review recent progress on this issue with a focus on how machine learning techniques can help in the development of models for contending with various types of neural signals, from fine-scale neural spikes and single-cell calcium imaging to coarse-scale electroencephalography (EEG) and functional magnetic resonance imaging recordings of brain signals.
112
Awasthi P, Lin TH, Bae J, Miller LE, Danziger ZC. Validation of a non-invasive, real-time, human-in-the-loop model of intracortical brain-computer interfaces. J Neural Eng 2022; 19:056038. [PMID: 36198278 PMCID: PMC9855658 DOI: 10.1088/1741-2552/ac97c3]
Abstract
Objective: Despite the tremendous promise of invasive brain-computer interfaces (iBCIs), the associated study costs, risks, and ethical considerations limit the opportunity to develop and test the algorithms that decode neural activity into a user's intentions. Our goal was to address this challenge by designing an iBCI model capable of testing many human subjects in closed loop. Approach: We developed an iBCI model that uses artificial neural networks (ANNs) to translate human finger movements into realistic motor cortex firing patterns, which can then be decoded in real time. We call the model the joint angle BCI, or jaBCI. jaBCI allows readily recruited, healthy subjects to perform closed-loop iBCI tasks using any neural decoder, preserving subjects' control-relevant short-latency error correction and learning dynamics. Main results: We validated jaBCI offline through emulated neuron firing statistics, confirming that emulated neural signals have firing rates, low-dimensional PCA geometry, and rotational jPCA dynamics quite similar to those of the actual neurons (recorded in monkey M1) on which we trained the ANN. We also tested jaBCI in closed-loop experiments, our single study examining roughly as many subjects (n = 25) as have been tested worldwide with iBCIs. Performance was consistent with that of paralyzed human iBCI users with implanted intracortical electrodes. jaBCI allowed us to imitate the experimental protocols (e.g., the same velocity Kalman filter decoder and center-out task) and compute the same seven behavioral measures used in three critical studies. Significance: These encouraging results suggest the jaBCI's real-time firing rate emulation is a useful means to provide statistically robust sample sizes for rapid prototyping and optimization of decoding algorithms, the study of bidirectional learning in iBCIs, and improving iBCI control.
Affiliation(s)
- Peeyush Awasthi
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America
- Tzu-Hsiang Lin
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America
- Jihye Bae
- Department of Electrical and Computer Engineering, University of Kentucky, Lexington, KY, United States
- Lee E Miller
- Department of Neuroscience, Physical Medicine, and Rehabilitation, Northwestern University, Chicago, IL, United States
- Zachary C Danziger
- Department of Biomedical Engineering, Florida International University, Miami, FL, United States of America
- Author to whom any correspondence should be addressed
113
Sylwestrak EL, Jo Y, Vesuna S, Wang X, Holcomb B, Tien RH, Kim DK, Fenno L, Ramakrishnan C, Allen WE, Chen R, Shenoy KV, Sussillo D, Deisseroth K. Cell-type-specific population dynamics of diverse reward computations. Cell 2022; 185:3568-3587.e27. [PMID: 36113428 PMCID: PMC10387374 DOI: 10.1016/j.cell.2022.08.019]
Abstract
Computational analysis of cellular activity has developed largely independently of modern transcriptomic cell typology, but integrating these approaches may be essential for full insight into cellular-level mechanisms underlying brain function and dysfunction. Applying this approach to the habenula (a structure with diverse, intermingled molecular, anatomical, and computational features), we identified encoding of reward-predictive cues and reward outcomes in distinct genetically defined neural populations, including TH+ cells and Tac1+ cells. Data from genetically targeted recordings were used to train an optimized nonlinear dynamical systems model and revealed activity dynamics consistent with a line attractor. High-density, cell-type-specific electrophysiological recordings and optogenetic perturbation provided supporting evidence for this model. Reverse-engineering predicted how Tac1+ cells might integrate reward history, which was complemented by in vivo experimentation. This integrated approach describes a process by which data-driven computational models of population activity can generate and frame actionable hypotheses for cell-type-specific investigation in biological systems.
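The line-attractor dynamics inferred from the habenular recordings can be illustrated with a two-unit linear sketch (the direction and gains are invented): activity along one direction has eigenvalue 1 and so integrates its inputs, for example a reward history, while activity off the line decays.

```python
import numpy as np

# Recurrent weights with a single eigenvalue of 1 along `direction`
# and eigenvalue 0.5 orthogonal to it.
direction = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(direction, direction)          # projector onto the attractor line
W = P + 0.5 * (np.eye(2) - P)

# Drive the network with "reward" pulses delivered along the attractor.
x = np.zeros(2)
for r in [1.0, 0.0, 1.0, 0.0, 0.0]:
    x = W @ x + r * direction

along = float(direction @ x)                # integrated reward count (persists at 2)

# Knock the state off the line; the off-attractor component decays away.
perturb = x + np.array([1.0, -1.0])         # orthogonal perturbation
for _ in range(20):
    perturb = W @ perturb
off = float(np.abs((np.eye(2) - P) @ perturb).max())
print(along, off)
```

In the paper this structure is not assumed but discovered: the trained dynamical systems model exhibits an approximate line attractor, which reverse-engineering then ties to Tac1+ cell activity.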
Affiliation(s)
- Emily L Sylwestrak
- Department of Biology, University of Oregon, Eugene, OR 97403, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA.
- YoungJu Jo
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Sam Vesuna
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA
- Xiao Wang
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Blake Holcomb
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
- Rebecca H Tien
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Doo Kyung Kim
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Lief Fenno
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA
| | - Charu Ramakrishnan
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - William E Allen
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Neurosciences Interdepartmental Program, Stanford University, Stanford, CA 94303, USA
| | - Ritchie Chen
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Krishna V Shenoy
- Department of Neurobiology, Stanford University, Stanford, CA 94303, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
| | - David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - Karl Deisseroth
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA.
| |
Collapse
|
114
|
Abe T, Kinsella I, Saxena S, Buchanan EK, Couto J, Briggs J, Kitt SL, Glassman R, Zhou J, Paninski L, Cunningham JP. Neuroscience Cloud Analysis As a Service: An open-source platform for scalable, reproducible data analysis. Neuron 2022; 110:2771-2789.e7. [PMID: 35870448 PMCID: PMC9464703 DOI: 10.1016/j.neuron.2022.06.018] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 05/06/2022] [Accepted: 06/22/2022] [Indexed: 10/17/2022]
Abstract
A key aspect of neuroscience research is the development of powerful, general-purpose data analyses that process large datasets. Unfortunately, modern data analyses have a hidden dependence upon complex computing infrastructure (e.g., software and hardware), which acts as an unaddressed deterrent to analysis users. Although existing analyses are increasingly shared as open-source software, the infrastructure and knowledge needed to deploy these analyses efficiently still pose significant barriers to use. In this work, we develop Neuroscience Cloud Analysis As a Service (NeuroCAAS): a fully automated open-source analysis platform offering automatic infrastructure reproducibility for any data analysis. We show how NeuroCAAS supports the design of simpler, more powerful data analyses and that many popular data analysis tools offered through NeuroCAAS outperform counterparts on typical infrastructure. Pairing rigorous infrastructure management with cloud resources, NeuroCAAS dramatically accelerates the dissemination and use of new data analyses for neuroscientific discovery.
Affiliation(s)
- Taiga Abe
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA
- Ian Kinsella
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- Shreya Saxena
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32607, USA
- E Kelly Buchanan
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA
- Joao Couto
- Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- John Briggs
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Sian Lee Kitt
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Ryan Glassman
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- John Zhou
- Department of Computer Science, Columbia University, New York, NY 10027, USA
- Liam Paninski
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University Medical Center, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA
- John P Cunningham
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Department of Statistics, Columbia University, New York, NY 10027, USA.
|
115
|
Gallego-Carracedo C, Perich MG, Chowdhury RH, Miller LE, Gallego JÁ. Local field potentials reflect cortical population dynamics in a region-specific and frequency-dependent manner. eLife 2022; 11:73155. [PMID: 35968845 PMCID: PMC9470163 DOI: 10.7554/elife.73155] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 08/02/2022] [Indexed: 11/13/2022] Open
Abstract
The spiking activity of populations of cortical neurons is well described by the dynamics of a small number of population-wide covariance patterns, the 'latent dynamics'. These latent dynamics are largely driven by the same correlated synaptic currents across the circuit that determine the generation of local field potentials (LFP). Yet, the relationship between latent dynamics and LFPs remains largely unexplored. Here, we characterised this relationship for three different regions of primate sensorimotor cortex during reaching. The correlation between latent dynamics and LFPs was frequency-dependent and varied across regions. However, for any given region, this relationship remained stable throughout the behaviour: in each of primary motor and premotor cortices, the LFP-latent dynamics correlation profile was remarkably similar between movement planning and execution. These robust associations between LFPs and neural population latent dynamics help bridge the wealth of studies reporting neural correlates of behaviour using either type of recordings.
Affiliation(s)
- Matthew G Perich
- Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
- Raeed H Chowdhury
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, United States
- Lee E Miller
- Department of Biomedical Engineering, Northwestern University, Evanston, United States
- Juan Álvaro Gallego
- Department of Bioengineering, Imperial College London, London, United Kingdom
|
116
|
Alasfour A, Gabriel P, Jiang X, Shamie I, Melloni L, Thesen T, Dugan P, Friedman D, Doyle W, Devinsky O, Gonda D, Sattar S, Wang S, Halgren E, Gilja V. Spatiotemporal dynamics of human high gamma discriminate naturalistic behavioral states. PLoS Comput Biol 2022; 18:e1010401. [PMID: 35939509 PMCID: PMC9387937 DOI: 10.1371/journal.pcbi.1010401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 08/18/2022] [Accepted: 07/18/2022] [Indexed: 11/18/2022] Open
Abstract
In analyzing the neural correlates of naturalistic and unstructured behaviors, features of neural activity that are ignored in a trial-based experimental paradigm can be more fully studied and investigated. Here, we analyze neural activity from two patients using electrocorticography (ECoG) and stereo-electroencephalography (sEEG) recordings, and reveal that multiple neural signal characteristics exist that discriminate between unstructured and naturalistic behavioral states such as “engaging in dialogue” and “using electronics”. Using the high gamma amplitude as an estimate of neuronal firing rate, we demonstrate that behavioral states in a naturalistic setting are discriminable based on long-term mean shifts, variance shifts, and differences in the specific neural activity’s covariance structure. Both the rapid and slow changes in high gamma band activity separate unstructured behavioral states. We also use Gaussian process factor analysis (GPFA) to show the existence of salient spatiotemporal features with variable smoothness in time. Further, we demonstrate that both temporally smooth and stochastic spatiotemporal activity can be used to differentiate unstructured behavioral states. This is the first attempt to elucidate how different neural signal features contain information about behavioral states collected outside the conventional experimental paradigm.
Affiliation(s)
- Abdulwahab Alasfour
- Department of Electrical Engineering, Kuwait University, Kuwait City, Kuwait
- Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
- Paolo Gabriel
- Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
- Xi Jiang
- Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Isaac Shamie
- Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Lucia Melloni
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Thomas Thesen
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Department of Biomedical Sciences, College of Medicine, University of Houston, Houston, Texas, United States of America
- Patricia Dugan
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Daniel Friedman
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Werner Doyle
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- Orin Devinsky
- Comprehensive Epilepsy Center, Department of Neurology, New York University Grossman School of Medicine, New York City, New York, United States of America
- David Gonda
- Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Rady Children’s Hospital San Diego, San Diego, California, United States of America
- Shifteh Sattar
- Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Rady Children’s Hospital San Diego, San Diego, California, United States of America
- Sonya Wang
- Rady Children’s Hospital San Diego, San Diego, California, United States of America
- Department of Neurology, University of Minnesota Medical School, Minneapolis, Minnesota, United States of America
- Eric Halgren
- Department of Neurosciences, UC San Diego, San Diego, California, United States of America
- Vikash Gilja
- Department of Electrical and Computer Engineering, UC San Diego, San Diego, California, United States of America
|
117
|
Valente A, Ostojic S, Pillow J. Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models. Neural Comput 2022; 34:1871-1892. [PMID: 35896161 DOI: 10.1162/neco_a_01522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 04/15/2022] [Indexed: 11/04/2022]
Abstract
A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
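The deterministic direction of this mapping is easy to verify numerically. The sketch below is a minimal, noiseless illustration, not the paper's general construction: a rank-R linear RNN is reproduced exactly by an R-dimensional LDS with latent z_t = Vᵀx_t, dynamics A = VᵀU, and readout C = U. The factor-of-two bound quoted in the abstract only becomes relevant once noise correlations between latent and observation equations enter.

```python
import numpy as np

rng = np.random.default_rng(0)
N, R, T = 50, 3, 20              # neurons, rank, timesteps

# Rank-R connectivity W = U V^T defines the low-rank linear RNN x_t = W x_{t-1}.
U = rng.standard_normal((N, R)) / np.sqrt(N)
V = rng.standard_normal((N, R)) / np.sqrt(N)
W = U @ V.T

x0 = rng.standard_normal(N)
x, rnn_traj = x0.copy(), []
for _ in range(T):
    x = W @ x
    rnn_traj.append(x.copy())

# Equivalent latent LDS: z_t = V^T x_t evolves by the R x R matrix A = V^T U
# and is read out through C = U, so that C z_t equals x_{t+1}.
A, C = V.T @ U, U
z, lds_traj = V.T @ x0, []
for _ in range(T):
    lds_traj.append(C @ z)
    z = A @ z

assert np.allclose(rnn_traj, lds_traj)   # identical observation trajectories
```

The check works because x_{t+1} = U Vᵀ x_t = C z_t and z_{t+1} = Vᵀ x_{t+1} = (VᵀU) z_t, so the R-dimensional latent carries all the information needed to generate the N-dimensional observations.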
Affiliation(s)
- Adrian Valente
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
- Jonathan Pillow
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
|
118
|
Inagaki HK, Chen S, Daie K, Finkelstein A, Fontolan L, Romani S, Svoboda K. Neural Algorithms and Circuits for Motor Planning. Annu Rev Neurosci 2022; 45:249-271. [PMID: 35316610 DOI: 10.1146/annurev-neuro-092021-121730] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
Affiliation(s)
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Kayvon Daie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
- Arseny Finkelstein
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
- Lorenzo Fontolan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
|
119
|
Singh MF, Cole MW, Braver TS, Ching S. Developing control-theoretic objectives for large-scale brain dynamics and cognitive enhancement. ANNUAL REVIEWS IN CONTROL 2022; 54:363-376. [PMID: 38250171 PMCID: PMC10798814 DOI: 10.1016/j.arcontrol.2022.05.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
The development of technologies for brain stimulation provides a means for scientists and clinicians to directly actuate the brain and nervous system. Brain stimulation has shown intriguing potential in terms of modifying particular symptom clusters in patients and behavioral characteristics of subjects. The stage is thus set for optimization of these techniques and the pursuit of more nuanced stimulation objectives, including the modification of complex cognitive functions such as memory and attention. Control theory and engineering will play a key role in the development of these methods, guiding computational and algorithmic strategies for stimulation. In particular, realizing this goal will require new development of frameworks that allow for controlling not only brain activity, but also latent dynamics that underlie neural computation and information processing. In the current opinion, we review recent progress in brain stimulation and outline challenges and potential research pathways associated with exogenous control of cognitive function.
Affiliation(s)
- Matthew F Singh
- Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Psychological and Brain Science, Washington University in St. Louis, St. Louis, MO 63130, USA
- Michael W Cole
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ 07102, USA
- Todd S Braver
- Psychological and Brain Science, Washington University in St. Louis, St. Louis, MO 63130, USA
- ShiNung Ching
- Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
|
120
|
Fleig P, Nemenman I. Statistical properties of large data sets with linear latent features. Phys Rev E 2022; 106:014102. [PMID: 35974629 DOI: 10.1103/physreve.106.014102] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 05/23/2022] [Indexed: 06/15/2023]
Abstract
Analytical understanding of how low-dimensional latent features reveal themselves in large-dimensional data is still lacking. We study this by defining a probabilistic linear latent features model with additive noise and by analytically and numerically computing the statistical distributions of pairwise correlations and eigenvalues of the data correlation matrix. This allows us to resolve the latent feature structure across a wide range of data regimes set by the number of recorded variables, observations, latent features, and the signal-to-noise ratio. We find a characteristic imprint of latent features in the distribution of correlations and eigenvalues and provide an analytic estimate for the boundary between signal and noise, even in the absence of a spectral gap.
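The setting the abstract describes can be sketched numerically (the sizes and noise level below are arbitrary choices for illustration, not values from the paper): with K linear latent features plus additive noise, the top K eigenvalues of the data correlation matrix separate from the noise bulk when the signal is strong.

```python
import numpy as np

rng = np.random.default_rng(1)
P, T, K, sigma = 200, 2000, 4, 0.5   # variables, observations, latent features, noise

F = rng.standard_normal((T, K))      # latent feature time series
W = rng.standard_normal((K, P))      # linear loadings of features onto variables
X = F @ W + sigma * rng.standard_normal((T, P))   # data = signal + additive noise

# Eigenvalues of the empirical correlation matrix of the P variables,
# sorted in descending order.
Z = (X - X.mean(0)) / X.std(0)
evals = np.linalg.eigvalsh(Z.T @ Z / T)[::-1]

# In this strong-signal regime the top K eigenvalues sit far above the
# noise bulk; the regimes analyzed in the paper include those where this
# gap shrinks or disappears entirely.
```

Sweeping `sigma` upward (or shrinking `T`) pushes the signal eigenvalues into the bulk, which is the boundary between signal and noise that the paper estimates analytically.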
Affiliation(s)
- Philipp Fleig
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, Georgia 30322, USA; Department of Biology, Emory University, Atlanta, Georgia 30322, USA; Initiative in Theory and Modeling of Living Systems, Atlanta, Georgia 30322, USA
|
121
|
A hybrid autoencoder framework of dimensionality reduction for brain-computer interface decoding. Comput Biol Med 2022; 148:105871. [DOI: 10.1016/j.compbiomed.2022.105871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 06/20/2022] [Accepted: 07/09/2022] [Indexed: 11/19/2022]
|
122
|
Suhaimi A, Lim AWH, Chia XW, Li C, Makino H. Representation learning in the artificial and biological neural networks underlying sensorimotor integration. SCIENCE ADVANCES 2022; 8:eabn0984. [PMID: 35658033 PMCID: PMC9166289 DOI: 10.1126/sciadv.abn0984] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Accepted: 04/18/2022] [Indexed: 06/15/2023]
Abstract
The integration of deep learning and theories of reinforcement learning (RL) is a promising avenue to explore novel hypotheses on reward-based learning and decision-making in humans and other animals. Here, we trained deep RL agents and mice in the same sensorimotor task with high-dimensional state and action space and studied representation learning in their respective neural networks. Evaluation of thousands of neural network models with extensive hyperparameter search revealed that learning-dependent enrichment of state-value and policy representations of the task-performance-optimized deep RL agent closely resembled neural activity of the posterior parietal cortex (PPC). These representations were critical for the task performance in both systems. PPC neurons also exhibited representations of the internally defined subgoal, a feature of deep RL algorithms postulated to improve sample efficiency. Such striking resemblance between the artificial and biological networks and their functional convergence in sensorimotor integration offers new opportunities to better understand respective intelligent systems.
|
123
|
Saxena S, Russo AA, Cunningham J, Churchland MM. Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity. eLife 2022; 11:67620. [PMID: 35621264 PMCID: PMC9197394 DOI: 10.7554/elife.67620] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/26/2022] [Indexed: 12/02/2022] Open
Abstract
Learned movements can be skillfully performed at different paces. What neural strategies produce this flexibility? Can they be predicted and understood by network modeling? We trained monkeys to perform a cycling task at different speeds, and trained artificial recurrent networks to generate the empirical muscle-activity patterns. Network solutions reflected the principle that smooth well-behaved dynamics require low trajectory tangling. Network solutions had a consistent form, which yielded quantitative and qualitative predictions. To evaluate predictions, we analyzed motor cortex activity recorded during the same task. Responses supported the hypothesis that the dominant neural signals reflect not muscle activity, but network-level strategies for generating muscle activity. Single-neuron responses were better accounted for by network activity than by muscle activity. Similarly, neural population trajectories shared their organization not with muscle trajectories, but with network solutions. Thus, cortical activity could be understood based on the need to generate muscle activity via dynamics that allow smooth, robust control over movement speed.
Affiliation(s)
- Shreya Saxena
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States; Department of Statistics, Columbia University, New York, United States
- Abigail A Russo
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Department of Neuroscience, Columbia University, New York, United States
- John Cunningham
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Center for Theoretical Neuroscience, Columbia University, New York, United States; Department of Statistics, Columbia University, New York, United States
- Mark M Churchland
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Grossman Center for the Statistics of Mind, Columbia University, New York, United States; Department of Neuroscience, Columbia University, New York, United States; Kavli Institute for Brain Science, Columbia University, New York, United States
|
124
|
Fang H, Yang Y. Designing and Validating a Robust Adaptive Neuromodulation Algorithm for Closed-Loop Control of Brain States. J Neural Eng 2022; 19. [PMID: 35576912 DOI: 10.1088/1741-2552/ac7005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 05/16/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Neuromodulation systems that use closed-loop brain stimulation to control brain states can provide new therapies for brain disorders. To date, closed-loop brain stimulation has largely used linear time-invariant controllers. However, nonlinear time-varying brain network dynamics and external disturbances can appear during real-time stimulation, collectively leading to real-time model uncertainty. Real-time model uncertainty can degrade the performance or even cause instability of time-invariant controllers. Three problems need to be resolved to enable accurate and stable control under model uncertainty. First, an adaptive controller is needed to track the model uncertainty. Second, the adaptive controller additionally needs to be robust to noise and disturbances. Third, theoretical analyses of stability and robustness are needed as prerequisites for stable operation of the controller in practical applications. APPROACH We develop a robust adaptive neuromodulation algorithm that solves the above three problems. First, we develop a state-space brain network model that explicitly includes nonlinear terms of real-time model uncertainty and design an adaptive controller to track and cancel the model uncertainty. Second, to improve the robustness of the adaptive controller, we design two linear filters to increase steady-state control accuracy and reduce sensitivity to high-frequency noise and disturbances. Third, we conduct theoretical analyses to prove the stability of the neuromodulation algorithm and establish a trade-off between stability and robustness, which we further use to optimize the algorithm design. Finally, we validate the algorithm using comprehensive Monte Carlo simulations that span a broad range of model nonlinearity, uncertainty, and complexity. 
MAIN RESULTS The robust adaptive neuromodulation algorithm accurately tracks various types of target brain state trajectories, enables stable and robust control, and significantly outperforms state-of-the-art neuromodulation algorithms. SIGNIFICANCE Our algorithm has implications for future designs of precise, stable, and robust closed-loop brain stimulation systems to treat brain disorders and facilitate brain functions.
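The "track and cancel the model uncertainty" idea can be conveyed with a deliberately simplified scalar example. Everything below (the plant, the drift, the gains) is invented for illustration and omits the paper's filters and stability proofs: a certainty-equivalence controller whose gain estimate is updated by a gradient rule keeps the output near a target even as the true gain drifts.

```python
import numpy as np

rng = np.random.default_rng(2)
T, lr, r = 2000, 0.2, 1.0            # steps, adaptation rate, target state

theta_hat, errs = 1.0, []            # initial (wrong) estimate of the plant gain
for t in range(T):
    theta = 2.0 + 0.5 * np.sin(2 * np.pi * t / 1000)  # slowly drifting true gain
    u = r / theta_hat                # certainty-equivalence stimulation input
    y = theta * u + 0.01 * rng.standard_normal()      # noisy scalar "brain state"
    e = y - r                        # tracking error
    theta_hat += lr * e * u          # gradient step: raise estimate when output overshoots
    errs.append(e)

# After the initial transient, the error stays small despite the drift;
# a fixed (time-invariant) gain estimate would leave a persistent error.
```

A time-invariant controller built from the initial estimate `theta_hat = 1.0` would track poorly here, which is the degradation-under-uncertainty problem the paper's adaptive design addresses.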
Affiliation(s)
- Hao Fang
- University of Central Florida, 4353 Scorpius St., Orlando, Florida 32816-2368, USA
- Yuxiao Yang
- Department of Electrical and Computer Engineering, University of Central Florida, 4353 Scorpius St., Orlando, Florida 32816-2368, USA
|
125
|
Schneider A, Zimmermann C, Alyahyay M, Steenbergen F, Brox T, Diester I. 3D pose estimation enables virtual head fixation in freely moving rats. Neuron 2022; 110:2080-2093.e10. [DOI: 10.1016/j.neuron.2022.04.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 01/13/2022] [Accepted: 04/18/2022] [Indexed: 10/18/2022]
|
126
|
Lin A, Witvliet D, Hernandez-Nunez L, Linderman SW, Samuel ADT, Venkatachalam V. Imaging whole-brain activity to understand behavior. NATURE REVIEWS. PHYSICS 2022; 4:292-305. [PMID: 37409001 PMCID: PMC10320740 DOI: 10.1038/s42254-022-00430-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 01/25/2022] [Indexed: 07/07/2023]
Abstract
The brain evolved to produce behaviors that help an animal inhabit the natural world. During natural behaviors, the brain is engaged in many levels of activity from the detection of sensory inputs to decision-making to motor planning and execution. To date, most brain studies have focused on small numbers of neurons that interact in limited circuits. This allows analyzing individual computations or steps of neural processing. During behavior, however, brain activity must integrate multiple circuits in different brain regions. The activities of different brain regions are not isolated, but may be contingent on one another. Coordinated and concurrent activity within and across brain areas is organized by (1) sensory information from the environment, (2) the animal's internal behavioral state, and (3) recurrent networks of synaptic and non-synaptic connectivity. Whole-brain recording with cellular resolution provides a new opportunity to dissect the neural basis of behavior, but whole-brain activity is also mutually contingent on behavior itself. This is especially true for natural behaviors like navigation, mating, or hunting, which require dynamic interaction between the animal, its environment, and other animals. In such behaviors, the sensory experience of an unrestrained animal is actively shaped by its movements and decisions. Many of the signaling and feedback pathways that an animal uses to guide behavior only occur in freely moving animals. Recent technological advances have enabled whole-brain recording in small behaving animals including nematodes, flies, and zebrafish. These whole-brain experiments capture neural activity with cellular resolution spanning sensory, decision-making, and motor circuits, and thereby demand new theoretical approaches that integrate brain dynamics with behavioral dynamics. 
Here, we review the experimental and theoretical methods that are being employed to understand animal behavior and whole-brain activity, and the opportunities for physics to contribute to this emerging field of systems neuroscience.
Collapse
Affiliation(s)
- Albert Lin
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Center for the Physics of Biological Function, Princeton University, Princeton, NJ, USA
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
| | - Daniel Witvliet
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Luis Hernandez-Nunez
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
| | - Scott W Linderman
- Department of Statistics, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
| | - Aravinthan D T Samuel
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
| | - Vivek Venkatachalam
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Northeastern University, Boston, MA, USA
| |
Collapse
|
127
|
|
128
|
Wimalasena LN, Braun J, Keshtkaran MR, Hofmann D, Gallego JÁ, Alessandro C, Tresch M, Miller LE, Pandarinath C. Estimating muscle activation from EMG using deep learning-based dynamical systems models. J Neural Eng 2022; 19. [PMID: 35366649 DOI: 10.1088/1741-2552/ac6369] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 04/01/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE To study the neural control of movement, it is often necessary to estimate how muscles are activated across a variety of behavioral conditions. One approach is to try extracting the underlying neural command signal to muscles by applying latent variable modeling methods to electromyographic (EMG) recordings. However, estimating the latent command signal that underlies muscle activation is challenging due to its complex relation with recorded EMG signals. Common approaches estimate each muscle activation independently or require manual tuning of model hyperparameters to preserve behaviorally-relevant features. APPROACH Here, we adapted AutoLFADS, a large-scale, unsupervised deep learning approach originally designed to de-noise cortical spiking data, to estimate muscle activation from multi-muscle EMG signals. AutoLFADS uses recurrent neural networks (RNNs) to model the spatial and temporal regularities that underlie multi-muscle activation. MAIN RESULTS We first tested AutoLFADS on muscle activity from the rat hindlimb during locomotion and found that it dynamically adjusts its frequency response characteristics across different phases of behavior. The model produced single-trial estimates of muscle activation that improved prediction of joint kinematics as compared to low-pass or Bayesian filtering. We also applied AutoLFADS to monkey forearm muscle activity recorded during an isometric wrist force task. AutoLFADS uncovered previously uncharacterized high-frequency oscillations in the EMG that enhanced the correlation with measured force. The AutoLFADS-inferred estimates of muscle activation were also more closely correlated with simultaneously-recorded motor cortical activity than were other tested approaches. SIGNIFICANCE This method leverages dynamical systems modeling and artificial neural networks to provide estimates of muscle activation for multiple muscles. 
Ultimately, the approach can be used for further studies of multi-muscle coordination and its control by upstream brain areas.
Collapse
Affiliation(s)
- Lahiru Neth Wimalasena
- Biomedical Engineering, Emory University, 101 Woodruff Circle NE, Atlanta, Georgia, 30322-1007, UNITED STATES
| | - Jonas Braun
- Electrical and Computer Engineering, Technical University of Munich, Arcisstraße 21, Munchen, Bayern, 80333, GERMANY
| | - Mohammad Reza Keshtkaran
- Biomedical Engineering, Emory University, 101 Woodruff Circle NE, Atlanta, Georgia, 30322-1007, UNITED STATES
| | - David Hofmann
- Physics, Emory University, Math & Science Center, 400 Dowman Drive, Atlanta, Georgia, 30322-1007, UNITED STATES
| | - Juan Álvaro Gallego
- Physiology, Northwestern University Feinberg School of Medicine, 303 East Chicago Ave, Chicago, Illinois, 60611-3008, UNITED STATES
| | - Cristiano Alessandro
- Physiology, Northwestern University Feinberg School of Medicine, 303 East Chicago Ave, Chicago, Illinois, 60611-3008, UNITED STATES
| | - Matthew Tresch
- Physiology, Northwestern University Feinberg School of Medicine, 303 East Chicago Ave, Chicago, Illinois, 60611-3008, UNITED STATES
| | - Lee E Miller
- Physiology, Northwestern University Feinberg School of Medicine, 303 East Chicago Ave, Chicago, Illinois, 60611-3008, UNITED STATES
| | - Chethan Pandarinath
- Biomedical Engineering, Emory University, 101 Woodruff Circle NE, Atlanta, Georgia, 30322-1007, UNITED STATES
| |
Collapse
|
129
|
Pandarinath C, Bensmaia SJ. The science and engineering behind sensitized brain-controlled bionic hands. Physiol Rev 2022; 102:551-604. [PMID: 34541898 PMCID: PMC8742729 DOI: 10.1152/physrev.00034.2020] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 09/07/2021] [Accepted: 09/13/2021] [Indexed: 12/13/2022] Open
Abstract
Advances in our understanding of brain function, along with the development of neural interfaces that allow for the monitoring and activation of neurons, have paved the way for brain-machine interfaces (BMIs), which harness neural signals to reanimate the limbs via electrical activation of the muscles or to control extracorporeal devices, thereby bypassing the muscles and senses altogether. BMIs consist of reading out motor intent from the neuronal responses monitored in motor regions of the brain and executing intended movements with bionic limbs, reanimated limbs, or exoskeletons. BMIs also allow for the restoration of the sense of touch by electrically activating neurons in somatosensory regions of the brain, thereby evoking vivid tactile sensations and conveying feedback about object interactions. In this review, we discuss the neural mechanisms of motor control and somatosensation in able-bodied individuals and describe approaches to use neuronal responses as control signals for movement restoration and to activate residual sensory pathways to restore touch. Although the focus of the review is on intracortical approaches, we also describe alternative signal sources for control and noninvasive strategies for sensory restoration.
Collapse
Affiliation(s)
- Chethan Pandarinath
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia
- Department of Neurosurgery, Emory University, Atlanta, Georgia
| | - Sliman J Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois
- Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, University of Chicago, Chicago, Illinois
| |
Collapse
|
130
|
Howland JG, Ito R, Lapish CC, Villaruel FR. The rodent medial prefrontal cortex and associated circuits in orchestrating adaptive behavior under variable demands. Neurosci Biobehav Rev 2022; 135:104569. [PMID: 35131398 PMCID: PMC9248379 DOI: 10.1016/j.neubiorev.2022.104569] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 12/17/2021] [Accepted: 02/01/2022] [Indexed: 11/28/2022]
Abstract
Emerging evidence implicates rodent medial prefrontal cortex (mPFC) in tasks requiring adaptation of behavior to changing information from external and internal sources. However, the computations within mPFC and subsequent outputs that determine behavior are incompletely understood. We review the involvement of mPFC subregions, and their projections to the striatum and amygdala in two broad types of tasks in rodents: 1) appetitive and aversive Pavlovian and operant conditioning tasks that engage mPFC-striatum and mPFC-amygdala circuits, and 2) foraging-based tasks that require decision making to optimize reward. We find support for region-specific function of the mPFC, with dorsal mPFC and its projections to the dorsomedial striatum supporting action control with higher cognitive demands, and ventral mPFC engagement in translating affective signals into behavior via discrete projections to the ventral striatum and amygdala. However, we also propose that defined mPFC subdivisions operate as a functional continuum rather than segregated functional units, with crosstalk that allows distinct subregion-specific inputs (e.g., internal, affective) to influence adaptive behavior supported by other subregions.
Collapse
Affiliation(s)
- John G Howland
- Department of Anatomy, Physiology, and Pharmacology, University of Saskatchewan, Saskatoon, SK, Canada.
| | - Rutsuko Ito
- Department of Psychology, University of Toronto-Scarborough, Toronto, ON, Canada.
| | - Christopher C Lapish
- Department of Psychology, Indiana University-Purdue University Indianapolis, Indianapolis, IN, USA.
| | - Franz R Villaruel
- Department of Psychology, Concordia University, Montreal, QC, Canada.
| |
Collapse
|
131
|
Śliwowski M, Martin M, Souloumiac A, Blanchart P, Aksenova T. Decoding ECoG signal into 3D hand translation using deep learning. J Neural Eng 2022; 19. [PMID: 35287119 DOI: 10.1088/1741-2552/ac5d69] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 03/14/2022] [Indexed: 12/29/2022]
Abstract
Objective. Motor brain-computer interfaces (BCIs) are a promising technology that may enable motor-impaired people to interact with their environment. BCIs would potentially compensate for arm and hand function loss, which is the top priority for individuals with tetraplegia. Designing real-time and accurate BCIs is crucial to make such devices useful, safe, and easy to use by patients in a real-life environment. Electrocorticography (ECoG)-based BCIs emerge as a good compromise between the invasiveness of the recording device and the good spatial and temporal resolution of the recorded signal. However, most ECoG signal decoders used to predict continuous hand movements are linear models. These models have a limited representational capacity and may fail to capture the relationship between ECoG signal features and continuous hand movements. Deep learning (DL) models, which are state-of-the-art in many problems, could be a solution to better capture this relationship. Approach. In this study, we tested several DL-based architectures to predict imagined 3D continuous hand translation using time-frequency features extracted from ECoG signals. The dataset used in the analysis is part of a long-term clinical trial (ClinicalTrials.gov identifier: NCT02550522) and was acquired during a closed-loop experiment with a tetraplegic subject. The proposed architectures include multilayer perceptrons, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. The accuracy of the DL-based and multilinear models was compared offline using cosine similarity. Main results. Our results show that CNN-based architectures outperform the current state-of-the-art multilinear model. The best architecture exploited the spatial correlation between neighboring electrodes with CNNs and benefited from the sequential character of the desired hand trajectory by using LSTMs.
Overall, DL increased the average cosine similarity, compared to the multilinear model, by up to 60%, from 0.189 to 0.302 and from 0.157 to 0.249 for the left and right hand, respectively. Significance. This study shows that DL-based models could increase the accuracy of BCI systems in the case of 3D hand translation prediction in a tetraplegic subject.
Collapse
Affiliation(s)
- Maciej Śliwowski
- Université Grenoble Alpes, CEA, LETI, Clinatec, F-38000 Grenoble, France; Université Paris-Saclay, CEA, List, F-91120 Palaiseau, France
| | - Matthieu Martin
- Université Grenoble Alpes, CEA, LETI, Clinatec, F-38000 Grenoble, France
| | | | | | - Tetiana Aksenova
- Université Grenoble Alpes, CEA, LETI, Clinatec, F-38000 Grenoble, France
| |
Collapse
|
132
|
Skyberg R, Tanabe S, Chen H, Cang J. Coarse-to-fine processing drives the efficient coding of natural scenes in mouse visual cortex. Cell Rep 2022; 38:110606. [PMID: 35354030 PMCID: PMC9189856 DOI: 10.1016/j.celrep.2022.110606] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 01/07/2022] [Accepted: 03/10/2022] [Indexed: 12/01/2022] Open
Abstract
The visual system processes sensory inputs sequentially, perceiving coarse information before fine details. Here we study the neural basis of coarse-to-fine processing and its computational benefits in natural vision. We find that primary visual cortical neurons in awake mice respond to natural scenes in a coarse-to-fine manner, primarily driven by individual neurons rapidly shifting their spatial frequency preference from low to high over a brief response period. This shift transforms the population response in a way that counteracts the statistical regularities of natural scenes, thereby reducing redundancy and generating a more efficient neural representation. The increase in representational efficiency does not occur in either dark-reared or anesthetized mice, which show significantly attenuated coarse-to-fine spatial processing. Collectively, these results illustrate that coarse-to-fine processing is state dependent, develops postnatally via visual experience, and provides a computational advantage by generating more efficient representations of the complex spatial statistics of ethologically relevant natural scenes. Skyberg et al. show that the visual cortex of mice processes natural scenes in a coarse-to-fine manner, driven by individual neurons' temporal dynamics. These response dynamics, which require visual experience to develop, reduce redundancy in the neural code and lead to more efficient representations of complex visual stimuli.
Collapse
Affiliation(s)
- Rolf Skyberg
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
| | - Seiji Tanabe
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
| | - Hui Chen
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
| | - Jianhua Cang
- Department of Biology and Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA.
| |
Collapse
|
133
|
Triplett MA, Goodhill GJ. Inference of Multiplicative Factors Underlying Neural Variability in Calcium Imaging Data. Neural Comput 2022; 34:1143-1169. [PMID: 35344990 DOI: 10.1162/neco_a_01492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 01/11/2022] [Indexed: 11/04/2022]
Abstract
Understanding brain function requires disentangling the high-dimensional activity of populations of neurons. Calcium imaging is an increasingly popular technique for monitoring such neural activity, but computational tools for interpreting extracted calcium signals are lacking. While there has been a substantial development of factor-analysis-type methods for neural spike train analysis, similar methods targeted at calcium imaging data are only beginning to emerge. Here we develop a flexible modeling framework that identifies low-dimensional latent factors in calcium imaging data with distinct additive and multiplicative modulatory effects. Our model includes spike-and-slab sparse priors that regularize additive factor activity and gaussian process priors that constrain multiplicative effects to vary only gradually, allowing for the identification of smooth and interpretable changes in multiplicative gain. These factors are estimated from the data using a variational expectation-maximization algorithm that requires a differentiable reparameterization of both continuous and discrete latent variables. After demonstrating our method on simulated data, we apply it to experimental data from the zebrafish optic tectum, uncovering low-dimensional fluctuations in multiplicative excitability that govern trial-to-trial variation in evoked responses.
Collapse
Affiliation(s)
- Marcus A Triplett
- Queensland Brain Institute and School of Mathematics and Physics, University of Queensland, St Lucia, QLD 4072, Australia
| | - Geoffrey J Goodhill
- Queensland Brain Institute and School of Mathematics and Physics, University of Queensland, St Lucia, QLD 4072, Australia
| |
Collapse
|
134
|
Hernández DG, Sober SJ, Nemenman I. Unsupervised Bayesian Ising Approximation for decoding neural activity and other biological dictionaries. eLife 2022; 11:68192. [PMID: 35315769 PMCID: PMC8989415 DOI: 10.7554/elife.68192] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Accepted: 03/19/2022] [Indexed: 11/13/2022] Open
Abstract
The problem of deciphering how low-level patterns (action potentials in the brain, amino acids in a protein, etc.) drive high-level biological features (sensorimotor behavior, enzymatic function) represents the central challenge of quantitative biology. The lack of general methods for doing so from the size of datasets that can be collected experimentally severely limits our understanding of the biological world. For example, in neuroscience, some sensory and motor codes have been shown to consist of precisely timed multi-spike patterns. However, the combinatorial complexity of such pattern codes has precluded the development of methods for their comprehensive analysis. Thus, just as it is hard to predict a protein's function based on its sequence, we still do not understand how to accurately predict an organism's behavior based on neural activity. Here we introduce the unsupervised Bayesian Ising Approximation (uBIA) for solving this class of problems. We demonstrate its utility in an application to neural data, detecting precisely timed spike patterns that code for specific motor behaviors in a songbird vocal system. In data recorded during singing from neurons in a vocal control region, our method detects such codewords with an arbitrary number of spikes, does so from small datasets, and accounts for dependencies in occurrences of codewords. Detecting such comprehensive motor control dictionaries can improve our understanding of skilled motor control and the neural bases of sensorimotor learning in animals. To further illustrate the utility of uBIA, we used it to identify the distinct sets of activity patterns that encode vocal motor exploration versus typical song production. Crucially, our method can be used not only for analysis of neural systems, but also for understanding the structure of correlations in other biological and nonbiological datasets.
Collapse
Affiliation(s)
- Damián G Hernández
- Department of Medical Physics, Centro Atómico Bariloche and Instituto Balseiro, Bariloche, Argentina
| | - Samuel J Sober
- Department of Biology, Emory University, Atlanta, United States
| | - Ilya Nemenman
- Department of Physics, Emory University, Atlanta, United States
| |
Collapse
|
135
|
Pinotsis DA, Miller EK. Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. Neuroimage 2022; 253:119058. [PMID: 35272022 DOI: 10.1016/j.neuroimage.2022.119058] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 01/18/2023] Open
Abstract
It is known that the exact neurons maintaining a given memory (the neural ensemble) change from trial to trial. This raises the question of how the brain achieves stability in the face of this representational drift. Here, we demonstrate that this stability emerges at the level of the electric fields that arise from neural activity. We show that electric fields carry information about working memory content. The electric fields, in turn, can act as "guard rails" that funnel higher dimensional variable neural activity along stable lower dimensional routes. We obtained the latent space associated with each memory. We then confirmed the stability of the electric field by mapping the latent space to different cortical patches (that comprise a neural ensemble) and reconstructing information flow between patches. Stable electric fields can allow latent states to be transferred between brain areas, in accord with modern engram theory.
Collapse
Affiliation(s)
- Dimitris A Pinotsis
- Centre for Mathematical Neuroscience and Psychology and Department of Psychology, City-University of London, London EC1V 0HB, United Kingdom; The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
| | - Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| |
Collapse
|
136
|
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. Appl Phys Rev 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Collapse
Affiliation(s)
| | - H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
| | | | | | | | - J. Wang
- Author to whom correspondence should be addressed.
| | - G. La Camera
- Author to whom correspondence should be addressed.
| |
Collapse
|
137
|
Xie Y, Liu YH, Constantinidis C, Zhou X. Neural Mechanisms of Working Memory Accuracy Revealed by Recurrent Neural Networks. Front Syst Neurosci 2022; 16:760864. [PMID: 35237134 PMCID: PMC8883483 DOI: 10.3389/fnsys.2022.760864] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2021] [Accepted: 01/18/2022] [Indexed: 11/17/2022] Open
Abstract
Understanding the neural mechanisms of working memory has been a long-standing neuroscience goal. Bump attractor models have been used to simulate persistent activity generated in the prefrontal cortex during working memory tasks and to study the relationship between activity and behavior. How realistic the assumptions of these models are has been a matter of debate. Here, we relied on an alternative strategy to gain insights into the computational principles behind the generation of persistent activity and on whether current models capture some universal computational principles. We trained recurrent neural networks (RNNs) to perform spatial working memory tasks and examined what aspects of RNN activity accounted for working memory performance. Furthermore, we compared activity in fully trained networks and immature networks, achieving only imperfect performance. We thus examined the relationship between the trial-to-trial variability of responses simulated by the network and different aspects of unit activity as a way of identifying the critical parameters of memory maintenance. Properties that spontaneously emerged in the artificial network strongly resembled persistent activity of prefrontal neurons. Most importantly, these included drift of network activity during the course of a trial that was causal to the behavior of the network. As a consequence, delay period firing rate and behavior were positively correlated, in strong analogy to experimental results from the prefrontal cortex. These findings reveal that delay period activity is computationally efficient in maintaining working memory, as evidenced by unbiased optimization of parameters in artificial neural networks, oblivious to the properties of prefrontal neurons.
Collapse
Affiliation(s)
- Yuanqi Xie
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
| | - Yichen Henry Liu
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
| | - Christos Constantinidis
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, United States
- Neuroscience Program, Vanderbilt University, Nashville, TN, United States
- Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
| | - Xin Zhou
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, United States
- Data Science Institute, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
138
|
Thome J, Steinbach R, Grosskreutz J, Durstewitz D, Koppe G. Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics. Hum Brain Mapp 2022; 43:681-699. [PMID: 34655259 PMCID: PMC8720197 DOI: 10.1002/hbm.25679] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 09/27/2021] [Indexed: 12/19/2022] Open
Abstract
Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function, for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35-61.66%. Multimodal classifiers outperformed unimodal classifiers, achieving accuracies of 62.85-66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underscores that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.
Collapse
Affiliation(s)
- Janine Thome
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
| | - Robert Steinbach
- Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
| | - Julian Grosskreutz
- Precision Neurology, Department of Neurology, University of Luebeck, Luebeck, Germany
| | - Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
| | - Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
| |
Collapse
|
139
|
Huang Y, Yu Z. Representation Learning for Dynamic Functional Connectivities via Variational Dynamic Graph Latent Variable Models. Entropy 2022; 24:e24020152. [PMID: 35205448 PMCID: PMC8871213 DOI: 10.3390/e24020152] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 01/14/2022] [Accepted: 01/14/2022] [Indexed: 02/04/2023]
Abstract
Latent variable models (LVMs) for neural population spikes have revealed informative low-dimensional dynamics about the neural data and have become powerful tools for analyzing and interpreting neural activity. However, these approaches are unable to determine the neurophysiological meaning of the inferred latent dynamics. On the other hand, emerging evidence suggests that dynamic functional connectivities (DFC) may be responsible for neural activity patterns underlying cognition or behavior. We are interested in studying how DFC are associated with the low-dimensional structure of neural activities. Most existing LVMs are based on a point process and fail to model evolving relationships. In this work, we introduce a dynamic graph as the latent variable and develop a Variational Dynamic Graph Latent Variable Model (VDGLVM), a representation learning model based on the variational information bottleneck framework. VDGLVM utilizes a graph generative model and a graph neural network to capture dynamic communication between nodes that is not directly accessible from the observed data. The proposed computational model provides guaranteed behavior-decoding performance and improves LVMs by associating the inferred latent dynamics with probable DFC.
140
Schroeder KE, Perkins SM, Wang Q, Churchland MM. Cortical Control of Virtual Self-Motion Using Task-Specific Subspaces. J Neurosci 2022; 42:220-239. [PMID: 34716229] [PMCID: PMC8802935] [DOI: 10.1523/jneurosci.2687-20.2021]
Abstract
Brain-machine interfaces (BMIs) for reaching have enjoyed continued performance improvements, yet there remains significant need for BMIs that control other movement classes. Recent scientific findings suggest that the intrinsic covariance structure of neural activity depends strongly on movement class, potentially necessitating different decode algorithms across classes. To address this possibility, we developed a self-motion BMI based on cortical activity as monkeys cycled a hand-held pedal to progress along a virtual track. Unlike during reaching, we found no high-variance dimensions that directly correlated with to-be-decoded variables, because no neurons had consistent correlations between their responses and kinematic variables. Yet we could decode a single variable, self-motion, by nonlinearly leveraging structure that spanned multiple high-variance neural dimensions. Resulting online BMI-control success rates approached those during manual control. These findings make two broad points regarding how to build decode algorithms that harmonize with the empirical structure of neural activity in motor cortex. First, even when decoding from the same cortical region (e.g., arm-related motor cortex), different movement classes may need to employ very different strategies. Although correlations between neural activity and hand velocity are prominent during reaching tasks, they are not a fundamental property of motor cortex and cannot be counted on to be present in general. Second, although one generally desires a low-dimensional readout, it can be beneficial to leverage a multidimensional high-variance subspace. Fully embracing this approach requires highly nonlinear approaches tailored to the task at hand, but can produce near-native levels of performance.

Significance Statement: Many brain-machine interface decoders have been constructed for controlling movements normally performed with the arm. Yet it is unclear how these will function beyond the reach-like scenarios where they were developed. Existing decoders implicitly assume that neural covariance structure, and correlations with to-be-decoded kinematic variables, will be largely preserved across tasks. We find that the correlation between neural activity and hand kinematics, a feature typically exploited when decoding reach-like movements, is essentially absent during another task performed with the arm: cycling through a virtual environment. Nevertheless, a different strategy, one focused on leveraging the highest-variance neural signals, supported high-performance real-time brain-machine interface control.
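The core decoding idea, reading out a single variable nonlinearly from a multidimensional high-variance subspace, can be illustrated on synthetic data. The rotating latent, neuron count, and noise level below are assumptions for the sketch, not the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy cycling population: each neuron is a noisy random projection of a 2-D
# rotational latent (cos/sin of the pedal phase), so no single neuron tracks
# phase by itself, but the top two principal components span the signal.
T, N = 2000, 60
phase = np.linspace(0.0, 12 * np.pi, T) % (2 * np.pi)
latent = np.column_stack([np.cos(phase), np.sin(phase)])
rates = latent @ rng.normal(size=(2, N)) + 0.3 * rng.normal(size=(T, N))

# High-variance subspace: top two PCs of the population activity.
X = rates - rates.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:2].T

# Nonlinear readout: align the PC plane to (cos, sin) by least squares,
# then take the angle. The arctan2 step is the nonlinearity.
coef, *_ = np.linalg.lstsq(pcs, latent, rcond=None)
est = pcs @ coef
est_phase = np.arctan2(est[:, 1], est[:, 0]) % (2 * np.pi)
mean_abs_err = float(np.abs(np.angle(np.exp(1j * (est_phase - phase)))).mean())
```

No single linear readout of the population can report phase around the full cycle, but the angle within the 2-D high-variance subspace recovers it, which is the spirit of the multidimensional, nonlinear strategy the abstract describes.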
Affiliation(s)
- Karen E Schroeder
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Sean M Perkins
- Zuckerman Institute, Columbia University, New York, New York
- Department of Biomedical Engineering, Columbia University, New York, New York
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, New York
- Zuckerman Institute, Columbia University, New York, New York
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, New York
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York
141
Greener JG, Kandathil SM, Moffat L, Jones DT. A guide to machine learning for biologists. Nat Rev Mol Cell Biol 2022; 23:40-55. [PMID: 34518686] [DOI: 10.1038/s41580-021-00407-0]
Abstract
The expanding scale and inherent complexity of biological data have encouraged a growing use of machine learning in biology to build informative and predictive models of the underlying biological processes. All machine learning techniques fit models to data; however, the specific methods are quite varied and can at first glance seem bewildering. In this Review, we aim to provide readers with a gentle introduction to a few key machine learning techniques, including the most recently developed and widely used techniques involving deep neural networks. We describe how different techniques may be suited to specific types of biological data, and also discuss some best practices and points to consider when one is embarking on experiments involving machine learning. Some emerging directions in machine learning methodology are also discussed.
Affiliation(s)
- Joe G Greener
- Department of Computer Science, University College London, London, UK
- Shaun M Kandathil
- Department of Computer Science, University College London, London, UK
- Lewis Moffat
- Department of Computer Science, University College London, London, UK
- David T Jones
- Department of Computer Science, University College London, London, UK
142
Sainburg T, Gentner TQ. Toward a Computational Neuroethology of Vocal Communication: From Bioacoustics to Neurophysiology, Emerging Tools and Future Directions. Front Behav Neurosci 2021; 15:811737. [PMID: 34987365] [PMCID: PMC8721140] [DOI: 10.3389/fnbeh.2021.811737]
Abstract
Recently developed methods in computational neuroethology have enabled increasingly detailed and comprehensive quantification of animal movements and behavioral kinematics. Vocal communication behavior is well poised for application of similar large-scale quantification methods in the service of physiological and ethological studies. This review describes emerging techniques that can be applied to acoustic and vocal communication signals with the goal of enabling study beyond a small number of model species. We review a range of modern computational methods for bioacoustics, signal processing, and brain-behavior mapping. Along with a discussion of recent advances and techniques, we include challenges and broader goals in establishing a framework for the computational neuroethology of vocal communication.
Affiliation(s)
- Tim Sainburg
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Center for Academic Research & Training in Anthropogeny, University of California, San Diego, La Jolla, CA, United States
- Timothy Q. Gentner
- Department of Psychology, University of California, San Diego, La Jolla, CA, United States
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, United States
- Neurobiology Section, Division of Biological Sciences, University of California, San Diego, La Jolla, CA, United States
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, CA, United States
143
Jo Y, Cho H, Park WS, Kim G, Ryu D, Kim YS, Lee M, Park S, Lee MJ, Joo H, Jo H, Lee S, Lee S, Min HS, Heo WD, Park Y. Label-free multiplexed microtomography of endogenous subcellular dynamics using generalizable deep learning. Nat Cell Biol 2021; 23:1329-1337. [PMID: 34876684] [DOI: 10.1038/s41556-021-00802-x]
Abstract
Simultaneous imaging of various facets of intact biological systems across multiple spatiotemporal scales is a long-standing goal in biology and medicine, for which progress is hindered by limits of conventional imaging modalities. Here we propose using the refractive index (RI), an intrinsic quantity governing light-matter interaction, as a means for such measurement. We show that major endogenous subcellular structures, which are conventionally accessed via exogenous fluorescence labelling, are encoded in three-dimensional (3D) RI tomograms. We decode this information in a data-driven manner, with a deep learning-based model that infers multiple 3D fluorescence tomograms from RI measurements of the corresponding subcellular targets, thereby achieving multiplexed microtomography. This approach, called RI2FL for refractive index to fluorescence, inherits the advantages of both high-specificity fluorescence imaging and label-free RI imaging. Importantly, full 3D modelling of absolute and unbiased RI improves generalization, such that the approach is applicable to a broad range of new samples without retraining to facilitate immediate applicability. The performance, reliability and scalability of this technology are extensively characterized, and its various applications within single-cell profiling at unprecedented scales (which can generate new experimentally testable hypotheses) are demonstrated.
Affiliation(s)
- YoungJu Jo
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Tomocube, Daejeon, Republic of Korea
- Departments of Applied Physics and of Biology, Stanford University, Stanford, CA, USA
- Wei Sun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Geon Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- DongHun Ryu
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Young Seo Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, KAIST, Daejeon, Republic of Korea
- Moosung Lee
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Sangwoo Park
- Gwangju Center, Korea Basic Science Institute (KBSI), Gwangju, Republic of Korea
- Mahn Jae Lee
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Graduate School of Medical Science and Engineering, KAIST, Daejeon, Republic of Korea
- Seongsoo Lee
- Gwangju Center, Korea Basic Science Institute (KBSI), Gwangju, Republic of Korea
- Sumin Lee
- Tomocube, Daejeon, Republic of Korea
- Won Do Heo
- Department of Biological Sciences, KAIST, Daejeon, Republic of Korea
- KAIST Institute for the BioCentury, KAIST, Daejeon, Republic of Korea
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, Republic of Korea
- Tomocube, Daejeon, Republic of Korea
144
Tuladhar A, Moore JA, Ismail Z, Forkert ND. Modeling Neurodegeneration in silico With Deep Learning. Front Neuroinform 2021; 15:748370. [PMID: 34867256] [PMCID: PMC8640525] [DOI: 10.3389/fninf.2021.748370]
Abstract
Deep neural networks, inspired by information processing in the brain, can achieve human-like performance for various tasks. However, research efforts to use these networks as models of the brain have primarily focused on modeling healthy brain function so far. In this work, we propose a paradigm for modeling neural diseases in silico with deep learning and demonstrate its use in modeling posterior cortical atrophy (PCA), an atypical form of Alzheimer’s disease affecting the visual cortex. We simulated PCA in deep convolutional neural networks (DCNNs) trained for visual object recognition by randomly injuring connections between artificial neurons. Results showed that injured networks progressively lost their object recognition capability. Simulated PCA impacted learned representations hierarchically, as networks lost object-level representations before category-level representations. Incorporating this paradigm in computational neuroscience will be essential for developing in silico models of the brain and neurological diseases. The paradigm can be expanded to incorporate elements of neural plasticity and to other cognitive domains such as motor control, auditory cognition, language processing, and decision making.
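The injury paradigm, randomly removing connections from a trained network and tracking the loss of recognition performance, can be sketched in miniature. The least-squares readout below is a toy stand-in for the paper's deep convolutional networks, and all sizes and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Train" a toy classifier on well-separated synthetic classes, then randomly
# zero out ("injure") a growing fraction of its connections and track accuracy.
n, d, classes = 600, 50, 4
centers = rng.normal(scale=1.5, size=(classes, d))
y = rng.integers(0, classes, size=n)
X = centers[y] + rng.normal(size=(n, d))

# One-vs-all least-squares readout: the "healthy" network weights.
W, *_ = np.linalg.lstsq(X, np.eye(classes)[y], rcond=None)

def accuracy(weights):
    return float((np.argmax(X @ weights, axis=1) == y).mean())

healthy = accuracy(W)
curve = []
for frac in [0.0, 0.25, 0.5, 0.75, 1.0]:
    survive = rng.random(W.shape) >= frac   # frac = fraction of lesioned weights
    curve.append(accuracy(W * survive))
# curve[0] is the intact network; frac = 1.0 is a complete lesion.
```

Sweeping the lesion fraction traces a degradation curve analogous to the progressive loss of object recognition the paper reports, though a deep network would additionally reveal the hierarchical (object-before-category) pattern.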
Affiliation(s)
- Anup Tuladhar
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Jasmine A Moore
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Program, University of Calgary, Calgary, AB, Canada
- Zahinoor Ismail
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Department of Community Health Sciences, University of Calgary, Calgary, AB, Canada
- Department of Psychiatry, University of Calgary, Calgary, AB, Canada
- O'Brien Institute for Public Health, University of Calgary, Calgary, AB, Canada
- Nils D Forkert
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
145
Srinath R, Ruff DA, Cohen MR. Attention improves information flow between neuronal populations without changing the communication subspace. Curr Biol 2021; 31:5299-5313.e4. [PMID: 34699782] [PMCID: PMC8665027] [DOI: 10.1016/j.cub.2021.09.076]
Abstract
Visual attention allows observers to change the influence of different parts of a visual scene on their behavior, suggesting that information can be flexibly shared between visual cortex and neurons involved in decision making. We investigated the neural substrate of flexible information routing by analyzing the activity of populations of visual neurons in the middle temporal area (MT) and oculomotor neurons in the superior colliculus (SC) while rhesus monkeys switched spatial attention. We demonstrated that attention increases the efficacy of visuomotor communication: trial-to-trial variability in SC population activity could be better predicted by the activity of the MT population (and vice versa) when attention was directed toward their joint receptive fields. Surprisingly, this improvement in prediction was not explained by changes in the dimensionality of the shared subspace or in the magnitude of local or shared pairwise noise correlations. These results lay a foundation for future theoretical and experimental studies into how visual attention can flexibly change information flow between sensory and decision neurons.
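A common way to quantify this kind of population-to-population prediction is reduced-rank regression: predict the target population from the source population under a rank constraint and see where prediction quality saturates. The sketch below uses synthetic data and illustrates the general analysis style, not the study's exact pipeline; population sizes and the true shared rank are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two populations driven by a shared low-dimensional signal. "src" plays the
# role of MT, "tgt" of SC; sizes and the true shared rank (3) are arbitrary.
trials, n_src, n_tgt, rank_true = 400, 30, 20, 3
shared = rng.normal(size=(trials, rank_true))
src = shared @ rng.normal(size=(rank_true, n_src)) + 0.5 * rng.normal(size=(trials, n_src))
tgt = shared @ rng.normal(size=(rank_true, n_tgt)) + 0.5 * rng.normal(size=(trials, n_tgt))

def reduced_rank_r2(src, tgt, rank):
    """R^2 of predicting tgt from src through a rank-constrained linear map
    (rank-r truncation of the ordinary least-squares fitted values)."""
    B, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    pred = src @ B
    mu = pred.mean(axis=0)
    U, s, Vt = np.linalg.svd(pred - mu, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
    return 1.0 - ((tgt - low) ** 2).sum() / ((tgt - tgt.mean(axis=0)) ** 2).sum()

# Prediction quality as a function of the communication-subspace dimension.
r2 = [reduced_rank_r2(src, tgt, r) for r in range(1, 8)]
# r2 should rise quickly and then saturate near the true shared rank.
```

The rank at which R^2 saturates estimates the dimensionality of the communication subspace; the study's finding is that attention raises prediction quality without changing that saturation point.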
Affiliation(s)
- Ramanujan Srinath
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Douglas A Ruff
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Marlene R Cohen
- Department of Neuroscience and Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
146
Draelos A, Gupta P, Jun NY, Sriworarat C, Pearson J. Bubblewrap: Online tiling and real-time flow prediction on neural manifolds. Advances in Neural Information Processing Systems 2021; 34:6062-6074. [PMID: 35785106] [PMCID: PMC9247712]
Abstract
While most classic studies of function in experimental neuroscience have focused on the coding properties of individual neurons, recent developments in recording technologies have resulted in an increasing emphasis on the dynamics of neural populations. This has given rise to a wide variety of models for analyzing population activity in relation to experimental variables, but direct testing of many neural population hypotheses requires intervening in the system based on current neural state, necessitating models capable of inferring neural state online. Existing approaches, primarily based on dynamical systems, require strong parametric assumptions that are easily violated in the noise-dominated regime and do not scale well to the thousands of data channels in modern experiments. To address this problem, we propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold, allowing dynamics to be approximated as a probability flow between tiles. This method can be fit efficiently using online expectation maximization, scales to tens of thousands of tiles, and outperforms existing methods when dynamics are noise-dominated or feature multi-modal transition probabilities. The resulting model can be trained at kilohertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales. It retains predictive performance throughout many time steps into the future and is fast enough to serve as a component of closed-loop causal experiments.
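The flavor of the approach, a soft tiling of a low-dimensional manifold updated online, with dynamics summarized as transitions between tiles, can be sketched as follows. This is a deliberately simplified toy, not the Bubblewrap algorithm itself; the tile count, learning rate, and isotropic-Gaussian tiles are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# K Gaussian "tiles" with streaming center updates, plus a tile-to-tile
# transition count matrix that summarizes dynamics as probability flow.
K, lam = 12, 0.05                  # number of tiles, learning rate
centers = rng.normal(size=(K, 2))
trans = np.ones((K, K))            # transition pseudo-counts

def step(x, prev_tile):
    d2 = ((centers - x) ** 2).sum(axis=1)
    resp = np.exp(-0.5 * d2)
    resp /= resp.sum()                           # soft tile assignment
    centers[:] = centers + lam * resp[:, None] * (x - centers)  # online pull toward x
    tile = int(resp.argmax())
    if prev_tile is not None:
        trans[prev_tile, tile] += 1              # record the observed transition
    return tile

# Feed a noisy circular trajectory one sample at a time.
prev = None
for t in range(2000):
    theta = 0.05 * t
    prev = step(np.array([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=2), prev)

flow = trans / trans.sum(axis=1, keepdims=True)  # per-tile next-tile distribution
```

After streaming the data, each row of `flow` is a predicted next-tile distribution given the current tile, which is the "probability flow between tiles" view of the dynamics.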
Affiliation(s)
- John Pearson
- Biostatistics & Bioinformatics, Electrical & Computer Engineering, Neurobiology, Psychology & Neuroscience, Duke University
147
Liu R, Azabou M, Dabagia M, Lin CH, Azar MG, Hengen KB, Valko M, Dyer EL. Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity. Advances in Neural Information Processing Systems 2021; 34:10587-10599. [PMID: 36467015] [PMCID: PMC9713686]
Abstract
Meaningful and simplified representations of neural activity can yield insights into how and what information is being processed within a neural circuit. However, without labels, finding representations that reveal the link between the brain and behavior can be challenging. Here, we introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE. Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state). These transformed (or augmented) views are created by dropping out neurons and jittering samples in time, which intuitively should lead the network to a representation that maintains both temporal consistency and invariance to the specific neurons used to represent the neural state. Through evaluations on both synthetic data and neural recordings from hundreds of neurons in different primate brains, we show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
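The two augmentations described, neuron dropout and temporal jitter, are easy to sketch. The tensor layout and parameter values below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(activity, drop_p=0.2, max_jitter=2):
    """Return one augmented 'view' of a trials x time x neurons tensor:
    zero out a random subset of neurons and shift the view in time
    (circularly, for simplicity)."""
    keep = rng.random(activity.shape[-1]) >= drop_p   # neuron dropout mask
    view = activity * keep
    shift = int(rng.integers(-max_jitter, max_jitter + 1))
    return np.roll(view, shift, axis=1)               # temporal jitter

# Toy spike-count tensor: 8 trials, 20 time bins, 30 neurons.
x = rng.poisson(1.0, size=(8, 20, 30)).astype(float)
view1, view2 = augment(x), augment(x)
# A Swap-VAE-style alignment loss would then pull the representations of
# view1 and view2 toward each other, encouraging invariance to the
# particular neurons and exact timing used to express the neural state.
```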
Affiliation(s)
- Ran Liu
- Project page: https://nerdslab.github.io/SwapVAE/
148
Dynamics on the manifold: Identifying computational dynamical activity from neural population recordings. Curr Opin Neurobiol 2021; 70:163-170. [PMID: 34837752] [DOI: 10.1016/j.conb.2021.10.014]
Abstract
The question of how the collective activity of neural populations gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, decision making, and motor control. It is thought that such computations are implemented through the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. At the same time, interpreting this structure in light of the computation of interest is essential for linking the time-varying activity patterns of the neural population to ongoing computational processes. Here, we review methods that aim to quantify structure in neural population recordings through a dynamical system defined in a low-dimensional latent variable space. We discuss advantages and limitations of different modelling approaches and address future challenges for the field.
149
Kalidindi HT, Cross KP, Lillicrap TP, Omrani M, Falotico E, Sabes PN, Scott SH. Rotational dynamics in motor cortex are consistent with a feedback controller. eLife 2021; 10:e67256. [PMID: 34730516] [PMCID: PMC8691841] [DOI: 10.7554/elife.67256]
Abstract
Recent studies have identified rotational dynamics in motor cortex (MC), which many assume arise from intrinsic connections in MC. However, behavioral and neurophysiological studies suggest that MC behaves like a feedback controller where continuous sensory feedback and interactions with other brain areas contribute substantially to MC processing. We investigated these apparently conflicting theories by building recurrent neural networks that controlled a model arm and received sensory feedback from the limb. Networks were trained to counteract perturbations to the limb and to reach toward spatial targets. Network activities and sensory feedback signals to the network exhibited rotational structure even when the recurrent connections were removed. Furthermore, neural recordings in monkeys performing similar tasks also exhibited rotational structure not only in MC but also in somatosensory cortex. Our results argue that rotational structure may also reflect dynamics throughout the voluntary motor system involved in online control of motor actions.
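A minimal way to test for rotational structure is to fit a linear dynamical system to a trajectory and compare the skew-symmetric (rotational) and symmetric (expansive/contractive) parts of the fitted matrix, in the spirit of jPCA-style analyses. The toy below simulates a 2-D rotation rather than using neural data; frequency, noise level, and duration are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a noisy 2-D rotation (angular frequency w), fit a linear dynamical
# system by least squares, and decompose the fitted matrix.
dt, w, T = 0.01, 2 * np.pi, 500
A_true = np.array([[0.0, -w], [w, 0.0]])
x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (A_true @ x[t]) + 0.01 * rng.normal(size=2)

# x[t+1] ~= x[t] @ M  =>  continuous-time estimate A_hat = (M^T - I) / dt.
M, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = (M.T - np.eye(2)) / dt
skew = 0.5 * (A_hat - A_hat.T)      # rotational component
symm = 0.5 * (A_hat + A_hat.T)      # expansion/contraction component
rotation_index = float(np.linalg.norm(skew) / (np.linalg.norm(skew) + np.linalg.norm(symm)))
# rotation_index near 1 indicates predominantly rotational dynamics.
```

The paper's point is that such rotational structure appears even in inputs and in somatosensory cortex, so a high rotation index by itself does not imply intrinsic recurrent dynamics.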
Affiliation(s)
- Kevin P Cross
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
- Timothy P Lillicrap
- Centre for Computation, Mathematics and Physics, University College London, London, United Kingdom
- Mohsen Omrani
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Philip N Sabes
- Department of Physiology, University of California, San Francisco, San Francisco, United States
- Stephen H Scott
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada
150
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
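The Fisher information referred to here follows directly from the tuning curves and the noise model: for independent Gaussian noise, J(theta) = f'(theta)^T Sigma^{-1} f'(theta). Below is a small numerical sketch; the tuning-curve shapes and all parameters are illustrative assumptions.

```python
import numpy as np

# A bank of von Mises-like tuning curves with independent Gaussian noise.
# With Sigma = noise_var * I, the Fisher information reduces to the sum of
# squared tuning-curve slopes divided by the noise variance.
thetas = np.linspace(-np.pi, np.pi, 361)
centers = np.linspace(-np.pi, np.pi, 16, endpoint=False)
width, gain, noise_var = 0.5, 10.0, 1.0

def f(theta):
    """Population tuning curves, one row per stimulus value."""
    return gain * np.exp((np.cos(theta[:, None] - centers) - 1.0) / width**2)

eps = 1e-4                                   # numerical derivative step
df = (f(thetas + eps) - f(thetas - eps)) / (2 * eps)
J = (df**2).sum(axis=1) / noise_var          # Fisher information per stimulus
crb_std = 1.0 / np.sqrt(J)                   # Cramer-Rao bound on decoder SD
```

As the review emphasizes, the same J (and hence the same Cramer-Rao bound) can arise from many different sets of tuned neurons, which is why tuning and representational geometry are complementary descriptions.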