1
Smoulder AL, Marino PJ, Oby ER, Snyder SE, Miyata H, Pavlovsky NP, Bishop WE, Yu BM, Chase SM, Batista AP. A neural basis of choking under pressure. Neuron 2024:S0896-6273(24)00608-1. PMID: 39270654. DOI: 10.1016/j.neuron.2024.08.012.
Abstract
Incentives tend to drive improvements in performance. But when incentives get too high, we can "choke under pressure" and underperform right when it matters most. What neural processes might lead to choking under pressure? We studied rhesus monkeys performing a challenging reaching task in which they underperformed when an unusually large "jackpot" reward was at stake, and we sought a neural mechanism that might result in that underperformance. We found that increases in reward drive neural activity during movement preparation into, and then past, a zone of optimal performance. We conclude that neural signals of reward and motor preparation interact in the motor cortex (MC) in a manner that can explain why we choke under pressure.
Affiliation(s)
- Adam L Smoulder
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Patrick J Marino
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Emily R Oby
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Sam E Snyder
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Hiroo Miyata
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Nick P Pavlovsky
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- William E Bishop
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Byron M Yu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Steven M Chase
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Aaron P Batista
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
2
Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024. PMID: 39242944. DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
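The "prioritized" and "dissociative" ideas can be sketched with a deliberately simplified linear stand-in (DPAD itself uses a multisection RNN; the data and all variable names below are our own toy construction): stage 1 keeps only the neural subspace that best predicts behavior, and stage 2 separately models the remaining, behavior-irrelevant neural variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: neural activity Y (T x n) and behavior Z (T x m) share low-dim latents.
T, n, m, r = 2000, 30, 3, 2
latent = rng.standard_normal((T, 3))
Y = latent @ rng.standard_normal((3, n)) + 0.1 * rng.standard_normal((T, n))
Z = latent[:, :r] @ rng.standard_normal((r, m)) + 0.1 * rng.standard_normal((T, m))

# Stage 1 ("prioritized"): reduced-rank regression keeps only the rank-r
# component of the neural-to-behavior mapping that best predicts behavior.
B_ols, *_ = np.linalg.lstsq(Y, Z, rcond=None)
U, s, Vt = np.linalg.svd(Y @ B_ols, full_matrices=False)
Z_hat = (U[:, :r] * s[:r]) @ Vt[:r]          # behavior predicted from that component

# Stage 2 ("dissociative"): model the remaining neural variance separately,
# here by PCA on the residual after projecting out the behavior-predictive directions.
B_rr = B_ols @ Vt[:r].T @ Vt[:r]
Q = np.linalg.svd(B_rr, full_matrices=False)[0][:, :r]
Y_resid = Y - Y @ Q @ Q.T
irrelevant_pcs = np.linalg.svd(Y_resid - Y_resid.mean(0), full_matrices=False)[2][:5]

r2 = 1 - ((Z - Z_hat) ** 2).sum() / ((Z - Z.mean(0)) ** 2).sum()
```

The two-stage ordering is the point: behavior prediction is optimized first, so behaviorally relevant dynamics are not diluted by high-variance but irrelevant neural activity.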
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
3
Tian GJ, Zhu O, Shirhatti V, Greenspon CM, Downey JE, Freedman DJ, Doiron B. Neuronal firing rate diversity lowers the dimension of population covariability. bioRxiv 2024:2024.08.30.610535. PMID: 39257801. PMCID: PMC11383671. DOI: 10.1101/2024.08.30.610535.
Abstract
Populations of neurons produce activity with two central features. First, neuronal responses are very diverse: specific stimuli or behaviors prompt some neurons to emit many action potentials, while other neurons remain relatively silent. Second, the trial-to-trial fluctuations of neuronal responses occupy a low-dimensional space, owing to significant correlations between the activity of neurons. These two features define the quality of neuronal representation. We link these two aspects of population response using a recurrent circuit model and derive the following relation: the more diverse the firing rates of neurons in a population, the lower the effective dimension of population trial-to-trial covariability. This surprising prediction is tested and validated using simultaneously recorded neuronal populations from numerous brain areas in mice, non-human primates, and the motor cortex of human participants. Using our relation, we present a theory in which a more diverse neuronal code leads to better fine discrimination performance from population activity. In line with this theory, we show that neuronal populations across the brain exhibit both more diverse mean responses and lower-dimensional fluctuations when the brain is in more heightened states of information processing. In sum, we present a key organizational principle of neuronal population response that is widely observed across the nervous system and acts to synergistically improve population representation.
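The "effective dimension" of trial-to-trial covariability is commonly quantified by the participation ratio of the covariance eigenvalues, (Σλ)²/Σλ² (one standard choice; the paper's exact measure may differ):

```python
import numpy as np

def participation_ratio(cov):
    """Effective dimension of a covariance matrix: (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(cov), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# Isotropic covariance: all n eigenvalues equal, so dimension equals n.
dim_flat = participation_ratio(np.eye(5))            # 5.0

# One dominant shared mode: dimension collapses toward 1.
dim_peaked = participation_ratio(np.diag([10.0, 0.1, 0.1, 0.1, 0.1]))
```

The measure varies continuously between 1 (all variance in one shared fluctuation mode) and n (variance spread evenly), which makes it convenient for comparing dimensionality across brain areas and states.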
4
Bashford L, Rosenthal IA, Kellis S, Bjånes D, Pejsa K, Brunton BW, Andersen RA. Neural subspaces of imagined movements in parietal cortex remain stable over several years in humans. J Neural Eng 2024; 21:046059. PMID: 39134021. PMCID: PMC11350602. DOI: 10.1088/1741-2552/ad6e19.
Abstract
Objective. A crucial goal in brain-machine interfacing is the long-term stability of neural decoding performance, ideally without regular retraining. Long-term stability has previously been demonstrated only in non-human primate experiments, and only in primary sensorimotor cortices. Here we extend previous methods to determine long-term stability in humans by identifying and aligning low-dimensional structures in neural data. Approach. Over periods of 1106 and 871 days, respectively, two participants completed an imagined center-out reaching task. The longitudinal accuracy between all day pairs was assessed by latent subspace alignment using principal components analysis and canonical correlations analysis of multi-unit intracortical recordings in different brain regions (Brodmann Area 5, the Anterior Intraparietal Area, and the junction of the postcentral and intraparietal sulcus). Main results. We show the long-term stable representation of neural activity in subspaces of intracortical recordings from higher-order association areas in humans. Significance. These results can be practically applied to significantly expand the longevity and generalizability of brain-computer interfaces. Clinical trials: NCT01849822, NCT01958086, NCT01964261.
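The alignment pipeline described — PCA within each recording day, then CCA across day pairs — can be sketched on synthetic data (a minimal illustration with our own variable names, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# A shared 2-D latent trajectory observed through different channel mixings on two "days".
T, k, n = 500, 2, 20
Lat = rng.standard_normal((T, k))
day1 = Lat @ rng.standard_normal((k, n)) + 0.05 * rng.standard_normal((T, n))
day2 = Lat @ rng.standard_normal((k, n)) + 0.05 * rng.standard_normal((T, n))

def pca(X, d):
    """Project onto the top-d principal components."""
    Xc = X - X.mean(0)
    return Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:d].T

def canonical_corrs(X, Y):
    """Canonical correlations between two latent sets (whitening via QR, then SVD)."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

Z1, Z2 = pca(day1, 2), pca(day2, 2)
corrs = canonical_corrs(Z1, Z2)   # near 1.0 when the latent subspace is stable across days
```

High canonical correlations between day-specific latent subspaces are the signature of the cross-day stability the study reports; decoders trained in one day's aligned latent space then transfer to another day.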
Affiliation(s)
- L Bashford
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
- I A Rosenthal
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- S Kellis
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- D Bjånes
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- K Pejsa
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- B W Brunton
- Department of Biology, University of Washington, Seattle, WA, USA
- R A Andersen
- Division of Biology and Biological Engineering, and T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
5
Sabatini DA, Kaufman MT. Reach-dependent reorientation of rotational dynamics in motor cortex. Nat Commun 2024; 15:7007. PMID: 39143078. PMCID: PMC11325044. DOI: 10.1038/s41467-024-51308-7.
Abstract
During reaching, neurons in motor cortex exhibit complex, time-varying activity patterns. Though single-neuron activity correlates with movement parameters, movement correlations explain neural activity only partially. Neural responses also reflect population-level dynamics thought to generate outputs. These dynamics have previously been described as "rotational," such that activity orbits in neural state space. Here, we reanalyze reaching datasets from male rhesus macaques and find two essential features that cannot be accounted for with standard dynamics models. First, the planes in which rotations occur differ for different reaches. Second, this variation in planes reflects the overall location of activity in neural state space. Our "location-dependent rotations" model fits nearly all motor cortex activity during reaching, and high-quality decoding of reach kinematics reveals a quasilinear relationship with spiking. Varying rotational planes allows motor cortex to produce richer outputs than possible under previous models. Finally, our model links representational and dynamical ideas: representation is present in the state space location, which dynamics then convert into time-varying command signals.
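Rotational population dynamics of this kind are typically characterized by fitting a linear dynamical system and inspecting the skew-symmetric (rotational) part of the dynamics matrix, in the spirit of jPCA. A toy sketch (ours, not the paper's location-dependent model) recovering the rotation frequency of a planar trajectory:

```python
import numpy as np

# Synthetic 2-D latent trajectory rotating at 1 Hz.
dt, w, T = 0.01, 2 * np.pi, 1000
t = np.arange(T) * dt
X = np.stack([np.cos(w * t), np.sin(w * t)], axis=1)

# Least-squares fit of dX/dt = A X; rotation lives in the skew-symmetric part of A.
dX = np.gradient(X, dt, axis=0)
B, *_ = np.linalg.lstsq(X, dX, rcond=None)   # solves X @ B = dX, so A = B.T
A = B.T
A_skew = 0.5 * (A - A.T)

# Imaginary eigenvalues of the skew-symmetric part give the rotation frequency.
freq_hz = np.abs(np.imag(np.linalg.eigvals(A_skew))).max() / (2 * np.pi)
```

The paper's point is that a single fixed A of this form is insufficient: the best-fit rotational plane itself changes with where activity sits in state space, which a standard linear fit like this one cannot capture.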
Affiliation(s)
- David A Sabatini
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL 60637, USA
- Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Matthew T Kaufman
- Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL 60637, USA
- Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
6
Li Y, Zhu X, Qi Y, Wang Y. Revealing unexpected complex encoding but simple decoding mechanisms in motor cortex via separating behaviorally relevant neural signals. eLife 2024; 12:RP87881. PMID: 39120996. PMCID: PMC11315449. DOI: 10.7554/eLife.87881.
Abstract
In motor cortex, behaviorally relevant neural responses are entangled with irrelevant signals, which complicates the study of encoding and decoding mechanisms. It remains unclear whether behaviorally irrelevant signals could conceal some critical truth. One solution is to accurately separate behaviorally relevant and irrelevant signals at both single-neuron and single-trial levels, but this approach remains elusive due to the unknown ground truth of behaviorally relevant signals. Therefore, we propose a framework to define, extract, and validate behaviorally relevant signals. Analyzing separated signals in three monkeys performing different reaching tasks, we found that neural responses previously considered to contain little information actually encode rich behavioral information in complex nonlinear ways. These responses are critical for neuronal redundancy and reveal that movement behaviors occupy a higher-dimensional neural space than previously expected. Surprisingly, when often-ignored neural dimensions are incorporated, behaviorally relevant signals can be decoded linearly with performance comparable to nonlinear decoding, suggesting that a linear readout may be performed in motor cortex. Our findings suggest that separating behaviorally relevant signals may help uncover more hidden cortical mechanisms.
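The claim that a linear readout can match nonlinear decoding once low-variance dimensions are included can be illustrated with a toy construction (entirely our own, not the authors' data): place a nonlinear function of a latent variable into a small-variance neural dimension, then compare linear decoders with and without that dimension.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
x1 = rng.standard_normal(T)
x2 = rng.standard_normal(T)
z = x1 + x2 ** 2                  # behavior has a nonlinear component

# A population in which the nonlinear term lives in a low-variance neural dimension.
Y = np.column_stack([3.0 * x1, 0.1 * (x2 ** 2)]) + 0.01 * rng.standard_normal((T, 2))

def linear_r2(X, z):
    """R^2 of an ordinary least-squares linear decoder (with intercept)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, z, rcond=None)
    resid = z - Xb @ w
    return 1 - (resid ** 2).sum() / ((z - z.mean()) ** 2).sum()

r2_top = linear_r2(Y[:, :1], z)   # high-variance dimension only: linear decoding fails
r2_all = linear_r2(Y, z)          # including the often-ignored dimension: near-perfect
```

Keeping only the high-variance dimension leaves the nonlinear component of behavior undecodable by a linear readout, while adding the small dimension (where the nonlinearity has already been "computed" by the population) restores linear decodability — the intuition behind the abstract's claim.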
Affiliation(s)
- Yangang Li
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Xinyun Zhu
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Yu Qi
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Affiliated Mental Health Center & Hangzhou Seventh People's Hospital and the MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University School of Medicine, Hangzhou, China
- Yueming Wang
- Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, China
- Nanhu Brain-Computer Interface Institute, Hangzhou, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- Affiliated Mental Health Center & Hangzhou Seventh People's Hospital and the MOE Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University School of Medicine, Hangzhou, China
7
McCart JD, Sedler AR, Versteeg C, Mifsud D, Rigotti-Thompson M, Pandarinath C. Diffusion-Based Generation of Neural Activity from Disentangled Latent Codes. arXiv 2024:arXiv:2407.21195v1. PMID: 39130199. PMCID: PMC11312623.
Abstract
Recent advances in recording technology have allowed neuroscientists to monitor activity from thousands of neurons simultaneously. Latent variable models are increasingly valuable for distilling these recordings into compact and interpretable representations. Here we propose a new approach to neural data analysis that leverages advances in conditional generative modeling to enable the unsupervised inference of disentangled behavioral variables from recorded neural activity. Our approach builds on InfoDiffusion, which augments diffusion models with a set of latent variables that capture important factors of variation in the data. We apply our model, called Generating Neural Observations Conditioned on Codes with High Information (GNOCCHI), to time series neural data and test its application to synthetic and biological recordings of neural activity during reaching. In comparison to a VAE-based sequential autoencoder, GNOCCHI learns higher-quality latent spaces that are more clearly structured and more disentangled with respect to key behavioral variables. These properties enable accurate generation of novel samples (unseen behavioral conditions) through simple linear traversal of the latent spaces produced by GNOCCHI. Our work demonstrates the potential of unsupervised, information-based models for the discovery of interpretable latent spaces from neural data, enabling researchers to generate high-quality samples from unseen conditions.
Affiliation(s)
- Jonathan D McCart
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Andrew R Sedler
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Domenick Mifsud
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Chethan Pandarinath
- Center for Machine Learning, Georgia Tech
- Department of Biomedical Engineering, Georgia Tech and Emory University
- Department of Neurosurgery, Emory University School of Medicine
8
Johnsen KA, Cruzado NA, Menard ZC, Willats AA, Charles AS, Markowitz JE, Rozell CJ. Bridging model and experiment in systems neuroscience with Cleo: the Closed-Loop, Electrophysiology, and Optophysiology simulation testbed. bioRxiv 2024:2023.01.27.525963. PMID: 39026717. PMCID: PMC11257437. DOI: 10.1101/2023.01.27.525963.
Abstract
Systems neuroscience has experienced an explosion of new tools for reading and writing neural activity, enabling exciting new experiments such as all-optical or closed-loop control that effect powerful causal interventions. At the same time, improved computational models are capable of reproducing behavior and neural activity with increasing fidelity. Unfortunately, these advances have drastically increased the complexity of integrating different lines of research, resulting in missed opportunities and suboptimal experiments. Experiment simulation can help bridge this gap, allowing model and experiment to better inform each other by providing a low-cost testbed for experiment design, model validation, and methods engineering. Specifically, this can be achieved by incorporating the simulation of the experimental interface into our models, but no existing tool integrates optogenetics, two-photon calcium imaging, electrode recording, and flexible closed-loop processing with neural population simulations. To address this need, we have developed Cleo: the Closed-Loop, Electrophysiology, and Optophysiology experiment simulation testbed. Cleo is a Python package enabling injection of recording and stimulation devices, as well as closed-loop control with realistic latency, into a Brian spiking neural network model. It is the only publicly available tool currently supporting two-photon and multi-opsin/wavelength optogenetics. To facilitate adoption and extension by the community, Cleo is open-source, modular, tested, and documented, and can export results to various data formats. Here we describe the design and features of Cleo, validate the output of individual components and integrated experiments, and demonstrate its utility for advancing optogenetic techniques in prospective experiments using previously published systems neuroscience models.
Affiliation(s)
- Kyle A. Johnsen
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Zachary C. Menard
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Adam A. Willats
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Adam S. Charles
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Jeffrey E. Markowitz
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
9
Tostado-Marcos P, Arneodo EM, Ostrowski L, Brown DE, Perez XA, Kadwory A, Stanwicks LL, Alothman A, Gentner TQ, Gilja V. Neural population dynamics in songbird RA and HVC during learned motor-vocal behavior. arXiv 2024:arXiv:2407.06244v1. PMID: 39040642. PMCID: PMC11261980.
Abstract
Complex, learned motor behaviors involve the coordination of large-scale neural activity across multiple brain regions, but our understanding of the population-level dynamics within different regions tied to the same behavior remains limited. Here, we investigate the neural population dynamics underlying learned vocal production in awake-singing songbirds. We use Neuropixels probes to record the simultaneous extracellular activity of populations of neurons in two regions of the vocal motor pathway. In line with observations made in non-human primates during limb-based motor tasks, we show that the population-level activity in both the premotor nucleus HVC and the motor nucleus RA is organized on low-dimensional neural manifolds upon which coordinated neural activity is well described by temporally structured trajectories during singing behavior. Both the HVC and RA latent trajectories provide relevant information to predict vocal sequence transitions between song syllables. However, the dynamics of these latent trajectories differ between regions. Our state-space models suggest a unique and continuous-over-time correspondence between the latent space of RA and vocal output, whereas the corresponding relationship for HVC exhibits a higher degree of neural variability. We then demonstrate that comparable high-fidelity reconstruction of continuous vocal outputs can be achieved from HVC and RA neural latents and spiking activity. Unlike those that use spiking activity, however, decoding models using neural latents generalize to novel sub-populations in each region, consistent with the existence of preserved manifolds that confine vocal-motor activity in HVC and RA.
Affiliation(s)
- Pablo Tostado-Marcos
- Department of Bioengineering
- Department of Electrical and Computer Engineering
- Department of Psychology
- Lauren Ostrowski
- Neurosciences Graduate Program, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Daril E Brown
- Department of Electrical and Computer Engineering
- Department of Psychology
- Adam Kadwory
- Department of Electrical and Computer Engineering
- Lauren L Stanwicks
- Neurosciences Graduate Program, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Timothy Q Gentner
- Department of Psychology
- Department of Neurobiology
- Neurosciences Graduate Program, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
- Vikash Gilja
- Department of Electrical and Computer Engineering
- Neurosciences Graduate Program, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
10
Vaccari FE, Diomedi S, De Vitis M, Filippini M, Fattori P. Similar neural states, but dissimilar decoding patterns for motor control in parietal cortex. Netw Neurosci 2024; 8:486-516. PMID: 38952818. PMCID: PMC11146678. DOI: 10.1162/netn_a_00364.
Abstract
Discrete neural states are associated with reaching movements across the fronto-parietal network. Here, a Hidden Markov Model (HMM) applied to spiking activity of the somato-motor parietal area PE revealed a sequence of states similar to those of the contiguous visuomotor areas PEc and V6A. Using a coupled clustering and decoding approach, we showed that these neural states carried spatiotemporal information regarding behaviour in all three posterior parietal areas. However, comparing decoding accuracy, PE was less informative than V6A and PEc. In addition, V6A outperformed PEc in target inference, indicating functional differences among the parietal areas. To check the consistency of these differences, we used both a supervised and an unsupervised variant of the HMM, and compared their performance with two more common classifiers, a Support Vector Machine and a Long Short-Term Memory network. The differences in decoding between areas were invariant to the algorithm used, still showing the dissimilarities found with the HMM, indicating that these dissimilarities are intrinsic to the information encoded by parietal neurons. These results highlight that, when decoding from the parietal cortex, for example in brain-machine interface implementations, attention should be paid to selecting the most suitable source of neural signals, given the great heterogeneity of this cortical sector.
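State sequences of the kind an HMM recovers here can be decoded from spike counts with the Viterbi algorithm. A self-contained toy version with Poisson emissions and two hidden states (our illustration, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spike counts from two hidden "neural states" with different firing rates.
rates = np.array([[1.0, 8.0], [8.0, 1.0]])      # state x neuron: mean counts per bin
true_states = np.repeat([0, 1, 0], 50)
counts = rng.poisson(rates[true_states])

logA = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))  # sticky transition matrix

def pois_ll(c):
    """Poisson log-likelihood of one time bin under each state (log c! dropped)."""
    return (c * np.log(rates) - rates).sum(axis=1)

# Viterbi: forward pass tracks the best log-probability path into each state...
T = len(counts)
delta = np.zeros((T, 2))
psi = np.zeros((T, 2), dtype=int)
delta[0] = np.log(0.5) + pois_ll(counts[0])
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + pois_ll(counts[t])

# ...and the backward pass reads out the most likely state sequence.
path = np.zeros(T, dtype=int)
path[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    path[t] = psi[t + 1, path[t + 1]]

accuracy = (path == true_states).mean()
```

With well-separated state firing rates the decoded path matches the true sequence almost exactly; the sticky transitions suppress spurious single-bin state flips, mirroring the discrete-state segmentations used in the study.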
Affiliation(s)
- Stefano Diomedi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Marina De Vitis
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Italy
11
Wimalasena LN, Pandarinath C, Yong NA. Spinal interneuron population dynamics underlying flexible pattern generation. bioRxiv 2024:2024.06.20.599927. PMID: 38948833. PMCID: PMC11213001. DOI: 10.1101/2024.06.20.599927.
Abstract
The mammalian spinal locomotor network is composed of diverse populations of interneurons that collectively orchestrate and execute a range of locomotor behaviors. Despite the identification of many classes of spinal interneurons constituting the locomotor network, it remains unclear how the network's collective activity computes and modifies locomotor output on a step-by-step basis. To investigate this, we analyzed lumbar interneuron population recordings and multi-muscle electromyography from spinalized cats performing air stepping and used artificial intelligence methods to uncover state space trajectories of spinal interneuron population activity on single step cycles and at millisecond timescales. Our analyses of interneuron population trajectories revealed that traversal of specific state space regions held millisecond-timescale correspondence to the timing adjustments of extensor-flexor alternation. Similarly, we found that small variations in the path of state space trajectories were tightly linked to single-step, microvolt-scale adjustments in the magnitude of muscle output.
One-sentence summary: Features of spinal interneuron state space trajectories capture variations in the timing and magnitude of muscle activations across individual step cycles, with precision on the scales of milliseconds and microvolts, respectively.
12
Erra A, Chen J, Chrysostomou E, Barret S, Miller C, Kassim YM, Friedman RA, Ceriani F, Marcotti W, Carroll C, Manor U. An Open-Source Deep Learning-Based GUI Toolbox for Automated Auditory Brainstem Response Analyses (ABRA). bioRxiv 2024:2024.06.20.599815. PMID: 38948763. PMCID: PMC11213013. DOI: 10.1101/2024.06.20.599815.
Abstract
In this paper, we introduce new open-source software, developed in Python, for analyzing Auditory Brainstem Response (ABR) waveforms. ABRs are far-field recordings of synchronous neural activity generated by the auditory fibers in the ear in response to sound, and are used to study acoustic neural information traveling along the ascending auditory pathway. Common ABR data analysis practices are subject to human interpretation and are labor-intensive, requiring manual annotations and visual estimation of hearing thresholds. The proposed Auditory Brainstem Response Analyzer (ABRA) software is designed to facilitate the analysis of ABRs by supporting batch data import/export, waveform visualization, and statistical analysis. Techniques implemented in this software include algorithmic peak finding, threshold estimation, latency estimation, time warping for curve alignment, and 3D plotting of ABR waveforms over stimulus frequencies and decibels. ABRA's strong performance on a large dataset of ABRs, collected by three hearing-research labs using different experimental recording settings, illustrates its efficacy, flexibility, and wide utility.
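ABRA's exact algorithms are not spelled out in the abstract; a minimal sketch of two of the core ideas — algorithmic peak finding and amplitude-criterion threshold estimation — on synthetic waveforms, with made-up growth and criterion parameters throughout:

```python
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10e-3, 500)                  # 10 ms sweep

def abr_wave(level_db, noise=0.01, seed=0):
    """Synthetic ABR-like damped oscillation; amplitude grows above a made-up 20 dB threshold."""
    rng = np.random.default_rng(seed)
    amp = max(level_db - 20, 0) / 60
    return amp * np.exp(-t / 3e-3) * np.sin(2 * np.pi * 900 * t) \
        + noise * rng.standard_normal(t.size)

# Algorithmic peak finding on a suprathreshold waveform (replaces manual annotation).
wave = abr_wave(80)
peaks, props = find_peaks(wave, height=0.1)

# Crude threshold estimate: lowest stimulus level whose peak-to-trough amplitude
# clears a fixed criterion (replaces visual threshold estimation).
levels = [0, 20, 40, 60, 80]
detected = [lvl for lvl in levels if np.ptp(abr_wave(lvl)) > 0.15]
threshold = min(detected)
```

Real ABR pipelines refine both steps considerably (wave I-V labeling, latency tracking, cross-level curve alignment), but the peak-plus-criterion skeleton above is the part that automation removes from human hands.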
Affiliation(s)
- Abhijeeth Erra
- Data Institute, University of San Francisco, San Francisco, CA
- Jeffrey Chen
- Data Institute, University of San Francisco, San Francisco, CA
- Elena Chrysostomou
- Dept. of Cell & Developmental Biology, University of California San Diego, La Jolla, CA
- Shannon Barret
- Dept. of Cell & Developmental Biology, University of California San Diego, La Jolla, CA
- Cayla Miller
- Dept. of Cell & Developmental Biology, University of California San Diego, La Jolla, CA
- Yasmin M. Kassim
- Dept. of Cell & Developmental Biology, University of California San Diego, La Jolla, CA
- Rick A. Friedman
- Dept. of Otolaryngology, University of California San Diego, La Jolla, CA
- Federico Ceriani
- Dept. of Biomedical Science, University of Sheffield, Sheffield, S10 2TN, UK
- Neuroscience Institute, University of Sheffield, Sheffield, S10 2TN, UK
- Walter Marcotti
- Dept. of Biomedical Science, University of Sheffield, Sheffield, S10 2TN, UK
- Neuroscience Institute, University of Sheffield, Sheffield, S10 2TN, UK
- Cody Carroll
- Data Institute, University of San Francisco, San Francisco, CA
- Dept. of Mathematics and Statistics, University of San Francisco, San Francisco, CA
- Uri Manor
- Dept. of Cell & Developmental Biology, University of California San Diego, La Jolla, CA
- Dept. of Otolaryngology, University of California San Diego, La Jolla, CA
- Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, CA
|
13
|
Cole ER, Eggers TE, Weiss DA, Connolly MJ, Gombolay MC, Laxpati NG, Gross RE. Irregular optogenetic stimulation waveforms can induce naturalistic patterns of hippocampal spectral activity. J Neural Eng 2024; 21:036039. [PMID: 38834054 DOI: 10.1088/1741-2552/ad5407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Accepted: 06/04/2024] [Indexed: 06/06/2024]
Abstract
Objective. Therapeutic brain stimulation is conventionally delivered using constant-frequency stimulation pulses. Several recent clinical studies have explored how unconventional and irregular temporal stimulation patterns could enable better therapy. However, it is challenging to understand which irregular patterns are most effective for different therapeutic applications given the massively high-dimensional parameter space. Approach. Here we applied many irregular stimulation patterns in a single neural circuit to demonstrate how they can enable new dimensions of neural control compared to conventional stimulation, to guide future exploration of novel stimulation patterns in translational settings. We optogenetically excited the septohippocampal circuit with constant-frequency, nested pulse, sinusoidal, and randomized stimulation waveforms, systematically varying their amplitude and frequency parameters. Main results. We first found equal entrainment of hippocampal oscillations: all waveforms provided a similar gamma-power increase, whereas no parameters increased theta-band power above baseline (despite the mechanistic role of the medial septum in driving hippocampal theta oscillations). We then compared the effects of each waveform on high-dimensional multi-band activity states using dimensionality reduction methods. Strikingly, we found that conventional stimulation drove predominantly 'artificial' (different from behavioral activity) effects, whereas all irregular waveforms induced activity patterns that more closely resembled behavioral activity. Significance. Our findings suggest that irregular stimulation patterns are not useful when the desired mechanism is to suppress or enhance a single frequency band. However, novel stimulation patterns may provide the greatest benefit for neural control applications where entraining a particular mixture of bands (e.g. if they are associated with different symptoms) or behaviorally relevant activity is desired.
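The waveform families above differ only in how pulse times are laid out over the sweep. A minimal sketch of two of them, assuming pulse trains are represented as lists of pulse times and using a Poisson process as one simple "randomized" waveform (the paper's exact randomized parameters are not given here):

```python
import random

def constant_frequency_pulses(rate_hz, duration_s):
    """Pulse times (s) for a conventional constant-frequency train."""
    period = 1.0 / rate_hz
    return [i * period for i in range(int(duration_s * rate_hz))]

def randomized_pulses(rate_hz, duration_s, seed=0):
    """Pulse times for a Poisson-like train with the same mean rate:
    one simple way to build an 'irregular' waveform (illustrative only)."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while t < duration_s:
        t += rng.expovariate(rate_hz)   # exponential inter-pulse intervals
        if t < duration_s:
            times.append(t)
    return times

const = constant_frequency_pulses(20, 10.0)   # 20 Hz for 10 s -> 200 pulses
rand = randomized_pulses(20, 10.0)            # same mean rate, irregular timing
```

Both trains deliver roughly the same number of pulses, so any difference in evoked activity comes from temporal patterning rather than total stimulation dose.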
Affiliation(s)
- Eric R Cole
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, United States of America
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322, United States of America
| | - Thomas E Eggers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322, United States of America
| | - David A Weiss
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, United States of America
| | - Mark J Connolly
- Emory National Primate Research Center, Emory University, Atlanta, GA 30329, United States of America
| | - Matthew C Gombolay
- Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, United States of America
| | - Nealen G Laxpati
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322, United States of America
| | - Robert E Gross
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322, United States of America
- Department of Neurosurgery, Robert Wood Johnson Medical School, Rutgers the State University of New Jersey, Newark, NJ 07103, United States of America
| |
|
14
|
Bardella G, Franchini S, Pan L, Balzan R, Ramawat S, Brunamonti E, Pani P, Ferraina S. Neural Activity in Quarks Language: Lattice Field Theory for a Network of Real Neurons. ENTROPY (BASEL, SWITZERLAND) 2024; 26:495. [PMID: 38920504 PMCID: PMC11203154 DOI: 10.3390/e26060495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Revised: 05/28/2024] [Accepted: 05/30/2024] [Indexed: 06/27/2024]
Abstract
Brain-computer interfaces have seen an extraordinary surge of development in recent years, and a significant discrepancy now exists between the abundance of available data and the limited headway made toward a unified theoretical framework. This discrepancy becomes particularly pronounced when examining collective neural activity at the micro- and mesoscale, where a coherent formalization that adequately describes neural interactions is still lacking. Here, we introduce a mathematical framework to analyze systems of natural neurons and interpret the related empirical observations in terms of lattice field theory, an established paradigm from theoretical particle physics and statistical mechanics. Our methods are tailored to interpret data from chronic neural interfaces, especially spike rasters from measurements of single-neuron activity, and generalize the maximum entropy model for neural networks so that the time evolution of the system is also taken into account. This is achieved by bridging particle physics and neuroscience, paving the way for particle physics-inspired models of the neocortex.
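As a point of reference for the maximum entropy model the authors generalize, a toy pairwise maximum-entropy (Ising) distribution over binary spike patterns can be written down directly for small populations; this static version omits the time evolution the paper adds:

```python
import itertools
import math

def ising_distribution(h, J):
    """Pairwise maximum-entropy (Ising) distribution over binary spike
    patterns s in {0,1}^n: P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j).
    Brute-force normalization, so toy-sized populations only."""
    n = len(h)
    patterns = list(itertools.product([0, 1], repeat=n))
    weights = []
    for s in patterns:
        e = sum(h[i] * s[i] for i in range(n))
        e += sum(J[i][j] * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        weights.append(math.exp(e))
    z = sum(weights)  # partition function
    return {s: w / z for s, w in zip(patterns, weights)}

# Two neurons with a positive coupling: co-firing is favored.
p = ising_distribution([0.0, 0.0], [[0.0, 1.0], [0.0, 0.0]])
```

With a positive coupling J₀₁, the pattern (1, 1) carries more probability than either single-neuron pattern, which is exactly the kind of correlation structure these models are fit to capture.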
Affiliation(s)
- Giampiero Bardella
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| | - Simone Franchini
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| | - Liming Pan
- School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230026, China;
| | - Riccardo Balzan
- Laboratoire de Chimie et Biochimie Pharmacologiques et Toxicologiques, UMR 8601, UFR Biomédicale et des Sciences de Base, Université Paris Descartes-CNRS, PRES Paris Sorbonne Cité, 75006 Paris, France;
| | - Surabhi Ramawat
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| | - Emiliano Brunamonti
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| | - Pierpaolo Pani
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| | - Stefano Ferraina
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Roma, Italy (E.B.); (P.P.); (S.F.)
| |
|
15
|
Lindsay AJ, Gallello I, Caracheo BF, Seamans JK. Reconfiguration of Behavioral Signals in the Anterior Cingulate Cortex Based on Emotional State. J Neurosci 2024; 44:e1670232024. [PMID: 38637155 PMCID: PMC11154859 DOI: 10.1523/jneurosci.1670-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 03/28/2024] [Accepted: 04/03/2024] [Indexed: 04/20/2024] Open
Abstract
Behaviors and their execution depend on the context and emotional state in which they are performed. The contextual modulation of behavior likely relies on regions such as the anterior cingulate cortex (ACC) that multiplex information about emotional/autonomic states and behaviors. The objective of the present study was to understand how the representations of behaviors by ACC neurons become modified when performed in different emotional states. A pipeline of machine learning techniques was developed to categorize and classify complex, spontaneous behaviors in male rats from video. This pipeline, termed the Hierarchical Unsupervised Behavioural Discovery Tool (HUB-DT), discovered a range of statistically separable behaviors during a task in which motivationally significant outcomes were delivered in blocks of trials that created three unique "emotional contexts." HUB-DT was capable of detecting behaviors specific to each emotional context and was able to identify and segregate the portions of a neural signal related to a behavior and to an emotional context. Overall, ∼10× as many neurons responded to behaviors in a contextually dependent versus a fixed manner, highlighting the profound impact of emotional state on representations of behaviors that were precisely defined based on detailed analyses of limb kinematics. This type of modulation may be a key mechanism that allows the ACC to modify behavioral output based on emotional states and contextual demands.
Affiliation(s)
- Adrian J Lindsay
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
| | - Isabella Gallello
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
| | - Barak F Caracheo
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
| | - Jeremy K Seamans
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
| |
|
16
|
Chau G, Wang C, Talukder S, Subramaniam V, Soedarmadji S, Yue Y, Katz B, Barbu A. Population Transformer: Learning Population-level Representations of Intracranial Activity. ARXIV 2024:arXiv:2406.03044v1. [PMID: 38883237 PMCID: PMC11177958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/18/2024]
Abstract
We present a self-supervised framework that learns population-level codes for intracranial neural recordings at scale, unlocking the benefits of representation learning for a key neuroscience recording modality. The Population Transformer (PopT) lowers the amount of data required for decoding experiments while increasing accuracy, even on never-before-seen subjects and tasks. We address two key challenges in developing PopT: sparse electrode distribution and varying electrode locations across patients. PopT stacks on top of pretrained representations and enhances downstream tasks by enabling learned aggregation of multiple spatially sparse data channels. Beyond decoding, we interpret the pretrained and fine-tuned PopT models to show how they can be used to provide neuroscience insights learned from massive amounts of data. We release a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability; code is available at https://github.com/czlwang/PopulationTransformer.
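The "learned aggregation of multiple spatially sparse data channels" can be illustrated with generic attention-style pooling over a variable number of channel embeddings. This is a simplified stand-in for the idea, not the PopT architecture itself:

```python
import numpy as np

def aggregate_channels(channel_embs, query):
    """Attention-style aggregation of a variable number of channel
    embeddings into one population vector (illustrative sketch;
    the query vector here stands in for learned parameters)."""
    E = np.asarray(channel_embs)        # (n_channels, d), n_channels varies
    scores = E @ query                  # one relevance score per channel
    scores -= scores.max()              # numerical stability for softmax
    w = np.exp(scores)
    w /= w.sum()                        # softmax weights over channels
    return w @ E                        # weighted sum, shape (d,)

# Three electrodes' embeddings from one patient; another patient could
# contribute a different number of channels with no code change.
emb = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
pooled = aggregate_channels(emb, np.array([1.0, 0.0]))
```

Because the softmax pools over however many channels are present, the same aggregation applies across patients with different electrode counts and placements.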
|
17
|
Yang SH, Huang CJ, Huang JS. Increasing Robustness of Intracortical Brain-Computer Interfaces for Recording Condition Changes via Data Augmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 251:108208. [PMID: 38754326 DOI: 10.1016/j.cmpb.2024.108208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2024] [Accepted: 04/30/2024] [Indexed: 05/18/2024]
Abstract
BACKGROUND AND OBJECTIVE Intracortical brain-computer interfaces (iBCIs) aim to help paralyzed individuals restore their motor functions by decoding neural activity into intended movement. However, changes in neural recording conditions hinder the decoding performance of iBCIs, mainly because the neural-to-kinematic mappings shift. Conventional approaches involve either training the neural decoders using large datasets before deploying the iBCI or conducting frequent calibrations during its operation. However, collecting data for extended periods can cause user fatigue, negatively impacting the quality and consistency of neural signals. Furthermore, frequent calibration imposes a substantial computational load. METHODS This study proposes a novel approach to increase iBCIs' robustness against changing recording conditions. The approach uses three neural augmentation operators to generate augmented neural activity that mimics common recording conditions. Then, contrastive learning is used to learn latent factors by maximizing the similarity between the augmented neural activities. The learned factors are expected to remain stable despite varying recording conditions and to maintain a consistent correlation with the intended movement. RESULTS Experimental results on a publicly available nonhuman primate dataset demonstrate that the proposed iBCI outperformed state-of-the-art iBCIs and remained robust to changing recording conditions across days, supporting long-term use. It achieved satisfactory offline decoding performance even when a large training dataset was unavailable. CONCLUSIONS This study paves the way for reducing the need for frequent calibration of iBCIs and for collecting large amounts of annotated training data. Future work aims to improve offline decoding performance with an ultra-small training dataset and to improve iBCIs' robustness to severely disabled electrodes.
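The contrastive step can be illustrated with a generic InfoNCE-style objective that pulls two augmented views of the same trial together and pushes other trials away. This is a stand-in for the approach described, not the paper's exact loss or its three augmentation operators:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def contrastive_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss: low when the anchor matches its augmented
    'positive' view and differs from the negatives (other trials)."""
    sims = [cosine_sim(anchor, positive)]
    sims += [cosine_sim(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Latent factors that match their augmented view incur a small loss;
# mismatched pairs incur a large one.
aligned = contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
misaligned = contrastive_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Minimizing this loss over many augmented trials encourages latent factors that are invariant to the simulated recording-condition changes.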
Affiliation(s)
- Shih-Hung Yang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan.
| | - Chun-Jui Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
| | - Jhih-Siang Huang
- Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan
| |
|
18
|
Alasfour A, Gilja V. Consistent spectro-spatial features of human ECoG successfully decode naturalistic behavioral states. Front Hum Neurosci 2024; 18:1388267. [PMID: 38873653 PMCID: PMC11169785 DOI: 10.3389/fnhum.2024.1388267] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2024] [Accepted: 04/19/2024] [Indexed: 06/15/2024] Open
Abstract
Objective Understanding the neural correlates of naturalistic behavior is critical for extending and confirming the results obtained from trial-based experiments and for designing generalizable brain-computer interfaces that can operate outside laboratory environments. In this study, we aimed to pinpoint consistent spectro-spatial features of neural activity in humans that can discriminate between naturalistic behavioral states. Approach We analyzed data from five participants using electrocorticography (ECoG) with broad spatial coverage. Spontaneous and naturalistic behaviors such as "Talking" and "Watching TV" were labeled from manually annotated videos. Linear discriminant analysis (LDA) was used to classify the two behavioral states. The parameters learned from the LDA were then used to determine whether the neural signatures driving classification performance are consistent across participants. Main results Spectro-spatial feature values were consistently discriminative between the two labeled behavioral states across participants. In particular, θ, α, and low and high γ power in the postcentral gyrus, precentral gyrus, and temporal lobe showed significant classification performance and feature consistency across participants. Subject-specific performance exceeded 70%. Combining neural activity from multiple cortical regions generally did not improve decoding performance, suggesting that information regarding the behavioral state is non-additive as a function of cortical region. Significance To the best of our knowledge, this is the first attempt to identify specific spectro-spatial neural correlates that consistently decode naturalistic and active behavioral states. This work is intended as a starting point for developing brain-computer interfaces that generalize to realistic settings and for furthering our understanding of the neural correlates of naturalistic behavior in humans.
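A minimal two-class LDA of the kind described can be sketched as follows, with synthetic "spectro-spatial" features standing in for the real ECoG data (the feature values and class means below are invented for illustration):

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class linear discriminant: w = Σ⁻¹(μ₁ − μ₀), threshold at the
    midpoint of the class means (shared-covariance LDA sketch)."""
    m0, m1 = X0.mean(0), X1.mean(0)
    S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)   # pooled covariance
    w = np.linalg.solve(S + 1e-6 * np.eye(len(m0)), m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

rng = np.random.default_rng(0)
# Hypothetical 2-D features, e.g. high-γ power in two cortical sites:
talk = rng.normal([2.0, 0.0], 1.0, size=(200, 2))    # "Talking" trials
watch = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # "Watching TV" trials
w, b = fit_lda(watch, talk)
acc = np.mean(np.r_[talk @ w + b > 0, watch @ w + b <= 0])
```

The learned weight vector `w` plays the role the abstract describes: its large entries indicate which spectro-spatial features drive the behavioral-state classification.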
Affiliation(s)
- Abdulwahab Alasfour
- Department of Electrical Engineering, College of Engineering and Petroleum, Kuwait University, Kuwait City, Kuwait
| | - Vikash Gilja
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, United States
| |
|
19
|
Lee WH, Karpowicz BM, Pandarinath C, Rouse AG. Identifying Distinct Neural Features between the Initial and Corrective Phases of Precise Reaching Using AutoLFADS. J Neurosci 2024; 44:e1224232024. [PMID: 38538142 PMCID: PMC11097258 DOI: 10.1523/jneurosci.1224-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 03/11/2024] [Accepted: 03/11/2024] [Indexed: 04/09/2024] Open
Abstract
Many initial movements require subsequent corrective movements, but how the motor cortex transitions to make corrections, and how similar their encoding is to that of initial movements, is unclear. In our study, we explored how the brain's motor cortex signals both initial and corrective movements during a precision reaching task. We recorded a large population of neurons from two male rhesus macaques across multiple sessions to examine neural firing rates during not only initial movements but also subsequent corrective movements. AutoLFADS, an autoencoder-based deep-learning model, was applied to provide a clearer picture of neural activity on individual corrective movements across sessions. Decoding of reach velocity generalized poorly from initial to corrective submovements. Unlike for initial movements, it was challenging to predict the velocity of corrective movements using traditional linear methods in a single, global neural space. We identified several locations in the neural space where corrective submovements originated after the initial reaches, with firing rates that differed from the baseline preceding initial movements. To improve corrective movement decoding, we demonstrate that a state-dependent decoder incorporating the population firing rates at the initiation of correction improved performance, highlighting the diverse neural features of corrective movements. In summary, we show neural differences between initial and corrective submovements and how the neural activity encodes specific combinations of velocity and position. These findings are inconsistent with assumptions that neural correlations with kinematic features are global and independent, emphasizing that traditional methods often fall short in describing these diverse neural processes for online corrective movements.
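Why a state-dependent decoder can outperform a single global one is easy to see in a toy setting where the neural-to-velocity mapping differs between initial and corrective submovements. The mappings below are invented for illustration; this is not the AutoLFADS pipeline:

```python
import numpy as np

def fit_linear(X, Y):
    """Least-squares decoder Y ≈ X @ W, with a bias column folded in."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ W

rng = np.random.default_rng(0)
# Toy data: velocity is read out from different neural dimensions during
# initial (Xi) vs corrective (Xc) submovements.
Xi = rng.normal(size=(300, 5))
Vi = Xi @ np.array([[1.0], [0.0], [0.0], [0.0], [0.0]])
Xc = rng.normal(size=(300, 5))
Vc = Xc @ np.array([[0.0], [1.0], [0.0], [0.0], [0.0]])

W_global = fit_linear(np.vstack([Xi, Xc]), np.vstack([Vi, Vc]))  # one mapping
W_corr = fit_linear(Xc, Vc)       # state-dependent: fit on corrections only

err_global = np.mean((predict(W_global, Xc) - Vc) ** 2)
err_state = np.mean((predict(W_corr, Xc) - Vc) ** 2)
```

The global decoder averages the two incompatible mappings and decodes corrections poorly, whereas the state-dependent decoder recovers them almost exactly, mirroring the qualitative result in the abstract.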
Affiliation(s)
- Wei-Hsien Lee
- Bioengineering Program, University of Kansas, Lawrence, Kansas 66045
| | - Brianna M Karpowicz
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
| | - Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322
- Department of Neurosurgery, Emory University, Atlanta, Georgia 30322
| | - Adam G Rouse
- Bioengineering Program, University of Kansas, Lawrence, Kansas 66045
- Neurosurgery Department, University of Kansas Medical Center, Kansas City, Kansas 66160
- Electrical Engineering and Computer Science Department, University of Kansas, Lawrence, Kansas 66045
- Cell Biology and Physiology Department, University of Kansas Medical Center, Kansas City, Kansas 66160
| |
|
20
|
Lin A, Akafia C, Dal Monte O, Fan S, Fagan N, Putnam P, Tye KM, Chang S, Ba D, Allsop AZAS. An unbiased method to partition diverse neuronal responses into functional ensembles reveals interpretable population dynamics during innate social behavior. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.05.08.593229. [PMID: 38766234 PMCID: PMC11100741 DOI: 10.1101/2024.05.08.593229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
In neuroscience, understanding how single-neuron firing contributes to distributed neural ensembles is crucial. Traditional analyses have either been limited to descriptions of whole-population activity or, when analyzing individual neurons, have used response-categorization criteria that varied significantly across experiments. Current methods lack scalability for large datasets, fail to capture temporal changes, and rely on parametric assumptions. There is a need for a robust, scalable, and non-parametric functional clustering approach that captures interpretable dynamics. To address this challenge, we developed a model-based, statistical framework for unsupervised clustering of multiple time series datasets that exhibit nonlinear dynamics into an a-priori-unknown number of parameterized ensembles called Functional Encoding Units (FEUs). The FEU framework outperforms existing techniques in accuracy and benchmark scores. Here, we apply this FEU formalism to single-unit recordings collected during social behaviors in rodents and primates and demonstrate its hypothesis-generating and hypothesis-testing capacities. This novel pipeline serves as an analytic bridge, translating neural ensemble codes across model systems.
Affiliation(s)
- Alexander Lin
- School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA
| | - Cyril Akafia
- Department of Psychiatry, Yale University, New Haven, Connecticut, USA
| | - Olga Dal Monte
- Department of Psychology, Yale University, New Haven, Connecticut, USA
| | - Siqi Fan
- Department of Psychology, Yale University, New Haven, Connecticut, USA
| | - Nicholas Fagan
- Department of Psychology, Yale University, New Haven, Connecticut, USA
| | - Philip Putnam
- Department of Psychology, Yale University, New Haven, Connecticut, USA
| | - Kay M. Tye
- Salk Institute for Biological Studies, La Jolla, California, USA
- Howard Hughes Medical Institute, La Jolla, California, USA
- Kavli Institute for the Brain and Mind, La Jolla, California, USA
| | - Steve Chang
- Department of Psychology, Yale University, New Haven, Connecticut, USA
| | - Demba Ba
- School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, USA
- Center for Brain Sciences, Harvard University, Cambridge, Massachusetts, USA
- Kempner Institute for the Study of Artificial and Natural Intelligence, Harvard University, Cambridge, Massachusetts, USA
| | - AZA Stephen Allsop
- Center for Collective Healing, Department of Psychiatry and Behavioral Sciences, Howard University, Washington DC, USA
- Department of Psychiatry, Yale University, New Haven, Connecticut, USA
| |
|
21
|
Wei 魏赣超 G, Tajik Mansouri زینب تاجیک منصوری Z, Wang 王晓婧 X, Stevenson IH. Calibrating Bayesian Decoders of Neural Spiking Activity. J Neurosci 2024; 44:e2158232024. [PMID: 38538143 PMCID: PMC11063820 DOI: 10.1523/jneurosci.2158-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Revised: 01/29/2024] [Accepted: 03/11/2024] [Indexed: 05/03/2024] Open
Abstract
Accurately decoding external variables from observations of neural activity is a major challenge in systems neuroscience. Bayesian decoders, which provide probabilistic estimates, are some of the most widely used. Here we show how, in many common settings, the probabilistic predictions made by traditional Bayesian decoders are overconfident. That is, the estimates for the decoded stimulus or movement variables are more certain than they should be. We then show how Bayesian decoding with latent variables, taking account of low-dimensional shared variability in the observations, can improve calibration, although additional correction for overconfidence is still needed. Using data from males, we examine (1) decoding the direction of grating stimuli from spike recordings in the primary visual cortex in monkeys, (2) decoding movement direction from recordings in the primary motor cortex in monkeys, (3) decoding natural images from multiregion recordings in mice, and (4) decoding position from hippocampal recordings in rats. For each setting, we characterize the overconfidence, and we describe a possible method to correct miscalibration post hoc. Properly calibrated Bayesian decoders may alter theoretical results on probabilistic population coding and lead to brain-machine interfaces that more accurately reflect confidence levels when identifying external variables.
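The notion of decoder calibration can be made concrete with a toy Gaussian decoder: the empirical coverage of 90% credible intervals should be near 90%, and an overconfident decoder, one that assumes less noise than is actually present, falls short. This is a generic illustration, not the paper's datasets or models:

```python
import math
import random

def coverage(assumed_noise_sd, true_noise_sd=1.0, n=4000, z90=1.645, seed=0):
    """Empirical coverage of 90% credible intervals for a Bayesian decoder
    with prior x ~ N(0, 1) and assumed likelihood y ~ N(x, σ²).
    Posterior: mean = y / (1 + σ²), var = σ² / (1 + σ²)."""
    rng = random.Random(seed)
    s2 = assumed_noise_sd ** 2
    hits = 0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)                    # true decoded variable
        y = x + rng.gauss(0.0, true_noise_sd)      # noisy observation
        post_mean = y / (1.0 + s2)
        post_sd = math.sqrt(s2 / (1.0 + s2))
        if abs(x - post_mean) <= z90 * post_sd:    # interval contains truth?
            hits += 1
    return hits / n

well_calibrated = coverage(assumed_noise_sd=1.0)   # matches the true noise
overconfident = coverage(assumed_noise_sd=0.5)     # intervals too narrow
```

Measuring coverage this way, against held-out ground truth, is one simple post hoc check of the miscalibration the paper characterizes.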
Affiliation(s)
- Ganchao Wei 魏赣超
- Department of Statistical Science, Duke University, Durham, North Carolina 27708
| | | | | | - Ian H Stevenson
- Departments of Biomedical Engineering, University of Connecticut, Storrs, Connecticut 06269
- Psychological Sciences, University of Connecticut, Storrs, Connecticut 06269
- Connecticut Institute for Brain and Cognitive Science, University of Connecticut, Storrs, Connecticut 06269
| |
|
22
|
Voigtlaender S, Pawelczyk J, Geiger M, Vaios EJ, Karschnia P, Cudkowicz M, Dietrich J, Haraldsen IRJH, Feigin V, Owolabi M, White TL, Świeboda P, Farahany N, Natarajan V, Winter SF. Artificial intelligence in neurology: opportunities, challenges, and policy implications. J Neurol 2024; 271:2258-2273. [PMID: 38367046 DOI: 10.1007/s00415-024-12220-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2023] [Revised: 01/20/2024] [Accepted: 01/22/2024] [Indexed: 02/19/2024]
Abstract
Neurological conditions are the leading cause of disability and mortality combined, demanding innovative, scalable, and sustainable solutions. Brain health has become a global priority with the adoption of the World Health Organization's Intersectoral Global Action Plan in 2022. Simultaneously, rapid advancements in artificial intelligence (AI) are revolutionizing neurological research and practice. This scoping review of 66 original articles explores the value of AI in neurology and brain health, systematizing the landscape of emergent clinical opportunities and future trends across the care trajectory: prevention, risk stratification, early detection, diagnosis, management, and rehabilitation. AI's potential to advance personalized precision neurology and global brain health directives hinges on resolving core challenges across four pillars-models, data, feasibility/equity, and regulation/innovation-through concerted pursuit of targeted recommendations. Paramount actions include swift, ethical, equity-focused integration of novel technologies into clinical workflows, mitigating data-related issues, counteracting digital inequity gaps, and establishing robust governance frameworks balancing safety and innovation.
Affiliation(s)
- Sebastian Voigtlaender
- Systems Neuroscience Division, Max-Planck-Institute for Biological Cybernetics, Tübingen, Germany
- Virtual Diagnostics Team, QuantCo Inc., Cambridge, MA, USA
| | - Johannes Pawelczyk
- Faculty of Medicine, Ruprecht-Karls-University, Heidelberg, Germany
- Graduate Center of Medicine and Health, Technical University Munich, Munich, Germany
| | - Mario Geiger
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- NVIDIA, Zurich, Switzerland
| | - Eugene J Vaios
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
| | - Philipp Karschnia
- Department of Neurosurgery, Ludwig-Maximilians-University and University Hospital Munich, Munich, Germany
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Merit Cudkowicz
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Jorg Dietrich
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | - Ira R J Hebold Haraldsen
- Department of Neurology, Division of Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
| | - Valery Feigin
- National Institute for Stroke and Applied Neurosciences, Auckland University of Technology, Auckland, New Zealand
| | - Mayowa Owolabi
- Center for Genomics and Precision Medicine, College of Medicine, University of Ibadan, Ibadan, Nigeria
- Neurology Unit, Department of Medicine, University of Ibadan, Ibadan, Nigeria
- Blossom Specialist Medical Center, Ibadan, Nigeria
- Lebanese American University of Beirut, Beirut, Lebanon
| | - Tara L White
- Department of Behavioral and Social Sciences, Brown University, Providence, RI, USA
| | | | | | | | - Sebastian F Winter
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
| |
|
23
|
Ali YH, Bodkin K, Rigotti-Thompson M, Patel K, Card NS, Bhaduri B, Nason-Tomaszewski SR, Mifsud DM, Hou X, Nicolas C, Allcroft S, Hochberg LR, Au Yong N, Stavisky SD, Miller LE, Brandman DM, Pandarinath C. BRAND: a platform for closed-loop experiments with deep network models. J Neural Eng 2024; 21:026046. [PMID: 38579696 PMCID: PMC11021878 DOI: 10.1088/1741-2552/ad3b3a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 01/27/2024] [Accepted: 04/05/2024] [Indexed: 04/07/2024]
Abstract
Objective.Artificial neural networks (ANNs) are state-of-the-art tools for modeling and decoding neural activity, but deploying them in closed-loop experiments with tight timing constraints is challenging due to their limited support in existing real-time frameworks. Researchers need a platform that fully supports high-level languages for running ANNs (e.g. Python and Julia) while maintaining support for languages that are critical for low-latency data acquisition and processing (e.g. C and C++).Approach.To address these needs, we introduce the Backend for Realtime Asynchronous Neural Decoding (BRAND). BRAND comprises Linux processes, termednodes, which communicate with each other in agraphvia streams of data. Its asynchronous design allows for acquisition, control, and analysis to be executed in parallel on streams of data that may operate at different timescales. BRAND uses Redis, an in-memory database, to send data between nodes, which enables fast inter-process communication and supports 54 different programming languages. Thus, developers can easily deploy existing ANN models in BRAND with minimal implementation changes.Main results.In our tests, BRAND achieved <600 microsecond latency between processes when sending large quantities of data (1024 channels of 30 kHz neural data in 1 ms chunks). BRAND runs a brain-computer interface with a recurrent neural network (RNN) decoder with less than 8 ms of latency from neural data input to decoder prediction. In a real-world demonstration of the system, participant T11 in the BrainGate2 clinical trial (ClinicalTrials.gov Identifier: NCT00912041) performed a standard cursor control task, in which 30 kHz signal processing, RNN decoding, task control, and graphics were all executed in BRAND. 
This system also supports real-time inference with complex latent variable models like Latent Factor Analysis via Dynamical Systems (LFADS). Significance. By providing a framework that is fast, modular, and language-agnostic, BRAND lowers the barriers to integrating the latest tools in neuroscience and machine learning into closed-loop experiments.
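The node/graph pattern described above can be sketched in miniature. This is an illustrative stand-in only: real BRAND nodes are separate Linux processes exchanging data through Redis streams, whereas here two Python threads share a stdlib queue so the example runs anywhere.

```python
# Illustrative stand-in for BRAND's node/graph pattern: an acquisition node
# publishes fixed-size "neural data" chunks onto a stream, and a decoder node
# consumes them asynchronously. (Toy data; a stdlib Queue replaces Redis.)
import queue
import threading

data_stream = queue.Queue()  # stand-in for a Redis stream

def acquisition_node(n_chunks):
    """Publish fixed-size 'neural data' chunks onto the stream."""
    for t in range(n_chunks):
        chunk = [t] * 4  # 4 channels per 1 ms chunk (toy values)
        data_stream.put(chunk)
    data_stream.put(None)  # sentinel: end of stream

def decoder_node(results):
    """Consume chunks as they arrive and emit one 'decoded' value per chunk."""
    while True:
        chunk = data_stream.get()
        if chunk is None:
            break
        results.append(sum(chunk) / len(chunk))  # trivial decoder: channel mean

decoded = []
producer = threading.Thread(target=acquisition_node, args=(3,))
consumer = threading.Thread(target=decoder_node, args=(decoded,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(decoded)  # [0.0, 1.0, 2.0]
```

Because producer and consumer run concurrently and only share the stream, either side can be swapped out (e.g. for a different decoder) without touching the other, which is the modularity BRAND's graph design is after.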
Collapse
Affiliation(s)
- Yahia H Ali
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
| | - Kevin Bodkin
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
| | - Mattia Rigotti-Thompson
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
| | - Kushant Patel
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
| | - Nicholas S Card
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
| | - Bareesh Bhaduri
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
| | - Samuel R Nason-Tomaszewski
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
| | - Domenick M Mifsud
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
| | - Xianda Hou
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
| | - Claire Nicolas
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
| | - Shane Allcroft
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
| | - Leigh R Hochberg
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- School of Engineering and Carney Institute for Brain Science, Brown University, Providence, RI, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Veterans Affairs Rehabilitation Research & Development Center for Neurorestoration and Neurotechnology, Providence VA Medical Center, Providence, RI, United States of America
| | - Nicholas Au Yong
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
| | - Sergey D Stavisky
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
| | - Lee E Miller
- Department of Neuroscience, Northwestern University, Chicago, IL, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, IL, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, United States of America
- Shirley Ryan AbilityLab, Chicago, IL, United States of America
| | - David M Brandman
- Department of Neurological Surgery, University of California, Davis, CA, United States of America
| | - Chethan Pandarinath
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, United States of America
- Department of Neurosurgery, Emory University, Atlanta, GA, United States of America
| |
Collapse
|
24
|
Misra J, Pessoa L. Brain dynamics and spatiotemporal trajectories during threat processing. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.04.06.588389. [PMID: 38617278 PMCID: PMC11014591 DOI: 10.1101/2024.04.06.588389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/16/2024]
Abstract
In the past decades, functional MRI research has investigated mental states and their brain bases in a largely static fashion, based on evoked responses during blocked and event-related designs. Despite some progress in naturalistic designs, our understanding of threat processing remains largely limited to that obtained with standard paradigms. In the present paper, we applied Switching Linear Dynamical Systems (SLDS) to uncover the dynamics of threat processing during a continuous threat-of-shock paradigm. Importantly, unlike studies in systems neuroscience that frequently assume that systems are decoupled from external inputs, we characterized both endogenous and exogenous contributions to dynamics. First, we demonstrated that the SLDS model learned the regularities of the experimental paradigm, such that states and state transitions estimated from fMRI time series data from 85 ROIs reflected both the proximity of the circles and their direction (approach vs. retreat). After establishing that the model captured key properties of threat-related processing, we characterized the dynamics of the states and their transitions. The results revealed that threat processing can profitably be viewed in terms of dynamic multivariate patterns whose trajectories are a combination of intrinsic and extrinsic factors that jointly determine how the brain temporally evolves during dynamic threat. We propose that viewing threat processing through the lens of dynamical systems offers important avenues to uncover properties of the dynamics of threat that are not unveiled with standard experimental designs and analyses.
Collapse
Affiliation(s)
- Joyneel Misra
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
| | - Luiz Pessoa
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Department of Psychology and Maryland Neuroimaging Center, University of Maryland, College Park, Maryland, United States of America
| |
Collapse
|
25
|
Tlaie A, Shapcott K, van der Plas TL, Rowland J, Lees R, Keeling J, Packer A, Tiesinga P, Schölvinck ML, Havenith MN. What does the mean mean? A simple test for neuroscience. PLoS Comput Biol 2024; 20:e1012000. [PMID: 38640119 PMCID: PMC11062559 DOI: 10.1371/journal.pcbi.1012000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 05/01/2024] [Accepted: 03/12/2024] [Indexed: 04/21/2024] Open
Abstract
Trial-averaged metrics, e.g. tuning curves or population response vectors, are a ubiquitous way of characterizing neuronal activity. But how relevant are such trial-averaged responses to neuronal computation itself? Here we present a simple test to estimate whether average responses reflect aspects of neuronal activity that contribute to neuronal processing. The test probes two assumptions implicitly made whenever average metrics are treated as meaningful representations of neuronal activity: Reliability: Neuronal responses repeat consistently enough across trials that they convey a recognizable reflection of the average response to downstream regions. Behavioural relevance: If a single-trial response is more similar to the average template, it is more likely to evoke correct behavioural responses. We apply this test to two data sets: (1) Two-photon recordings in primary somatosensory cortices (S1 and S2) of mice trained to detect optogenetic stimulation in S1; and (2) Electrophysiological recordings from 71 brain areas in mice performing a contrast discrimination task. Under the highly controlled settings of Data set 1, both assumptions were largely fulfilled. In contrast, the less restrictive paradigm of Data set 2 met neither assumption. Simulations predict that the larger diversity of neuronal response preferences, rather than higher cross-trial reliability, drives the better performance of Data set 1. We conclude that when behaviour is less tightly restricted, average responses do not seem particularly relevant to neuronal computation, potentially because information is encoded more dynamically. Most importantly, we encourage researchers to apply this simple test of computational relevance whenever using trial-averaged neuronal metrics, in order to gauge how representative cross-trial averages are in a given context.
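The reliability assumption lends itself to a toy sketch: compare each single-trial response vector against the cross-trial average template. The trial data and the use of plain Pearson correlation here are illustrative assumptions, not the paper's actual data or metric.

```python
# Toy check of the 'reliability' assumption: does each single-trial response
# resemble the cross-trial average template? (Made-up data: 3 trials x 3 neurons.)
def mean_vec(vectors):
    """Element-wise mean across a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

trials = [[1.0, 2.0, 3.0], [1.2, 1.9, 3.1], [0.9, 2.1, 2.8]]
template = mean_vec(trials)  # the trial-averaged response vector

# Reliability: correlation of each single trial with the average template.
reliability = [pearson(trial, template) for trial in trials]
print(all(r > 0.9 for r in reliability))  # True for this well-behaved toy data
```

The behavioural-relevance assumption would add a second step, checking whether trials closer to the template more often coincide with correct behaviour; that requires paired behavioural labels and is omitted here.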
Collapse
Affiliation(s)
- Alejandro Tlaie
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Technical University of Madrid, Madrid, Spain
| | | | - Thijs L. van der Plas
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| | - James Rowland
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| | - Robert Lees
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| | - Joshua Keeling
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| | - Adam Packer
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| | - Paul Tiesinga
- Department of Neuroinformatics, Donders Institute, Radboud University, Nijmegen, The Netherlands
| | | | - Martha N. Havenith
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
26
|
Meng R, Bouchard KE. Bayesian inference of structured latent spaces from neural population activity with the orthogonal stochastic linear mixing model. PLoS Comput Biol 2024; 20:e1011975. [PMID: 38669271 PMCID: PMC11078355 DOI: 10.1371/journal.pcbi.1011975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 05/08/2024] [Accepted: 03/07/2024] [Indexed: 04/28/2024] Open
Abstract
The brain produces diverse functions, from perceiving sounds to producing arm reaches, through the collective activity of populations of many neurons. Determining if and how the features of these exogenous variables (e.g., sound frequency, reach angle) are reflected in population neural activity is important for understanding how the brain operates. Often, high-dimensional neural population activity is confined to low-dimensional latent spaces. However, many current methods fail to extract latent spaces that are clearly structured by exogenous variables. This has contributed to a debate about whether brains should be thought of as dynamical systems or representational systems. Here, we developed a new latent process Bayesian regression framework, the orthogonal stochastic linear mixing model (OSLMM), which introduces an orthogonality constraint amongst time-varying mixture coefficients, and provide Markov chain Monte Carlo inference procedures. We demonstrate superior performance of OSLMM on latent trajectory recovery in synthetic experiments and show superior computational efficiency and prediction performance on several real-world benchmark data sets. We primarily focus on demonstrating the utility of OSLMM in two neural data sets: μECoG recordings from rat auditory cortex during presentation of pure tones and multi-single unit recordings from monkey motor cortex during complex arm reaching. We show that OSLMM achieves superior or comparable predictive accuracy of neural data and decoding of external variables (e.g., reach velocity). Most importantly, in both experimental contexts, we demonstrate that OSLMM latent trajectories directly reflect features of the sounds and reaches, demonstrating that neural dynamics are structured by neural representations. Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale biological time-series datasets.
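The orthogonality constraint on the mixture coefficients can be illustrated in miniature. The sketch below applies classical Gram-Schmidt to a toy mixing matrix; it is only a gloss on the constraint itself, not the OSLMM inference procedure, and the matrix values are made up.

```python
# Gram-Schmidt sketch of the kind of orthogonality constraint OSLMM places on
# its time-varying mixture coefficients: the mixing columns are kept orthogonal.
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(columns):
    """Return orthonormal vectors spanning the same space as the inputs."""
    ortho = []
    for col in columns:
        for q in ortho:
            proj = dot(col, q)
            col = [c - proj * qi for c, qi in zip(col, q)]  # remove component along q
        norm = dot(col, col) ** 0.5
        ortho.append([c / norm for c in col])  # normalize to unit length
    return ortho

# Two toy mixing columns (stored as rows for convenience).
W = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]]
Q = gram_schmidt(W)
print(abs(dot(Q[0], Q[1])) < 1e-9)  # True: the constrained columns are orthogonal
```

In the actual model the constraint is enforced within the Bayesian inference procedure rather than by post-hoc orthogonalization, but the geometric idea, decorrelating the directions along which latents mix into observations, is the same.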
Collapse
Affiliation(s)
- Rui Meng
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
| | - Kristofer E. Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, California, United States of America
| |
Collapse
|
27
|
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/26/2024] [Indexed: 03/07/2024]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Collapse
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
| | - Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
| |
Collapse
|
28
|
Hasnain MA, Birnbaum JE, Nunez JLU, Hartman EK, Chandrasekaran C, Economo MN. Separating cognitive and motor processes in the behaving mouse. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.08.23.554474. [PMID: 37662199 PMCID: PMC10473744 DOI: 10.1101/2023.08.23.554474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/05/2023]
Abstract
The cognitive processes supporting complex animal behavior are closely associated with ubiquitous movements responsible for our posture, facial expressions, ability to actively sample our sensory environments, and other critical processes. These movements are strongly related to neural activity across much of the brain and are often highly correlated with ongoing cognitive processes, making it challenging to dissociate the neural dynamics that support cognitive processes from those supporting related movements. In such cases, a critical issue is whether cognitive processes are separable from related movements, or if they are driven by common neural mechanisms. Here, we demonstrate how the separability of cognitive and motor processes can be assessed, and, when separable, how the neural dynamics associated with each component can be isolated. We establish a novel two-context behavioral task in mice that involves multiple cognitive processes and show that commonly observed dynamics taken to support cognitive processes are strongly contaminated by movements. When cognitive and motor components are isolated using a novel approach for subspace decomposition, we find that they exhibit distinct dynamical trajectories. Further, properly accounting for movement revealed that largely separate populations of cells encode cognitive and motor variables, in contrast to the 'mixed selectivity' often reported. Accurately isolating the dynamics associated with particular cognitive and motor processes will be essential for developing conceptual and computational models of neural circuit function and evaluating the function of the cell types of which neural circuits are composed.
Collapse
Affiliation(s)
- Munib A. Hasnain
- Department of Biomedical Engineering, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
| | - Jaclyn E. Birnbaum
- Graduate Program for Neuroscience, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
| | | | - Emma K. Hartman
- Department of Biomedical Engineering, Boston University, Boston, MA
| | - Chandramouli Chandrasekaran
- Department of Psychological and Brain Sciences, Boston University, Boston, MA
- Department of Neurobiology & Anatomy, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
| | - Michael N. Economo
- Department of Biomedical Engineering, Boston University, Boston, MA
- Center for Neurophotonics, Boston University, Boston, MA
- Center for Systems Neuroscience, Boston University, Boston, MA
| |
Collapse
|
29
|
Morrell MC, Nemenman I, Sederberg A. Neural criticality from effective latent variables. eLife 2024; 12:RP89337. [PMID: 38470471 PMCID: PMC10957169 DOI: 10.7554/elife.89337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/13/2024] Open
Abstract
Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is 'avalanche criticality', which has been observed in various systems, including cultured neurons, zebrafish, rodent cortex, and human EEG. More recently, power laws were also observed in neural populations in the mouse under an activity coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which latent dynamical variables give rise to avalanche criticality. We find that populations coupled to multiple latent variables produce critical behavior across a broader parameter range than those coupled to a single, quasi-static latent variable, but in both cases, avalanche criticality is observed without fine-tuning of model parameters. We identify two regimes of avalanches, both critical but differing in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which activity is effectively modeled as a population driven by a few dynamical variables and these variables can be inferred from the population activity.
Collapse
Affiliation(s)
- Mia C Morrell
- Department of Physics, New York University, New York, United States
| | - Ilya Nemenman
- Department of Physics, Department of Biology, Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
| | - Audrey Sederberg
- Department of Neuroscience, University of Minnesota Medical School, Minneapolis, United States
| |
Collapse
|
30
|
Temmar H, Willsey MS, Costello JT, Mender MJ, Cubillos LH, Lam JL, Wallace DM, Kelberman MM, Patil PG, Chestek CA. Artificial neural network for brain-machine interface consistently produces more naturalistic finger movements than linear methods. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.03.01.583000. [PMID: 38496403 PMCID: PMC10942378 DOI: 10.1101/2024.03.01.583000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
Brain-machine interfaces (BMI) aim to restore function to persons living with spinal cord injuries by 'decoding' neural signals into behavior. Recently, nonlinear BMI decoders have outperformed previous state-of-the-art linear decoders, but few studies have investigated what specific improvements these nonlinear approaches provide. In this study, we compare how temporally convolved feedforward neural networks (tcFNNs) and linear approaches predict individuated finger movements in open- and closed-loop settings. We show that nonlinear decoders generate more naturalistic movements, producing distributions of velocities 85.3% closer to true hand control than linear decoders. Addressing concerns that neural networks may come to inconsistent solutions, we find that regularization techniques improve the consistency of tcFNN convergence by 194.6%, along with improving average performance and training speed. Finally, we show that tcFNN can leverage training data from multiple task variations to improve generalization. The results of this study show that nonlinear methods produce more naturalistic movements and show potential for generalizing over less constrained tasks. Teaser: A neural network decoder produces consistent naturalistic movements and shows potential for real-world generalization through task variations.
Collapse
|
31
|
Chang YJ, Chen YI, Yeh HC, Santacruz SR. Neurobiologically realistic neural network enables cross-scale modeling of neural dynamics. Sci Rep 2024; 14:5145. [PMID: 38429297 PMCID: PMC10907713 DOI: 10.1038/s41598-024-54593-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Accepted: 02/14/2024] [Indexed: 03/03/2024] Open
Abstract
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Whereas brain activity has been studied at the micro- to meso-scale to reveal the connections between the dynamical patterns and the behaviors, investigations of neural population dynamics are mainly limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics that can infer and map the neural data from multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (2 weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behaviour agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to revealing a comprehensive understanding of brain computation, where network mechanisms of multi-scale activity are critical.
Collapse
Affiliation(s)
- Yin-Jui Chang
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
| | - Yuan-I Chen
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
| | - Hsin-Chih Yeh
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
| | - Samantha R Santacruz
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA.
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA.
- Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA.
| |
Collapse
|
32
|
Ahmadipour P, Sani OG, Pesaran B, Shanechi MM. Multimodal subspace identification for modeling discrete-continuous spiking and field potential population activity. J Neural Eng 2024; 21:026001. [PMID: 38016450 PMCID: PMC10913727 DOI: 10.1088/1741-2552/ad1053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Revised: 10/23/2023] [Accepted: 11/28/2023] [Indexed: 11/30/2023]
Abstract
Objective. Learning dynamical latent state models for multimodal spiking and field potential activity can reveal their collective low-dimensional dynamics and enable better decoding of behavior through multimodal fusion. Toward this goal, developing unsupervised learning methods that are computationally efficient is important, especially for real-time learning applications such as brain-machine interfaces (BMIs). However, efficient learning remains elusive for multimodal spike-field data due to their heterogeneous discrete-continuous distributions and different timescales. Approach. Here, we develop a multiscale subspace identification (multiscale SID) algorithm that enables computationally efficient learning for modeling and dimensionality reduction for multimodal discrete-continuous spike-field data. We describe the spike-field activity as combined Poisson and Gaussian observations, for which we derive a new analytical SID method. Importantly, we also introduce a novel constrained optimization approach to learn valid noise statistics, which is critical for multimodal statistical inference of the latent state, neural activity, and behavior. We validate the method using numerical simulations and with spiking and local field potential population activity recorded during a naturalistic reach and grasp behavior. Main results. We find that multiscale SID accurately learned dynamical models of spike-field signals and extracted low-dimensional dynamics from these multimodal signals. Further, it fused multimodal information, thus better identifying the dynamical modes and predicting behavior compared to using a single modality.
Finally, compared to existing multiscale expectation-maximization learning for Poisson-Gaussian observations, multiscale SID had a much lower training time while being better at identifying the dynamical modes and having better or similar accuracy in predicting neural activity and behavior. Significance. Overall, multiscale SID is an accurate learning method that is particularly beneficial when efficient learning is of interest, such as for online adaptive BMIs to track non-stationary dynamics or for reducing offline training time in neuroscience investigations.
Collapse
Affiliation(s)
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
| | - Bijan Pesaran
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States of America
| | - Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, and the Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
| |
Collapse
|
33
|
Lindley S, Lu Y, Shukla D. The Experimentalist's Guide to Machine Learning for Small Molecule Design. ACS APPLIED BIO MATERIALS 2024; 7:657-684. [PMID: 37535819 PMCID: PMC10880109 DOI: 10.1021/acsabm.3c00054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 07/17/2023] [Indexed: 08/05/2023]
Abstract
Initially part of the field of artificial intelligence, machine learning (ML) has become a booming research area since branching out into its own field in the 1990s. After three decades of refinement, ML algorithms have accelerated scientific developments across a variety of research topics. The field of small molecule design is no exception, and an increasing number of researchers are applying ML techniques in their pursuit of discovering, generating, and optimizing small molecule compounds. The goal of this review is to provide simple, yet descriptive, explanations of some of the most commonly utilized ML algorithms in the field of small molecule design along with those that are highly applicable to an experimentally focused audience. The algorithms discussed here span across three ML paradigms: supervised learning, unsupervised learning, and ensemble methods. Examples from the published literature will be provided for each algorithm. Some common pitfalls of applying ML to biological and chemical data sets will also be explained, alongside a brief summary of a few more advanced paradigms, including reinforcement learning and semi-supervised learning.
Collapse
Affiliation(s)
- Sarah E. Lindley
- Department of Bioengineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
| | - Yiyang Lu
- Department of Chemical and Biomolecular Engineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
| | - Diwakar Shukla
- Department of Bioengineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
- Department of Chemical and Biomolecular Engineering, University of Illinois, Urbana-Champaign, Illinois 61801, United States
- Center for Biophysics & Computational Biology, University of Illinois, Urbana-Champaign, Illinois 61801, United States
- Department of Plant Biology, University of Illinois, Urbana-Champaign, Illinois 61801, United States
Collapse
|
34
|
Kawahara D, Fujisawa S. Advantages of Persistent Cohomology in Estimating Animal Location From Grid Cell Population Activity. Neural Comput 2024; 36:385-411. [PMID: 38363660 DOI: 10.1162/neco_a_01645] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 10/09/2023] [Indexed: 02/18/2024]
Abstract
Many cognitive functions are represented as cell assemblies. In the case of spatial navigation, the population activity of place cells in the hippocampus and grid cells in the entorhinal cortex represents self-location in the environment. The brain cannot directly observe self-location information in the environment. Instead, it relies on sensory information and memory to estimate self-location. Therefore, estimating low-dimensional dynamics, such as the movement trajectory of an animal exploring its environment, from only the high-dimensional neural activity is important in deciphering the information represented in the brain. Most previous studies have estimated the low-dimensional dynamics (i.e., latent variables) behind neural activity by unsupervised learning with Bayesian population decoding using artificial neural networks or Gaussian processes. Recently, persistent cohomology has been used to estimate latent variables from the phase information (i.e., circular coordinates) of manifolds created by neural activity. However, the advantages of persistent cohomology over Bayesian population decoding are not well understood. We compared persistent cohomology and Bayesian population decoding in estimating the animal location from simulated and actual grid cell population activity. We found that persistent cohomology can estimate the animal location with fewer neurons than Bayesian population decoding and robustly estimate the animal location from actual noisy data.
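The circular coordinates mentioned above can be illustrated with a toy population-vector decode. The tuning curves and cell count below are hypothetical, and this is not persistent cohomology itself, just a sketch of the kind of phase variable such methods recover from population activity.

```python
# Toy decode of a circular coordinate from population activity: each cell has a
# preferred phase, and a rate-weighted circular mean of those phases recovers
# the encoded angle. (Hypothetical cosine-bump tuning; 8 uniformly spaced cells.)
import math

preferred = [2 * math.pi * k / 8 for k in range(8)]  # preferred phase per cell
true_angle = 2 * math.pi / 3

# Firing rate falls off with circular distance between true angle and preference.
rates = [math.exp(math.cos(true_angle - p)) for p in preferred]

# Population-vector decode: rate-weighted circular mean of preferred phases.
x = sum(r * math.cos(p) for r, p in zip(rates, preferred))
y = sum(r * math.sin(p) for r, p in zip(rates, preferred))
decoded = math.atan2(y, x) % (2 * math.pi)
print(abs(decoded - true_angle) < 1e-3)  # True: decode matches the encoded angle
```

With only a handful of cells the decode error stays small because the symmetric tuning cancels out almost all bias, which loosely echoes the paper's finding that circular-coordinate methods can work with few neurons.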
Affiliation(s)
- Daisuke Kawahara
- Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan
- Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
- Shigeyoshi Fujisawa
- Department of Complexity Science and Engineering, University of Tokyo, Kashiwa, Chiba 277-8563, Japan
- Laboratory for Systems Neurophysiology, RIKEN Center for Brain Science, Wako, Saitama 351-0198, Japan
35
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. PMID: 38335258; PMCID: PMC10873612. DOI: 10.1073/pnas.2212887121.
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
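The pitfall this abstract describes can be reproduced in a few lines. Below is a minimal sketch (our illustration, not the authors' learning method): fit x[t+1] = A x[t] (+ B u[t]) by least squares with and without a temporally structured measured input u, and compare the recovered intrinsic dynamics matrix.

```python
import numpy as np

# Illustrative sketch: ignoring a measured, temporally structured input
# folds input-driven structure into the estimate of the intrinsic dynamics A.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [-0.1, 0.9]])  # true intrinsic dynamics (stable)
B = np.array([1.0, 0.5])                 # input coupling
T = 5000
u = np.zeros(T)
for t in range(T - 1):                   # AR(1): temporally structured input
    u[t + 1] = 0.95 * u[t] + rng.standard_normal()
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + B * u[t] + 0.05 * rng.standard_normal(2)

X0, X1 = x[:-1], x[1:]
# Fit that ignores the input: X1 ~ X0 @ A_ignore.T
A_ignore = np.linalg.lstsq(X0, X1, rcond=None)[0].T
# Fit that accounts for the input: X1 ~ [X0, u] @ [A_input, B_input].T
W = np.linalg.lstsq(np.column_stack([X0, u[:-1]]), X1, rcond=None)[0].T
A_input = W[:, :2]
err_ignore = np.linalg.norm(A_ignore - A)
err_input = np.linalg.norm(A_input - A)
```

Because x carries filtered history of the autocorrelated input, the input-blind regression absorbs that structure into its dynamics estimate; modeling u explicitly removes the bias.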
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
36
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. PMID: 38347042; PMCID: PMC10861525. DOI: 10.1038/s41598-024-53907-2.
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been the subject of experimental and theoretical investigation, generating numerous interpretations of the mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. The dynamical model uses the jPCA method, which holistically characterizes oscillatory activity in neuronal populations by maximizing the rotational dynamics in the data. Different interpretations of the rotational dynamics revealed by the jPCA approach have been proposed, yet the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found that rotational dynamics were consistently accounted for by a traveling-wave pattern. To quantify rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified the parameters that influence the extent of rotation in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomenon, so the previous interpretations that treated them as separate entities need to be reevaluated.
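The core jPCA step this abstract refers to can be sketched as follows (an illustrative toy, not the authors' gyration-number code): regress the temporal derivative of the population state on the state itself, and measure how much of the dynamics the skew-symmetric (purely rotational) component captures.

```python
import numpy as np

# Toy sketch of the jPCA idea: fit dX/dt ~ X @ M.T and compare the
# skew-symmetric part of M (pure rotation) against the data.
rng = np.random.default_rng(1)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric generator: rotation
dt, T = 0.01, 2000
x = np.zeros((T, 2)); x[0] = [1.0, 0.0]
for t in range(T - 1):
    x[t + 1] = x[t] + dt * (S @ x[t]) + 0.001 * rng.standard_normal(2)

X, dX = x[:-1], np.diff(x, axis=0) / dt
M = np.linalg.lstsq(X, dX, rcond=None)[0].T  # unconstrained dynamics fit
M_skew = 0.5 * (M - M.T)                     # skew part (a simple proxy;
                                             # jPCA solves the constrained
                                             # problem exactly)
# Fraction of the dynamics captured by the rotational component
r2 = 1 - np.linalg.norm(dX - X @ M_skew.T) ** 2 / np.linalg.norm(dX) ** 2
```

For data with genuinely rotational structure, the skew-symmetric fit explains nearly all of the derivative variance (r2 close to 1); the ratio of skew to symmetric contributions is one simple way to quantify rotation strength.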
Affiliation(s)
- Ekaterina Kuzmina
- Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Dmitrii Kriukov
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
- Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint Petersburg, Russia, 194223
37
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. PMID: 38399314; PMCID: PMC10892549. DOI: 10.3390/pharmaceutics16020260.
Abstract
The role of data-driven, high-throughput analytical techniques, which have given rise to computational oncology, is undisputed, and machine learning (ML)- and mathematical modeling (MM)-based techniques are now in widespread use. These two approaches have fueled advances in cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes across different types of cancer research, vast databases of varied, high-dimensional information are required, and such information can realistically be managed only by automated systems developed using ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anticancer agents to improve cancer treatment, and to refine existing treatment models by being incorporated at all steps of cancer-related research and development and in routine patient care. This review consolidates the advances and benefits of ML and MM techniques, with a special focus on cancer prognosis and anticancer therapy, and identifies challenges (data quantity, ethical considerations, and data privacy) that have yet to be fully addressed in current studies.
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh (J.H.; S.M.S.)
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
38
Lee WH, Karpowicz BM, Pandarinath C, Rouse AG. Identifying distinct neural features between the initial and corrective phases of precise reaching using AutoLFADS. bioRxiv [Preprint] 2024: 2023.06.30.547252. PMID: 38352314; PMCID: PMC10862710. DOI: 10.1101/2023.06.30.547252.
Abstract
Many initial movements require subsequent corrective movements, but how motor cortex transitions to make corrections, and how similar their encoding is to that of initial movements, is unclear. In our study, we explored how the brain's motor cortex signals both initial and corrective movements during a precision reaching task. We recorded a large population of neurons from two male rhesus macaques across multiple sessions to examine the neural firing rates during not only initial movements but also subsequent corrective movements. AutoLFADS, an autoencoder-based deep-learning model, was applied to provide a clearer picture of neurons' activity on individual corrective movements across sessions. Decoding of reach velocity generalized poorly from initial to corrective submovements. Unlike for initial movements, it was challenging to predict the velocity of corrective movements using traditional linear methods in a single, global neural space. We identified several locations in the neural space where corrective submovements originated after the initial reaches, signifying firing rates different from the baseline before initial movements. To improve corrective movement decoding, we demonstrate that a state-dependent decoder incorporating the population firing rates at the initiation of correction improved performance, highlighting the diverse neural features of corrective movements. In summary, we show neural differences between initial and corrective submovements and how the neural activity encodes specific combinations of velocity and position. These findings are inconsistent with assumptions that neural correlations with kinematic features are global and independent, emphasizing that traditional methods often fall short in describing these diverse neural processes for online corrective movements.
Significance Statement: We analyzed submovement neural population dynamics during precision reaching. Using an autoencoder-based deep-learning model, AutoLFADS, we examined neural activity on a single-trial basis. Our study shows distinct neural dynamics between initial and corrective submovements. We demonstrate the existence of unique neural features within each submovement class that encode complex combinations of position and reach direction. Our study also highlights the benefit of state-specific decoding strategies, which consider the neural firing rates at the onset of any given submovement, when decoding complex motor tasks such as corrective submovements.
39
Naud R, Longtin A. Connecting levels of analysis in the computational era. J Physiol 2024; 602:417-420. PMID: 38071740. DOI: 10.1113/jp286013.
Affiliation(s)
- Richard Naud
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
- André Longtin
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
- Brain and Mind Research Institute, University of Ottawa, Ottawa, ON, Canada
40
Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol 2024; 20:e1011852. PMID: 38315736; PMCID: PMC10868787. DOI: 10.1371/journal.pcbi.1011852.
Abstract
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
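The reduced two-oscillator description at the end of the abstract can be illustrated with a toy simulation (our sketch, with assumed frequencies and coupling, not the trained-RNN reduction itself): an internal oscillator coupled to an external reference locks to a stable phase difference, which is what lets phase encode a memory.

```python
import numpy as np

# Two phase-coupled oscillators: the internal one is pulled toward the
# external reference and locks at a fixed phase offset when the coupling
# strength exceeds the frequency mismatch.
dt, steps = 0.01, 5000
w_ref, w_int, K = 1.0, 1.2, 1.0     # reference/internal frequencies, coupling
theta_ref, theta_int = 0.0, 2.0     # arbitrary initial phases
diff = np.zeros(steps)              # phase difference over time
for i in range(steps):
    theta_ref += dt * w_ref
    theta_int += dt * (w_int + K * np.sin(theta_ref - theta_int))
    diff[i] = (theta_int - theta_ref + np.pi) % (2 * np.pi) - np.pi
# At lock, sin(diff) settles at (w_int - w_ref) / K
```

Different stable phase offsets (e.g., from different coupling functions) would then correspond to different phase-coded memories, each a separate limit cycle in the joint dynamics.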
Affiliation(s)
- Matthijs Pals
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Jakob H. Macke
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Israel Institute of Technology, Haifa, Israel
41
Weng G, Clark K, Akbarian A, Noudoost B, Nategh N. Time-varying generalized linear models: characterizing and decoding neuronal dynamics in higher visual areas. Front Comput Neurosci 2024; 18:1273053. PMID: 38348287; PMCID: PMC10859875. DOI: 10.3389/fncom.2024.1273053.
Abstract
To create a behaviorally relevant representation of the visual world, neurons in higher visual areas exhibit dynamic response changes to account for the time-varying interactions between external (e.g., visual input) and internal (e.g., reward value) factors. The resulting high-dimensional representational space poses challenges for precisely quantifying individual factors' contributions to the representation and readout of sensory information during a behavior. The widely used point process generalized linear model (GLM) approach provides a powerful framework for a quantitative description of neuronal processing as a function of various sensory and non-sensory inputs (encoding) as well as linking particular response components to particular behaviors (decoding), at the level of single trials and individual neurons. However, most existing variations of GLMs assume the neural systems to be time-invariant, making them inadequate for modeling nonstationary characteristics of neuronal sensitivity in higher visual areas. In this review, we summarize some of the existing GLM variations, with a focus on time-varying extensions. We highlight their applications to understanding neural representations in higher visual areas and decoding transient neuronal sensitivity as well as linking physiology to behavior through manipulation of model components. This time-varying class of statistical models provides valuable insights into the neural basis of various visual behaviors in higher visual areas and holds significant potential for uncovering the fundamental computational principles that govern neuronal processing underlying various behaviors in different regions of the brain.
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, United States
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, United States
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, United States
42
Tolooshams B, Matias S, Wu H, Temereanca S, Uchida N, Murthy VN, Masset P, Ba D. Interpretable deep learning for deconvolutional analysis of neural signals. bioRxiv [Preprint] 2024: 2024.01.05.574379. PMID: 38260512; PMCID: PMC10802267. DOI: 10.1101/2024.01.05.574379.
Abstract
The widespread adoption of deep learning to build models that capture the dynamics of neural populations is typically based on "black-box" approaches that lack an interpretable link between neural activity and function. Here, we propose to apply algorithm unrolling, a method for interpretable deep learning, to design the architecture of sparse deconvolutional neural networks and obtain a direct interpretation of network weights in relation to stimulus-driven single-neuron activity through a generative model. We characterize our method, referred to as deconvolutional unrolled neural learning (DUNL), and show its versatility by applying it to deconvolve single-trial local signals across multiple brain areas and recording modalities. To exemplify use cases of our decomposition method, we uncover multiplexed salience and reward prediction error signals from midbrain dopamine neurons in an unbiased manner, perform simultaneous event detection and characterization in somatosensory thalamus recordings, and characterize the responses of neurons in the piriform cortex. Our work leverages the advances in interpretable deep learning to gain a mechanistic understanding of neural dynamics.
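Algorithm unrolling treats each iteration of a sparse-coding solver as one network layer, which is what makes the learned weights interpretable. A minimal sketch of the kind of iteration that gets unrolled (plain ISTA for sparse deconvolution; the kernel, parameters, and setup are illustrative assumptions, not the DUNL architecture):

```python
import numpy as np

# Sparse deconvolution by ISTA: y = H @ x_true + noise, with x_true sparse.
# Each loop iteration below corresponds to one "unrolled" network layer.
rng = np.random.default_rng(3)
n, k = 120, 7
h = np.hanning(k)                        # assumed known convolution kernel
H = np.zeros((n + k - 1, n))
for j in range(n):
    H[j:j + k, j] = h                    # linear convolution as a matrix

x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1, 2, 5)  # sparse events
y = H @ x_true + 0.01 * rng.standard_normal(n + k - 1)

step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1/L, L = Lipschitz const. of grad
lam = 0.05                               # sparsity penalty
x = np.zeros(n)
for _ in range(300):
    x = x + step * H.T @ (y - H @ x)                          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
res = np.linalg.norm(y - H @ x) / np.linalg.norm(y)           # relative residual
```

In the unrolled setting, quantities such as the kernel in H and the threshold are learned as network weights, so they retain a direct generative-model interpretation.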
Affiliation(s)
- Bahareh Tolooshams
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
- Sara Matias
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Hao Wu
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Simona Temereanca
- Carney Institute for Brain Science, Brown University, Providence, RI 02906
- Naoshige Uchida
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Venkatesh N. Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Department of Psychology, McGill University, Montréal, QC H3A 1G1
- Demba Ba
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Kempner Institute for the Study of Natural & Artificial Intelligence, Harvard University, Cambridge, MA 02138
43
Chen Y, Chien J, Dai B, Lin D, Chen ZS. Identifying behavioral links to neural dynamics of multifiber photometry recordings in a mouse social behavior network. bioRxiv [Preprint] 2024: 2023.12.25.573308. PMID: 38234793; PMCID: PMC10793434. DOI: 10.1101/2023.12.25.573308.
Abstract
Distributed hypothalamic-midbrain neural circuits orchestrate complex behavioral responses during social interactions. How population-averaged neural activity measured by multi-fiber photometry (MFP) for calcium fluorescence signals correlates with social behaviors is a fundamental question. We propose a state-space analysis framework to characterize mouse MFP data based on dynamic latent variable models, which include continuous-state linear dynamical system (LDS) and discrete-state hidden semi-Markov model (HSMM). We validate these models on extensive MFP recordings during aggressive and mating behaviors in male-male and male-female interactions, respectively. Our results show that these models are capable of capturing both temporal behavioral structure and associated neural states. Overall, these analysis approaches provide an unbiased strategy to examine neural dynamics underlying social behaviors and reveals mechanistic insights into the relevant networks.
Affiliation(s)
- Yibo Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Program in Artificial Intelligence, University of Science and Technology of China, Hefei, Anhui, China
- Jonathan Chien
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Bing Dai
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Dayu Lin
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Zhe Sage Chen
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, USA
- Department of Biomedical Engineering, NYU Tandon School of Engineering, Brooklyn, NY, USA
44
Love K, Cao D, Chang JC, Dal'Bello LR, Ma X, O'Shea DJ, Schone HR, Shahbazi M, Smoulder A. Highlights from the 32nd Annual Meeting of the Society for the Neural Control of Movement. J Neurophysiol 2024; 131:75-87. PMID: 38057264. DOI: 10.1152/jn.00428.2023.
Affiliation(s)
- Kassia Love
- Massachusetts Eye and Ear, Boston, Massachusetts, United States
- Di Cao
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
- Center for Movement Studies, Kennedy Krieger Institute, Baltimore, Maryland, United States
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Lucas R Dal'Bello
- Laboratory of Neuromotor Physiology, IRCCS Fondazione Santa Lucia, Rome, Italy
- Xuan Ma
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States
- Daniel J O'Shea
- Department of Bioengineering, Stanford University, Stanford, California, United States
- Hunter R Schone
- Rehabilitation and Neural Engineering Laboratory, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Department of Physical Medicine and Rehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania, United States
- Mahdiyar Shahbazi
- Western Institute for Neuroscience, Western University, London, Ontario, Canada
- Adam Smoulder
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States
45
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. PMID: 38082181. DOI: 10.1038/s41551-023-01106-1.
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings'), achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
46
A neural network that enables flexible nonlinear inference from neural population activity. Nat Biomed Eng 2024; 8:9-10. PMID: 38086959. DOI: 10.1038/s41551-023-01111-4.
47
Dyer EL, Kording K. Why the simplest explanation isn't always the best. Proc Natl Acad Sci U S A 2023; 120:e2319169120. PMID: 38117857; PMCID: PMC10756184. DOI: 10.1073/pnas.2319169120.
Affiliation(s)
- Eva L. Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332
- Konrad Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
48
Wang JH, Tsin D, Engel TA. Predictive variational autoencoder for learning robust representations of time-series data. arXiv [Preprint] 2023: arXiv:2312.06932v1. PMID: 38168462; PMCID: PMC10760197.
Abstract
Variational autoencoders (VAEs) have been used extensively to discover low-dimensional latent factors governing neural activity and animal behavior. However, without careful model selection, the uncovered latent factors may reflect noise in the data rather than true underlying features, rendering such representations unsuitable for scientific interpretation. Existing solutions to this problem involve introducing additional measured variables or data augmentations specific to a particular data type. We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features. In addition, we introduce a model-selection metric based on smoothness over time in the latent space. We show that, together, these two constraints encouraging VAEs to be smooth over time produce robust latent representations and faithfully recover latent factors on synthetic datasets.
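A smoothness-over-time score of the kind described can be sketched in a few lines (the exact formula here is an illustrative assumption, not necessarily the paper's metric): penalize large step-to-step changes in the latent trajectory relative to its overall scale.

```python
import numpy as np

def smoothness_score(z):
    """z: (T, d) latent trajectory; lower = smoother relative to its scale.

    Illustrative definition: mean squared one-step difference, normalized
    by the overall variance of the latents.
    """
    dz = np.diff(z, axis=0)
    return np.mean(dz ** 2) / np.var(z)

# A latent that tracks slow structure scores as much smoother than one
# dominated by per-timestep noise.
t = np.linspace(0, 2 * np.pi, 200)
smooth = np.stack([np.sin(t), np.cos(t)], axis=1)   # slow 2-D latent
noisy = smooth + 0.5 * np.random.default_rng(2).standard_normal(smooth.shape)
```

Such a score can be computed for each candidate model's inferred latents, and the smoothest candidate selected, without requiring any extra measured variables.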
Affiliation(s)
- Julia H Wang
- Cold Spring Harbor Laboratory School of Biological Sciences, Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
- Dexter Tsin
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
| |
Collapse
|
49
|
Petkoski S. On the structure function dichotomy: A perspective from human brain network modeling. Comment on "Structure and function in artificial, zebrafish and human neural networks" by Peng Ji et al. Phys Life Rev 2023; 47:165-167. [PMID: 37918193 DOI: 10.1016/j.plrev.2023.10.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2023] [Accepted: 10/17/2023] [Indexed: 11/04/2023]
Affiliation(s)
- Spase Petkoski
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France.
| |
Collapse
|
50
|
Zhang Y, Ge F, Lin X, Xue J, Song Y, Xie H, He Y. Extract latent features of single-particle trajectories with historical experience learning. Biophys J 2023; 122:4451-4466. [PMID: 37885178 PMCID: PMC10698327 DOI: 10.1016/j.bpj.2023.10.023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 07/30/2023] [Accepted: 10/20/2023] [Indexed: 10/28/2023] Open
Abstract
Single-particle tracking has enabled real-time, in situ quantitative studies of complex systems. However, inferring dynamic state changes from noisy and undersampled trajectories remains challenging. Here, we introduce a data-driven method for extracting features of subtrajectories with historical experience learning (Deep-SEES), where a single-particle tracking analysis pipeline based on a self-supervised architecture automatically searches for the latent space, allowing effective segmentation of the underlying states from noisy trajectories without prior knowledge of the particle dynamics. We validated our method on a variety of noisy simulated and experimental data. Our results showed that the method can faithfully capture both stable states and their dynamic switching. In highly random systems, our method outperformed commonly used unsupervised methods in inferring motion states, which is important for understanding nanoparticles interacting with living cell membranes, active enzymes, and liquid-liquid phase separation. Self-generating latent features of trajectories could potentially improve the understanding, estimation, and prediction of many complex systems.
Collapse
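Deep-SEES learns its latent features with a self-supervised network, which is not reproduced here. As a hedged baseline illustration of the underlying task — segmenting motion states from a noisy trajectory — the sketch below clusters the windowed mean step length with a tiny two-means loop; the window size, the two-state assumption, and the synthetic slow/fast trajectory are all assumptions, not the authors' pipeline:

```python
import numpy as np

def segment_states(traj, window=10, n_iter=20):
    """Segment a 2-D trajectory (T, 2) into two motion states
    (e.g. slow/confined vs. fast/diffusive) by clustering the
    windowed mean step length with a minimal two-means loop."""
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)  # per-step displacement
    kernel = np.ones(window) / window
    feat = np.convolve(steps, kernel, mode="same")         # local mean step length
    c = np.array([feat.min(), feat.max()])                 # init: one slow, one fast center
    for _ in range(n_iter):
        labels = (np.abs(feat - c[0]) > np.abs(feat - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = feat[labels == k].mean()            # Lloyd update of each center
    return labels  # length T-1: 0 = slow state, 1 = fast state

# Synthetic trajectory: slow diffusion followed by fast diffusion.
rng = np.random.default_rng(1)
slow = np.cumsum(rng.normal(0, 0.05, (300, 2)), axis=0)
fast = slow[-1] + np.cumsum(rng.normal(0, 0.5, (300, 2)), axis=0)
labels = segment_states(np.vstack([slow, fast]))
# The second half of the trajectory should mostly be assigned the fast state.
assert labels[350:].mean() > 0.8 and labels[:250].mean() < 0.2
```

This hand-crafted feature is exactly the kind of prior-dependent choice the paper's self-supervised latent search is designed to avoid; it serves only to make the segmentation problem concrete.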
Affiliation(s)
- Yongyu Zhang
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
| | - Feng Ge
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
| | - Xijian Lin
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
| | - Jianfeng Xue
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
| | - Yuxin Song
- Department of Chemistry, Tsinghua University, Beijing, P.R. China
| | - Hao Xie
- Department of Automation, Tsinghua University, Beijing, P.R. China.
| | - Yan He
- Department of Chemistry, Tsinghua University, Beijing, P.R. China.
| |
Collapse
|