1. Sihn D, Chae S, Kim SP. A method to find temporal structure of neuronal coactivity patterns with across-trial correlations. J Neurosci Methods 2024;408:110172. PMID: 38782124. DOI: 10.1016/j.jneumeth.2024.110172.
Abstract
BACKGROUND: The across-trial correlation of neurons' coactivity patterns has emerged as important for information coding, but methods for finding its temporal structure remain largely unexplored.
NEW METHOD: We propose a method to find time clusters in which the coactivity patterns of neurons are correlated across trials. We transform the multidimensional neural activity at each timing into a coactivity pattern of binary states and predict the coactivity patterns at different timings, using a method devised for these predictions that we call general event prediction. Cross-temporal prediction accuracy is then used to estimate across-trial correlations between coactivity patterns at two timings, and time clusters are extracted from the cross-temporal prediction accuracy with a modified k-means algorithm.
RESULTS: The feasibility of the proposed method is verified through simulations based on ground truth. Applying it to a calcium imaging dataset recorded from the motor cortex of mice demonstrates time clusters of motor cortical coactivity patterns during a motor task.
COMPARISON WITH EXISTING METHODS: While the existing cosine similarity method, which does not account for across-trial correlation, shows temporal structure only for contralateral neural responses, the proposed method reveals it for both contralateral and ipsilateral responses, demonstrating the effect of across-trial correlations.
CONCLUSIONS: This study introduces a novel method for measuring the temporal structure of neuronal ensemble activity.
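The pipeline this abstract describes (binarize population activity into coactivity patterns, then look for blocks of time bins whose patterns correlate across trials) can be sketched on synthetic data. This is a deliberately simplified stand-in: the data, the summed-coactivity correlation proxy, and all sizes are invented for illustration, and it is not the authors' general event prediction or their modified k-means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: trials x time x neurons.
n_trials, n_time, n_neurons = 40, 30, 12
activity = rng.normal(size=(n_trials, n_time, n_neurons))
# Plant a time cluster: bins 10-19 share a trial-specific gain,
# so their coactivity is correlated across trials.
gain = 2.0 * rng.normal(size=(n_trials, 1, 1))
activity[:, 10:20, :] += gain

# Step 1: binarize each time bin into a coactivity pattern of binary states.
patterns = (activity > np.median(activity)).astype(int)

# Step 2: proxy for cross-temporal across-trial correlation -- correlate,
# across trials, the number of coactive neurons at each pair of time bins.
counts = patterns.sum(axis=2).astype(float)            # trials x time
Z = (counts - counts.mean(axis=0)) / (counts.std(axis=0) + 1e-12)
cross_temporal = Z.T @ Z / n_trials                    # time x time

off_diag = ~np.eye(10, dtype=bool)
inside = cross_temporal[10:20, 10:20][off_diag].mean()
outside = cross_temporal[:10, :10][off_diag].mean()
print(inside > outside)   # the planted epoch shows up as a correlated time cluster
```

A real analysis would replace the summed-coactivity proxy with the paper's prediction-accuracy measure and then cluster the time-by-time matrix.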
Affiliation(s)
- Duho Sihn
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
- Soyoung Chae
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
2. Nick Q, Gale DJ, Areshenkoff C, De Brouwer A, Nashed J, Wammes J, Zhu T, Flanagan R, Smallwood J, Gallivan J. Reconfigurations of cortical manifold structure during reward-based motor learning. eLife 2024;12:RP91928. PMID: 38916598. PMCID: PMC11198988. DOI: 10.7554/elife.91928.
Abstract
Adaptive motor behavior depends on the coordinated activity of multiple neural systems distributed across the brain. While the role of sensorimotor cortex in motor learning has been well established, how higher-order brain systems interact with sensorimotor cortex to guide learning is less well understood. Using functional MRI, we examined human brain activity during a reward-based motor task in which subjects learned to shape their hand trajectories through reinforcement feedback. We projected patterns of cortical and striatal functional connectivity onto a low-dimensional manifold space and examined how regions expanded and contracted along the manifold during learning. During early learning, we found that several sensorimotor areas in the dorsal attention network exhibited increased covariance with areas of the salience/ventral attention network and reduced covariance with areas of the default mode network (DMN). During late learning, these effects reversed, with sensorimotor areas now exhibiting increased covariance with DMN areas. However, areas in posteromedial cortex showed the opposite pattern across learning phases, with their connectivity suggesting a role in coordinating activity across different networks over time. Our results establish the neural changes that support reward-based motor learning and identify distinct transitions in the functional coupling of sensorimotor to transmodal cortex when adapting behavior.
Affiliation(s)
- Qasem Nick
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Daniel J Gale
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Corson Areshenkoff
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Anouk De Brouwer
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Joseph Nashed
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Medicine, Queen’s University, Kingston, Canada
- Jeffrey Wammes
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Tianyao Zhu
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Randy Flanagan
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Jonny Smallwood
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Jason Gallivan
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
- Department of Psychology, Queen’s University, Kingston, Canada
- Department of Biomedical and Molecular Sciences, Queen’s University, Kingston, Canada
3. Taylor NL, Whyte CJ, Munn BR, Chang C, Lizier JT, Leopold DA, Turchi JN, Zaborszky L, Müller EJ, Shine JM. Causal evidence for cholinergic stabilization of attractor landscape dynamics. Cell Rep 2024;43:114359. PMID: 38870015. DOI: 10.1016/j.celrep.2024.114359.
Abstract
There is substantial evidence that neuromodulatory systems critically influence brain state dynamics; however, most work has been purely descriptive. Here, using data that combine local inactivation of the basal forebrain with simultaneous measurement of resting-state fMRI activity in the macaque, we quantify the causal role of long-range cholinergic input in the stabilization of brain states in the cerebral cortex. Local inactivation of the nucleus basalis of Meynert (nbM) leads to a decrease in the energy barriers required for an fMRI state transition in cortical ongoing activity. Moreover, inactivation of particular nbM sub-regions predominantly affects information transfer in cortical regions known to receive direct anatomical projections. We reproduce these results in a simple neurodynamical model of cholinergic impact on neuronal firing rates and slow hyperpolarizing adaptation currents. We conclude that the cholinergic system plays a critical role in stabilizing macroscale brain state dynamics.
Affiliation(s)
- Natasha L Taylor
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Christopher J Whyte
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Brandon R Munn
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Catie Chang
- Vanderbilt School of Engineering, Vanderbilt University, Nashville, TN, USA
- Joseph T Lizier
- Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia; School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- David A Leopold
- Neurophysiology Imaging Facility, National Institute of Mental Health, Washington, DC, USA; Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA
- Janita N Turchi
- Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, USA
- Laszlo Zaborszky
- Centre for Molecular & Behavioral Neuroscience, Rutgers, The State University of New Jersey, Newark, NJ, USA
- Eli J Müller
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- James M Shine
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia; Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
4. Chang YT, Finkel EA, Xu D, O'Connor DH. Rule-based modulation of a sensorimotor transformation across cortical areas. eLife 2024;12:RP92620. PMID: 38842277. PMCID: PMC11156468. DOI: 10.7554/elife.92620.
Abstract
Flexible responses to sensory stimuli based on changing rules are critical for adapting to a dynamic environment. However, it remains unclear how the brain encodes and uses rule information to guide behavior. Here, we made single-unit recordings while head-fixed mice performed a cross-modal sensory selection task where they switched between two rules: licking in response to tactile stimuli while rejecting visual stimuli, or vice versa. Along a cortical sensorimotor processing stream including the primary (S1) and secondary (S2) somatosensory areas, and the medial (MM) and anterolateral (ALM) motor areas, single-neuron activity distinguished between the two rules both prior to and in response to the tactile stimulus. We hypothesized that neural populations in these areas would show rule-dependent preparatory states, which would shape the subsequent sensory processing and behavior. This hypothesis was supported for the motor cortical areas (MM and ALM) by findings that (1) the current task rule could be decoded from pre-stimulus population activity; (2) neural subspaces containing the population activity differed between the two rules; and (3) optogenetic disruption of pre-stimulus states impaired task performance. Our findings indicate that flexible action selection in response to sensory input can occur via configuration of preparatory states in the motor cortex.
Affiliation(s)
- Yi-Ting Chang
- Solomon H. Snyder Department of Neuroscience, Kavli Neuroscience Discovery Institute, Brain Science Institute, Johns Hopkins University School of Medicine, Baltimore, United States
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Eric A Finkel
- Solomon H. Snyder Department of Neuroscience, Kavli Neuroscience Discovery Institute, Brain Science Institute, Johns Hopkins University School of Medicine, Baltimore, United States
- Duo Xu
- Solomon H. Snyder Department of Neuroscience, Kavli Neuroscience Discovery Institute, Brain Science Institute, Johns Hopkins University School of Medicine, Baltimore, United States
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
- Daniel H O'Connor
- Solomon H. Snyder Department of Neuroscience, Kavli Neuroscience Discovery Institute, Brain Science Institute, Johns Hopkins University School of Medicine, Baltimore, United States
- Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, United States
5. Kim JH, Daie K, Li N. A combinatorial neural code for long-term motor memory. bioRxiv [Preprint] 2024:2024.06.05.597627. PMID: 38895416. PMCID: PMC11185691. DOI: 10.1101/2024.06.05.597627.
Abstract
Motor skill repertoire can be stably retained over long periods, but the neural mechanism underlying stable memory storage remains poorly understood. Moreover, it is unknown how existing motor memories are maintained as new motor skills are continuously acquired. Here we tracked neural representation of learned actions throughout a significant portion of a mouse's lifespan, and we show that learned actions are stably retained in motor memory in combination with context, which protects existing memories from erasure during new motor learning. We used automated home-cage training to establish a continual learning paradigm in which mice learned to perform directional licking in different task contexts. We combined this paradigm with chronic two-photon imaging of motor cortex activity for up to 6 months. Within the same task context, activity driving directional licking was stable over time with little representational drift. When learning new task contexts, new preparatory activity emerged to drive the same licking actions. Learning created parallel new motor memories while retaining the previous memories. Re-learning to make the same actions in the previous task context re-activated the previous preparatory activity, even months later. At the same time, continual learning of new task contexts kept creating new preparatory activity patterns. Context-specific memories, as we observed in the motor system, may provide a solution for stable memory storage throughout continual learning. Learning in new contexts produces parallel new representations instead of modifying existing representations, thus protecting existing motor repertoire from erasure.
6. Srinath R, Ni AM, Marucci C, Cohen MR, Brainard DH. Orthogonal neural representations support perceptual judgements of natural stimuli. bioRxiv [Preprint] 2024:2024.02.14.580134. PMID: 38464018. PMCID: PMC10925131. DOI: 10.1101/2024.02.14.580134.
Abstract
In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task-relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of objects and features despite the tremendous richness of natural visual scenes.
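The orthogonality logic in this abstract admits a compact toy demonstration. Everything below (the encoding axes, feature distributions, and noise level) is hypothetical and unrelated to the V4 data; it only illustrates why a linear readout of a task-relevant feature is undisturbed by variation along an orthogonal, task-irrelevant axis.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_samples = 60, 500

# Two encoding axes, constructed to be orthogonal.
axis_a = rng.normal(size=n_neurons)
axis_a /= np.linalg.norm(axis_a)
axis_b = rng.normal(size=n_neurons)
axis_b -= (axis_b @ axis_a) * axis_a   # remove the component along axis_a
axis_b /= np.linalg.norm(axis_b)

a = rng.normal(size=n_samples)   # task-relevant feature (e.g. object position)
b = rng.normal(size=n_samples)   # task-irrelevant background feature
noise = 0.1 * rng.normal(size=(n_samples, n_neurons))
R = np.outer(a, axis_a) + np.outer(b, axis_b) + noise

# Linear readout of a along its encoding axis: because axis_b is orthogonal,
# variation in b does not intrude on the estimate.
a_hat = R @ axis_a
corr_a = np.corrcoef(a_hat, a)[0, 1]
leak_b = np.corrcoef(a_hat, b)[0, 1]
print(round(corr_a, 2), round(leak_b, 2))   # high accuracy, negligible leak
```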
Affiliation(s)
- Ramanujan Srinath
- equal contribution
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Amy M. Ni
- equal contribution
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Claire Marucci
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marlene R. Cohen
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- equal contribution
- David H. Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- equal contribution
7. Jarne C, Caruso M. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint. Cogn Neurodyn 2024;18:1323-1335. PMID: 38826641. PMCID: PMC11143133. DOI: 10.1007/s11571-023-09956-w.
Abstract
To comprehend and enhance models that describe various brain regions, it is important to study the dynamics of trained recurrent neural networks. Including Dale's law in such models usually presents several challenges, but it is an important aspect that allows computational models to better capture the characteristics of the brain. Here we present a framework to train networks under this constraint, and we use it to train them on simple decision-making tasks. We characterized the eigenvalue distributions of the recurrent weight matrices of such networks. Interestingly, we discovered that the non-dominant eigenvalues of the recurrent weight matrix are distributed in a circle with a radius less than 1 for networks whose initial condition before training was random normal, and in a ring for those whose initial condition was random orthogonal. In both cases, the radius depends neither on the fraction of excitatory and inhibitory units nor on the size of the network. The diminution of the radius, compared to networks trained without the constraint, has implications for the activity and dynamics that we discuss here. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09956-w.
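The circular-law geometry this abstract describes can be illustrated with a random Dale-constrained matrix, in the spirit of balanced excitation-inhibition spectral analyses. This sketch is not the paper's trained networks: the size, excitatory fraction, and weight distribution are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, f_exc = 400, 0.8     # network size and excitatory fraction (arbitrary)
n_exc = int(N * f_exc)

# Dale's law: each presynaptic unit is either excitatory or inhibitory,
# so each column of the weight matrix has a single sign.
W = np.abs(rng.normal(size=(N, N))) / np.sqrt(N)
W[:, n_exc:] *= -1.0

# Subtract the column means to isolate the zero-mean bulk of the spectrum;
# the nonzero mean otherwise produces a single dominant outlier eigenvalue.
W_bulk = W - W.mean(axis=0, keepdims=True)

radius = np.abs(np.linalg.eigvals(W_bulk)).max()
print(round(radius, 2))   # bulk radius below 1, set by the weight variance
```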
Affiliation(s)
- Cecilia Jarne
- Departamento de Ciencia y Tecnología, Universidad Nacional de Quilmes, Bernal, Argentina
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- CONICET, Buenos Aires, Argentina
- Mariano Caruso
- Present address: Fundación I+D del Software Libre (FIDESOL), Granada, Spain
- Universidad Internacional de La Rioja (UNIR), La Rioja, Spain
8. Fenton AA. Remapping revisited: how the hippocampus represents different spaces. Nat Rev Neurosci 2024;25:428-448. PMID: 38714834. DOI: 10.1038/s41583-024-00817-x.
Abstract
The representation of distinct spaces by hippocampal place cells has been linked to changes in their place fields (the locations in the environment where the place cells discharge strongly), a phenomenon that has been termed 'remapping'. Remapping has been assumed to be accompanied by the reorganization of subsecond cofiring relationships among the place cells, potentially maximizing hippocampal information coding capacity. However, several observations challenge this standard view. For example, place cells exhibit mixed selectivity, encode non-positional variables, can have multiple place fields and exhibit unreliable discharge in fixed environments. Furthermore, recent evidence suggests that, when measured at subsecond timescales, the moment-to-moment cofiring of a pair of cells in one environment is remarkably similar in another environment, despite remapping. Here, I propose that remapping is a misnomer for the changes in place fields across environments and suggest instead that internally organized manifold representations of hippocampal activity are actively registered to different environments to enable navigation, promote memory and organize knowledge.
Affiliation(s)
- André A Fenton
- Center for Neural Science, New York University, New York, NY, USA.
- Neuroscience Institute at the NYU Langone Medical Center, New York, NY, USA.
9. Vinograd A, Nair A, Linderman SW, Anderson DJ. Intrinsic Dynamics and Neural Implementation of a Hypothalamic Line Attractor Encoding an Internal Behavioral State. bioRxiv [Preprint] 2024:2024.05.21.595051. PMID: 38826298. PMCID: PMC11142118. DOI: 10.1101/2024.05.21.595051.
Abstract
Line attractors are emergent population dynamics hypothesized to encode continuous variables such as head direction and internal states. In mammals, direct evidence of neural implementation of a line attractor has been hindered by the challenge of targeting perturbations to specific neurons within contributing ensembles. Estrogen receptor type 1 (Esr1)-expressing neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) show line attractor dynamics in male mice during fighting. We hypothesized that these dynamics may encode continuous variation in the intensity of an internal aggressive state. Here, we report that these neurons also show line attractor dynamics in head-fixed mice observing aggression. We exploit this finding to identify and perturb line attractor-contributing neurons using 2-photon calcium imaging and holographic optogenetic perturbations. On-manifold perturbations demonstrate that integration and persistent activity are intrinsic properties of these neurons which drive the system along the line attractor, while transient off-manifold perturbations reveal rapid relaxation back into the attractor. Furthermore, stimulation and imaging reveal selective functional connectivity among attractor-contributing neurons. Intriguingly, individual differences among mice in line attractor stability were correlated with the degree of functional connectivity among contributing neurons. Mechanistic modelling indicates that dense subnetwork connectivity and slow neurotransmission are required to explain our empirical findings. Our work bridges circuit and manifold paradigms, shedding light on the intrinsic and operational dynamics of a behaviorally relevant mammalian line attractor.
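A minimal linear caricature reproduces the two perturbation signatures reported here: persistence for on-manifold perturbations and rapid relaxation for off-manifold ones. The dynamics below are a hypothetical 5-dimensional linear map with one eigenvalue pinned at 1, not a model of VMHvl or of the paper's mechanistic network.

```python
import numpy as np

# Toy line attractor: the eigenvalue-1 mode defines the attractor direction
# (integration + persistence); all other modes decay back onto the line.
rng = np.random.default_rng(5)
n = 5
U = np.linalg.qr(rng.normal(size=(n, n)))[0]          # orthonormal eigenbasis
W = U @ np.diag([1.0, 0.5, 0.4, 0.3, 0.2]) @ U.T

def evolve(x0, steps=50):
    x = x0.copy()
    for _ in range(steps):
        x = W @ x
    return x

on = evolve(0.8 * U[:, 0])    # on-manifold perturbation: persists
off = evolve(0.8 * U[:, 1])   # off-manifold perturbation: relaxes away

print(round(np.linalg.norm(on), 3), round(np.linalg.norm(off), 6))
```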
Affiliation(s)
- Amit Vinograd
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA
- Tianqiao and Chrissy Chen Institute for Neuroscience, Caltech, Pasadena, USA
- Howard Hughes Medical Institute, Chevy Chase, USA
- Aditya Nair
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA
- Tianqiao and Chrissy Chen Institute for Neuroscience, Caltech, Pasadena, USA
- Howard Hughes Medical Institute, Chevy Chase, USA
- Scott W. Linderman
- Department of Statistics, Stanford University, Stanford, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, USA
- David J. Anderson
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA
- Tianqiao and Chrissy Chen Institute for Neuroscience, Caltech, Pasadena, USA
- Howard Hughes Medical Institute, Chevy Chase, USA
Collapse
|
10
|
Rodriguez AC, Perich MG, Miller L, Humphries MD. Motor cortex latent dynamics encode spatial and temporal arm movement parameters independently. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.05.26.542452. [PMID: 37292834 PMCID: PMC10246015 DOI: 10.1101/2023.05.26.542452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The fluid movement of an arm requires multiple spatiotemporal parameters to be set independently. Recent studies have argued that arm movements are generated by the collective dynamics of neurons in motor cortex. An untested prediction of this hypothesis is that independent parameters of movement must map to independent components of the neural dynamics. Using a task where monkeys made a sequence of reaching movements to randomly placed targets, we show that the spatial and temporal parameters of arm movements are independently encoded in the low-dimensional trajectories of population activity in motor cortex: Each movement's direction corresponds to a fixed neural trajectory through neural state space and its speed to how quickly that trajectory is traversed. Recurrent neural network models show this coding allows independent control over the spatial and temporal parameters of movement by separate network parameters. Our results support a key prediction of the dynamical systems view of motor cortex, but also argue that not all parameters of movement are defined by different trajectories of population activity.
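The claimed factorization (direction fixes the neural trajectory, speed fixes only how fast it is traversed) can be sketched with a hypothetical 2-D latent curve. This is an illustration of the coding scheme, not the monkeys' data or the paper's RNN models.

```python
import numpy as np

# Direction fixes a curve in state space; speed only sets traversal rate.
def traverse(speed, T=1.0, dt=0.01):
    t = np.arange(0.0, T, dt)
    phase = np.clip(speed * t, 0.0, 1.0)   # position along the fixed curve
    # A hypothetical 2-D latent curve for one reach direction.
    return np.stack([np.sin(np.pi * phase), phase ** 2], axis=1)

slow = traverse(speed=1.0)   # 100 time steps to cover the curve
fast = traverse(speed=2.0)   # the same curve, covered twice as fast

# The fast trajectory visits the states of the slow one, two steps at a time:
# same spatial path, different temporal parameter.
print(np.allclose(fast[:50], slow[::2]))
```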
Affiliation(s)
- Matthew G. Perich
- Département de neurosciences, Faculté de médecine, Université de Montréal, Montréal, Canada
- Québec Artificial Intelligence Institute (Mila), Québec, Canada
- Lee Miller
- Northwestern University, Department of Biomedical Engineering, Chicago, USA
- Mark D. Humphries
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
11. Mastrovito D, Liu YH, Kusmierz L, Shea-Brown E, Koch C, Mihalas S. Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks. bioRxiv [Preprint] 2024:2024.05.15.594236. PMID: 38798582. PMCID: PMC11118502. DOI: 10.1101/2024.05.15.594236.
Abstract
Recurrent neural networks exhibit chaotic dynamics when the variance in their connection strengths exceeds a critical value. Recent work indicates connection variance also modulates learning strategies; networks learn "rich" representations when initialized with low coupling and "lazier" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in a measure that clinically discriminates levels of consciousness, the perturbational complexity index (PCIst). Networks with high values of PCIst exhibit stable dynamics and rich learning, suggesting that a consciousness prior may promote rich learning. These results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
12. Chang JC, Perich MG, Miller LE, Gallego JA, Clopath C. De novo motor learning creates structure in neural activity that shapes adaptation. Nat Commun 2024;15:4084. PMID: 38744847. PMCID: PMC11094149. DOI: 10.1038/s41467-024-48008-7.
Abstract
Animals can quickly adapt learned movements to external perturbations, and their existing motor repertoire likely influences their ease of adaptation. Long-term learning causes lasting changes in neural connectivity, which shapes the activity patterns that can be produced during adaptation. Here, we examined how a neural population's existing activity patterns, acquired through de novo learning, affect subsequent adaptation by modeling motor cortical neural population dynamics with recurrent neural networks. We trained networks on different motor repertoires comprising varying numbers of movements, which they acquired following various learning experiences. Networks with multiple movements had more constrained and robust dynamics, which were associated with more defined neural 'structure'-organization in the available population activity patterns. This structure facilitated adaptation, but only when the changes imposed by the perturbation were congruent with the organization of the inputs and the structure in neural activity acquired during de novo learning. These results highlight trade-offs in skill acquisition and demonstrate how different learning experiences can shape the geometrical properties of neural population activity and subsequent adaptation.
Affiliation(s)
- Joanna C Chang
- Department of Bioengineering, Imperial College London, London, UK
- Matthew G Perich
- Département de Neurosciences, Faculté de Médecine, Université de Montréal, Montréal, QC, Canada
- Mila, Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Lee E Miller
- Departments of Physiology, Biomedical Engineering and Physical Medicine and Rehabilitation, Northwestern University and Shirley Ryan Ability Lab, Chicago, IL, USA
- Juan A Gallego
- Department of Bioengineering, Imperial College London, London, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
13. Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024;10:eadk7257. PMID: 38701208. DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories-handwritten digits-with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
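The knob this abstract exploits can be seen in a mean-field sketch of Tsodyks-Markram-style short-term depression: lengthening the recovery time constant (one plausible neuromodulatory target) rescales the synaptic dynamics. All parameter values below are arbitrary, and this is a single-synapse rate sketch, not the paper's trained recurrent networks.

```python
import numpy as np

def depression_trace(tau_rec, U=0.5, rate=20.0, T=2.0, dt=1e-3):
    """Euler-integrated depression resource x: recovers with tau_rec and is
    consumed by presynaptic activity at a fixed rate (utilization U)."""
    x = np.empty(int(T / dt))
    xt = 1.0
    for i in range(x.size):
        xt += dt * ((1.0 - xt) / tau_rec - U * rate * xt)
        x[i] = xt
    return x

fast = depression_trace(tau_rec=0.1)   # quick recovery
slow = depression_trace(tau_rec=0.4)   # a neuromodulator could lengthen this
# Steady state is 1 / (1 + U * rate * tau_rec): ~0.50 vs ~0.20 here.
print(round(fast[-1], 2), round(slow[-1], 2))
```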
Affiliation(s)
- Shanglin Zhou
- Institute for Translational Brain Research, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
14
Fortunato C, Bennasar-Vázquez J, Park J, Chang JC, Miller LE, Dudman JT, Perich MG, Gallego JA. Nonlinear manifolds underlie neural population activity during behaviour. bioRxiv 2024:2023.07.18.549575. [PMID: 37503015 PMCID: PMC10370078 DOI: 10.1101/2023.07.18.549575]
Abstract
There is rich variety in the activity of single neurons recorded during behaviour. Yet, these diverse single neuron responses can be well described by relatively few patterns of neural co-modulation. The study of such low-dimensional structure of neural population activity has provided important insights into how the brain generates behaviour. Virtually all of these studies have used linear dimensionality reduction techniques to estimate these population-wide co-modulation patterns, constraining them to a flat "neural manifold". Here, we hypothesised that since neurons have nonlinear responses and make thousands of distributed and recurrent connections that likely amplify such nonlinearities, neural manifolds should be intrinsically nonlinear. Combining neural population recordings from monkey, mouse, and human motor cortex, and mouse striatum, we show that: 1) neural manifolds are intrinsically nonlinear; 2) their nonlinearity becomes more evident during complex tasks that require more varied activity patterns; and 3) manifold nonlinearity varies across architecturally distinct brain regions. Simulations using recurrent neural network models confirmed the proposed relationship between circuit connectivity and manifold nonlinearity, including the differences across architecturally distinct regions. Thus, neural manifolds underlying the generation of behaviour are inherently nonlinear, and properly accounting for such nonlinearities will be critical as neuroscientists move towards studying numerous brain regions involved in increasingly complex and naturalistic behaviours.
Affiliation(s)
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London, UK
- Junchol Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Joanna C. Chang
- Department of Bioengineering, Imperial College London, London, UK
- Lee E. Miller
- Department of Neurosciences, Northwestern University, Chicago, IL, USA
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, USA
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA, and Shirley Ryan Ability Lab, Chicago, IL, USA
- Joshua T. Dudman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Matthew G. Perich
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, Québec, Canada
- Québec Artificial Intelligence Institute (MILA), Montréal, Québec, Canada
- Juan A. Gallego
- Department of Bioengineering, Imperial College London, London, UK
15
Talpir I, Livneh Y. Stereotyped goal-directed manifold dynamics in the insular cortex. Cell Rep 2024; 43:114027. [PMID: 38568813 PMCID: PMC11063631 DOI: 10.1016/j.celrep.2024.114027]
Abstract
The insular cortex is involved in diverse processes, including bodily homeostasis, emotions, and cognition. However, we lack a comprehensive understanding of how it processes information at the level of neuronal populations. We leveraged recent advances in unsupervised machine learning to study insular cortex population activity patterns (i.e., neuronal manifold) in mice performing goal-directed behaviors. We find that the insular cortex activity manifold is remarkably consistent across different animals and under different motivational states. Activity dynamics within the neuronal manifold are highly stereotyped during rewarded trials, enabling robust prediction of single-trial outcomes across different mice and across various natural and artificial motivational states. Comparing goal-directed behavior with self-paced free consumption, we find that the stereotyped activity patterns reflect task-dependent goal-directed reward anticipation, and not licking, taste, or positive valence. These findings reveal a core computation in insular cortex that could explain its involvement in pathologies involving aberrant motivations.
Affiliation(s)
- Itay Talpir
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Yoav Livneh
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
16
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. [PMID: 38658028 DOI: 10.1162/neco_a_01658]
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
17
Guo X, Zhang X, Liu J, Zhai G, Zhang T, Zhou R, Lu H, Gao L. Resolving heterogeneity in dynamics of synchronization stability within the salience network in autism spectrum disorder. Prog Neuropsychopharmacol Biol Psychiatry 2024; 131:110956. [PMID: 38296155 DOI: 10.1016/j.pnpbp.2024.110956]
Abstract
BACKGROUND Heterogeneity in resting-state functional connectivity (FC) is one of the characteristics of autism spectrum disorder (ASD). Traditional resting-state FC primarily focuses on linear correlations, ignoring the nonlinear properties involved in synchronization between networks or brain regions. METHODS In the present study, cross-recurrence quantification analysis, a nonlinear method based on dynamical systems, was utilized to quantify the synchronization stability between brain regions within the salience network (SN) in ASD. Using the resting-state functional magnetic resonance imaging data of 207 children (ASD/typically-developing controls (TC): 105/102) in the Autism Brain Imaging Data Exchange database, we analyzed the laminarity and trapping-time differences in synchronization stability between the ASD subtypes derived by a K-means clustering analysis and the TC group, and examined the relationship between synchronization stability and the severity of clinical symptoms in the ASD subtypes. RESULTS Based on the synchronization stability within the SN of ASD, we identified two subtypes that showed opposite changes in synchronization stability relative to the TC group. In addition, the synchronization stability of ASD subtypes 1 and 2 predicted impairments in social interaction and communication, respectively. CONCLUSIONS These findings reveal that ASD subgroups with different patterns of synchronization stability within the SN exhibit distinct clinical symptoms, and highlight the importance of exploring the potential neural mechanisms of ASD from a nonlinear perspective.
Affiliation(s)
- Xiaonan Guo
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China.
- Xia Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China
- Junfeng Liu
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, China
- Guangjin Zhai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China
- Tao Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China
- Rongjuan Zhou
- Maternity and Child Health Hospital of Qinhuangdao, Qinhuangdao 066000, China
- Huibin Lu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China
- Le Gao
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China; Hebei Key Laboratory of Information Transmission and Signal Processing, Yanshan University, Qinhuangdao 066004, China
18
Rush ER, Heckman C, Jayaram K, Humbert JS. Neural dynamics of robust legged robots. Front Robot AI 2024; 11:1324404. [PMID: 38699630 PMCID: PMC11063321 DOI: 10.3389/frobt.2024.1324404]
Abstract
Legged robot control has improved in recent years with the rise of deep reinforcement learning; however, the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Similar to past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by simultaneously pairing physical disturbances with targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree to which various sensory feedback channels drive this reflexive behavior. We also find that recurrent dynamics are implicated in robust behavior, and utilize sampling-based ablation methods to identify these key neurons. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.
Affiliation(s)
- Eugene R. Rush
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- Christoffer Heckman
- Department of Computer Science, University of Colorado Boulder, Boulder, CO, United States
- Kaushik Jayaram
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
- J. Sean Humbert
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States
19
Wang T, Chen Y, Zhang Y, Cui H. Multiplicative joint coding in preparatory activity for reaching sequence in macaque motor cortex. Nat Commun 2024; 15:3153. [PMID: 38605030 PMCID: PMC11009282 DOI: 10.1038/s41467-024-47511-1]
Abstract
Although the motor cortex has been found to be modulated by sensory or cognitive sequences, the linkage between multiple movement elements and sequence-related responses is not yet understood. Here, we recorded neuronal activity from the motor cortex with implanted micro-electrode arrays and single electrodes while monkeys performed a double-reach task that was instructed by simultaneously presented memorized cues. We found a substantial multiplicative component jointly tuned to impending and subsequent reaches during preparation; during execution, the coding mechanism then transferred to an additive manner. This multiplicative joint coding, which also spontaneously emerged in recurrent neural networks trained for double reach, enriches neural patterns for sequential movement and might explain the linear readout of elemental movements.
Affiliation(s)
- Tianwei Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Yun Chen
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Yiheng Zhang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- He Cui
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Chinese Institute for Brain Research, Beijing, 102206, China
- Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai, 200031, China
20
Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024:S1364-6613(24)00053-6. [PMID: 38580528 DOI: 10.1016/j.tics.2024.02.011]
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activity. However, recent evidence shows that neural population activity during WM maintenance undergoes dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons why classical models do not exhibit dynamic coding, whereas task-optimized models naturally do. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
21
Greene P, Bastian AJ, Schieber MH, Sarma SV. Optimal reaching subject to computational and physical constraints reveals structure of the sensorimotor control system. Proc Natl Acad Sci U S A 2024; 121:e2319313121. [PMID: 38551834 PMCID: PMC10998569 DOI: 10.1073/pnas.2319313121]
Abstract
Optimal feedback control provides an abstract framework describing the architecture of the sensorimotor system without prescribing implementation details such as what coordinate system to use, how feedback is incorporated, or how to accommodate changing task complexity. We investigate how such details are determined by computational and physical constraints by creating a model of the upper limb sensorimotor system in which all connection weights between neurons, feedback, and muscles are unknown. By optimizing these parameters with respect to an objective function, we find that the model exhibits a preference for an intrinsic (joint angle) coordinate representation of inputs and feedback and learns to calculate a weighted feedforward and feedback error. We further show that complex reaches around obstacles can be achieved by augmenting our model with a path-planner based on via points. The path-planner revealed "avoidance" neurons that encode directions to reach around obstacles and "placement" neurons that make fine-tuned adjustments to via point placement. Our results demonstrate the surprising capability of computationally constrained systems and highlight interesting characteristics of the sensorimotor system.
Affiliation(s)
- Patrick Greene
- Institute for Computational Medicine, The Johns Hopkins University, Baltimore, MD 21218
- Amy J. Bastian
- Kennedy Krieger Institute, Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Marc H. Schieber
- Department of Neurology, University of Rochester, Rochester, NY 14642
- Sridevi V. Sarma
- Institute for Computational Medicine, The Johns Hopkins University, Baltimore, MD 21218
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine & Whiting School of Engineering, Baltimore, MD 21218
22
Meng R, Bouchard KE. Bayesian inference of structured latent spaces from neural population activity with the orthogonal stochastic linear mixing model. PLoS Comput Biol 2024; 20:e1011975. [PMID: 38669271 PMCID: PMC11078355 DOI: 10.1371/journal.pcbi.1011975]
Abstract
The brain produces diverse functions, from perceiving sounds to producing arm reaches, through the collective activity of populations of many neurons. Determining if and how the features of these exogenous variables (e.g., sound frequency, reach angle) are reflected in population neural activity is important for understanding how the brain operates. Often, high-dimensional neural population activity is confined to low-dimensional latent spaces. However, many current methods fail to extract latent spaces that are clearly structured by exogenous variables. This has contributed to a debate about whether or not brains should be thought of as dynamical systems or representational systems. Here, we developed a new latent process Bayesian regression framework, the orthogonal stochastic linear mixing model (OSLMM), which introduces an orthogonality constraint amongst time-varying mixture coefficients, and provide Markov chain Monte Carlo inference procedures. We demonstrate superior performance of OSLMM on latent trajectory recovery in synthetic experiments and show superior computational efficiency and prediction performance on several real-world benchmark data sets. We primarily focus on demonstrating the utility of OSLMM in two neural data sets: μECoG recordings from rat auditory cortex during presentation of pure tones and multi-single unit recordings from monkey motor cortex during complex arm reaching. We show that OSLMM achieves superior or comparable predictive accuracy of neural data and decoding of external variables (e.g., reach velocity). Most importantly, in both experimental contexts, we demonstrate that OSLMM latent trajectories directly reflect features of the sounds and reaches, demonstrating that neural dynamics are structured by neural representations. Together, these results demonstrate that OSLMM will be useful for the analysis of diverse, large-scale biological time-series datasets.
Affiliation(s)
- Rui Meng
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Kristofer E. Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Scientific Data Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California Berkeley, Berkeley, California, United States of America
23
Kay K, Biderman N, Khajeh R, Beiran M, Cueva CJ, Shohamy D, Jensen G, Wei XX, Ferrera VP, Abbott LF. Emergent neural dynamics and geometry for generalization in a transitive inference task. PLoS Comput Biol 2024; 20:e1011954. [PMID: 38662797 PMCID: PMC11125559 DOI: 10.1371/journal.pcbi.1011954]
Abstract
Relational cognition-the ability to infer relationships that generalize to novel combinations of objects-is fundamental to human and animal intelligence. Despite this importance, it remains unclear how relational cognition is implemented in the brain due in part to a lack of hypotheses and predictions at the levels of collective neural activity and behavior. Here we discovered, analyzed, and experimentally tested neural networks (NNs) that perform transitive inference (TI), a classic relational task (if A > B and B > C, then A > C). We found NNs that (i) generalized perfectly, despite lacking overt transitive structure prior to training, (ii) generalized when the task required working memory (WM), a capacity thought to be essential to inference in the brain, (iii) emergently expressed behaviors long observed in living subjects, in addition to a novel order-dependent behavior, and (iv) expressed different task solutions yielding alternative behavioral and neural predictions. Further, in a large-scale experiment, we found that human subjects performing WM-based TI showed behavior inconsistent with a class of NNs that characteristically expressed an intuitive task solution. These findings provide neural insights into a classical relational ability, with wider implications for how the brain realizes relational cognition.
Affiliation(s)
- Kenneth Kay
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Natalie Biderman
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Psychology, Columbia University, New York, New York, United States of America
- Ramin Khajeh
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Manuel Beiran
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Christopher J. Cueva
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Daphna Shohamy
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Psychology, Columbia University, New York, New York, United States of America
- The Kavli Institute for Brain Science, Columbia University, New York, New York, United States of America
- Greg Jensen
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
- Department of Psychology at Reed College, Portland, Oregon, United States of America
- Xue-Xin Wei
- Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Vincent P. Ferrera
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
- Department of Psychiatry, Columbia University Medical Center, New York, New York, United States of America
- LF Abbott
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- The Kavli Institute for Brain Science, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
24
McNamee DC. The generative neural microdynamics of cognitive processing. Curr Opin Neurobiol 2024; 85:102855. [PMID: 38428170 DOI: 10.1016/j.conb.2024.102855]
Abstract
The entorhinal cortex and hippocampus form a recurrent network that informs many cognitive processes, including memory, planning, navigation, and imagination. Neural recordings from these regions reveal spatially organized population codes corresponding to external environments and abstract spaces. Aligning the former cognitive functionalities with the latter neural phenomena is a central challenge in understanding the entorhinal-hippocampal circuit (EHC). Disparate experiments demonstrate a surprising level of complexity and apparent disorder in the intricate spatiotemporal dynamics of sequential non-local hippocampal reactivations, which occur particularly, though not exclusively, during immobile pauses and rest. We review these phenomena with a particular focus on their apparent lack of physical simulative realism. These observations are then integrated within a theoretical framework and proposed neural circuit mechanisms that normatively characterize this neural complexity by conceiving different regimes of hippocampal microdynamics as neuromarkers of diverse cognitive computations.
25
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
26
Jarne C. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights. Front Syst Neurosci 2024; 18:1269190. [PMID: 38600907 PMCID: PMC11004305 DOI: 10.3389/fnsys.2024.1269190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2023] [Accepted: 03/11/2024] [Indexed: 04/12/2024] Open
Abstract
Training neural networks to perform different tasks is relevant across various disciplines. In particular, Recurrent Neural Networks (RNNs) are of great interest in computational neuroscience. Open-source machine-learning frameworks such as TensorFlow and Keras have produced significant changes in the development of technologies that we currently use. This work contributes by comprehensively investigating and describing the application of RNNs for temporal processing through a study of a 3-bit Flip Flop memory implementation. We delve into the entire modeling process, encompassing equations, task parametrization, and software development. The obtained networks are meticulously analyzed to elucidate dynamics, aided by an array of visualization and analysis tools. Moreover, the provided code is versatile enough to facilitate the modeling of diverse tasks and systems. Furthermore, we present how memory states can be efficiently stored in the vertices of a cube in the dimensionally reduced space, supplementing previous results with a distinct approach.
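Since the 3-bit flip-flop task is the centerpiece of this study, a minimal sketch of its input/target generation may help; the parameter names, pulse probability, and start state below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def flip_flop_task(T=500, n_bits=3, p_pulse=0.02, seed=0):
    """One trial of the N-bit flip-flop task: each input channel emits sparse
    +/-1 pulses; the target holds the last pulse value seen on that channel."""
    rng = np.random.default_rng(seed)
    inputs = np.zeros((T, n_bits))
    targets = np.zeros((T, n_bits))
    state = np.ones(n_bits)                  # arbitrary defined start state
    for t in range(T):
        pulses = rng.random(n_bits) < p_pulse
        signs = rng.choice([-1.0, 1.0], size=n_bits)
        inputs[t, pulses] = signs[pulses]    # transient pulse on the input
        state[pulses] = signs[pulses]        # memory flips to the pulse sign
        targets[t] = state                   # output must hold the memory
    return inputs, targets

inp, tgt = flip_flop_task()
print(inp.shape, tgt.shape)  # (500, 3) (500, 3)
```

An RNN trained on (`inp`, `tgt`) pairs must maintain 2^3 = 8 discrete memories, which is why the trained networks store them at the vertices of a cube in the reduced state space.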
Affiliation(s)
- Cecilia Jarne
- Departamento de Ciencia y Tecnologia de la Universidad Nacional de Quilmes, Bernal, Quilmes, Buenos Aires, Argentina
- CONICET, Buenos Aires, Argentina
- Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
27
Wolff M, Halassa MM. The mediodorsal thalamus in executive control. Neuron 2024; 112:893-908. [PMID: 38295791 DOI: 10.1016/j.neuron.2024.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 11/15/2023] [Accepted: 01/03/2024] [Indexed: 03/23/2024]
Abstract
Executive control, the ability to organize thoughts and action plans in real time, is a defining feature of higher cognition. Classical theories have emphasized cortical contributions to this process, but recent studies have reinvigorated interest in the role of the thalamus. Although it is well established that local thalamic damage diminishes cognitive capacity, such observations have been difficult to inform functional models. Recent progress in experimental techniques is beginning to enrich our understanding of the anatomical, physiological, and computational substrates underlying thalamic engagement in executive control. In this review, we discuss this progress and particularly focus on the mediodorsal thalamus, which regulates the activity within and across frontal cortical areas. We end with a synthesis that highlights frontal thalamocortical interactions in cognitive computations and discusses its functional implications in normal and pathological conditions.
Affiliation(s)
- Mathieu Wolff
- University of Bordeaux, CNRS, INCIA, UMR 5287, 33000 Bordeaux, France.
- Michael M Halassa
- Department of Neuroscience, Tufts University School of Medicine, Boston, MA, USA
- Department of Psychiatry, Tufts University School of Medicine, Boston, MA, USA
28
Cai W, Taghia J, Menon V. A multi-demand operating system underlying diverse cognitive tasks. Nat Commun 2024; 15:2185. [PMID: 38467606 PMCID: PMC10928152 DOI: 10.1038/s41467-024-46511-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 02/28/2024] [Indexed: 03/13/2024] Open
Abstract
The existence of a multiple-demand cortical system with an adaptive, domain-general role in cognition has been proposed, but the underlying dynamic mechanisms and their links to cognitive control abilities are poorly understood. Here we use a probabilistic generative Bayesian model of brain circuit dynamics to determine dynamic brain states across multiple cognitive domains, independent datasets, and participant groups, including task fMRI data from the Human Connectome Project, the Dual Mechanisms of Cognitive Control study, and a neurodevelopment study. We discover a shared brain state across seven distinct cognitive tasks and find that the dynamics of this shared brain state predict cognitive control abilities in each task. Our findings reveal the flexible engagement of dynamic brain processes across multiple cognitive domains and participant groups, and uncover the generative mechanisms underlying the functioning of a domain-general cognitive operating system. Our computational framework opens promising avenues for probing neurocognitive function and dysfunction.
Affiliation(s)
- Weidong Cai
- Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
- Wu Tsai Neuroscience Institute, Stanford University, Stanford, CA, USA.
- Jalil Taghia
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Vinod Menon
- Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA.
- Wu Tsai Neuroscience Institute, Stanford University, Stanford, CA, USA.
- Department of Neurology & Neurological Sciences, Stanford University School of Medicine, Stanford, CA, USA.
29
Chang YT, Finkel EA, Xu D, O'Connor DH. Rule-based modulation of a sensorimotor transformation across cortical areas. bioRxiv 2024:2023.08.21.554194. [PMID: 37662301 PMCID: PMC10473613 DOI: 10.1101/2023.08.21.554194] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/05/2023]
Abstract
Flexible responses to sensory stimuli based on changing rules are critical for adapting to a dynamic environment. However, it remains unclear how the brain encodes rule information and uses this information to guide behavioral responses to sensory stimuli. Here, we made single-unit recordings while head-fixed mice performed a cross-modal sensory selection task in which they switched between two rules in different blocks of trials: licking in response to tactile stimuli applied to a whisker while rejecting visual stimuli, or licking to visual stimuli while rejecting the tactile stimuli. Along a cortical sensorimotor processing stream including the primary (S1) and secondary (S2) somatosensory areas, and the medial (MM) and anterolateral (ALM) motor areas, the single-trial activity of individual neurons distinguished between the two rules both prior to and in response to the tactile stimulus. Variable rule-dependent responses to identical stimuli could in principle occur via appropriate configuration of pre-stimulus preparatory states of a neural population, which would shape the subsequent response. We hypothesized that neural populations in S1, S2, MM and ALM would show preparatory activity states that were set in a rule-dependent manner to cause processing of sensory information according to the current rule. This hypothesis was supported for the motor cortical areas by findings that (1) the current task rule could be decoded from pre-stimulus population activity in ALM and MM; (2) neural subspaces containing the population activity differed between the two rules; and (3) optogenetic disruption of pre-stimulus states within ALM and MM impaired task performance. Our findings indicate that flexible selection of an appropriate action in response to a sensory input can occur via configuration of preparatory states in the motor cortex.
30
Temmar H, Willsey MS, Costello JT, Mender MJ, Cubillos LH, Lam JL, Wallace DM, Kelberman MM, Patil PG, Chestek CA. Artificial neural network for brain-machine interface consistently produces more naturalistic finger movements than linear methods. bioRxiv 2024:2024.03.01.583000. [PMID: 38496403 PMCID: PMC10942378 DOI: 10.1101/2024.03.01.583000] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
Brain-machine interfaces (BMIs) aim to restore function to persons living with spinal cord injuries by 'decoding' neural signals into behavior. Recently, nonlinear BMI decoders have outperformed previous state-of-the-art linear decoders, but few studies have investigated what specific improvements these nonlinear approaches provide. In this study, we compare how temporally convolved feedforward neural networks (tcFNNs) and linear approaches predict individuated finger movements in open- and closed-loop settings. We show that nonlinear decoders generate more naturalistic movements, producing distributions of velocities 85.3% closer to true hand control than linear decoders. Addressing concerns that neural networks may come to inconsistent solutions, we find that regularization techniques improve the consistency of tcFNN convergence by 194.6%, along with improving average performance and training speed. Finally, we show that tcFNN can leverage training data from multiple task variations to improve generalization. The results of this study show that nonlinear methods produce more naturalistic movements and show potential for generalizing over less constrained tasks.
Teaser: A neural network decoder produces consistent naturalistic movements and shows potential for real-world generalization through task variations.
31
Ichikawa K, Kaneko K. Bayesian inference is facilitated by modular neural networks with different time scales. PLoS Comput Biol 2024; 20:e1011897. [PMID: 38478575 PMCID: PMC10962854 DOI: 10.1371/journal.pcbi.1011897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 03/25/2024] [Accepted: 02/06/2024] [Indexed: 03/26/2024] Open
Abstract
Various animals, including humans, have been suggested to perform Bayesian inferences to handle noisy, time-varying external information. In performing Bayesian inference by the brain, the prior distribution must be acquired and represented by sampling noisy external inputs. However, the mechanism by which neural activities represent such distributions has not yet been elucidated. Our findings reveal that networks with modular structures, composed of fast and slow modules, are adept at representing this prior distribution, enabling more accurate Bayesian inferences. Specifically, the modular network that consists of a main module connected with input and output layers and a sub-module with slower neural activity connected only with the main module outperformed networks with uniform time scales. Prior information was represented specifically by the slow sub-module, which could integrate observed signals over an appropriate period and represent input means and variances. Accordingly, the neural network could effectively predict the time-varying inputs. Furthermore, by training the time scales of neurons starting from networks with uniform time scales and without modular structure, the above slow-fast modular network structure and the division of roles in which prior knowledge is selectively represented in the slow sub-modules spontaneously emerged. These results explain how the prior distribution for Bayesian inference is represented in the brain, provide insight into the relevance of modular structure with time scale hierarchy to information processing, and elucidate the significance of brain areas with slower time scales.
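The division of labor described here, a slow module accumulating input statistics that a faster pathway then uses as a prior, can be caricatured with leaky integrators of different time constants; everything below (the time constant, the Gaussian fusion step, the specific numbers) is an illustrative sketch, not the paper's trained-network model:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, obs_sigma = 3.0, 1.0
obs = mu_true + obs_sigma * rng.standard_normal(5000)  # noisy input stream

# slow "sub-module": leaky integrators tracking the first two moments
tau_slow = 200.0
m1 = m2 = 0.0
for x in obs:
    m1 += (x - m1) / tau_slow       # running mean
    m2 += (x * x - m2) / tau_slow   # running second moment
prior_mean, prior_var = m1, m2 - m1 ** 2

# fast "main module": fuses a fresh noisy observation with the learned
# prior via the standard Gaussian precision-weighted average
x_new = 5.0
w_prior = (1 / prior_var) / (1 / prior_var + 1 / obs_sigma ** 2)
posterior_mean = w_prior * prior_mean + (1 - w_prior) * x_new
print(prior_mean, posterior_mean)  # prior near the true mean; posterior pulled toward it
```

The slow integrator's long effective window is what lets it represent the prior's mean and variance, mirroring the role the paper assigns to the slow sub-module.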
Affiliation(s)
- Kohei Ichikawa
- Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Meguro-ku, Tokyo, Japan
- Kunihiko Kaneko
- Research Center for Complex Systems Biology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- The Niels Bohr Institute, University of Copenhagen, Blegdamsvej, Copenhagen, Denmark
32
Maslennikov O, Perc M, Nekorkin V. Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front Comput Neurosci 2024; 18:1363514. [PMID: 38463243 PMCID: PMC10920356 DOI: 10.3389/fncom.2024.1363514] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2023] [Accepted: 02/06/2024] [Indexed: 03/12/2024] Open
Abstract
In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor-Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.
Affiliation(s)
- Oleg Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
- Complexity Science Hub Vienna, Vienna, Austria
- Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Vladimir Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
33
Chae S, Sohn JW, Kim SP. Differential Formation of Motor Cortical Dynamics during Movement Preparation According to the Predictability of Go Timing. J Neurosci 2024; 44:e1353232024. [PMID: 38233217 PMCID: PMC10883619 DOI: 10.1523/jneurosci.1353-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2023] [Revised: 12/10/2023] [Accepted: 01/08/2024] [Indexed: 01/19/2024] Open
Abstract
The motor cortex not only executes but also prepares movement, as motor cortical neurons exhibit preparatory activity that predicts upcoming movements. In movement preparation, animals adopt different strategies in response to uncertainties existing in nature, such as the unknown timing of when a predator will attack, the environmental cue informing "go." However, how motor cortical neurons cope with such uncertainties is less understood. In this study, we aim to investigate whether and how preparatory activity is altered depending on the predictability of "go" timing. We analyze firing activities of the anterior lateral motor cortex in male mice during two auditory delayed-response tasks, each with predictable or unpredictable go timing. When go timing is unpredictable, preparatory activity immediately reaches and stays in a neural state capable of producing movement at any moment in response to a sudden go cue. When go timing is predictable, preparatory activity reaches the movement-producible state more gradually, to secure more accurate decisions. Surprisingly, this preparation process entails a longer reaction time. We find that as preparatory activity increases in accuracy, it takes longer for a neural state to transition from the end of preparation to the start of movement. Our results suggest that the motor cortex fine-tunes preparatory activity for more accurate movement using the predictability of go timing.
Affiliation(s)
- Soyoung Chae
- Ulsan National Institute of Science and Technology, Ulsan 44929, South Korea
- Jeong-Woo Sohn
- Catholic Kwandong University, Gangwon-do 25601, South Korea
- Sung-Phil Kim
- Ulsan National Institute of Science and Technology, Ulsan 44929, South Korea
34
Vahidi P, Sani OG, Shanechi MM. Modeling and dissociation of intrinsic and input-driven neural population dynamics underlying behavior. Proc Natl Acad Sci U S A 2024; 121:e2212887121. [PMID: 38335258 PMCID: PMC10873612 DOI: 10.1073/pnas.2212887121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 12/03/2023] [Indexed: 02/12/2024] Open
Abstract
Neural dynamics can reflect intrinsic dynamics or dynamic inputs, such as sensory inputs or inputs from other brain regions. To avoid misinterpreting temporally structured inputs as intrinsic dynamics, dynamical models of neural activity should account for measured inputs. However, incorporating measured inputs remains elusive in joint dynamical modeling of neural-behavioral data, which is important for studying neural computations of behavior. We first show how training dynamical models of neural activity while considering behavior but not input or input but not behavior may lead to misinterpretations. We then develop an analytical learning method for linear dynamical models that simultaneously accounts for neural activity, behavior, and measured inputs. The method provides the capability to prioritize the learning of intrinsic behaviorally relevant neural dynamics and dissociate them from both other intrinsic dynamics and measured input dynamics. In data from a simulated brain with fixed intrinsic dynamics that performs different tasks, the method correctly finds the same intrinsic dynamics regardless of the task while other methods can be influenced by the task. In neural datasets from three subjects performing two different motor tasks with task instruction sensory inputs, the method reveals low-dimensional intrinsic neural dynamics that are missed by other methods and are more predictive of behavior and/or neural activity. The method also uniquely finds that the intrinsic behaviorally relevant neural dynamics are largely similar across the different subjects and tasks, whereas the overall neural dynamics are not. These input-driven dynamical models of neural-behavioral data can uncover intrinsic dynamics that may otherwise be missed.
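The paper's central caution, that fitting intrinsic dynamics while ignoring a temporally structured input misattributes the input's structure to the dynamics, can be reproduced in a few lines with a scalar linear dynamical system; the system x_{t+1} = 0.9 x_t + u_t and the sinusoidal input below are our own toy choices, not the paper's model:

```python
import numpy as np

a_true, b_true = 0.9, 1.0
u = np.sin(0.2 * np.arange(2000))          # temporally structured measured input
x = np.zeros(2000)
for t in range(1999):
    x[t + 1] = a_true * x[t] + b_true * u[t]

# fit intrinsic dynamics while IGNORING the input: x_{t+1} ~ a * x_t
a_no_input = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

# fit while ACCOUNTING for the measured input: x_{t+1} ~ a * x_t + b * u_t
Phi = np.stack([x[:-1], u[:-1]], axis=1)
(a_fit, b_fit), *_ = np.linalg.lstsq(Phi, x[1:], rcond=None)

print(a_fit, b_fit)   # recovers a=0.9, b=1.0 exactly (noise-free system)
print(a_no_input)     # biased away from 0.9 toward the input's autocorrelation
```

The input-ignoring fit absorbs the sinusoid's lag-1 autocorrelation into the estimated "intrinsic" dynamics, exactly the confound the input-aware model avoids.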
Affiliation(s)
- Parsa Vahidi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Omid G. Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Maryam M. Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA 90089
- Thomas Lord Department of Computer Science and Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA 90089
35
Kuzmina E, Kriukov D, Lebedev M. Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling. Sci Rep 2024; 14:3566. [PMID: 38347042 PMCID: PMC10861525 DOI: 10.1038/s41598-024-53907-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Accepted: 02/06/2024] [Indexed: 02/15/2024] Open
Abstract
Spatiotemporal properties of neuronal population activity in cortical motor areas have been subjects of experimental and theoretical investigations, generating numerous interpretations regarding mechanisms for preparing and executing limb movements. Two competing models, representational and dynamical, strive to explain the relationship between movement parameters and neuronal activity. The dynamical model uses the jPCA method, which holistically characterizes oscillatory activity in neuron populations by maximizing the rotational dynamics of the data. Different interpretations of the rotational dynamics revealed by the jPCA approach have been proposed, yet the nature of such dynamics remains poorly understood. We comprehensively analyzed several neuronal-population datasets and found that rotational dynamics were consistently accounted for by a traveling wave pattern. For quantifying rotation strength, we developed a complex-valued measure, the gyration number. Additionally, we identified parameters influencing the extent of rotation in the data. Our findings suggest that rotational dynamics and traveling waves are typically the same phenomenon, so the previous interpretations that treated them as separate entities need reevaluation.
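The jPCA step referenced here reduces to a constrained least-squares problem: fit dX ≈ X·M with M skew-symmetric, so that the fitted flow is purely rotational. A compact sketch under a row-vector state convention (our own toy example with a known 2-D rotation, not the authors' pipeline or their gyration-number code):

```python
import numpy as np

def fit_skew(X, dX):
    """Least-squares fit of dX ~ X @ M with M constrained to be
    skew-symmetric (the rotational fit at the heart of jPCA)."""
    k = X.shape[1]
    pairs = [(a, b) for a in range(k) for b in range(a + 1, k)]
    # one design column per free parameter of a skew-symmetric matrix
    cols = []
    for a, b in pairs:
        H = np.zeros((k, k))
        H[a, b], H[b, a] = 1.0, -1.0
        cols.append((X @ H).ravel())
    coef, *_ = np.linalg.lstsq(np.stack(cols, axis=1), dX.ravel(), rcond=None)
    M = np.zeros((k, k))
    for c, (a, b) in zip(coef, pairs):
        M[a, b], M[b, a] = c, -c
    return M

# ground truth: planar rotation at angular frequency w, with dx = x @ M_true
w = 2.0
M_true = np.array([[0.0, w], [-w, 0.0]])
t = np.arange(0, 2, 1e-3)
X = np.stack([np.cos(w * t), np.sin(w * t)], axis=1)
dX = X @ M_true   # exact time derivatives of the trajectory
print(fit_skew(X, dX))  # recovers [[0, 2], [-2, 0]]
```

The eigenvalues of the fitted M are purely imaginary, and their magnitudes give the rotation frequencies; a traveling wave sampled across a population produces exactly this kind of well-fit skew-symmetric flow, which is the paper's point.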
Affiliation(s)
- Ekaterina Kuzmina
- Skolkovo Institute of Science and Technology, Vladimir Zelman Center for Neurobiology and Brain Rehabilitation, Moscow, Russia, 121205.
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia.
- Dmitrii Kriukov
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Skolkovo Institute of Science and Technology, Center for Molecular and Cellular Biology, Moscow, Russia, 121205
- Mikhail Lebedev
- Faculty of Mechanics and Mathematics, Lomonosov Moscow State University, Moscow, Russia, 119992
- Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, Saint-Petersburg, Russia, 194223
36
Zimnik AJ, Cora Ames K, An X, Driscoll L, Lara AH, Russo AA, Susoy V, Cunningham JP, Paninski L, Churchland MM, Glaser JI. Identifying Interpretable Latent Factors with Sparse Component Analysis. bioRxiv 2024:2024.02.05.578988. [PMID: 38370650 PMCID: PMC10871230 DOI: 10.1101/2024.02.05.578988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2024]
Abstract
In many neural populations, the computationally relevant signals are posited to be a set of 'latent factors' - signals shared across many individual neurons. Understanding the relationship between neural activity and behavior requires the identification of factors that reflect distinct computational roles. Methods for identifying such factors typically require supervision, which can be suboptimal if one is unsure how (or whether) factors can be grouped into distinct, meaningful sets. Here, we introduce Sparse Component Analysis (SCA), an unsupervised method that identifies interpretable latent factors. SCA seeks factors that are sparse in time and occupy orthogonal dimensions. With these simple constraints, SCA facilitates surprisingly clear parcellations of neural activity across a range of behaviors. We applied SCA to motor cortex activity from reaching and cycling monkeys, single-trial imaging data from C. elegans, and activity from a multitask artificial network. SCA consistently identified sets of factors that were useful in describing network computations.
Affiliation(s)
- Andrew J Zimnik
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- K Cora Ames
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Xinyue An
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA
- Laura Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Antonio H Lara
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Abigail A Russo
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Vladislav Susoy
- Department of Physics, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- John P Cunningham
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Liam Paninski
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Statistics, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University Medical Center, New York, NY, USA
- Zuckerman Institute, Columbia University, New York, NY, USA
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University Medical Center, New York, NY, USA
- Joshua I Glaser
- Department of Neurology, Northwestern University, Chicago, IL, USA
- Department of Computer Science, Northwestern University, Evanston, IL, USA
37
Pals M, Macke JH, Barak O. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol 2024; 20:e1011852. [PMID: 38315736 PMCID: PMC10868787 DOI: 10.1371/journal.pcbi.1011852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Revised: 02/15/2024] [Accepted: 01/22/2024] [Indexed: 02/07/2024] Open
Abstract
Neural oscillations are ubiquitously observed in many brain areas. One proposed functional role of these oscillations is that they serve as an internal clock, or 'frame of reference'. Information can be encoded by the timing of neural activity relative to the phase of such oscillations. In line with this hypothesis, there have been multiple empirical observations of such phase codes in the brain. Here we ask: What kind of neural dynamics support phase coding of information with neural oscillations? We tackled this question by analyzing recurrent neural networks (RNNs) that were trained on a working memory task. The networks were given access to an external reference oscillation and tasked to produce an oscillation, such that the phase difference between the reference and output oscillation maintains the identity of transient stimuli. We found that networks converged to stable oscillatory dynamics. Reverse engineering these networks revealed that each phase-coded memory corresponds to a separate limit cycle attractor. We characterized how the stability of the attractor dynamics depends on both reference oscillation amplitude and frequency, properties that can be experimentally observed. To understand the connectivity structures that underlie these dynamics, we showed that trained networks can be described as two phase-coupled oscillators. Using this insight, we condensed our trained networks to a reduced model consisting of two functional modules: One that generates an oscillation and one that implements a coupling function between the internal oscillation and external reference. In summary, by reverse engineering the dynamics and connectivity of trained RNNs, we propose a mechanism by which neural networks can harness reference oscillations for working memory. Specifically, we propose that a phase-coding network generates autonomous oscillations which it couples to an external reference oscillation in a multi-stable fashion.
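The reduced two-oscillator description can be made concrete with a pair of phase equations in which the coupling function has n stable lock points, one per stored item; the frequency, coupling strength, and n = 3 below are illustrative choices, not the fitted values from the trained networks:

```python
import numpy as np

w = 2 * np.pi      # shared oscillation frequency (rad per time unit)
K, n = 1.5, 3      # coupling strength; n stable phase offsets = n memories
dt = 1e-3

theta_ref, theta = 0.0, 0.3   # internal phase starts near the offset-0 memory
for _ in range(20000):
    # the reference advances freely; the internal oscillator is pulled
    # toward the nearest stable offset of the coupling function
    dtheta = (w + K * np.sin(n * (theta_ref - theta))) * dt
    theta_ref += w * dt
    theta += dtheta

offset = np.angle(np.exp(1j * (theta_ref - theta)))  # wrap to (-pi, pi]
print(offset)  # ~0: locked to the offset-0 attractor out of {0, 2pi/3, 4pi/3}
```

The phase difference obeys dφ/dt = -K sin(nφ), which has n stable fixed points at φ = 2πk/n; each lock point plays the role of one limit-cycle memory in the multistable coupling the paper describes.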
Affiliation(s)
- Matthijs Pals
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Jakob H. Macke
- Machine Learning in Science, Excellence Cluster Machine Learning, University of Tübingen, Tübingen, Germany
- Tübingen AI Center, University of Tübingen, Tübingen, Germany
- Department Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Omri Barak
- Rappaport Faculty of Medicine Technion, Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Israel Institute of Technology, Haifa, Israel
38
Khanna AR, Muñoz W, Kim YJ, Kfir Y, Paulk AC, Jamali M, Cai J, Mustroph ML, Caprara I, Hardstone R, Mejdell M, Meszéna D, Zuckerman A, Schweitzer J, Cash S, Williams ZM. Single-neuronal elements of speech production in humans. Nature 2024; 626:603-610. [PMID: 38297120 PMCID: PMC10866697 DOI: 10.1038/s41586-023-06982-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Accepted: 12/14/2023] [Indexed: 02/02/2024]
Abstract
Humans are capable of generating extraordinarily diverse articulatory movement combinations to produce meaningful speech. This ability to orchestrate specific phonetic sequences, and their syllabification and inflection over subsecond timescales, allows us to produce thousands of word sounds and is a core component of language1,2. The fundamental cellular units and constructs by which we plan and produce words during speech, however, remain largely unknown. Here, using acute ultrahigh-density Neuropixels recordings capable of sampling across the cortical column in humans, we discover neurons in the language-dominant prefrontal cortex that encoded detailed information about the phonetic arrangement and composition of planned words during the production of natural speech. These neurons represented the specific order and structure of articulatory events before utterance and reflected the segmentation of phonetic sequences into distinct syllables. They also accurately predicted the phonetic, syllabic and morphological components of upcoming words and showed a temporally ordered dynamic. Collectively, we show how these mixtures of cells are broadly organized along the cortical column and how their activity patterns transition from articulation planning to production. We also demonstrate how these cells reliably track the detailed composition of consonant and vowel sounds during perception and how they distinguish processes specifically related to speaking from those related to listening. Together, these findings reveal a remarkably structured organization and encoding cascade of phonetic representations by prefrontal neurons in humans and demonstrate a cellular process that can support the production of speech.
Affiliation(s)
- Arjun R Khanna
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- William Muñoz
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Yoav Kfir
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Angelique C Paulk
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Mohsen Jamali
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jing Cai
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Martina L Mustroph
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Irene Caprara
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Richard Hardstone
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Mackenna Mejdell
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Domokos Meszéna
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jeffrey Schweitzer
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Sydney Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Ziv M Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA, USA
- Harvard Medical School, Program in Neuroscience, Boston, MA, USA
39
Waraich SA, Victor JD. The Geometry of Low- and High-Level Perceptual Spaces. J Neurosci 2024; 44:e1460232023. [PMID: 38267235 PMCID: PMC10860617 DOI: 10.1523/jneurosci.1460-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 11/27/2023] [Accepted: 11/28/2023] [Indexed: 01/26/2024] Open
Abstract
Low-level features are typically continuous (e.g., the gamut between two colors), but semantic information is often categorical (there is no corresponding gradient between dog and turtle) and hierarchical (animals live in land, water, or air). To determine the impact of these differences on cognitive representations, we characterized the geometry of perceptual spaces of five domains: a domain dominated by semantic information (animal names presented as words), a domain dominated by low-level features (colored textures), and three intermediate domains (animal images, lightly texturized animal images that were easy to recognize, and heavily texturized animal images that were difficult to recognize). Each domain had 37 stimuli derived from the same animal names. From 13 participants (9F), we gathered similarity judgments in each domain via an efficient psychophysical ranking paradigm. We then built geometric models of each domain for each participant, in which distances between stimuli accounted for participants' similarity judgments and intrinsic uncertainty. Remarkably, the five domains had similar global properties: each required 5-7 dimensions, and a modest amount of spherical curvature provided the best fit. However, the arrangement of the stimuli within these embeddings depended on the level of semantic information: dendrograms derived from semantic domains (word, image, and lightly texturized images) were more "tree-like" than those from feature-dominated domains (heavily texturized images and textures). Thus, the perceptual spaces of domains along this feature-dominated to semantic-dominated gradient shift to a tree-like organization when semantic information dominates, while retaining a similar global geometry.
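The embedding step described in this abstract — placing stimuli in a low-dimensional space so that inter-point distances account for similarity judgments — can be illustrated with classical multidimensional scaling. This is a minimal stand-in, not the authors' probabilistic model (which also fits intrinsic uncertainty and spherical curvature); the toy dissimilarity matrix below is synthetic.

```python
import numpy as np

def classical_mds(D, n_dims):
    """Embed points so pairwise Euclidean distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]  # keep the largest eigenvalues
    scale = np.sqrt(np.clip(vals[order], 0.0, None))
    return vecs[:, order] * scale            # (n, n_dims) coordinates

# Toy check: four 'stimuli' on a line are recovered exactly in one dimension.
x = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(x[:, None] - x[None, :])
coords = classical_mds(D, 1)[:, 0]
D_hat = np.abs(coords[:, None] - coords[None, :])
```

In the paper's setting, D would come from ranked similarity judgments, and the embedding dimension (5-7 here) and curvature would be chosen by model comparison rather than fixed in advance.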
Affiliation(s)
- Jonathan D Victor
- Division of Systems Neurology and Neuroscience, Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, New York, NY 10065
40
Chai Y, Qi K, Wu Y, Li D, Tan G, Guo Y, Chu J, Mu Y, Shen C, Wen Q. All-optical interrogation of brain-wide activity in freely swimming larval zebrafish. iScience 2024; 27:108385. [PMID: 38205255 PMCID: PMC10776927 DOI: 10.1016/j.isci.2023.108385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 09/22/2023] [Accepted: 10/30/2023] [Indexed: 01/12/2024] Open
Abstract
We introduce an all-optical technique that enables volumetric imaging of brain-wide calcium activity and targeted optogenetic stimulation of specific brain regions in unrestrained larval zebrafish. The system consists of three main components: a 3D tracking module, a dual-color fluorescence imaging module, and a real-time activity manipulation module. Our approach uses a sensitive genetically encoded calcium indicator in combination with a long Stokes shift red fluorescence protein as a reference channel, allowing the extraction of Ca2+ activity from signals contaminated by motion artifacts. The method also incorporates rapid 3D image reconstruction and registration, facilitating real-time selective optogenetic stimulation of different regions of the brain. By demonstrating that selective light activation of the midbrain regions in larval zebrafish could reliably trigger biased turning behavior and changes of brain-wide neural activity, we present a valuable tool for investigating the causal relationship between distributed neural circuit dynamics and naturalistic behavior.
Affiliation(s)
- Yuming Chai
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Kexin Qi
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Yubin Wu
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Daguang Li
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Guodong Tan
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Yuqi Guo
- Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology and Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jun Chu
- Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology and Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yu Mu
- Institute of Neuroscience, State Key Laboratory of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Chen Shen
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- Quan Wen
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Research Center for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
41
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. [PMID: 38101328 DOI: 10.1162/neco_a_01631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 09/05/2023] [Indexed: 12/17/2023]
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
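The core prediction summarized above — low-rank structured connectivity confines population activity to a low-dimensional, globally attracting manifold — can be checked numerically in the standard rate formalism. The sketch below uses a rank-one symmetric connectivity with gain g = 2 (illustrative values, not taken from the paper): every trajectory settles onto the line spanned by the connectivity vector m.

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 500, 2.0
m = rng.standard_normal(N)
J = g * np.outer(m, m) / N                  # rank-one connectivity

def simulate(x0, steps=4000, dt=0.05):
    """Euler integration of the rate equation dx/dt = -x + J tanh(x)."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))
    return x

x_final = simulate(rng.standard_normal(N))

# Because J maps every vector onto span{m}, any fixed point has the form
# x* = kappa * m: the N-dimensional state collapses onto a 1-D manifold.
kappa = g * (m @ np.tanh(x_final)) / N
residual = np.linalg.norm(x_final - kappa * m)
```

The component of x orthogonal to m decays exactly as e^(-t) under these dynamics, which is the one-dimensional analogue of the attracting-manifold result the paper proves for general low-rank connectivity.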
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
42
Gast R, Solla SA, Kennedy A. Neural heterogeneity controls computations in spiking neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311885121. [PMID: 38198531 PMCID: PMC10801870 DOI: 10.1073/pnas.2311885121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Accepted: 11/27/2023] [Indexed: 01/12/2024] Open
Abstract
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
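One intuition behind these results — a spread of spike thresholds turns an all-or-none population response into a graded one, enlarging the population's repertoire — can be shown with a toy threshold model. Binary units and a Gaussian threshold spread are illustrative assumptions here, not the paper's mean-field formalism.

```python
import numpy as np

rng = np.random.default_rng(3)
inputs = np.linspace(-2.0, 2.0, 201)

def population_rate(thresholds, inputs):
    # Fraction of binary units whose threshold is exceeded at each input level.
    return (inputs[:, None] > thresholds[None, :]).mean(axis=1)

homog = np.zeros(1000)                    # identical thresholds
heterog = rng.normal(0.0, 0.5, 1000)      # heterogeneous thresholds

r_hom = population_rate(homog, inputs)
r_het = population_rate(heterog, inputs)

# The homogeneous population responds all-or-none (two output levels);
# the heterogeneous one produces a graded, near-sigmoidal transfer function.
n_levels_hom = len(np.unique(r_hom))
n_levels_het = len(np.unique(r_het))
```

The graded curve is the population-level analogue of the gating and dimensionality effects in the abstract: heterogeneous populations can represent intermediate activity levels that a synchronized homogeneous population cannot.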
Affiliation(s)
- Richard Gast
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Aligning Science Across Parkinson’s Collaborative Research Network, Chevy Chase, MD 20815
- Sara A. Solla
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Ann Kennedy
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Aligning Science Across Parkinson’s Collaborative Research Network, Chevy Chase, MD 20815
43
Oby ER, Degenhart AD, Grigsby EM, Motiwala A, McClain NT, Marino PJ, Yu BM, Batista AP. Dynamical constraints on neural population activity. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.03.573543. [PMID: 38260549 PMCID: PMC10802336 DOI: 10.1101/2024.01.03.573543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2024]
Abstract
The manner in which neural activity unfolds over time is thought to be central to sensory, motor, and cognitive functions in the brain. Network models have long posited that the brain's computations involve time courses of activity that are shaped by the underlying network. A prediction from this view is that the activity time courses should be difficult to violate. We leveraged a brain-computer interface (BCI) to challenge monkeys to violate the naturally-occurring time courses of neural population activity that we observed in motor cortex. This included challenging animals to traverse the natural time course of neural activity in a time-reversed manner. Animals were unable to violate the natural time courses of neural activity when directly challenged to do so. These results provide empirical support for the view that activity time courses observed in the brain indeed reflect the underlying network-level computational mechanisms that they are believed to implement.
44
Sylte OC, Muysers H, Chen HL, Bartos M, Sauer JF. Neuronal tuning to threat exposure remains stable in the mouse prefrontal cortex over multiple days. PLoS Biol 2024; 22:e3002475. [PMID: 38206890 PMCID: PMC10783789 DOI: 10.1371/journal.pbio.3002475] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Accepted: 12/19/2023] [Indexed: 01/13/2024] Open
Abstract
Intense threat elicits action in the form of active and passive coping. The medial prefrontal cortex (mPFC) executes top-level control over the selection of threat coping strategies, but the dynamics of mPFC activity upon continuing threat encounters remain unexplored. Here, we used 1-photon calcium imaging in mice to probe the activity of prefrontal pyramidal cells during repeated exposure to intense threat in a tail suspension (TS) paradigm. A subset of prefrontal neurons displayed selective activation during TS, which was stably maintained over days. During threat, neurons showed specific tuning to active or passive coping. These responses were unrelated to general motion tuning and persisted over days. Moreover, the neural manifold traversed by low-dimensional population activity remained stable over subsequent days of TS exposure and was preserved across individuals. These data thus reveal a specific, temporally, and interindividually conserved repertoire of prefrontal tuning to behavioral responses under threat.
Affiliation(s)
- Ole Christian Sylte
- University of Freiburg, Medical Faculty, Institute of Physiology I, Freiburg, Germany
- University of Freiburg, Faculty of Biology, Freiburg, Germany
- Hannah Muysers
- University of Freiburg, Medical Faculty, Institute of Physiology I, Freiburg, Germany
- University of Freiburg, Faculty of Biology, Freiburg, Germany
- Hung-Ling Chen
- University of Freiburg, Medical Faculty, Institute of Physiology I, Freiburg, Germany
- Marlene Bartos
- University of Freiburg, Medical Faculty, Institute of Physiology I, Freiburg, Germany
- Jonas-Frederic Sauer
- University of Freiburg, Medical Faculty, Institute of Physiology I, Freiburg, Germany
45
A neural network that enables flexible nonlinear inference from neural population activity. Nat Biomed Eng 2024; 8:9-10. [PMID: 38086959 DOI: 10.1038/s41551-023-01111-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2024]
46
Abbaspourazad H, Erturk E, Pesaran B, Shanechi MM. Dynamical flexible inference of nonlinear latent factors and structures in neural population activity. Nat Biomed Eng 2024; 8:85-108. [PMID: 38082181 DOI: 10.1038/s41551-023-01106-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/12/2023] [Indexed: 12/26/2023]
Abstract
Modelling the spatiotemporal dynamics in the activity of neural populations while also enabling their flexible inference is hindered by the complexity and noisiness of neural observations. Here we show that the lower-dimensional nonlinear latent factors and latent structures can be computationally modelled in a manner that allows for flexible inference causally, non-causally and in the presence of missing neural observations. To enable flexible inference, we developed a neural network that separates the model into jointly trained manifold and dynamic latent factors such that nonlinearity is captured through the manifold factors and the dynamics can be modelled in tractable linear form on this nonlinear manifold. We show that the model, which we named 'DFINE' (for 'dynamical flexible inference for nonlinear embeddings') achieves flexible inference in simulations of nonlinear dynamics and across neural datasets representing a diversity of brain regions and behaviours. Compared with earlier neural-network models, DFINE enables flexible inference, better predicts neural activity and behaviour, and better captures the latent neural manifold structure. DFINE may advance the development of neurotechnology and investigations in neuroscience.
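The separation DFINE exploits — linear dynamics on the latent factors, so classical filtering applies even with gaps in the data — can be sketched with a Kalman filter that simply skips the update step on missing samples. In this toy, the manifold factors are observed directly (identity observation model) and all parameters are invented; DFINE itself learns the nonlinear manifold with neural networks and fits the dynamics jointly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear latent dynamics: a slowly decaying 2-D rotation.
theta = 0.1
A = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
Q = 0.01 * np.eye(2)                        # process noise covariance
R = 0.05 * np.eye(2)                        # observation noise covariance

T = 200
x = np.zeros((T, 2)); x[0] = [1.0, 0.0]     # true latent trajectory
m = np.zeros((T, 2))                        # noisy 'manifold factors'
for t in range(T):
    if t > 0:
        x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
    m[t] = x[t] + rng.multivariate_normal(np.zeros(2), R)

# Drop 30% of samples; on a gap, run the predict step alone.
missing = rng.random(T) < 0.3
xh, P = np.zeros(2), np.eye(2)
est = np.zeros((T, 2))
for t in range(T):
    if t > 0:
        xh = A @ xh
        P = A @ P @ A.T + Q                 # predict
    if not missing[t]:
        K = P @ np.linalg.inv(P + R)        # update (observation matrix = I)
        xh = xh + K @ (m[t] - xh)
        P = (np.eye(2) - K) @ P
    est[t] = xh

obs = ~missing
err_filter = np.sqrt(np.mean((est[obs] - x[obs]) ** 2))
err_raw = np.sqrt(np.mean((m[obs] - x[obs]) ** 2))
```

Because the dynamics are linear in the latent space, the same recursion runs causally, non-causally (by adding a smoothing pass), or across missing samples — the three inference modes highlighted in the abstract.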
Affiliation(s)
- Hamidreza Abbaspourazad
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Eray Erturk
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Departments of Neurosurgery, Neuroscience, and Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, Alfred E. Mann Department of Biomedical Engineering, Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
47
Buxton RB, Wong EC. Metabolic energetics underlying attractors in neural models. J Neurophysiol 2024; 131:88-105. [PMID: 38056422 DOI: 10.1152/jn.00120.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 11/13/2023] [Accepted: 12/04/2023] [Indexed: 12/08/2023] Open
Abstract
Neural population modeling, including the role of neural attractors, is a promising tool for understanding many aspects of brain function. We propose a modeling framework to connect the abstract variables used in modeling to recent cellular-level estimates of the bioenergetic costs of different aspects of neural activity, measured in ATP consumed per second per neuron. Based on recent work, an empirical reference for brain ATP use for the awake resting brain was estimated as ∼2 × 10⁹ ATP/s per neuron across several mammalian species. The energetics framework was applied to the Wilson-Cowan (WC) model of two interacting populations of neurons, one excitatory (E) and one inhibitory (I). The attractors considered were steady-state behavior and limit cycle behavior, both of which end when the excitatory stimulus ends, and sustained activity that persists after the stimulus ends. The energy cost of limit cycles, with oscillations much faster than the average neuronal firing rate of the population, tracks the firing rate more closely than the limit cycle frequency. Self-sustained firing driven by recurrent excitation, though, involves higher firing rates and a higher energy cost. As an example of a simple network in which each node is a WC model, a combination of three nodes can serve as a flexible circuit element that turns on with an oscillating output when input passes a threshold and then persists after the input ends (an "on-switch"), with moderate overall ATP use. The proposed framework can serve as a guide for anchoring neural population models to plausible bioenergetics requirements.
NEW & NOTEWORTHY This work bridges two approaches for understanding brain function: cellular-level studies of the metabolic energy costs of different aspects of neural activity and neural population modeling, including the role of neural attractors. The proposed modeling framework connects energetic costs, in ATP consumed per second per neuron, to the more abstract variables used in neural population modeling. In particular, this work anchors potential neural attractors to physiologically plausible bioenergetics requirements.
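The Wilson-Cowan model referenced above couples an excitatory and an inhibitory population through sigmoidal rate equations; the energetic bookkeeping then attaches an ATP cost that scales with population firing. The sketch below integrates the two-population equations with illustrative parameters and a toy cost proxy (the awake-resting reference value from the abstract is used only as a scale factor); it is not the paper's calibrated framework.

```python
import numpy as np

def S(x):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + np.exp(-x))

# Wilson-Cowan two-population rate equations (illustrative parameters).
wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
P_E = 1.25                     # external drive to the excitatory population
dt, steps = 0.01, 20000

E, I = 0.1, 0.05
E_trace = np.empty(steps)
for k in range(steps):
    dE = -E + S(wEE * E - wEI * I + P_E)
    dI = -I + S(wIE * E - wII * I)
    E, I = E + dt * dE, I + dt * dI
    E_trace[k] = E

late = E_trace[steps // 2:]    # discard the initial transient

# Toy energy bookkeeping: ATP/s per neuron proportional to the mean E rate,
# scaled by the awake-resting reference quoted in the abstract (illustrative).
atp_proxy = 2e9 * late.mean()
```

Because the sigmoid keeps both rates in (0, 1), the integration is bounded; in the paper's framework, the cost terms are tied to specific cellular processes rather than a single proportionality constant.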
Affiliation(s)
- Richard B Buxton
- Department of Radiology, University of California, San Diego, California, United States
- Eric C Wong
- Department of Radiology, University of California, San Diego, California, United States
- Department of Psychiatry, University of California, San Diego, California, United States
48
Nozari E, Bertolero MA, Stiso J, Caciagli L, Cornblath EJ, He X, Mahadevan AS, Pappas GJ, Bassett DS. Macroscopic resting-state brain dynamics are best described by linear models. Nat Biomed Eng 2024; 8:68-84. [PMID: 38082179 DOI: 10.1038/s41551-023-01117-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 09/26/2023] [Indexed: 12/22/2023]
Abstract
It is typically assumed that large networks of neurons exhibit a large repertoire of nonlinear behaviours. Here we challenge this assumption by leveraging mathematical models derived from measurements of local field potentials via intracranial electroencephalography and of whole-brain blood-oxygen-level-dependent brain activity via functional magnetic resonance imaging. We used state-of-the-art linear and nonlinear families of models to describe spontaneous resting-state activity of 700 participants in the Human Connectome Project and 122 participants in the Restoring Active Memory project. We found that linear autoregressive models provide the best fit across both data types and three performance metrics: predictive power, computational complexity and the extent of the residual dynamics unexplained by the model. To explain this observation, we show that microscopic nonlinear dynamics can be counteracted or masked by four factors associated with macroscopic dynamics: averaging over space and over time, which are inherent to aggregated macroscopic brain activity, and observation noise and limited data samples, which stem from technological limitations. We therefore argue that easier-to-interpret linear models can faithfully describe macroscopic brain dynamics during resting-state conditions.
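The winning model family here — linear autoregressive dynamics — reduces to an ordinary least-squares regression of each time point on the previous one. The sketch below fits an AR(1) model to synthetic multichannel data (a stand-in for the iEEG/fMRI recordings; the system and noise parameters are invented) and scores one-step predictive R², in the spirit of the predictive-power metric used in the comparison.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic multichannel 'resting-state' series from a stable linear system.
n, T = 5, 2000
A_true = 0.5 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = A_true @ Y[t - 1] + 0.1 * rng.standard_normal(n)

# Linear AR(1) fit by ordinary least squares: Y[t] ≈ A_hat @ Y[t-1].
X_past, X_next = Y[:-1], Y[1:]
W, *_ = np.linalg.lstsq(X_past, X_next, rcond=None)
A_hat = W.T

pred = X_past @ W
r2 = 1.0 - np.sum((X_next - pred) ** 2) / np.sum((X_next - X_next.mean(0)) ** 2)
```

The paper's point is that on real macroscopic recordings this simple family matched or beat nonlinear alternatives on predictive power, complexity, and residual dynamics; the toy only shows the fitting mechanics.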
Affiliation(s)
- Erfan Nozari
- Department of Mechanical Engineering, University of California, Riverside, CA, USA
- Department of Electrical and Computer Engineering, University of California, Riverside, CA, USA
- Department of Bioengineering, University of California, Riverside, CA, USA
- Maxwell A Bertolero
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Jennifer Stiso
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Lorenzo Caciagli
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Eli J Cornblath
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- Xiaosong He
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Arun S Mahadevan
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- George J Pappas
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Dani S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Santa Fe Institute, Santa Fe, NM, USA
49
Gonzalo Cogno S, Obenhaus HA, Lautrup A, Jacobsen RI, Clopath C, Andersson SO, Donato F, Moser MB, Moser EI. Minute-scale oscillatory sequences in medial entorhinal cortex. Nature 2024; 625:338-344. [PMID: 38123682 PMCID: PMC10781645 DOI: 10.1038/s41586-023-06864-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Accepted: 11/10/2023] [Indexed: 12/23/2023]
Abstract
The medial entorhinal cortex (MEC) hosts many of the brain's circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience1. Whereas location is known to be encoded by spatially tuned cell types in this brain region2,3, little is known about how the activity of entorhinal cells is tied together over time at behaviourally relevant time scales, in the second-to-minute regime. Here we show that MEC neuronal activity has the capacity to be organized into ultraslow oscillations, with periods ranging from tens of seconds to minutes. During these oscillations, the activity is further organized into periodic sequences. Oscillatory sequences manifested while mice ran at free pace on a rotating wheel in darkness, with no change in location or running direction and no scheduled rewards. The sequences involved nearly the entire cell population, and transcended epochs of immobility. Similar sequences were not observed in neighbouring parasubiculum or in visual cortex. Ultraslow oscillatory sequences in MEC may have the potential to couple neurons and circuits across extended time scales and serve as a template for new sequence formation during navigation and episodic memory formation.
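Oscillations with minute-scale periods sit far below the frequencies usually examined in systems neuroscience, but detecting them follows the same logic as any spectral analysis: a periodogram of a sufficiently long recording. The toy below plants a two-minute periodicity in noise and recovers its period; the frame rate, duration, and signal-to-noise ratio are invented, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 7.5                              # imaging frame rate, Hz (assumed)
t = np.arange(0, 1800.0, 1.0 / fs)    # 30 minutes of 'recording'
period_s = 120.0                      # planted 2-minute oscillation
x = np.sin(2 * np.pi * t / period_s) + 0.5 * rng.standard_normal(t.size)

# Periodogram: the dominant non-DC frequency gives the oscillation period.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
k = np.argmax(power[1:]) + 1          # skip the DC bin
period_est = 1.0 / freqs[k]
```

The frequency resolution of such a periodogram is the reciprocal of the recording duration, which is why minute-scale periods demand recordings tens of minutes long.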
Affiliation(s)
- Soledad Gonzalo Cogno
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Horst A Obenhaus
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Ane Lautrup
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- R Irene Jacobsen
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
- Sebastian O Andersson
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Flavio Donato
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Biozentrum Universität Basel, Basel, Switzerland
- May-Britt Moser
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser
- Kavli Institute for Systems Neuroscience and Centre for Algorithms in the Cortex, Fred Kavli Building, Norwegian University of Science and Technology, Trondheim, Norway
50
Chung B, Zia M, Thomas KA, Michaels JA, Jacob A, Pack A, Williams MJ, Nagapudi K, Teng LH, Arrambide E, Ouellette L, Oey N, Gibbs R, Anschutz P, Lu J, Wu Y, Kashefi M, Oya T, Kersten R, Mosberger AC, O'Connell S, Wang R, Marques H, Mendes AR, Lenschow C, Kondakath G, Kim JJ, Olson W, Quinn KN, Perkins P, Gatto G, Thanawalla A, Coltman S, Kim T, Smith T, Binder-Markey B, Zaback M, Thompson CK, Giszter S, Person A, Goulding M, Azim E, Thakor N, O'Connor D, Trimmer B, Lima SQ, Carey MR, Pandarinath C, Costa RM, Pruszynski JA, Bakir M, Sober SJ. Myomatrix arrays for high-definition muscle recording. eLife 2023; 12:RP88551. [PMID: 38113081 PMCID: PMC10730117 DOI: 10.7554/elife.88551] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2023] Open
Abstract
Neurons coordinate their activity to produce an astonishing variety of motor behaviors. Our present understanding of motor control has grown rapidly thanks to new methods for recording and analyzing populations of many individual neurons over time. In contrast, current methods for recording the nervous system's actual motor output - the activation of muscle fibers by motor neurons - typically cannot detect the individual electrical events produced by muscle fibers during natural behaviors and scale poorly across species and muscle groups. Here we present a novel class of electrode devices ('Myomatrix arrays') that record muscle activity at unprecedented resolution across muscles and behaviors. High-density, flexible electrode arrays allow for stable recordings from the muscle fibers activated by a single motor neuron, called a 'motor unit,' during natural behaviors in many species, including mice, rats, primates, songbirds, frogs, and insects. This technology therefore allows the nervous system's motor output to be monitored in unprecedented detail during complex behaviors across species and muscle morphologies. We anticipate that this technology will allow rapid advances in understanding the neural control of behavior and identifying pathologies of the motor system.
Affiliation(s)
- Bryce Chung
- Department of Biology, Emory University, Atlanta, United States
- Muneeb Zia
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, United States
- Kyle A Thomas
- Graduate Program in Biomedical Engineering at Emory University and Georgia Tech, Atlanta, United States
- Amanda Jacob
- Department of Biology, Emory University, Atlanta, United States
- Andrea Pack
- Neuroscience Graduate Program, Emory University, Atlanta, United States
- Matthew J Williams
- Graduate Program in Biomedical Engineering at Emory University and Georgia Tech, Atlanta, United States
- Lay Heng Teng
- Department of Biology, Emory University, Atlanta, United States
- Nicole Oey
- Department of Biology, Emory University, Atlanta, United States
- Rhuna Gibbs
- Department of Biology, Emory University, Atlanta, United States
- Philip Anschutz
- Graduate Program in BioEngineering, Georgia Tech, Atlanta, United States
- Jiaao Lu
- Graduate Program in Electrical and Computer Engineering, Georgia Tech, Atlanta, United States
- Yu Wu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, United States
- Mehrdad Kashefi
- Department of Physiology and Pharmacology, Western University, London, Canada
- Tomomichi Oya
- Department of Physiology and Pharmacology, Western University, London, Canada
- Rhonda Kersten
- Department of Physiology and Pharmacology, Western University, London, Canada
- Alice C Mosberger
- Zuckerman Mind Brain Behavior Institute at Columbia University, New York, United States
- Sean O'Connell
- Graduate Program in Biomedical Engineering at Emory University and Georgia Tech, Atlanta, United States
- Runming Wang
- Department of Biomedical Engineering at Emory University and Georgia Tech, Atlanta, United States
- Hugo Marques
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Ana Rita Mendes
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Constanze Lenschow
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Jeong Jun Kim
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, United States
- William Olson
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, United States
- Kiara N Quinn
- Departments of Biomedical Engineering and Neurology, Johns Hopkins School of Medicine, Baltimore, United States
- Pierce Perkins
- Departments of Biomedical Engineering and Neurology, Johns Hopkins School of Medicine, Baltimore, United States
- Graziana Gatto
- Salk Institute for Biological Studies, La Jolla, United States
- Susan Coltman
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, United States
- Taegyo Kim
- Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, United States
- Trevor Smith
- Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, United States
- Ben Binder-Markey
- Department of Physical Therapy and Rehabilitation Sciences, Drexel University College of Nursing and Health Professions, Philadelphia, United States
- Martin Zaback
- Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, United States
- Christopher K Thompson
- Department of Health and Rehabilitation Sciences, Temple University, Philadelphia, United States
- Simon Giszter
- Department of Neurobiology & Anatomy, Drexel University College of Medicine, Philadelphia, United States
- Abigail Person
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, United States
- Allen Institute, Seattle, United States
- Eiman Azim
- Salk Institute for Biological Studies, La Jolla, United States
- Nitish Thakor
- Departments of Biomedical Engineering and Neurology, Johns Hopkins School of Medicine, Baltimore, United States
- Daniel O'Connor
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins School of Medicine, Baltimore, United States
- Barry Trimmer
- Department of Biology, Tufts University, Medford, United States
- Susana Q Lima
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Megan R Carey
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Chethan Pandarinath
- Department of Biomedical Engineering at Emory University and Georgia Tech, Atlanta, United States
- Rui M Costa
- Zuckerman Mind Brain Behavior Institute at Columbia University, New York, United States
- Muhannad Bakir
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, United States
- Samuel J Sober
- Department of Biology, Emory University, Atlanta, United States