1
Pascual LMM, Vusirikala A, Nemenman IM, Sober SJ, Pasek M. Millisecond-scale motor control precedes sensorimotor learning in Bengalese finches. bioRxiv 2024:2024.09.27.615500. PMID: 39386477; PMCID: PMC11463345; DOI: 10.1101/2024.09.27.615500.
Abstract
A key goal of the nervous system in young animals is to learn motor skills. Songbirds learn to sing as juveniles, providing a unique opportunity to identify the neural correlates of skill acquisition. Prior studies have shown that spike rate variability decreases during song acquisition, suggesting a transition from rate-based neural control to the millisecond-precise motor codes known to underlie adult vocal performance. By quantifying how the ensemble of spike patterns fired by cortical neurons (the "neural vocabulary") and the relationship between spike patterns and song acoustics (the "neural code") change during song acquisition, we tracked how vocal control changes across learning in juvenile Bengalese finches. We found that despite the expected drop in rate variability (a learning-related change in spike vocabulary), the precision of the neural code in the youngest singers is the same as in adults, with 1-2 millisecond variations in spike timing transduced into quantifiably different behaviors. In contrast, fluctuations of firing rates on longer timescales fail to affect the motor output. The consistent presence of millisecond-scale motor coding during changing levels of spike rate and behavioral variability supports the view that variability early in learning stems from deliberate motor exploration rather than imprecise motor control.
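The rate-versus-timing distinction above can be made concrete with a toy sketch (not from the paper; the window length, bin widths, and spike times are hypothetical): two spike patterns with identical spike counts are indistinguishable to a rate code, but a 1 ms-binned spike "word" separates a 2 ms timing shift, while coarse 10 ms bins erase it again.

```python
import numpy as np

def spike_words(spike_times_ms, window_ms=40, bin_ms=1):
    """Binarize a spike train into a binary 'word' at a given bin width."""
    bins = np.arange(0, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=bins)
    return (counts > 0).astype(int)

# Two hypothetical spike patterns: same rate (3 spikes / 40 ms),
# but the middle spike is shifted by 2 ms.
pattern_a = [5.0, 20.0, 35.0]
pattern_b = [5.0, 22.0, 35.0]

# Rate code: identical spike counts -> the patterns are indistinguishable.
rate_a, rate_b = len(pattern_a), len(pattern_b)

# Millisecond "vocabulary": 1 ms bins separate the two patterns.
word_a = spike_words(pattern_a, bin_ms=1)
word_b = spike_words(pattern_b, bin_ms=1)

# Coarse 10 ms bins collapse both patterns onto the same word.
coarse_a = spike_words(pattern_a, bin_ms=10)
coarse_b = spike_words(pattern_b, bin_ms=10)

print(rate_a == rate_b)                    # True: same rate
print(np.array_equal(word_a, word_b))      # False: 1 ms code differs
print(np.array_equal(coarse_a, coarse_b))  # True: 10 ms code does not
```

The choice of bin width is the whole game here: only at millisecond resolution does the code distinguish behaviors that, per the abstract, differ measurably in their acoustic consequences.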
Affiliation(s)
- Aanya Vusirikala
  - Neuroscience Graduate Program, Emory University, Atlanta, United States
- Ilya M. Nemenman
  - Department of Physics, Emory University, Atlanta, United States
  - Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
  - Department of Biology, Emory University, Atlanta, United States
- Samuel J. Sober
  - Department of Biology, Emory University, Atlanta, United States
- Michael Pasek
  - Department of Physics, Emory University, Atlanta, United States
  - Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, United States
2
Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. PMID: 38684893; DOI: 10.1038/s41593-024-01630-6.
Abstract
Reliable execution of precise behaviors requires that brain circuits be resilient to variations in neuronal dynamics. In adult songbirds with stereotyped songs, genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular and systems-level restorative mechanisms that ensure behavioral resilience.
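A minimal sketch of the kind of unsupervised rule the abstract invokes (this is not the authors' model; the set point, time constant, and linear rate model are illustrative assumptions): multiplicative synaptic scaling pushes a neuron's firing rate back to a homeostatic set point after its input is suddenly reduced, with no behavioral error signal.

```python
import numpy as np

TARGET_RATE = 5.0   # Hz, hypothetical homeostatic set point
TAU = 20.0          # scaling time constant (arbitrary units)

def simulate_recovery(drive, gain=1.0, steps=400, dt=1.0):
    """Multiplicative synaptic scaling: the gain grows or shrinks until
    the neuron's rate returns to its set point (toy linear rate model)."""
    rates = []
    for _ in range(steps):
        rate = gain * drive
        # Scaling rule: d(gain)/dt proportional to gain * (target - rate)
        gain += dt / TAU * gain * (TARGET_RATE - rate) / TARGET_RATE
        rates.append(rate)
    return np.array(rates), gain

# Perturbation: input drive suddenly halved (e.g. many presynaptic
# excitatory neurons silenced), so the rate drops from 5 Hz to 2.5 Hz...
rates, final_gain = simulate_recovery(drive=2.5, gain=1.0)

# ...but unsupervised scaling restores the set point offline.
print(round(rates[0], 2))    # 2.5 immediately after the perturbation
print(round(rates[-1], 2))   # back near 5.0 after homeostatic recovery
```

The fixed point sits where gain × drive equals the target rate (gain = 2 here), which mirrors the paper's observation of increased excitatory input onto the surviving, unmanipulated neurons.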
Affiliation(s)
- Bo Wang
  - Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Zsofia Torok
  - Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
  - Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell
  - Computational Neuroscience Center, University of Washington, Seattle, WA, USA
  - Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso
  - Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho
  - Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall
  - Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
  - Computational Neuroscience Center, University of Washington, Seattle, WA, USA
  - Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois
  - Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
3
Soldado-Magraner S, Buonomano DV. Neural Sequences and the Encoding of Time. Adv Exp Med Biol 2024; 1455:81-93. PMID: 38918347; DOI: 10.1007/978-3-031-60183-5_5.
Abstract
Converging experimental and computational evidence indicates that on the scale of seconds the brain encodes time through changing patterns of neural activity. Experimentally, two general forms of neural dynamic regimes that can encode time have been observed: neural population clocks and ramping activity. Neural population clocks provide a high-dimensional code to generate complex spatiotemporal output patterns, in which each neuron exhibits a nonlinear temporal profile. A prototypical example of a neural population clock is the neural sequence, which has been observed across species, brain areas, and behavioral paradigms. Additionally, neural sequences emerge in artificial neural networks trained to solve time-dependent tasks. Here, we examine the role of neural sequences in the encoding of time, and how they may emerge in a biologically plausible manner. We conclude that neural sequences may represent a canonical computational regime for performing temporal computations.
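The population-clock idea can be sketched in a few lines (a toy, not a model from the chapter; neuron count, bump width, and noise level are arbitrary): each neuron fires a brief bump at its own preferred time, so *which* neuron is currently most active reads out elapsed time.

```python
import numpy as np

rng = np.random.default_rng(0)

def sequence_activity(t, n_neurons=50, duration=1.0, width=0.02):
    """Population clock: neuron i fires a Gaussian bump centered on its
    preferred time, so the identity of the active neurons encodes time."""
    centers = np.linspace(0, duration, n_neurons)
    return np.exp(-0.5 * ((t - centers) / width) ** 2)

def decode_time(activity, n_neurons=50, duration=1.0):
    """Read out elapsed time as the preferred time of the most active neuron."""
    centers = np.linspace(0, duration, n_neurons)
    return centers[np.argmax(activity)]

# Even with additive noise, the sequence position recovers elapsed time.
true_t = 0.40
noisy = sequence_activity(true_t) + 0.05 * rng.standard_normal(50)
print(decode_time(noisy))   # close to 0.40
```

Note the high-dimensionality claim in the abstract: at any moment a different subset of neurons is active, so downstream readouts can attach arbitrary outputs to arbitrary times, unlike a one-dimensional ramp.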
Affiliation(s)
- Dean V Buonomano
  - Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
4
Zhou S, Seay M, Taxidis J, Golshani P, Buonomano DV. Multiplexing working memory and time in the trajectories of neural networks. Nat Hum Behav 2023; 7:1170-1184. PMID: 37081099; PMCID: PMC10913811; DOI: 10.1038/s41562-023-01592-y.
Abstract
Working memory (WM) and timing are generally considered distinct cognitive functions, but similar neural signatures have been implicated in both. To explore the hypothesis that WM and timing may rely on shared neural mechanisms, we used psychophysical tasks that contained either task-irrelevant timing or WM components. In both cases, the task-irrelevant component influenced performance. We then developed recurrent neural network (RNN) simulations that revealed that cue-specific neural sequences, which multiplexed WM and time, emerged as the dominant regime that captured the behavioural findings. During training, RNN dynamics transitioned from low-dimensional ramps to high-dimensional neural sequences, and depending on task requirements, steady-state or ramping activity was also observed. Analysis of RNN structure revealed that neural sequences relied primarily on inhibitory connections, and could survive the deletion of all excitatory-to-excitatory connections. Our results indicate that in some instances WM is encoded in time-varying neural activity because of the importance of predicting when WM will be used.
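What "cue-specific neural sequences multiplex WM and time" means can be illustrated with a toy readout (not the paper's RNN; population sizes and the bump code are hypothetical): each cue launches a sequence through its own set of neurons, so a single population vector simultaneously encodes which cue is being remembered and how long ago it arrived.

```python
import numpy as np

N, DUR = 41, 1.0
centers = np.linspace(0, DUR, N)

def multiplexed_state(cue, t, width=0.03):
    """Cue-specific sequence: cue 0 marches through neurons 0..N-1,
    cue 1 through neurons N..2N-1. One population vector therefore
    carries BOTH the remembered cue and elapsed time."""
    bump = np.exp(-0.5 * ((t - centers) / width) ** 2)
    state = np.zeros(2 * N)
    state[cue * N:(cue + 1) * N] = bump
    return state

def decode(state):
    """Recover (cue, time) jointly from the index of the active neuron."""
    idx = int(np.argmax(state))
    return idx // N, centers[idx % N]

cue, t = decode(multiplexed_state(cue=1, t=0.5))
print(cue, round(t, 2))   # 1 0.5
```

This is why, per the abstract, WM can end up encoded in time-varying rather than steady-state activity: the same trajectory that stores the cue also predicts when it will be needed.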
Affiliation(s)
- Shanglin Zhou
  - Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Michael Seay
  - Department of Psychology, University of California, Los Angeles, CA, USA
- Jiannis Taxidis
  - Program in Neurosciences and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
  - Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Peyman Golshani
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
  - Integrative Center for Learning and Memory, Brain Research Institute, University of California, Los Angeles, Los Angeles, CA, USA
  - UCLA Semel Institute for Neuroscience and Behavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
  - West Los Angeles VA Medical Center, Los Angeles, CA, USA
- Dean V Buonomano
  - Department of Neurobiology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
  - Department of Psychology, University of California, Los Angeles, CA, USA
  - Department of Neurology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
5
Encoding time in neural dynamic regimes with distinct computational tradeoffs. PLoS Comput Biol 2022; 18:e1009271. PMID: 35239644; PMCID: PMC8893702; DOI: 10.1371/journal.pcbi.1009271. Open access.
Abstract
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements including the need to exhibit temporal scaling, generalize to novel contexts, or be robust to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for generalization or for robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise, and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons. We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.
Author summary
The ability to tell time and anticipate when external events will occur are among the most fundamental computations the brain performs. Converging evidence suggests the brain encodes time through changing patterns of neural activity. Different temporal tasks, however, have distinct computational requirements, such as the need to flexibly scale temporal patterns or generalize to novel inputs. To understand how networks can encode time and satisfy different computational requirements, we trained recurrent neural networks (RNNs) on two timing tasks that have previously been used in behavioral studies. Both tasks required producing identically timed output patterns. Using a novel framework to quantify how networks encode different intervals, we found that similar patterns of neural activity, neural sequences, were associated with fundamentally different underlying mechanisms, including the connectivity patterns of the RNNs. Critically, depending on the task the RNNs were trained on, they were better suited for generalization or robustness to noise. Our results predict that similar patterns of neural activity can be produced by distinct RNN configurations, which in turn have fundamentally different computational tradeoffs. Our results also predict that differences in task structure account for some of the experimentally observed variability in how networks encode time.
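The "temporal scaling" strategy contrasted with absolute timing above can be sketched with the simplest possible clock (a toy, not one of the paper's trained RNNs; threshold, slope values, and step size are arbitrary): a ramp-to-threshold mechanism, where rescaling a single parameter, the ramp slope, stretches or compresses the produced interval while reusing the same dynamic regime.

```python
def ramp_interval(slope, threshold=1.0, dt=0.001):
    """Ramping-activity clock: the produced interval is the time it takes
    a linear ramp to reach threshold, i.e. roughly threshold / slope."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += slope * dt
        t += dt
    return t

# Temporal scaling: halving the ramp slope doubles the produced interval,
# so one mechanism covers both a ~1 s and a ~2 s interval.
t_fast = ramp_interval(slope=1.0)   # about 1.0 s
t_slow = ramp_interval(slope=0.5)   # about 2.0 s
print(round(t_fast, 2), round(t_slow, 2))
```

An absolute-timing network, by contrast, would traverse a fixed trajectory of fixed duration for each interval; the paper's point is that these mechanistically different solutions can look similar at the population level yet trade off generalization against noise robustness.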