1
Costacurta JC, Bhandarkar S, Zoltowski DM, Linderman SW. Structured flexibility in recurrent neural networks via neuromodulation. bioRxiv 2024:2024.07.26.605315. PMID: 39091788; PMCID: PMC11291173; DOI: 10.1101/2024.07.26.605315.
Abstract
The goal of theoretical neuroscience is to develop models that help us better understand biological intelligence. Such models range broadly in complexity and biological detail. For example, task-optimized recurrent neural networks (RNNs) have generated hypotheses about how the brain may perform various computations, but these models typically assume a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility of the NM-RNN allows it to both train and generalize with greater accuracy than low-rank RNNs on a set of canonical tasks. Additionally, through theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs to show how task computations are distributed.
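Editorial note: the core mechanism described in this abstract, a low-dimensional neuromodulatory signal scaling the modes of a low-rank recurrent weight matrix, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the network size, tanh nonlinearity, Euler integration, and the name `nm_rnn_step` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 100, 3                              # neurons, rank of recurrent connectivity
U = rng.normal(size=(N, R)) / np.sqrt(N)   # left factors of the low-rank weights
V = rng.normal(size=(N, R)) / np.sqrt(N)   # right factors

def nm_rnn_step(x, s, dt=0.1):
    """One Euler step of a rank-R RNN whose R recurrent modes are each
    scaled by a neuromodulatory gain s[r]. Here s is given directly; in
    the NM-RNN it is produced by a separate neuromodulatory subnetwork."""
    # Effective weights: W = sum_r s[r] * U[:, r] V[:, r]^T (never formed densely)
    rec = U @ (s * (V.T @ np.tanh(x)))     # low-rank recurrent drive
    return x + dt * (-x + rec)

x = rng.normal(size=N)
for _ in range(50):
    x = nm_rnn_step(x, s=np.array([1.0, 0.2, 0.0]))  # gains scale modes up or off
print(x.shape)  # (100,)
```

Setting a gain s[r] to zero silences the corresponding recurrent mode entirely, which is one intuition for how gain scaling can act like the gates of artificial RNNs.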
2
Serrano-Fernández L, Beirán M, Parga N. Emergent perceptual biases from state-space geometry in trained spiking recurrent neural networks. Cell Rep 2024; 43:114412. PMID: 38968075; DOI: 10.1016/j.celrep.2024.114412.
Abstract
A stimulus held in working memory is perceived as contracted toward the average stimulus. This contraction bias has been extensively studied in psychophysics, but little is known about its origin from neural activity. By training recurrent networks of spiking neurons to discriminate temporal intervals, we explored the causes of this bias and how behavior relates to population firing activity. We found that the trained networks exhibited animal-like behavior. Various geometric features of neural trajectories in state space encoded warped representations of the durations of the first interval modulated by sensory history. Formulating a normative model, we showed that these representations conveyed a Bayesian estimate of the interval durations, thus relating activity and behavior. Importantly, our findings demonstrate that Bayesian computations already occur during the sensory phase of the first stimulus and persist throughout its maintenance in working memory, until the time of stimulus comparison.
Affiliation(s)
- Luis Serrano-Fernández
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Manuel Beirán
- Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, NY, USA
- Néstor Parga
- Departamento de Física Teórica, Universidad Autónoma de Madrid, 28049 Madrid, Spain; Centro de Investigación Avanzada en Física Fundamental, Universidad Autónoma de Madrid, 28049 Madrid, Spain
3
Bayones L, Zainos A, Alvarez M, Romo R, Franci A, Rossi-Pool R. Orthogonality of sensory and contextual categorical dynamics embedded in a continuum of responses from the second somatosensory cortex. Proc Natl Acad Sci U S A 2024; 121:e2316765121. PMID: 38990946; PMCID: PMC11260089; DOI: 10.1073/pnas.2316765121.
Abstract
How does the brain simultaneously process signals that bring complementary information, like raw sensory signals and their transformed counterparts, without disruptive interference? Contemporary research underscores the brain's adeptness at using decorrelated responses to reduce such interference. Both neurophysiological findings and artificial neural networks support the notion of orthogonal representations for signal differentiation and parallel processing. Yet where and how raw sensory signals are transformed into more abstract representations remains unclear. Using a temporal pattern discrimination task in trained monkeys, we revealed that the second somatosensory cortex (S2) efficiently segregates faithful and transformed neural responses into orthogonal subspaces. Importantly, S2 population encoding of transformed signals, but not of faithful ones, disappeared during a nondemanding version of this task, suggesting that signal transformation and its decoding by downstream areas are engaged only on demand. A mechanistic computational model points to gain modulation as a possible biological mechanism for the observed context-dependent computation. Furthermore, the individual neural activities underlying the orthogonal population representations exhibited a continuum of responses, with no well-defined clusters. These findings suggest that the brain, while employing a continuum of heterogeneous neural responses, splits population signals into orthogonal subspaces in a context-dependent fashion to enhance robustness and performance and to improve coding efficiency.
Affiliation(s)
- Lucas Bayones
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Antonio Zainos
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Manuel Alvarez
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Alessio Franci
- Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Montefiore Institute, University of Liège, Liège 4000, Belgium
- Wallon ExceLlence (WEL) Research Institute, Wavre 1300, Belgium
- Román Rossi-Pool
- Instituto de Fisiología Celular, Departamento de Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City 04510, Mexico
4
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
5
Zhou S, Buonomano DV. Unified control of temporal and spatial scales of sensorimotor behavior through neuromodulation of short-term synaptic plasticity. Sci Adv 2024; 10:eadk7257. PMID: 38701208; DOI: 10.1126/sciadv.adk7257.
Abstract
Neuromodulators have been shown to alter the temporal profile of short-term synaptic plasticity (STP); however, the computational function of this neuromodulation remains unexplored. Here, we propose that the neuromodulation of STP provides a general mechanism to scale neural dynamics and motor outputs in time and space. We trained recurrent neural networks that incorporated STP to produce complex motor trajectories (handwritten digits) with different temporal (speed) and spatial (size) scales. Neuromodulation of STP produced temporal and spatial scaling of the learned dynamics and enhanced temporal or spatial generalization compared to standard training of the synaptic weights in the absence of STP. The model also accounted for the results of two experimental studies involving flexible sensorimotor timing. Neuromodulation of STP provides a unified and biologically plausible mechanism to control the temporal and spatial scales of neural dynamics and sensorimotor behaviors.
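Editorial sketch of the premise that neuromodulating STP rescales synaptic dynamics in time: in a Tsodyks-Markram-style depressing synapse, a neuromodulator that scales the recovery time constant stretches the whole efficacy trajectory. This is an illustration under assumed parameters, not the authors' trained-network implementation; the function name `stp_trace` and all values are hypothetical.

```python
import numpy as np

def stp_trace(tau_rec, spikes_t, T=2.0, dt=0.001, U=0.5):
    """Synaptic efficacy x(t) under short-term depression: x recovers
    toward 1 with time constant tau_rec and is depleted by a fraction U
    of its current value at each presynaptic spike time."""
    n = int(T / dt)
    spike_idx = {round(t / dt) for t in spikes_t}
    x = np.ones(n)
    for i in range(1, n):
        x[i] = x[i - 1] + dt * (1 - x[i - 1]) / tau_rec   # recovery
        if i in spike_idx:
            x[i] -= U * x[i]                              # depletion at a spike
    return x

spikes = [0.1, 0.2, 0.3, 0.4]                       # a brief 10 Hz burst
fast = stp_trace(tau_rec=0.2, spikes_t=spikes)      # baseline recovery
slow = stp_trace(tau_rec=0.4, spikes_t=spikes)      # "neuromodulated": 2x slower
print(fast[-1] > slow[-1])                          # slower recovery lags behind
```

Doubling tau_rec stretches the recovery phase of the efficacy curve by a factor of two, so at any fixed readout time the modulated synapse remains more depressed; embedded in a recurrent network, this kind of rescaling is what lets one set of weights express the same trajectory at different speeds.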
Affiliation(s)
- Shanglin Zhou
- Institute for Translational Brain Research, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, Fudan University, Shanghai, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai, China
- Zhongshan Hospital, Fudan University, Shanghai, China
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
6
Cao R, Bright IM, Howard MW. Ramping cells in rodent mPFC encode time to past and future events via real Laplace transform. bioRxiv 2024:2024.02.13.580170. PMID: 38405896; PMCID: PMC10888827; DOI: 10.1101/2024.02.13.580170.
Abstract
In interval reproduction tasks, animals must remember the event starting the interval and anticipate the time of the planned response to terminate the interval. The interval reproduction task thus allows for studying both memory for the past and anticipation of the future. We analyzed previously published recordings from rodent mPFC (Henke et al., 2021) during an interval reproduction task and identified two cell groups by modeling their temporal receptive fields using hierarchical Bayesian models. The firing in the "past cells" group peaked at the start of the interval and relaxed exponentially back to baseline. The firing in the "future cells" group increased exponentially and peaked right before the planned action at the end of the interval. Contrary to the previous assumption that timing information in the brain has one or two time scales for a given interval, we found strong evidence for a continuous distribution of the exponential rate constants for both past and future cell populations. The real Laplace transform of time predicts exponential firing with a continuous distribution of rate constants across the population. Therefore, the firing pattern of the past cells can be identified with the Laplace transform of time since the past event, while the firing pattern of the future cells can be identified with the Laplace transform of time until the planned future event.
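The predicted coding scheme can be written down directly: past cells decay as e^(-st) from the start event, future cells rise as e^(-s(T-t)) toward the planned response at t = T, and the rate constants s span a continuum across the population. A hypothetical NumPy illustration (all variable names and values are editorial assumptions, not the paper's fitted parameters):

```python
import numpy as np

T = 1.0                                # interval duration (arbitrary units)
t = np.linspace(0, T, 101)             # time within the interval
s = np.geomspace(0.5, 20, 50)          # continuum of rate constants across cells

# Each row is one cell's firing profile.
past = np.exp(-np.outer(s, t))         # "past cells": e^{-s t}, Laplace transform
                                       # of time since the start event
future = np.exp(-np.outer(s, T - t))   # "future cells": e^{-s (T - t)}, Laplace
                                       # transform of time until the response

assert np.all(np.argmax(past, axis=1) == 0)             # peak at the start event
assert np.all(np.argmax(future, axis=1) == len(t) - 1)  # peak at the planned action
```

Because the population holds e^(-st) for many values of s at once, downstream circuits can in principle invert the transform to recover when the event occurred (or will occur), rather than relying on one or two fixed time scales.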
Affiliation(s)
- Rui Cao
- Department of Psychological and Brain Sciences, Boston University
- Ian M Bright
- Department of Psychological and Brain Sciences, Boston University
- Marc W Howard
- Department of Psychological and Brain Sciences, Boston University
7
Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024; 36:227-270. PMID: 38101328; DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved that they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can support the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
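The first claim, that low-rank connectivity produces a globally attracting low-dimensional manifold, can be checked numerically in a toy rate model: any component of activity outside the column span of the low-rank factors decays away, because the recurrent drive always lies inside that span. This is an editorial illustration under assumed parameters (a symmetric rank-2 connectivity with mode gain 2), not code from the letter:

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 200, 2
M = rng.normal(size=(N, R))
J = 2.0 * (M @ M.T) / N            # rank-R connectivity; J @ v lies in span(M)

# Rate dynamics dx/dt = -x + J tanh(x), integrated with Euler steps.
x = rng.normal(size=N)
for _ in range(500):
    x = x + 0.05 * (-x + J @ np.tanh(x))

# The component of x orthogonal to span(M) decays as e^{-t}, so after the
# transient the state lies (numerically) on the R-dimensional manifold:
proj = M @ np.linalg.lstsq(M, x, rcond=None)[0]
print(np.linalg.norm(x - proj) / np.linalg.norm(x))  # ~ 0
```

The same argument holds with inputs, as long as they too are fed in through vectors inside the span; this is the sense in which low-rank structure pins population activity to a low-dimensional subspace.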
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
8
Rolando F, Kononowicz TW, Duhamel JR, Doyère V, Wirth S. Distinct neural adaptations to time demand in the striatum and the hippocampus. Curr Biol 2024; 34:156-170.e7. PMID: 38141617; DOI: 10.1016/j.cub.2023.11.066.
Abstract
How do neural codes adjust to track time across a range of resolutions, from milliseconds to multi-seconds, as a function of the temporal frequency at which events occur? To address this question, we studied time-modulated cells in the striatum and the hippocampus while macaques categorized three nested intervals within the sub-second or the supra-second range (up to 1, 2, 4, or 8 s), thereby modifying the temporal resolution needed to solve the task. Time-modulated cells carried more information for intervals with explicit timing demand than for any other interval. The striatum, particularly the caudate, supported the most accurate temporal prediction throughout all time ranges. Strikingly, its temporal readout adjusted non-linearly to the time range, suggesting that the striatal resolution shifted from a precise millisecond range to a coarse multi-second range as a function of demand. This is in line with the monkeys' behavioral latencies, which indicated that they tracked time up to 2 s but employed a coarse categorization strategy for durations beyond. By contrast, the hippocampus discriminated only the beginning from the end of intervals, regardless of the range. We propose that the hippocampus may provide only a coarse signal marking an event's beginning, whereas the striatum optimizes neural resources to process time throughout an interval, adapting to the ongoing timing demand.
Affiliation(s)
- Felipe Rolando
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Tadeusz W Kononowicz
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France; Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France; Institute of Psychology, The Polish Academy of Sciences, ul. Jaracza 1, 00-378 Warsaw, Poland
- Jean-René Duhamel
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France
- Valérie Doyère
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay (NeuroPSI), 91400 Saclay, France
- Sylvia Wirth
- Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université Lyon 1, 67 boulevard Pinel, 69500 Bron, France.
9
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. PMID: 37794121; DOI: 10.1038/s41583-023-00740-7.
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on behavioural tasks similar to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
10
Kawai Y, Park J, Tsuda I, Asada M. Learning long-term motor timing/patterns on an orthogonal basis in random neural networks. Neural Netw 2023; 163:298-311. PMID: 37087852; DOI: 10.1016/j.neunet.2023.04.006.
Abstract
The ability of the brain to generate complex spatiotemporal patterns with specific timings is essential for motor learning and temporal processing. An approach that models this function using the spontaneous activity of a random neural network (RNN) is hampered by orbital instability. We propose a simple system that learns an arbitrary time series as the linear sum of stable trajectories produced by several small network modules. A new finding from our computer experiments is that the trajectories of the module outputs are orthogonal to each other. Together they form a dynamic orthogonal basis with high representational capacity, which enables the system to learn the timing of extremely long intervals, such as tens of seconds for a millisecond computation unit, as well as complex time series such as Lorenz attractors. This self-sustained system satisfies the stability and orthogonality requirements and thus provides a new neurocomputing framework and perspective on the neural mechanisms of motor learning.
Affiliation(s)
- Yuji Kawai
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan.
- Jihoon Park
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan
- Ichiro Tsuda
- Chubu University Academy of Emerging Sciences/Center for Mathematical Science and Artificial Intelligence, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi 487-8501, Japan
- Minoru Asada
- Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, 1-1 Yamadaoka, Suita, Osaka 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, 1-4 Yamadaoka, Suita, Osaka 565-0871, Japan; Chubu University Academy of Emerging Sciences/Center for Mathematical Science and Artificial Intelligence, Chubu University, 1200 Matsumoto-cho, Kasugai, Aichi 487-8501, Japan; International Professional University of Technology in Osaka, 3-3-1 Umeda, Kita-ku, Osaka 530-0001, Japan