1
Jáidar O, Albarran E, Albarran EN, Wu YW, Ding JB. Refinement of efficient encodings of movement in the dorsolateral striatum throughout learning. bioRxiv 2024:2024.06.06.596654. [PMID: 38895486 PMCID: PMC11185645 DOI: 10.1101/2024.06.06.596654]
Abstract
The striatum is required for normal action selection, movement, and sensorimotor learning. Although action-specific striatal ensembles have been well documented, it is not well understood how these ensembles are formed and how their dynamics may evolve throughout motor learning. Here we used longitudinal 2-photon Ca2+ imaging of dorsal striatal neurons in head-fixed mice as they learned to self-generate locomotion. We observed a significant activation of both direct- and indirect-pathway spiny projection neurons (dSPNs and iSPNs, respectively) during early locomotion bouts and sessions that gradually decreased over time. For dSPNs, onset- and offset-ensembles were gradually refined from active motion-nonspecific cells. iSPN ensembles emerged from neurons initially active during opponent actions before becoming onset- or offset-specific. Our results show that as striatal ensembles are progressively refined, the number of active nonspecific striatal neurons decreases and the overall efficiency of striatal information encoding for learned actions increases.
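As a toy illustration of the efficiency claim in the final sentence, one can compare how much decodable information a synthetic population carries per active neuron when most cells are motion-nonspecific (early learning) versus when the ensemble has been refined (late learning). This linear-decoding sketch is our own construction, not the authors' analysis; the activity model and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_population(n_specific, n_nonspecific, trials=4000):
    """Trial-wise activity: specific cells follow movement onset,
    nonspecific cells are active regardless of the behavioural event."""
    onset = rng.integers(0, 2, trials)
    specific = onset[:, None] + 0.7 * rng.standard_normal((trials, n_specific))
    nonspecific = 0.5 + 0.7 * rng.standard_normal((trials, n_nonspecific))
    return np.hstack([specific, nonspecific]), onset

def decoding_accuracy(activity, onset):
    """Linear read-out of movement onset from population activity."""
    w, *_ = np.linalg.lstsq(activity, onset - 0.5, rcond=None)
    return np.mean((activity @ w > 0) == onset)

# Early learning: large, mostly nonspecific ensemble; late: small, refined one.
for label, (n_spec, n_nonspec) in [("early", (10, 90)), ("late", (10, 10))]:
    R, onset = synthetic_population(n_spec, n_nonspec)
    acc = decoding_accuracy(R, onset)
    n_active = n_spec + n_nonspec
    print(f"{label}: accuracy={acc:.3f} with {n_active} active cells "
          f"-> {acc / n_active:.4f} accuracy per active neuron")
```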
Affiliation(s)
- Omar Jáidar
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Eddy Albarran
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network, Chevy Chase, MD 20815, USA
- Current address: Columbia University
- Yu-Wei Wu
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Current address: Institute of Molecular Biology, Academia Sinica
- Jun B. Ding
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network, Chevy Chase, MD 20815, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305, USA
- The Phil & Penny Knight Initiative for Brain Resilience at the Wu Tsai Neurosciences Institute, Stanford University
2
Nicola W, Newton TR, Clopath C. The impact of spike timing precision and spike emission reliability on decoding accuracy. Sci Rep 2024; 14:10536. [PMID: 38719897 PMCID: PMC11078995 DOI: 10.1038/s41598-024-58524-7]
Abstract
Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a potential time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing, or unreliable in their emission, the error incurred in decoding with these spikes would decrease sub-linearly with the number of neurons or spikes. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time-series with sequences of spikes would maintain a linear decrease with network size. The spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square-root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
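The linear scaling claim can be probed numerically in a few lines. The sketch below is our own illustration rather than the paper's code: it decodes a sine wave with an optimal linear read-out of N spikes smoothed by a Gaussian postsynaptic kernel, and compares precise, reliable spikes against jittered, unreliable ones. The kernel width, jitter magnitude, and failure probability are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
target = np.sin(2 * np.pi * t)          # the time series to approximate

def decoding_error(n_spikes, jitter_sd=0.0, p_fail=0.0):
    """RMS error of the optimal linear read-out of one repeating spike sequence."""
    basis = np.zeros((n_spikes, t.size))
    for i in range(n_spikes):
        if rng.random() < p_fail:
            continue                     # unreliable emission: this spike is dropped
        t_spike = i / n_spikes + jitter_sd * rng.standard_normal()
        basis[i] = np.exp(-((t - t_spike) ** 2) / (2 * 0.005 ** 2))  # PSP-like kernel
    w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
    return np.sqrt(np.mean((basis.T @ w - target) ** 2))

for n in (32, 128, 512):
    print(f"N={n:4d}  precise/reliable: {decoding_error(n):.4f}  "
          f"jittered/unreliable: {decoding_error(n, 0.02, 0.2):.4f}")
```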
Affiliation(s)
- Wilten Nicola
- University of Calgary, Calgary, Canada.
- Department of Cell Biology and Anatomy, Calgary, Canada.
- Hotchkiss Brain Institute, Calgary, Canada.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
3
Savarimuthu A, Ponniah RJ. Receive, Retain and Retrieve: Psychological and Neurobiological Perspectives on Memory Retrieval. Integr Psychol Behav Sci 2024; 58:303-318. [PMID: 36738400 DOI: 10.1007/s12124-023-09752-5]
Abstract
Memory and learning are interdependent processes that involve encoding, storage, and retrieval. Memory retrieval in particular is a fundamental cognitive ability to recall memory traces and update stored memory with new information. For effective memory retrieval and learning, memories must be stabilized from short-term memory into long-term memory. Hence, it is necessary to understand the processes of memory retention and retrieval that enhance learning. Though previous cognitive neuroscience research has focused on memory acquisition and storage, the neurobiological mechanisms underlying memory retrieval and its role in learning are less understood. Therefore, this article offers the viewpoint that memory retrieval is essential for selecting, reactivating, stabilizing, and storing information in long-term memory. In arguing how memories are retrieved, consolidated, transmitted, and strengthened for the long term, the article examines the psychological and neurobiological aspects of memory and learning, covering synaptic plasticity, long-term potentiation, genetic transcription, and theta oscillation in the brain.
Affiliation(s)
- Anisha Savarimuthu
- Department of Humanities and Social Sciences, National Institute of Technology, Tiruchirappalli, India
- R Joseph Ponniah
- Department of Humanities and Social Sciences, National Institute of Technology, Tiruchirappalli, India.
4
Gurnani H, Cayco Gajic NA. Signatures of task learning in neural representations. Curr Opin Neurobiol 2023; 83:102759. [PMID: 37708653 DOI: 10.1016/j.conb.2023.102759]
Abstract
While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.
Affiliation(s)
- Harsha Gurnani
- Department of Biology, University of Washington, Seattle, WA, USA. https://twitter.com/HarshaGurnani
- N Alex Cayco Gajic
- Laboratoire de Neuroscience Cognitives, Ecole Normale Supérieure, Université PSL, Paris, France.
5
Micou C, O'Leary T. Representational drift as a window into neural and behavioural plasticity. Curr Opin Neurobiol 2023; 81:102746. [PMID: 37392671 DOI: 10.1016/j.conb.2023.102746]
Abstract
Large-scale recordings of neural activity over days and weeks have revealed that neural representations of familiar tasks, percepts and actions continually evolve without obvious changes in behaviour. We hypothesise that this steady drift in neural activity, and the accompanying physiological changes, are due in part to the continuous application of a learning rule at the cellular and population level. Explicit predictions of this drift can be found in neural network models that use iterative learning to optimise weights. Drift therefore provides a measurable signal that can reveal systems-level properties of biological plasticity mechanisms, such as their precision and effective learning rates.
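The central prediction, drift along directions that do not affect behaviour while iterative learning keeps correcting task-relevant errors, can be reproduced in a minimal linear model. The sketch below is a generic illustration under our own assumptions (an overparameterized regression "task", isotropic synaptic fluctuations, stochastic corrective updates), not a model taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_constraints, n_synapses = 40, 100        # more synapses than task constraints
X = rng.standard_normal((n_constraints, n_synapses))
y = X @ rng.standard_normal(n_synapses)    # behavioural targets

w = np.linalg.lstsq(X, y, rcond=None)[0]   # start from a trained configuration
w_start = w.copy()
for _ in range(50_000):
    w += 0.002 * rng.standard_normal(n_synapses)     # ongoing synaptic fluctuation
    i = rng.integers(n_constraints)                  # iterative learning corrects
    w -= (X[i] @ w - y[i]) * X[i] / (X[i] @ X[i])    # one task constraint per step

print("total weight drift       :", np.linalg.norm(w - w_start))
print("behavioural (task) error :", np.sqrt(np.mean((X @ w - y) ** 2)))
```

Because the task pins down only 40 of the 100 weight dimensions, the remaining directions random-walk freely: the weights drift far from where they started while the task error stays small.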
Affiliation(s)
- Charles Micou
- Department of Engineering, University of Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom; Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Onna, 904-0495, Japan.
6
Hiratani N, Latham PE. Developmental and evolutionary constraints on olfactory circuit selection. Proc Natl Acad Sci U S A 2022; 119:e2100600119. [PMID: 35263217 PMCID: PMC8931209 DOI: 10.1073/pnas.2100600119]
Abstract
Significance: In this work, we explore the hypothesis that biological neural networks optimize their architecture, through evolution, for learning. We study early olfactory circuits of mammals and insects, which have relatively similar structure but a huge diversity in size. We approximate these circuits as three-layer networks and estimate, analytically, the scaling of the optimal hidden-layer size with input-layer size. We find that both longevity and information in the genome constrain the hidden-layer size, so a range of allometric scalings is possible. However, the experimentally observed allometric scalings in mammals and insects are consistent with biologically plausible values. This analysis should pave the way for a deeper understanding of both biological and artificial networks.
Affiliation(s)
- Naoki Hiratani
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, United Kingdom
- Peter E. Latham
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, United Kingdom
7
Merging pruning and neuroevolution: towards robust and efficient controllers for modular soft robots. Knowl Eng Rev 2022. [DOI: 10.1017/s0269888921000151]
Abstract
Artificial neural networks (ANNs) can be employed as controllers for robotic agents. Their structure is often complex, with many neurons and connections, especially when the robots have many sensors and actuators distributed across their bodies and/or when high expressive power is desirable. Pruning (removing neurons or connections) reduces the complexity of the ANN, thus increasing its energy efficiency, and has been reported to improve the generalization capability in some cases. In addition, it is well known that pruning in biological neural networks plays a fundamental role in the development of brains and their ability to learn. In this study, we consider the evolutionary optimization of neural controllers for the case study of voxel-based soft robots, a kind of modular, bio-inspired soft robot, applying pruning during fitness evaluation. For a locomotion task, and for centralized as well as distributed controllers, we experimentally characterize the effect of different forms of pruning on after-pruning effectiveness, life-long effectiveness, adaptability to new terrains, and behavior. We find that incorporating some forms of pruning in neuroevolution leads to controllers almost as effective as those evolved without pruning, with the benefit of higher robustness to pruning. We also observe occasional improvements in generalization ability.
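To give a feel for the approach, the sketch below transplants the core idea, pruning applied during fitness evaluation, onto a deliberately minimal stand-in: a (1+1) evolution strategy tuning a linear "controller" to fit a random target map. The task, the random-dropout pruning scheme, and all parameters are our assumptions; the paper evolves controllers for voxel-based soft robots, not this toy.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 30
W_target = rng.standard_normal(d)
X = rng.standard_normal((200, d))
y = X @ W_target                                   # toy control objective

def fitness(w, prune_frac=0.0):
    mask = rng.random(d) >= prune_frac             # random pruning at evaluation time
    return -np.mean((X @ (w * mask) - y) ** 2)

def evolve(prune_frac, generations=3000):
    """(1+1)-ES: keep the mutated child whenever it scores at least as well."""
    w = np.zeros(d)
    for _ in range(generations):
        child = w + 0.05 * rng.standard_normal(d)
        if fitness(child, prune_frac) >= fitness(w, prune_frac):
            w = child
    return w

for pf in (0.0, 0.2):
    w = evolve(pf)
    test = np.mean([fitness(w, 0.2) for _ in range(100)])
    print(f"evolved with pruning={pf:.1f} -> mean fitness under 20% pruning: {test:.3f}")
```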
8
Neural optimization: Understanding trade-offs with Pareto theory. Curr Opin Neurobiol 2021; 71:84-91. [PMID: 34688051 DOI: 10.1016/j.conb.2021.08.008]
Abstract
Nervous systems, like any organismal structure, have been shaped by evolutionary processes to increase fitness. The resulting neural 'bauplan' has to account for multiple objectives simultaneously, including computational function as well as additional factors such as robustness to environmental changes and energetic limitations. Oftentimes these objectives compete, and quantifying the relative impact of individual optimization targets is non-trivial. Pareto optimality offers a theoretical framework to decipher objectives and the trade-offs between them. We therefore highlight Pareto theory as a useful tool for the analysis of neurobiological systems, from biophysically detailed cells to large-scale network structures and behavior. The Pareto approach can help to assess optimality, identify relevant objectives and their respective impact, and formulate testable hypotheses.
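As a concrete entry point, Pareto optimality reduces to a simple dominance computation once candidate circuits are scored on each objective. The sketch below extracts the Pareto front for two hypothetical objectives (a performance score to maximize and an energy cost to minimize); the synthetic scores are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
perf = rng.random(200)                        # hypothetical performance (maximize)
energy = perf ** 2 + 0.2 * rng.random(200)    # better performance tends to cost more

def pareto_front(perf, energy):
    """Indices of candidates not dominated on both objectives (skyline scan)."""
    order = np.argsort(-perf)                 # scan from best performance downwards
    front, best_energy = [], np.inf
    for i in order:
        if energy[i] < best_energy:           # cheaper than every better performer
            front.append(i)
            best_energy = energy[i]
    return np.array(front)

front = pareto_front(perf, energy)
print(f"{front.size} of {perf.size} candidate circuits are Pareto-optimal")
```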
9
Scholl C, Rule ME, Hennig MH. The information theory of developmental pruning: Optimizing global network architectures using local synaptic rules. PLoS Comput Biol 2021; 17:e1009458. [PMID: 34634045 PMCID: PMC8584672 DOI: 10.1371/journal.pcbi.1009458]
Abstract
During development, biological neural networks produce more synapses and neurons than needed. Many of these synapses and neurons are later removed in a process known as neural pruning. Why networks should initially be over-populated, and the processes that determine which synapses and neurons are ultimately pruned, remains unclear. We study the mechanisms and significance of neural pruning in model neural networks. In a deep Boltzmann machine model of sensory encoding, we find that (1) synaptic pruning is necessary to learn efficient network architectures that retain computationally-relevant connections, (2) pruning by synaptic weight alone does not optimize network size and (3) pruning based on a locally-available measure of importance based on Fisher information allows the network to identify structurally important vs. unimportant connections and neurons. This locally-available measure of importance has a biological interpretation in terms of the correlations between presynaptic and postsynaptic neurons, and implies an efficient activity-driven pruning rule. Overall, we show how local activity-dependent synaptic pruning can solve the global problem of optimizing a network architecture. We relate these findings to biology as follows: (I) Synaptic over-production is necessary for activity-dependent connectivity optimization. (II) In networks that have more neurons than needed, cells compete for activity, and only the most important and selective neurons are retained. (III) Cells may also be pruned due to a loss of synapses on their axons. This occurs when the information they convey is not relevant to the target population.

Biological neural networks need to be efficient and compact, as synapses and neurons require space to store and energy to operate and maintain. This favors an optimized network topology that minimizes redundant neurons and connections. Large numbers of extra neurons and synapses are produced during development, and later removed as the brain matures. A key question to understand this process is how neurons determine which synapses are important. We used statistical models of neural networks to simulate developmental pruning. We show that neurons in such networks can use locally available information to measure the importance of their synapses in a biologically plausible way. We demonstrate that this pruning rule, which is motivated by information theoretic considerations, retains network topologies that can efficiently encode sensory inputs. In contrast, pruning at random, or based on synaptic weights alone, was less able to identify redundant neurons.
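The contrast between pruning by synaptic weight alone and pruning by an activity-aware importance score can be seen even in a linear-regression stand-in; the paper's model is a deep Boltzmann machine, so the sketch below is only an analogy under our own assumptions. When presynaptic activity levels are heterogeneous, a diagonal-Fisher-like proxy (squared weight times mean squared presynaptic activity) tracks each connection's contribution to the loss, whereas weight magnitude alone does not.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d_in, d_out = 5000, 20, 5
scales = np.logspace(-1, 1, d_in)                   # heterogeneous presynaptic activity
X = rng.standard_normal((n, d_in)) * scales
Y = X @ rng.standard_normal((d_in, d_out)) + 0.1 * rng.standard_normal((n, d_out))

W = np.linalg.lstsq(X, Y, rcond=None)[0]            # trained weights

def loss(W_pruned):
    return np.mean((Y - X @ W_pruned) ** 2)

def prune(W, saliency, frac=0.5):
    """Zero out the fraction of weights with the lowest saliency."""
    W_pruned = W.copy()
    W_pruned[saliency < np.quantile(saliency, frac)] = 0.0
    return W_pruned

magnitude = np.abs(W)                                          # weight alone
fisher_proxy = (W ** 2) * np.mean(X ** 2, axis=0)[:, None]     # weight x activity

print("full model loss      :", loss(W))
print("magnitude pruning    :", loss(prune(W, magnitude)))
print("Fisher-proxy pruning :", loss(prune(W, fisher_proxy)))
```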
Affiliation(s)
- Michael E. Rule
- University of Cambridge, Engineering Department, Cambridge, United Kingdom
- Matthias H. Hennig
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh, United Kingdom
10
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:e62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912]
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
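A scalar toy model gives some intuition for the analysis (this is our own illustrative simulation with arbitrary noise levels, not the paper's derivation): a weight drifts under random fluctuations while a compensatory rule nudges it back using a noisy error estimate. Sweeping the compensation gain shows how the achievable steady-state error, and the ratio of compensatory change to fluctuation, depend on how accurate the error estimate is.

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_fluct, sigma_est, T = 0.1, 0.2, 100_000   # fluctuation and estimation noise

def steady_state(eta):
    """Mean squared error and mean |compensation| / |fluctuation| at gain eta."""
    w, sq_err, comp = 0.0, 0.0, 0.0
    for _ in range(T):
        w += sigma_fluct * rng.standard_normal()      # ongoing synaptic fluctuation
        est = w + sigma_est * rng.standard_normal()   # noisy error estimate (target 0)
        step = -eta * est                             # compensatory plasticity
        w += step
        sq_err += w ** 2
        comp += abs(step)
    mean_fluct = sigma_fluct * np.sqrt(2 / np.pi)     # E|N(0, sigma_fluct)|
    return sq_err / T, (comp / T) / mean_fluct

for eta in (0.1, 0.3, 0.5, 0.9):
    mse, ratio = steady_state(eta)
    print(f"gain={eta:.1f}  steady-state MSE={mse:.4f}  |comp|/|fluct|={ratio:.2f}")
```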
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
11
Raman DV, O'Leary T. Frozen algorithms: how the brain's wiring facilitates learning. Curr Opin Neurobiol 2021; 67:207-214. [PMID: 33508698 PMCID: PMC8202511 DOI: 10.1016/j.conb.2020.12.017]
Abstract
Synapses and neural connectivity are plastic and shaped by experience. But to what extent does connectivity itself influence the ability of a neural circuit to learn? Insights from optimization theory and AI shed light on how learning can be implemented in neural circuits. Though abstract in nature, learning algorithms provide a principled set of hypotheses on the necessary ingredients for learning in neural circuits. These include the kinds of signals and circuit motifs that enable learning from experience, as well as an appreciation of the constraints that make learning challenging in a biological setting. Remarkably, some simple connectivity patterns can boost the efficiency of relatively crude learning rules, showing how the brain can use anatomy to compensate for the biological constraints of known synaptic plasticity mechanisms. Modern connectomics provides rich data for exploring this principle, and may reveal how brain connectivity is constrained by the requirement to learn efficiently.
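One concrete instance of wiring that boosts a crude learning rule, widely discussed in this literature, is feedback alignment: a fixed random feedback pathway stands in for the exact transposed weights that backpropagation would require, yet learning still proceeds. The sketch below is a generic toy demonstration under our own assumptions (task, layer sizes, learning rate), not an implementation from the review.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 30))
T = (X @ rng.standard_normal((30, 5)) > 0).astype(float)   # arbitrary targets

W1 = 0.1 * rng.standard_normal((30, 40))
W2 = 0.1 * rng.standard_normal((40, 5))
B = rng.standard_normal((5, 40))      # fixed random feedback "wiring" (never learned)

def mse():
    return np.mean((np.tanh(X @ W1) @ W2 - T) ** 2)

print("before training:", mse())
for _ in range(500):
    H = np.tanh(X @ W1)
    err = (H @ W2 - T) / len(X)
    W2 -= 0.5 * H.T @ err
    # Crude rule: route the error through fixed random wiring B instead of W2.T.
    W1 -= 0.5 * X.T @ ((err @ B) * (1.0 - H ** 2))
print("after training :", mse())
```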
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom.
12
Changeux JP, Goulas A, Hilgetag CC. A Connectomic Hypothesis for the Hominization of the Brain. Cereb Cortex 2021; 31:2425-2449. [PMID: 33367521 PMCID: PMC8023825 DOI: 10.1093/cercor/bhaa365]
Abstract
Cognitive abilities of the human brain, including language, have expanded dramatically in the course of our recent evolution from nonhuman primates, despite only minor apparent changes at the gene level. The hypothesis we propose for this paradox relies upon fundamental features of human brain connectivity, which contribute to a characteristic anatomical, functional, and computational neural phenotype, offering a parsimonious framework for connectomic changes taking place upon the human-specific evolution of the genome. Many human connectomic features might be accounted for by substantially increased brain size within the global neural architecture of the primate brain, resulting in a larger number of neurons and areas and the sparsification, increased modularity, and laminar differentiation of cortical connections. The combination of these features with the developmental expansion of upper cortical layers, prolonged postnatal brain development, and multiplied nongenetic interactions with the physical, social, and cultural environment gives rise to categorically human-specific cognitive abilities including the recursivity of language. Thus, a small set of genetic regulatory events affecting quantitative gene expression may plausibly account for the origins of human brain connectivity and cognition.
Affiliation(s)
- Jean-Pierre Changeux
- CNRS UMR 3571, Institut Pasteur, 75724 Paris, France
- Communications Cellulaires, Collège de France, 75005 Paris, France
- Alexandros Goulas
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, 20246 Hamburg, Germany
- Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, 20246 Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA 02115, USA
13
Sanders H, Wilson MA, Gershman SJ. Hippocampal remapping as hidden state inference. eLife 2020; 9:e51140. [PMID: 32515352 PMCID: PMC7282808 DOI: 10.7554/elife.51140]
Abstract
Cells in the hippocampus tuned to spatial location (place cells) typically change their tuning when an animal changes context, a phenomenon known as remapping. A fundamental challenge to understanding remapping is the fact that what counts as a "context change" has never been precisely defined. Furthermore, different remapping phenomena have been classified on the basis of how much the tuning changes after different types and degrees of context change, but the relationship between these variables is not clear. We address these ambiguities by formalizing remapping in terms of hidden state inference. According to this view, remapping does not directly reflect objective, observable properties of the environment, but rather subjective beliefs about the hidden state of the environment. We show how the hidden state framework can resolve a number of puzzles about the nature of remapping.
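The hidden-state view is easy to make concrete with a two-context Bayesian filter: the map in use is whichever context currently has the higher posterior probability, so remapping events are belief switches rather than direct read-outs of the environment. Everything in this sketch (cue statistics, switch rate, the two-context world) is an assumed toy generative model, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)
mu = np.array([-1.0, 1.0])     # mean sensory cue under each hypothetical context
p_stay = 0.99                  # contexts switch rarely

def likelihood(cue):
    return np.exp(-0.5 * (cue - mu) ** 2)   # Gaussian likelihood, unit variance

true_ctx, belief = 0, np.array([0.5, 0.5])
active_map, remap_events = 0, 0
for t in range(2000):
    if rng.random() > p_stay:                       # world occasionally switches
        true_ctx = 1 - true_ctx
    cue = mu[true_ctx] + rng.standard_normal()
    # Bayesian filter: predict (context may have switched), then update on the cue.
    belief = p_stay * belief + (1 - p_stay) * belief[::-1]
    belief = belief * likelihood(cue)
    belief /= belief.sum()
    new_map = int(belief[1] > 0.5)                  # remap when the belief flips
    remap_events += int(new_map != active_map)
    active_map = new_map

print("inferred remapping events:", remap_events)
```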
Affiliation(s)
- Honi Sanders
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Matthew A Wilson
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Samuel J Gershman
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Department of Psychology, Harvard University, Cambridge, United States
14
Herbet G, Duffau H. Revisiting the Functional Anatomy of the Human Brain: Toward a Meta-Networking Theory of Cerebral Functions. Physiol Rev 2020; 100:1181-1228. [PMID: 32078778 DOI: 10.1152/physrev.00033.2019]
Abstract
For more than a century, brain processing was mainly conceived within a localizationist framework, in which a given function was underpinned by a discrete, isolated cortical area, with a similar cerebral organization assumed across individuals. However, advances in human brain mapping techniques have provided new insights into the organizational principles of anatomo-functional architecture. Here, we review recent findings gained from neuroimaging, electrophysiological, and lesion studies. Based on these recent data on the brain connectome, we challenge the traditional, outdated localizationist view and propose an alternative meta-networking theory. This model holds that complex cognitions and behaviors arise from the spatiotemporal integration of distributed but relatively specialized networks underlying conation and cognition (e.g., language, spatial cognition). Dynamic interactions between such circuits result in a perpetual succession of new equilibrium states, opening the door to considerable interindividual behavioral variability and to neuroplastic phenomena. Indeed, a meta-networking organization underlies the uniquely human propensity to learn complex abilities, and also explains how postlesional reshaping can lead to some degree of functional compensation in brain-damaged patients. We discuss the major implications of this approach in fundamental neurosciences as well as for clinical developments, especially in neurology, psychiatry, neurorehabilitation, and restorative neurosurgery.
Affiliation(s)
- Guillaume Herbet
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Center, Montpellier, France; Team "Plasticity of Central Nervous System, Stem Cells and Glial Tumors," INSERM U1191, Institute of Functional Genomics, Montpellier, France; and University of Montpellier, Montpellier, France
- Hugues Duffau
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Center, Montpellier, France; Team "Plasticity of Central Nervous System, Stem Cells and Glial Tumors," INSERM U1191, Institute of Functional Genomics, Montpellier, France; and University of Montpellier, Montpellier, France
15
Maes A, Barahona M, Clopath C. Learning spatiotemporal signals using a recurrent spiking network that discretizes time. PLoS Comput Biol 2020; 16:e1007606. [PMID: 31961853 PMCID: PMC7028299 DOI: 10.1371/journal.pcbi.1007606]
Abstract
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neurons may be used to produce different sequential behaviours. How the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model in which a recurrent network of excitatory and inhibitory spiking neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
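The read-out stage can be caricatured in a few lines: assume the recurrent network already acts as a clock that activates successive neuron groups, and let the read-out weights grow through a supervised Hebbian-style rule driven by co-activity of the active clock group and the target. This rate-based sketch of ours compresses the paper's spiking model drastically; group sizes, learning rate, and the single read-out are assumptions.

```python
import numpy as np

n_groups, n_per = 20, 10                       # the "clock": 20 groups of 10 neurons
target = np.sin(2 * np.pi * np.arange(n_groups) / n_groups)  # pattern over one cycle

W = np.zeros(n_groups * n_per)                 # read-out weights
eta = 0.1
for epoch in range(300):
    for g in range(n_groups):                  # group g is active in time bin g
        sl = slice(g * n_per, (g + 1) * n_per)
        # Co-activity update with a presynaptically gated decay, so the active
        # group's weights settle at the target value instead of growing unbounded.
        W[sl] += eta * (target[g] - W[sl])

replay = np.array([W[g * n_per:(g + 1) * n_per].mean() for g in range(n_groups)])
print("max |replay - target|:", np.abs(replay - target).max())
```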
Affiliation(s)
- Amadeus Maes
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Mauricio Barahona
- Department of Mathematics, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
16
Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ, Hafner D, Kepecs A, Kriegeskorte N, Latham P, Lindsay GW, Miller KD, Naud R, Pack CC, Poirazi P, Roelfsema P, Sacramento J, Saxe A, Scellier B, Schapiro AC, Senn W, Wayne G, Yamins D, Zenke F, Zylberberg J, Therien D, Kording KP. A deep learning framework for neuroscience. Nat Neurosci 2019; 22:1761-1770. [PMID: 31659335 PMCID: PMC7115933 DOI: 10.1038/s41593-019-0520-2]
Abstract
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
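The three designed components map directly onto the pieces of any small learning system. Purely to illustrate the vocabulary (the regression task, layer sizes, and plain gradient descent are our own arbitrary choices), the sketch below labels which line plays which role.

```python
import numpy as np

rng = np.random.default_rng(9)

# Architecture: one hidden layer of 20 tanh units.
W1 = 0.1 * rng.standard_normal((10, 20))
W2 = 0.1 * rng.standard_normal((20, 1))

# Objective function: mean squared error on a toy regression task.
X = rng.standard_normal((200, 10))
y = np.sin(X @ rng.standard_normal((10, 1)))

def objective():
    return np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)

# Learning rule: plain gradient descent on the objective.
print("objective before:", objective())
for _ in range(1000):
    H = np.tanh(X @ W1)
    err = (H @ W2 - y) / len(X)
    W2 -= 0.2 * H.T @ err
    W1 -= 0.2 * X.T @ ((err @ W2.T) * (1.0 - H ** 2))
print("objective after :", objective())
```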
Affiliation(s)
- Blake A Richards
- Mila, Montréal, Quebec, Canada.
- School of Computer Science, McGill University, Montréal, Quebec, Canada.
- Department of Neurology & Neurosurgery, McGill University, Montréal, Quebec, Canada.
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada.
- Timothy P Lillicrap
- DeepMind, Inc., London, UK
- Centre for Computation, Mathematics and Physics in the Life Sciences and Experimental Biology, University College London, London, UK
- Yoshua Bengio
- Mila, Montréal, Quebec, Canada
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Université de Montréal, Montréal, Quebec, Canada
- Rafal Bogacz
- MRC Brain Network Dynamics Unit, University of Oxford, Oxford, UK
- Amelia Christensen
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
- Rui Ponte Costa
- Computational Neuroscience Unit, School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, UK
- Department of Physiology, Universität Bern, Bern, Switzerland
- Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Google Brain, Mountain View, CA, USA
- Colleen J Gillon
- Department of Biological Sciences, University of Toronto Scarborough, Toronto, Ontario, Canada
- Department of Cell & Systems Biology, University of Toronto, Toronto, Ontario, Canada
- Danijar Hafner
- Google Brain, Mountain View, CA, USA
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Adam Kepecs
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Nikolaus Kriegeskorte
- Department of Psychology and Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Peter Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Grace W Lindsay
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kenneth D Miller
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Department of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Richard Naud
- University of Ottawa Brain and Mind Institute, Ottawa, Ontario, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Christopher C Pack
- Department of Neurology & Neurosurgery, McGill University, Montréal, Quebec, Canada
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
- Pieter Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- João Sacramento
- Institute of Neuroinformatics, ETH Zürich and University of Zürich, Zürich, Switzerland
- Andrew Saxe
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Benjamin Scellier
- Mila, Montréal, Quebec, Canada
- Université de Montréal, Montréal, Quebec, Canada
- Anna C Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Walter Senn
- Department of Physiology, Universität Bern, Bern, Switzerland
- Daniel Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Joel Zylberberg
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Department of Physics and Astronomy York University, Toronto, Ontario, Canada
- Center for Vision Research, York University, Toronto, Ontario, Canada
- Konrad P Kording
- Canadian Institute for Advanced Research, Toronto, Ontario, Canada
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
17
Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. [PMID: 31569062 PMCID: PMC7385530 DOI: 10.1016/j.conb.2019.08.005]
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom.