1
Dorian CC, Taxidis J, Golshani P. Non-spatial hippocampal behavioral timescale synaptic plasticity during working memory is gated by entorhinal inputs. bioRxiv 2024:2024.08.27.609983. [PMID: 39253411 PMCID: PMC11383060 DOI: 10.1101/2024.08.27.609983]
Abstract
Behavioral timescale synaptic plasticity (BTSP) is a form of synaptic potentiation in which the occurrence of a single large plateau potential in CA1 hippocampal neurons leads to the formation of reliable place fields during spatial learning tasks. We asked whether BTSP could also be a plasticity mechanism for the generation of non-spatial responses in the hippocampus, and what roles the medial and lateral entorhinal cortex (MEC and LEC) play in driving non-spatial BTSP. By performing simultaneous calcium imaging of dorsal CA1 neurons and chemogenetic inhibition of LEC or MEC while mice performed an olfactory working memory task, we discovered BTSP-like events which formed stable odor-specific fields. Critically, the success rate of calcium events generating a significant odor-field increased with event amplitude, and large events exhibited asymmetrical formation, with the newly formed odor-fields preceding the timepoint of their induction event. We found that MEC and LEC play distinct roles in modulating BTSP: MEC inhibition reduced the frequency of large calcium events, while LEC inhibition reduced the success rate of odor-field generation. Using two-photon calcium imaging of LEC and MEC temporoammonic axons projecting to CA1, we found that LEC projections to CA1 were strongly odor-selective even early in task learning, while MEC projection odor-selectivity increased with task learning but remained weaker than that of LEC. Finally, we found that LEC and MEC inhibition both slowed representational drift of odor representations in CA1 across 48 hours. Altogether, odor-specific information from LEC and strong odor-timed activity from MEC are crucial for driving BTSP in CA1, a synaptic plasticity mechanism for the generation of both spatial and non-spatial responses in the hippocampus that may help explain representational drift and one-shot learning of non-spatial information.
Affiliation(s)
- Conor C Dorian
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Jiannis Taxidis
- Program in Neurosciences and Mental Health, Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Physiology, University of Toronto, Toronto, Ontario, Canada
- Peyman Golshani
- Department of Neurology, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Greater Los Angeles Veteran Affairs Medical Center, Los Angeles, CA, USA
- Intellectual and Developmental Disabilities Research Center, University of California Los Angeles, Los Angeles, CA, USA
- Semel Institute for Neuroscience and Human Behavior, University of California, Los Angeles, CA, USA
- Integrative Center for Learning and Memory, University of California, Los Angeles, CA, USA
2
Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. [PMID: 38776950 DOI: 10.1162/neco_a_01671]
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
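For readers who want a concrete anchor for the REINFORCE prototype discussed in this abstract, the sketch below implements the rule for a single Bernoulli-logistic neuron on a toy reward task. The task, learning rate, and running-average baseline are illustrative assumptions, not details taken from the paper:

```python
# Minimal REINFORCE sketch: a Bernoulli-logistic neuron adjusts its weights
# along the score function grad log pi(a|x), scaled by (reward - baseline).
# Toy task and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)               # weights for two inputs plus a bias term
eta, b = 0.2, 0.0             # learning rate and running reward baseline

for trial in range(5000):
    x = np.append(rng.integers(0, 2, size=2), 1.0)  # binary inputs + bias
    p = 1.0 / (1.0 + np.exp(-w @ x))                # P(spike)
    a = float(rng.random() < p)                     # stochastic action
    r = 1.0 if a == x[0] else 0.0                   # reward: fire iff x[0] is on
    w += eta * (r - b) * (a - p) * x                # REINFORCE update
    b += 0.05 * (r - b)                             # baseline tracks mean reward

for x0 in (0.0, 1.0):
    x = np.array([x0, 0.0, 1.0])
    print(f"x0={x0:.0f}: P(spike) = {1 / (1 + np.exp(-w @ x)):.2f}")
```

After training, the firing probability should approach 1 when the rewarded input is on and 0 otherwise, which is the behavioral signature of the rule.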
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
3
Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife 2024; 12:RP90069. [PMID: 38695551 PMCID: PMC11065423 DOI: 10.7554/elife.90069]
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
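The protocol described here (fast initial fitting followed by long, noisy training while tracking activity sparsity) can be sketched as follows. The regression task, network size, batch size, and step counts are invented for illustration, and how strongly the active fraction decays will depend on these choices:

```python
# Sketch of "fast learning, slow sparsification": keep training a small ReLU
# network with noisy small-batch updates long after the loss has plateaued,
# and track the average fraction of active hidden units.
import numpy as np

rng = np.random.default_rng(1)
n_hid = 64
W1 = rng.normal(0, 1.0, (n_hid, 1)); b1 = np.zeros(n_hid)
w2 = rng.normal(0, 0.1, n_hid);      b2 = 0.0
X = rng.uniform(-1, 1, (256, 1)); Y = np.sin(np.pi * X[:, 0])

def stats():
    H = np.maximum(W1 @ X.T + b1[:, None], 0.0)
    return np.mean((w2 @ H + b2 - Y) ** 2), np.mean(H > 0)

eta = 0.05
for step in range(100001):
    i = rng.integers(0, len(X), 8)                  # tiny batches = noisy updates
    h = np.maximum(W1 @ X[i].T + b1[:, None], 0.0)
    err = w2 @ h + b2 - Y[i]
    dh = (w2[:, None] * err) * (h > 0)              # backprop through ReLU
    w2 -= eta * h @ err / 8;   b2 -= eta * err.mean()
    W1 -= eta * dh @ X[i] / 8; b1 -= eta * dh.mean(axis=1)
    if step % 25000 == 0:
        loss, frac = stats()
        print(f"step {step:>6}: loss={loss:.4f}, active fraction={frac:.2f}")
```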
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
4
Stern M, Liu AJ, Balasubramanian V. Physical effects of learning. Phys Rev E 2024; 109:024311. [PMID: 38491658 DOI: 10.1103/physreve.109.024311]
Abstract
Interacting many-body physical systems ranging from neural networks in the brain to folding proteins to self-modifying electrical circuits can learn to perform diverse tasks. This learning, both in nature and in engineered systems, can occur through evolutionary selection or through dynamical rules that drive active learning from experience. Here, we show that learning in linear physical networks with weak input signals leaves architectural imprints on the Hessian of a physical system. Compared to a generic organization of the system components, (a) the effective physical dimension of the response to inputs decreases, (b) the response of physical degrees of freedom to random perturbations (or system "susceptibility") increases, and (c) the low-eigenvalue eigenvectors of the Hessian align with the task. Overall, these effects embody the typical scenario for learning processes in physical systems in the weak input regime, suggesting ways of discovering whether a physical network may have been trained.
Affiliation(s)
- Menachem Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Andrea J Liu
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Center for Computational Biology, Flatiron Institute, Simons Foundation, New York, New York 10010, USA
- Vijay Balasubramanian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA
- Theoretische Natuurkunde, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium
5
Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. [PMID: 36462345 DOI: 10.1016/j.plrev.2022.11.003]
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
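One of the best-known kinematic invariants in this literature is the two-thirds power law, which relates the angular velocity of the hand to the curvature of its path (ω ∝ κ^(2/3)). The sketch below checks it on a harmonically traced ellipse, a standard textbook trajectory rather than data from this review:

```python
# Two-thirds power law check: for x = A cos(t), y = B sin(t), angular velocity
# omega and curvature kappa satisfy log(omega) = (2/3) log(kappa) + const.
import numpy as np

t = np.linspace(0.01, 2 * np.pi, 2000)
A, B = 2.0, 1.0
dx, dy = -A * np.sin(t), B * np.cos(t)          # velocity
ddx, ddy = -A * np.cos(t), -B * np.sin(t)       # acceleration
speed = np.hypot(dx, dy)
kappa = np.abs(dx * ddy - dy * ddx) / speed**3  # path curvature
omega = speed * kappa                           # angular velocity of the tangent

slope = np.polyfit(np.log(kappa), np.log(omega), 1)[0]
print(f"fitted exponent: {slope:.4f} (two-thirds power law predicts 0.6667)")
```

For this trajectory the relation holds exactly, which is why elliptic drawing movements are the classic demonstration of the law.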
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy.
6
Qin S, Farashahi S, Lipshutz D, Sengupta AM, Chklovskii DB, Pehlevan C. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 2023; 26:339-349. [PMID: 36635497 DOI: 10.1038/s41593-022-01225-z]
Abstract
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and noisy synaptic updates drive the network to explore this (near-)optimal space causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on various parameters such as learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
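The qualitative mechanism (noisy synaptic updates diffusing a learned solution along a degenerate optimum while the computation is preserved) can be seen in an even simpler setting than the paper's similarity-matching networks: a single Oja neuron on isotropic inputs, where every unit-norm weight vector is equally optimal. This toy stand-in is an assumption of the sketch, not the authors' model:

```python
# Noisy Hebbian learning on a degenerate objective: with isotropic 2D inputs,
# Oja's rule keeps |w| near 1 (function preserved) while the *direction* of w
# performs a random walk, a minimal picture of drifting receptive fields.
import numpy as np

rng = np.random.default_rng(2)
w = np.array([1.0, 0.0])
eta = 0.02
angles = []
for step in range(200000):
    x = rng.normal(0, 1, 2)          # isotropic input: any direction is optimal
    y = w @ x
    w += eta * y * (x - y * w)       # Oja's rule (stochastic, hence noisy)
    if step % 1000 == 0:
        angles.append(np.arctan2(w[1], w[0]))

angles = np.unwrap(np.array(angles))
print(f"final |w| = {np.linalg.norm(w):.3f} (stable near 1)")
print(f"total angular drift = {angles[-1] - angles[0]:+.2f} rad")
# The variance of the angular displacement grows roughly linearly with time,
# i.e. the receptive-field direction diffuses like a random walk.
```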
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Shiva Farashahi
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ, USA
- Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- NYU Langone Medical Center, New York, NY, USA
- Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA.
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
7
Kasai H. Unraveling the mysteries of dendritic spine dynamics: Five key principles shaping memory and cognition. Proc Jpn Acad Ser B Phys Biol Sci 2023; 99:254-305. [PMID: 37821392 PMCID: PMC10749395 DOI: 10.2183/pjab.99.018]
Abstract
Recent research extends our understanding of brain processes beyond just action potentials and chemical transmissions within neural circuits, emphasizing the mechanical forces generated by excitatory synapses on dendritic spines to modulate presynaptic function. From in vivo and in vitro studies, we outline five central principles of synaptic mechanics in brain function: P1: Stability - Underpinning the integral relationship between the structure and function of the spine synapses. P2: Extrinsic dynamics - Highlighting synapse-selective structural plasticity which plays a crucial role in Hebbian associative learning, distinct from pathway-selective long-term potentiation (LTP) and depression (LTD). P3: Neuromodulation - Analyzing the role of G-protein-coupled receptors, particularly dopamine receptors, in time-sensitive modulation of associative learning frameworks such as Pavlovian classical conditioning and Thorndike's reinforcement learning (RL). P4: Instability - Addressing the intrinsic dynamics crucial to memory management during continual learning, spotlighting their role in "spine dysgenesis" associated with mental disorders. P5: Mechanics - Exploring how synaptic mechanics influence both sides of synapses to establish structural traces of short- and long-term memory, thereby aiding the integration of mental functions. We also delve into the historical background and foresee impending challenges.
Affiliation(s)
- Haruo Kasai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
8
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435 PMCID: PMC10151026 DOI: 10.1016/j.visres.2022.108131]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems Computational Biology, and Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
9
Chambers AR, Aschauer DF, Eppler JB, Kaschube M, Rumpel S. A stable sensory map emerges from a dynamic equilibrium of neurons with unstable tuning properties. Cereb Cortex 2022; 33:5597-5612. [PMID: 36418925 PMCID: PMC10152095 DOI: 10.1093/cercor/bhac445]
Abstract
Recent long-term measurements of neuronal activity have revealed that, despite stability in large-scale topographic maps, the tuning properties of individual cortical neurons can undergo substantial reformatting over days. To shed light on this apparent contradiction, we captured the sound response dynamics of auditory cortical neurons using repeated 2-photon calcium imaging in awake mice. We measured sound-evoked responses to a set of pure tone and complex sound stimuli in more than 20,000 auditory cortex neurons over several days. We found that a substantial fraction of neurons dropped in and out of the population response. We modeled these dynamics as a simple discrete-time Markov chain, capturing the continuous changes in responsiveness observed during stable behavioral and environmental conditions. Although only a minority of neurons were driven by the sound stimuli at a given time point, the model predicts that most cells would at least transiently become responsive within 100 days. We observe that, despite single-neuron volatility, the population-level representation of sound frequency was stably maintained, demonstrating the dynamic equilibrium underlying the tonotopic map. Our results show that sensory maps are maintained by shifting subpopulations of neurons “sharing” the job of creating a sensory representation.
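A miniature version of such a discrete-time Markov chain of responsiveness is easy to write down. The daily on/off transition probabilities below are invented placeholders (the study fits its own rates); they are chosen so that only about 10% of cells respond on any given day, yet nearly all cells respond at least once within 100 days:

```python
# Two-state Markov chain of responsiveness: with a small daily on-rate and a
# larger off-rate, few cells respond at any one time, yet almost every cell
# becomes responsive at least once over 100 days. Rates are made-up examples.
import numpy as np

p_on, p_off, days = 0.03, 0.30, 100
stationary = p_on / (p_on + p_off)                 # responsive fraction per day
print(f"responsive on a given day: {stationary:.1%}")

# analytic: P(never responsive in `days`) starting from the stationary state
p_never = (1 - stationary) * (1 - p_on) ** days
print(f"responsive at least once in {days} days: {1 - p_never:.1%}")

# Monte Carlo check
rng = np.random.default_rng(3)
n = 20000
state = rng.random(n) < stationary
ever = state.copy()
for _ in range(days):
    flip_on = rng.random(n) < p_on
    flip_off = rng.random(n) < p_off
    state = np.where(state, ~flip_off, flip_on)    # on stays unless it flips off
    ever |= state
print(f"Monte Carlo estimate: {ever.mean():.1%}")
```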
Affiliation(s)
- Anna R Chambers
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, Mainz 55128, Germany
- Dominik F Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, Mainz 55128, Germany
- Jens-Bastian Eppler
- Frankfurt Institute for Advanced Studies and Department of Computer Science, Goethe University Frankfurt, Ruth-Moufang-Straße 1, Frankfurt am Main 60438, Germany
- Matthias Kaschube
- Frankfurt Institute for Advanced Studies and Department of Computer Science, Goethe University Frankfurt, Ruth-Moufang-Straße 1, Frankfurt am Main 60438, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University Mainz, Duesbergweg 6, Mainz 55128, Germany
10
Skatchkovsky N, Jang H, Simeone O. Bayesian continual learning via spiking neural networks. Front Comput Neurosci 2022; 16:1037976. [PMID: 36465962 PMCID: PMC9708898 DOI: 10.3389/fncom.2022.1037976]
Abstract
Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
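The bookkeeping at the heart of this scheme, each synapse carrying a mean and a variance that are updated as data stream in, can be illustrated with a conjugate Gaussian toy model. This shows the general Bayesian streaming principle only; the observation model and prior below are assumptions, not the paper's spiking rule:

```python
# Streaming Bayesian update of a single synaptic weight w under a toy linear-
# Gaussian model y = w*x + noise. The synapse stores (mu, lam): posterior mean
# and precision. Epistemic uncertainty (1/lam) shrinks as evidence accumulates.
import numpy as np

rng = np.random.default_rng(4)
w_true, noise_sd = 0.8, 0.5
mu, lam = 0.0, 1.0                                   # prior N(0, 1)

for n in range(1, 201):
    x = rng.normal()
    y = w_true * x + noise_sd * rng.normal()
    lam_new = lam + x**2 / noise_sd**2               # precision accumulates
    mu = (lam * mu + x * y / noise_sd**2) / lam_new  # precision-weighted mean
    lam = lam_new
    if n in (1, 10, 50, 200):
        print(f"n={n:>3}: mu={mu:+.3f}, sd={lam**-0.5:.3f}")
print(f"true weight: {w_true:+.3f}")
```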
Affiliation(s)
- Nicolas Skatchkovsky
- King's Communication, Learning and Information Processing (KCLIP) Lab, Department of Engineering, King's College London, London, United Kingdom
- Hyeryung Jang
- Department of Artificial Intelligence, Dongguk University, Seoul, South Korea
- Osvaldo Simeone
- King's Communication, Learning and Information Processing (KCLIP) Lab, Department of Engineering, King's College London, London, United Kingdom
11
Aitken K, Garrett M, Olsen S, Mihalas S. The geometry of representational drift in natural and artificial neural networks. PLoS Comput Biol 2022; 18:e1010716. [PMID: 36441762 PMCID: PMC9731438 DOI: 10.1371/journal.pcbi.1010716]
Abstract
Neurons in sensory areas encode/represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli, as well as for stimuli which are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks where representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes/weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by an underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
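A minimal caricature of the proposed mechanism is to keep training a small network with dropout on a fixed task and watch per-unit tuning decorrelate while readout accuracy holds. The task, architecture, and rates below are assumptions, and the speed of the decorrelation depends on them:

```python
# Caricature of dropout-driven drift: keep training a small MLP with dropout
# on a fixed binary task; hidden-unit tuning keeps reorganizing (correlation
# with the initial snapshot falls) while task accuracy stays high.
import numpy as np

rng = np.random.default_rng(5)
n_hid, p_drop, eta = 50, 0.5, 0.1
X = rng.normal(0, 1, (400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)           # fixed, separable labels
W1 = rng.normal(0, 1, (n_hid, 2)); b1 = np.zeros(n_hid)
w2 = rng.normal(0, 0.1, n_hid);    b2 = 0.0

def hidden(X):                                      # deterministic test pass
    return np.maximum(W1 @ X.T + b1[:, None], 0.0)

H0 = hidden(X)                                      # initial tuning snapshot
for step in range(1, 20001):
    i = rng.integers(0, len(X), 32)
    h = np.maximum(W1 @ X[i].T + b1[:, None], 0.0)
    mask = (rng.random(h.shape) > p_drop) / (1 - p_drop)  # inverted dropout
    hd = h * mask
    p = 1 / (1 + np.exp(-(w2 @ hd + b2)))
    err = p - y[i]                                  # dLoss/dlogit
    dh = (w2[:, None] * err) * mask * (h > 0)       # backprop: dropout + ReLU
    w2 -= eta * hd @ err / 32;   b2 -= eta * err.mean()
    W1 -= eta * dh @ X[i] / 32;  b1 -= eta * dh.mean(axis=1)
    if step % 5000 == 0:
        H = hidden(X)
        acc = np.mean(((w2 @ H + b2) > 0) == y.astype(bool))
        corr = np.mean([np.corrcoef(H[j], H0[j])[0, 1]
                        for j in range(n_hid) if H[j].std() * H0[j].std() > 0])
        print(f"step {step:>5}: accuracy={acc:.2f}, tuning corr to start={corr:.2f}")
```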
Affiliation(s)
- Kyle Aitken
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Marina Garrett
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Shawn Olsen
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Stefan Mihalas
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
12
Patel S, Johnson K, Adank D, Rosas-Vidal LE. Longitudinal monitoring of prefrontal cortical ensemble dynamics reveals new insights into stress habituation. Neurobiol Stress 2022; 20:100481. [PMID: 36160815 PMCID: PMC9489534 DOI: 10.1016/j.ynstr.2022.100481]
Abstract
The prefrontal cortex is highly susceptible to the detrimental effects of stress and has been implicated in the pathogenesis of stress-related psychiatric disorders. It is not well understood, however, how stress is represented at the neuronal level in prefrontal cortical neuronal ensembles. Even less understood is how the representation of stress changes over time with repeated exposure. Here we show that the prelimbic prefrontal neuronal ensemble representation of foot shock stress exhibits rapid spatial drift within and between sessions. Despite this rapid spatial drift of the ensemble, the representation of the stressor itself stabilizes over days. Our results suggest that stress is represented by rapidly drifting ensembles and that, despite this rapid drift, important features of the neuronal representation are stabilized, suggesting that a neural correlate of stress habituation is present within prefrontal cortical neuron populations.
Affiliation(s)
- Sachin Patel
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Keenan Johnson
- Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Danielle Adank
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Interdisciplinary Program in Neuroscience, Vanderbilt University, Nashville, TN, USA
- Luis E. Rosas-Vidal
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
13
Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. [PMID: 35939861 DOI: 10.1016/j.conb.2022.102609]
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Lea Duncker
- Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA.
14
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA.
- Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
15
Wycoff JF, Dillavou S, Stern M, Liu AJ, Durian DJ. Desynchronous learning in a physics-driven learning network. J Chem Phys 2022; 156:144903. [DOI: 10.1063/5.0084631]
Abstract
In a neuron network, synapses update individually using local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network are typically updated simultaneously using a central processor. Here, we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade the performance for a variety of tasks in an idealized simulation. In experiment, desynchronization actually improves the performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
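The analogy to mini-batching can be made concrete on ordinary least squares, with plain gradient descent standing in for the physics-driven dynamics (so this illustrates the analogy, not the coupled-learning hardware itself):

```python
# Synchronous vs. desynchronous updates on least squares: updating all weights
# from the full gradient versus updating a random subset of weights at a time
# from a random mini-batch. Both converge; the desynchronous path is noisier,
# much like mini-batch SGD. Problem sizes and rates are arbitrary choices.
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(0, 1, (200, 20))
w_true = rng.normal(0, 1, 20)
b = A @ w_true + 0.1 * rng.normal(0, 1, 200)
loss = lambda w: np.mean((A @ w - b) ** 2)

w_sync, w_async, eta = np.zeros(20), np.zeros(20), 0.05
for step in range(2000):
    g = 2 * A.T @ (A @ w_sync - b) / len(b)        # full gradient, all weights
    w_sync -= eta * g

    i = rng.integers(0, len(b), 16)                # mini-batch of samples
    j = rng.integers(0, 20, 5)                     # random subset of "edges"
    g = 2 * A[i].T @ (A[i] @ w_async - b[i]) / len(i)
    w_async[j] -= eta * g[j]                       # only those weights update

print(f"synchronous   final loss: {loss(w_sync):.4f}")
print(f"desynchronous final loss: {loss(w_async):.4f}")
print(f"noise floor (true weights): {loss(w_true):.4f}")
```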
Affiliation(s)
- J. F. Wycoff
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- S. Dillavou
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- M. Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- A. J. Liu
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- D. J. Durian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
16
Learning-induced biases in the ongoing dynamics of sensory representations predict stimulus generalization. Cell Rep 2022; 38:110340. [PMID: 35139386 DOI: 10.1016/j.celrep.2022.110340]
Abstract
Sensory stimuli have long been thought to be represented in the brain as activity patterns of specific neuronal assemblies. However, we still know relatively little about the long-term dynamics of sensory representations. Using chronic in vivo calcium imaging in the mouse auditory cortex, we find that sensory representations undergo continuous recombination, even under behaviorally stable conditions. Auditory cued fear conditioning introduces a bias into these ongoing dynamics, resulting in a long-lasting increase in the number of stimuli activating the same subset of neurons. This plasticity is specific for stimuli sharing representational similarity to the conditioned sound prior to conditioning and predicts behaviorally observed stimulus generalization. Our findings demonstrate that learning-induced plasticity leading to a representational linkage between the conditioned stimulus and non-conditioned stimuli weaves into ongoing dynamics of the brain rather than acting on an otherwise static substrate.
17
Masset P, Zavatone-Veth JA, Connor JP, Murthy VN, Pehlevan C. Natural gradient enables fast sampling in spiking neural networks. Adv Neural Inf Process Syst 2022; 35:22018-22034. [PMID: 37476623 PMCID: PMC10358281]
Abstract
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically-grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically-plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers-efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling-can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly-correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
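The role of population geometry can be isolated from the spiking machinery: Langevin dynamics on a strongly correlated Gaussian mixes slowly because the stiff direction caps the step size, while preconditioning with the covariance (a natural-gradient-like change of geometry) equalizes relaxation rates. The target distribution and step sizes below are assumptions:

```python
# Plain vs. covariance-preconditioned Langevin sampling of a correlated 2D
# Gaussian N(0, S). Preconditioning with S turns the drift into -x, so every
# direction relaxes at the same rate and much larger steps remain stable.
import numpy as np

rng = np.random.default_rng(7)
rho = 0.99
S = np.array([[1.0, rho], [rho, 1.0]])          # strongly correlated target
S_inv = np.linalg.inv(S)
L = np.linalg.cholesky(S)
u_slow = np.array([1.0, 1.0]) / np.sqrt(2)      # high-variance, slow direction

def run(precondition, dt, n=50000):
    x = np.zeros(2)
    trace = np.empty(n)
    for k in range(n):
        xi = rng.normal(0, 1, 2)
        if precondition:                        # drift -S S^-1 x = -x; noise ~ S
            x += -x * dt + np.sqrt(2 * dt) * (L @ xi)
        else:                                   # plain Langevin on the log-density
            x += -(S_inv @ x) * dt + np.sqrt(2 * dt) * xi
        trace[k] = u_slow @ x
    return trace

def tau(trace):                                 # integrated autocorrelation time
    z = (trace - trace.mean()) / trace.std()
    total = 1.0
    for lag in range(1, 2000):
        c = np.mean(z[:-lag] * z[lag:])
        if c < 0.02:
            break
        total += 2 * c
    return total

# each sampler runs near its own stability limit: the plain sampler's step is
# capped by the stiff (low-variance) direction, the preconditioned one is not
print(f"plain Langevin:          tau ~ {tau(run(False, dt=0.01)):6.0f} steps")
print(f"preconditioned Langevin: tau ~ {tau(run(True, dt=0.10)):6.0f} steps")
```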
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Physics, Harvard University, Cambridge, MA 02138
- J Patrick Connor
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
- Venkatesh N Murthy
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138
- Cengiz Pehlevan
- Center for Brain Science, Harvard University, Cambridge, MA 02138
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138
18
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. [PMID: 34709176 PMCID: PMC8553337 DOI: 10.7554/elife.66273]
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.

Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
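A heavily stripped-down version of this search can be run over a small parameterized rule family instead of evolved symbolic expressions. In the supervised setting sketched below, the evolution typically rediscovers the delta rule (the error-times-input coefficient dominates), echoing the paper's recovery of gradient-descent-like rules; the rule family, task, and evolution-strategy settings are all illustrative assumptions:

```python
# Evolve a plasticity rule dw = eta*(a*err*x + b*x + c*err + d), scored by how
# well a linear neuron learns random teacher tasks. A simple truncation-based
# evolution strategy typically drives b, c, d toward 0 and keeps a*err*x.
import numpy as np

rng = np.random.default_rng(8)

def fitness(coeffs, n_tasks=5, n_steps=100, eta=0.1):
    a, b, c, d = coeffs
    total = 0.0
    for _ in range(n_tasks):
        w_true = rng.normal(0, 1, 5)              # random teacher
        w = np.zeros(5)
        for _ in range(n_steps):
            x = rng.normal(0, 1, 5)
            err = (w_true - w) @ x                # target signal
            w += eta * (a * err * x + b * x + c * err + d)
        total -= np.sum((w - w_true) ** 2)        # higher is better
    return total / n_tasks

pop = rng.normal(0, 0.5, (40, 4))
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[-8:]]          # keep the 8 best rules
    children = elite[rng.integers(0, 8, 32)] + rng.normal(0, 0.1, (32, 4))
    pop = np.concatenate([elite, children])       # mutate to refill population

scores = np.array([fitness(ind) for ind in pop])
best = pop[np.argmax(scores)]
print("best rule coefficients [a, b, c, d]:", np.round(best, 2))
# the a (error-times-input) term should dominate: the delta rule in miniature
```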
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan
- RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
19
|
Schug S, Benzing F, Steger A. Presynaptic stochasticity improves energy efficiency and helps alleviate the stability-plasticity dilemma. eLife 2021; 10:e69884. [PMID: 34661525 PMCID: PMC8716105 DOI: 10.7554/elife.69884]
Abstract
When an action potential arrives at a synapse there is a large probability that no neurotransmitter is released. Surprisingly, simple computational models suggest that these synaptic failures enable information processing at lower metabolic costs. However, these models only consider information transmission at single synapses ignoring the remainder of the neural network as well as its overall computational goal. Here, we investigate how synaptic failures affect the energy efficiency of models of entire neural networks that solve a goal-driven task. We find that presynaptic stochasticity and plasticity improve energy efficiency and show that the network allocates most energy to a sparse subset of important synapses. We demonstrate that stabilising these synapses helps to alleviate the stability-plasticity dilemma, thus connecting a presynaptic notion of importance to a computational role in lifelong learning. Overall, our findings present a set of hypotheses for how presynaptic plasticity and stochasticity contribute to sparsity, energy efficiency and improved trade-offs in the stability-plasticity dilemma.
Affiliation(s)
- Simon Schug
- Institute of Neuroinformatics, University of Zurich & ETH Zurich, Zurich, Switzerland
20
|
Acharya J, Basu A, Legenstein R, Limbacher T, Poirazi P, Wu X. Dendritic Computing: Branching Deeper into Machine Learning. Neuroscience 2021; 489:275-289. [PMID: 34656706 DOI: 10.1016/j.neuroscience.2021.10.001]
Abstract
In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the type of dendritic nonlinearities, respective plasticity rules and their effect on biological learning as assessed by computational models. Four major computational implications are identified as improved expressivity, more efficient use of resources, utilizing internal learning signals, and enabling continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems with performance reported on well known data sets used in machine learning. The works are categorized according to the three primary methods of plasticity used-structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.
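The expressivity point can be made with the textbook two-layer neuron abstraction: nonlinear dendritic subunits feeding a somatic threshold let a single "neuron" compute XOR, which no single weighted sum with a threshold can. The weights below are hand-picked for illustration:

```python
# A point neuron (one weighted sum + threshold) cannot compute XOR, but a
# neuron with two nonlinear dendritic branches can: one branch detects "any
# input" (OR), the other "both inputs" (AND), and the soma subtracts them.
step = lambda u: 1.0 if u > 0 else 0.0

def dendritic_neuron(x1, x2):
    branch_or = step(x1 + x2 - 0.5)       # dendritic subunit 1: OR
    branch_and = step(x1 + x2 - 1.5)      # dendritic subunit 2: AND
    return step(branch_or - 2.0 * branch_and - 0.5)   # soma: OR and not AND

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"XOR({x1},{x2}) = {dendritic_neuron(x1, x2):.0f}")
```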
Affiliation(s)
- Arindam Basu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
- Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Thomas Limbacher
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Greece
- Xundong Wu
- School of Computer Science, Hangzhou Dianzi University, China
21
Manos T, Diaz-Pier S, Tass PA. Long-Term Desynchronization by Coordinated Reset Stimulation in a Neural Network Model With Synaptic and Structural Plasticity. Front Physiol 2021; 12:716556. [PMID: 34566681 PMCID: PMC8455881 DOI: 10.3389/fphys.2021.716556]
Abstract
Several brain disorders are characterized by abnormal neuronal synchronization. To specifically counteract abnormal neuronal synchrony and, hence, related symptoms, coordinated reset (CR) stimulation was computationally developed. In principle, successive epochs of synchronizing and desynchronizing stimulation may reversibly move neural networks with plastic synapses back and forth between stable regimes with synchronized and desynchronized firing. Computationally derived predictions have been verified in pre-clinical and clinical studies, paving the way for novel therapies. However, as yet, computational models were not able to reproduce the clinically observed increase of desynchronizing effects of regularly administered CR stimulation intermingled by long stimulation-free epochs. We show that this clinically important phenomenon can be computationally reproduced by taking into account structural plasticity (SP), a mechanism that deletes or generates synapses in order to homeostatically adapt the firing rates of neurons to a set point-like target firing rate in the course of days to months. If we assume that CR stimulation favorably reduces the target firing rate of SP, the desynchronizing effects of CR stimulation increase after long stimulation-free epochs, in accordance with clinically observed phenomena. Our study highlights the pivotal role of stimulation- and dosing-induced modulation of homeostatic set points in therapeutic processes.
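The desynchronizing action of CR itself can be sketched with a Kuramoto network: strong coupling synchronizes the population, and periodically resetting four subpopulations to staggered phases breaks the synchrony. The parameters and the idealized hard phase reset are assumptions, and the sketch deliberately omits the synaptic and structural plasticity that make the effect long-lasting in the paper:

```python
# Kuramoto population under coordinated reset (CR): coupling first synchronizes
# the oscillators (order parameter R near 1); CR then repeatedly resets four
# subgroups to staggered phases, holding the population desynchronized.
import numpy as np

rng = np.random.default_rng(9)
N, K, dt = 200, 2.0, 0.01
omega = rng.normal(0.0, 0.3, N)                 # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)
groups = np.array_split(np.arange(N), 4)        # four stimulation subpopulations

def order_parameter(th):
    return np.abs(np.exp(1j * th).mean())

def simulate(steps, cr_on):
    global theta
    for k in range(steps):
        z = np.exp(1j * theta).mean()           # mean field
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        if cr_on and k % 400 == 0:              # idealized CR: staggered resets
            for g_idx, g in enumerate(groups):
                theta[g] = g_idx * np.pi / 2 + rng.normal(0, 0.1, len(g))
    return order_parameter(theta)

print(f"after free running (synchronized): R = {simulate(5000, cr_on=False):.2f}")
print(f"during CR stimulation:             R = {simulate(5000, cr_on=True):.2f}")
```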
Affiliation(s)
- Thanos Manos
- Institute of Neuroscience and Medicine, Brain and Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Medical Faculty, Institute of Systems Neuroscience, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, CY Cergy Paris Université, Cergy-Pontoise Cedex, France
- Sandra Diaz-Pier
- Simulation & Data Lab Neuroscience, Institute for Advanced Simulation, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich GmbH, JARA, Jülich, Germany
- Peter A Tass
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA, United States
22
Computational roles of intrinsic synaptic dynamics. Curr Opin Neurobiol 2021; 70:34-42. [PMID: 34303124 DOI: 10.1016/j.conb.2021.06.002]
Abstract
Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, or their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational work on the roles of these intrinsic synaptic dynamics. We first present the possibility for neuronal networks to sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics could be more than mere noise to be withstood: they may actively improve information processing in the brain.
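A common minimal model of such activity-independent volatility is a Kesten process, in which each spine size undergoes multiplicative and additive noise: individual sizes wander and decorrelate, yet the population settles into a stable right-skewed distribution. The parameters below are illustrative assumptions:

```python
# Kesten-process sketch of intrinsic spine dynamics: w <- a*w + b with random
# multiplicative (a) and additive (b) noise. Single spines decorrelate over
# time, while the population histogram converges to a stable skewed shape.
import numpy as np

rng = np.random.default_rng(10)
n_spines, n_days = 10000, 500
w = np.ones(n_spines)
snap = {}
for day in range(1, n_days + 1):
    a = rng.normal(0.95, 0.05, n_spines)      # multiplicative fluctuation
    b = rng.exponential(0.05, n_spines)       # small additive growth
    w = np.maximum(a * w + b, 0.0)
    if day in (n_days // 2, n_days):
        snap[day] = w.copy()

corr = np.corrcoef(snap[n_days // 2], snap[n_days])[0, 1]
print(f"corr of individual sizes, day {n_days // 2} vs day {n_days}: {corr:.2f}")
for day, vals in snap.items():
    q = np.percentile(vals, [25, 50, 75, 95])
    print(f"day {day:>3} size percentiles (25/50/75/95): {np.round(q, 3)}")
```

The two percentile rows come out nearly identical even though the day-to-day correlation of individual spines is close to zero, which is the stable-distribution-despite-volatility picture the review discusses.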
23
Hasani R, Ferrari G, Yamamoto H, Tanii T, Prati E. Role of Noise in Spontaneous Activity of Networks of Neurons on Patterned Silicon Emulated by Noise-Activated CMOS Neural Nanoelectronic Circuits. Nano Express 2021. [DOI: 10.1088/2632-959x/abf2ae]
Abstract
Background noise in biological cortical microcircuits constitutes a powerful resource to assess their computational tasks, including, for instance, the synchronization of spiking activity, the enhancement of the speed of information transmission, and the minimization of the corruption of signals. We explore the correlation of spontaneous firing activity of ≈ 100 biological neurons adhering to engineered scaffolds by governing the number of functionalized patterned connection pathways among groups of neurons. We then emulate the biological system by a series of noise-activated silicon neural network simulations. We show that by suitably tuning both the amplitude of noise and the number of synapses between the silicon neurons, the same controlled correlation of the biological population is achieved. Our results extend to a realistic silicon nanoelectronics neuron design using noise injection to be exploited in artificial spiking neural networks such as liquid state machines and recurrent neural networks for stochastic computation.
24
Chuang YY, Vollmer ML, Shafaei-Bajestan E, Gahl S, Hendrix P, Baayen RH. The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning. Behav Res Methods 2021; 53:945-976. [PMID: 32377973 PMCID: PMC8219637 DOI: 10.3758/s13428-020-01356-w]
Abstract
Pseudowords have long served as key tools in psycholinguistic investigations of the lexicon. A common assumption underlying the use of pseudowords is that they are devoid of meaning: Comparing words and pseudowords may then shed light on how meaningful linguistic elements are processed differently from meaningless sound strings. However, pseudowords may in fact carry meaning. On the basis of a computational model of lexical processing, linear discriminative learning (LDL; Baayen et al., Complexity, 2019, 1-39), we compute numeric vectors representing the semantics of pseudowords. We demonstrate that quantitative measures gauging the semantic neighborhoods of pseudowords predict reaction times in the Massive Auditory Lexical Decision (MALD) database (Tucker et al., 2018). We also show that the model successfully predicts the acoustic durations of pseudowords. Importantly, model predictions hinge on the hypothesis that the mechanisms underlying speech production and comprehension interact. Thus, pseudowords emerge as an outstanding tool for gauging the resonance between production and comprehension. Many pseudowords in the MALD database contain inflectional suffixes. Unlike many contemporary models, LDL captures the semantic commonalities of forms sharing inflectional exponents without using the linguistic construct of morphemes. We discuss methodological and theoretical implications for models of lexical processing and morphological theory. The results of this study, complementing those on real words reported in Baayen et al. (Complexity, 2019, 1-39), thus provide further evidence for the usefulness of LDL both as a cognitive model of the mental lexicon, and as a tool for generating new quantitative measures that are predictive for human lexical processing.
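The computational core of LDL is a linear mapping between form vectors and semantic vectors, estimated by least squares; a pseudoword's semantics is then simply its form vector passed through that mapping. The toy lexicon, bigram cues, and random "semantic" vectors below are invented stand-ins for the model's richer representations:

```python
# Toy linear discriminative learning (LDL): estimate F mapping form vectors to
# semantic vectors by least squares, then project a pseudoword's form through F
# and inspect its semantic neighbors. Lexicon, cues and vectors are invented.
import numpy as np

rng = np.random.default_rng(11)
words = ["cat", "cap", "can", "dog", "dot", "dig"]
bigrams = sorted({w[i:i + 2] for w in words for i in range(len(w) - 1)})

def form_vector(word):
    v = np.zeros(len(bigrams))
    for i in range(len(word) - 1):
        if word[i:i + 2] in bigrams:
            v[bigrams.index(word[i:i + 2])] = 1.0
    return v

C = np.array([form_vector(w) for w in words])     # form matrix (words x cues)
S = rng.normal(0, 1, (len(words), 8))             # toy semantic matrix
F = np.linalg.lstsq(C, S, rcond=None)[0]          # comprehension mapping

s_hat = form_vector("cag") @ F                    # pseudoword meaning estimate
sims = S @ s_hat / (np.linalg.norm(S, axis=1) * np.linalg.norm(s_hat))
for w, sim in sorted(zip(words, sims), key=lambda t: -t[1]):
    print(f"{w}: cosine similarity {sim:+.2f}")
```

Because the pseudoword "cag" shares the "ca" cue with cat/cap/can, its estimated semantic vector typically ends up closest to those words, illustrating how pseudowords inherit meaning from shared sublexical cues.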
Affiliation(s)
- Yu-Ying Chuang
- Seminar für Sprachwissenschaft, Eberhard-Karls University of Tübingen, Tübingen, Germany.
- Marie Lenka Vollmer
- Seminar für Sprachwissenschaft, Eberhard-Karls University of Tübingen, Tübingen, Germany
- Elnaz Shafaei-Bajestan
- Seminar für Sprachwissenschaft, Eberhard-Karls University of Tübingen, Tübingen, Germany
- Susanne Gahl
- Department of Linguistics, University of California at Berkeley, Berkeley, CA, USA
- Peter Hendrix
- Seminar für Sprachwissenschaft, Eberhard-Karls University of Tübingen, Tübingen, Germany
- R Harald Baayen
- Seminar für Sprachwissenschaft, Eberhard-Karls University of Tübingen, Tübingen, Germany
25
Kasai H, Ziv NE, Okazaki H, Yagishita S, Toyoizumi T. Spine dynamics in the brain, mental disorders and artificial neural networks. Nat Rev Neurosci 2021; 22:407-422. [PMID: 34050339 DOI: 10.1038/s41583-021-00467-3]
Abstract
In the brain, most synapses are formed on minute protrusions known as dendritic spines. Unlike their artificial intelligence counterparts, spines are not merely tuneable memory elements: they also embody algorithms that implement the brain's ability to learn from experience and cope with new challenges. Importantly, they exhibit structural dynamics that depend on activity, excitatory input and inhibitory input (synaptic plasticity or 'extrinsic' dynamics) and dynamics independent of activity ('intrinsic' dynamics), both of which are subject to neuromodulatory influences and reinforcers such as dopamine. Here we succinctly review extrinsic and intrinsic dynamics, compare these with parallels in machine learning where they exist, describe the importance of intrinsic dynamics for memory management and adaptation, and speculate on how disruption of extrinsic and intrinsic dynamics may give rise to mental disorders. Throughout, we also highlight algorithmic features of spine dynamics that may be relevant to future artificial intelligence developments.
Collapse
Affiliation(s)
- Haruo Kasai
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan.
| | - Noam E Ziv
- Technion Faculty of Medicine and Network Biology Research Labs, Technion City, Haifa, Israel
| | - Hitoshi Okazaki
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
| | - Sho Yagishita
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
26
|
Covi E, Donati E, Liang X, Kappel D, Heidari H, Payvand M, Wang W. Adaptive Extreme Edge Computing for Wearable Devices. Front Neurosci 2021; 15:611300. [PMID: 34045939 PMCID: PMC8144334 DOI: 10.3389/fnins.2021.611300] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Accepted: 03/24/2021] [Indexed: 11/13/2022] Open
Abstract
Wearable devices are a fast-growing technology with impact on personal healthcare for both society and the economy. Because sensors are now widespread in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital for future smart wearable devices. Efforts to bring computation to the edge in smart sensors have already begun, with the aspiration of providing adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions for smart wearable devices that can guide research in this pervasive computing era. We propose solutions based on biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors, and outline prospective low-power, low-latency scenarios for wearable sensors on neuromorphic platforms. We describe the landscape of neuromorphic processors exploiting complementary metal-oxide-semiconductor (CMOS) and emerging memory technologies (e.g., memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. Finally, we examine challenges beyond neuromorphic hardware, algorithms, and devices that could impede progress on adaptive edge computing in smart wearable devices.
Collapse
Affiliation(s)
| | - Elisa Donati
- Institute of Neuroinformatics, University of Zurich, Eidgenössische Technische Hochschule Zürich (ETHZ), Zurich, Switzerland
| | - Xiangpeng Liang
- Microelectronics Lab, James Watt School of Engineering, University of Glasgow, Glasgow, United Kingdom
| | - David Kappel
- Bernstein Center for Computational Neuroscience, III Physikalisches Institut–Biophysik, Georg-August Universität, Göttingen, Germany
| | - Hadi Heidari
- Microelectronics Lab, James Watt School of Engineering, University of Glasgow, Glasgow, United Kingdom
| | - Melika Payvand
- Institute of Neuroinformatics, University of Zurich, Eidgenössische Technische Hochschule Zürich (ETHZ), Zurich, Switzerland
| | - Wei Wang
- The Andrew and Erna Viterbi Department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa, Israel
| |
Collapse
|
27
|
Syed T, Kakani V, Cui X, Kim H. Exploring Optimized Spiking Neural Network Architectures for Classification Tasks on Embedded Platforms. SENSORS (BASEL, SWITZERLAND) 2021; 21:3240. [PMID: 34067080 PMCID: PMC8125750 DOI: 10.3390/s21093240] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 04/29/2021] [Accepted: 05/02/2021] [Indexed: 11/16/2022]
Abstract
The use of modern neuromorphic hardware for brain-inspired spiking neural networks (SNNs) has grown rapidly in recent years. With sparse input data, SNNs offer low power consumption on event-based neuromorphic hardware, particularly in deeper layers. However, training deep spiking models remains a tedious task. Various ANN-to-SNN conversion methods have been proposed for training deep SNN models, but they require hundreds to thousands of time-steps and still cannot attain good SNN performance. This work proposes customized (VGG- and ResNet-style) architectures for training deep convolutional SNNs directly, using surrogate-gradient-descent backpropagation in a layer architecture similar to that of deep artificial neural networks, and with fewer time-steps than previous approaches. Because overfitting was encountered during surrogate-gradient training, this work also refines an SNN-based dropout technique for use with surrogate gradient descent. The proposed customized SNN models achieve good classification results on both private and public datasets. Several experiments were carried out on an embedded platform (NVIDIA Jetson TX2 board), where the customized SNN models were extensively deployed. Performance was validated in terms of processing time and inference accuracy on PC and embedded platforms, showing that the proposed models and training techniques achieve good performance on datasets such as CIFAR-10, MNIST, SVHN, and private KITTI and Korean license-plate datasets.
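A minimal sketch of the surrogate-gradient mechanism this line of work relies on (a generic fast-sigmoid surrogate in PyTorch, not the authors' exact architecture or pseudo-derivative): the forward pass applies a hard spike threshold, while the backward pass substitutes a smooth pseudo-derivative so that backpropagation can proceed through the unrolled time steps.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate in the backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                   # spike if membrane > threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative in place of the true (zero) one.
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

spike = SurrogateSpike.apply

# Tiny demo: one leaky integrate-and-fire layer unrolled over a few time steps.
T, batch, n_in, n_out = 5, 8, 20, 10
w = (0.1 * torch.randn(n_in, n_out)).requires_grad_()
x = (torch.rand(T, batch, n_in) < 0.2).float()   # Poisson-like input spikes
v = torch.zeros(batch, n_out)
out = []
for t in range(T):
    v = 0.9 * v + x[t] @ w                       # leak + synaptic input
    s = spike(v - 1.0)                           # threshold at 1.0
    v = v * (1.0 - s.detach())                   # reset after spiking
    out.append(s)
loss = torch.stack(out).sum()                    # dummy objective
loss.backward()                                  # gradients flow via surrogate
print(w.grad.abs().mean())
```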
Collapse
Affiliation(s)
- Tehreem Syed
- Electrical and Computer Engineering, Inha University, 100 Inha-ro, Nam-gu, Incheon 22212, Korea;
| | - Vijay Kakani
- Integrated System and Engineering, School of Global Convergence Studies, Inha University, 100 Inha-ro, Nam-gu, Incheon 22212, Korea;
| | - Xuenan Cui
- Information and Communication Engineering, Inha University, 100 Inha-ro, Nam-gu, Incheon 22212, Korea;
| | - Hakil Kim
- Electrical and Computer Engineering, Inha University, 100 Inha-ro, Nam-gu, Incheon 22212, Korea;
| |
Collapse
|
28
|
Laborieux A, Ernoult M, Hirtzlin T, Querlioz D. Synaptic metaplasticity in binarized neural networks. Nat Commun 2021; 12:2549. [PMID: 33953183 PMCID: PMC8100137 DOI: 10.1038/s41467-021-22768-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Accepted: 03/25/2021] [Indexed: 11/09/2022] Open
Abstract
While deep neural networks have surpassed human performance in multiple situations, they are prone to catastrophic forgetting: upon training a new task, they rapidly forget previously learned ones. Neuroscience studies, based on idealized tasks, suggest that in the brain, synapses overcome this issue by adjusting their plasticity depending on their past history. However, such "metaplastic" behaviors do not transfer directly to mitigate catastrophic forgetting in deep neural networks. In this work, we interpret the hidden weights used by binarized neural networks, a low-precision version of deep neural networks, as metaplastic variables, and modify their training technique to alleviate forgetting. Building on this idea, we propose and demonstrate experimentally, in situations of multitask and stream learning, a training technique that reduces catastrophic forgetting without needing access to previously presented data or formal boundaries between datasets, and with performance approaching that of more mainstream techniques that rely on task boundaries. We support our approach with a theoretical analysis on a tractable task. This work bridges computational neuroscience and deep learning, and presents significant assets for future embedded and neuromorphic systems, especially when using novel nanodevices featuring physics analogous to metaplasticity.
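A toy sketch of the metaplastic idea as described (a simplified variant, not necessarily the paper's exact update rule): each binary weight is the sign of a real-valued hidden variable, and updates that would push a hidden variable back toward zero, i.e., toward flipping its sign, are attenuated more strongly the larger its magnitude already is.

```python
import numpy as np

def metaplastic_update(h, grad, lr=0.01, m=3.0):
    """Metaplastic hidden-weight update for a binarized connection.

    h    : real-valued hidden weights (sign(h) is the binary weight used
           in the forward pass)
    grad : loss gradient w.r.t. the binary weights
    Updates that push a hidden weight toward zero (toward a sign flip) are
    attenuated the larger |h| is, consolidating well-learned weights.
    """
    delta = -lr * grad
    toward_zero = np.sign(delta) != np.sign(h)
    atten = np.where(toward_zero, 1.0 - np.tanh(m * np.abs(h)) ** 2, 1.0)
    return h + atten * delta

h = np.random.default_rng(2).normal(0, 0.1, size=5)
g = np.array([0.5, -0.5, 0.5, -0.5, 0.5])
print("binary weights before:", np.sign(h))
for _ in range(100):
    h = metaplastic_update(h, g)
print("binary weights after :", np.sign(h))
```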
Collapse
Affiliation(s)
- Axel Laborieux
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France.
| | - Maxence Ernoult
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
- Unité Mixte de Physique, CNRS, Thales, Université Paris-Saclay, Palaiseau, France
| | - Tifenn Hirtzlin
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
| | - Damien Querlioz
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France.
| |
Collapse
|
29
|
Aitchison L, Jegminat J, Menendez JA, Pfister JP, Pouget A, Latham PE. Synaptic plasticity as Bayesian inference. Nat Neurosci 2021; 24:565-571. [PMID: 33707754 DOI: 10.1038/s41593-021-00809-5] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2020] [Accepted: 01/26/2021] [Indexed: 01/21/2023]
Abstract
Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
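A minimal scalar sketch of the first hypothesis (illustrative numbers, not the paper's full derivation): the synapse keeps a mean and an error bar over its weight, and its effective learning rate is a Kalman-style gain, so more uncertain synapses learn faster.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = 0.8                      # the weight the synapse should learn
mu, var = 0.0, 1.0                # synaptic belief: mean and error bar
obs_noise = 0.5                   # variance of the noisy teaching signal

for step in range(50):
    sample = w_true + rng.normal(0, np.sqrt(obs_noise))   # noisy evidence
    gain = var / (var + obs_noise)        # uncertain synapses learn faster
    mu = mu + gain * (sample - mu)
    var = (1 - gain) * var + 1e-3         # small diffusion keeps learning alive
print(f"estimate {mu:.2f} +/- {np.sqrt(var):.2f} (true {w_true})")
```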
Collapse
Affiliation(s)
- Laurence Aitchison
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Department of Computer Science, University of Bristol, Bristol, UK.
| | - Jannes Jegminat
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
| | - Jorge Aurelio Menendez
- Gatsby Computational Neuroscience Unit, University College London, London, UK; CoMPLEX, University College London, London, UK
| | - Jean-Pascal Pfister
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
| | - Alexandre Pouget
- Gatsby Computational Neuroscience Unit, University College London, London, UK; Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
| | - Peter E Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
| |
Collapse
|
30
|
Mau W, Hasselmo ME, Cai DJ. The brain in motion: How ensemble fluidity drives memory-updating and flexibility. eLife 2020; 9:e63550. [PMID: 33372892 PMCID: PMC7771967 DOI: 10.7554/elife.63550] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 12/12/2020] [Indexed: 12/18/2022] Open
Abstract
While memories are often thought of as flashbacks to a previous experience, they do not simply conserve veridical representations of the past but must continually integrate new information to ensure survival in dynamic environments. Therefore, 'drift' in neural firing patterns, typically construed as disruptive 'instability' or an undesirable consequence of noise, may actually be useful for updating memories. In our view, continual modifications in memory representations reconcile classical theories of stable memory traces with neural drift. Here we review how memory representations are updated through dynamic recruitment of neuronal ensembles on the basis of excitability and functional connectivity at the time of learning. Overall, we emphasize the importance of considering memories not as static entities, but instead as flexible network states that reactivate and evolve across time and experience.
Collapse
Affiliation(s)
- William Mau
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
| | | | - Denise J Cai
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
| |
Collapse
|
31
|
Limbacher T, Legenstein R. Emergence of Stable Synaptic Clusters on Dendrites Through Synaptic Rewiring. Front Comput Neurosci 2020; 14:57. [PMID: 32848681 PMCID: PMC7424032 DOI: 10.3389/fncom.2020.00057] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 05/22/2020] [Indexed: 11/16/2022] Open
Abstract
The connectivity structure of neuronal networks in cortex is highly dynamic. This ongoing cortical rewiring is assumed to serve important functions for learning and memory. We analyze in this article a model for the self-organization of synaptic inputs onto dendritic branches of pyramidal cells. The model combines a generic stochastic rewiring principle with a simple synaptic plasticity rule that depends on local dendritic activity. In computer simulations, we find that this synaptic rewiring model leads to synaptic clustering, that is, temporally correlated inputs become locally clustered on dendritic branches. This empirical finding is backed up by a theoretical analysis which shows that rewiring in our model favors network configurations with synaptic clustering. We propose that synaptic clustering plays an important role in the organization of computation and memory in cortical circuits: we find that synaptic clustering through the proposed rewiring mechanism can serve as a mechanism to protect memories from subsequent modifications on a medium time scale. Rewiring of synaptic connections onto specific dendritic branches may thus counteract the general problem of catastrophic forgetting in neural networks.
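A caricature of the rewiring principle in such a model (a toy reading, not the paper's exact rule): active synapses on branches with little co-active input are stochastically pruned and re-placed at random, while synapses that agree with their branch's local activity survive, so temporally correlated inputs gradually condense onto shared branches.

```python
import numpy as np

rng = np.random.default_rng(4)
n_inputs, n_branches, steps = 40, 4, 3000
group = np.repeat(np.arange(4), 10)              # 4 groups of correlated inputs
branch = rng.integers(0, n_branches, n_inputs)   # initial random placement

for _ in range(steps):
    active_group = rng.integers(0, 4)            # one input group fires together
    x = (group == active_group).astype(float)
    local = np.array([x[branch == b].sum() for b in range(n_branches)])
    # Stochastic rewiring: an active synapse on a weakly co-active branch
    # is removed and re-inserted on a random branch.
    for i in np.flatnonzero(x):
        if local[branch[i]] < x.sum() / 2 and rng.random() < 0.1:
            branch[i] = rng.integers(0, n_branches)

# Clustering index: fraction of each group sitting on its most popular branch.
for g in range(4):
    counts = np.bincount(branch[group == g], minlength=n_branches)
    print(f"group {g}: {counts.max() / counts.sum():.2f} on one branch")
```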
Collapse
Affiliation(s)
| | - Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
| |
Collapse
|
32
|
Senden M, Peters J, Röhrbein F, Deco G, Goebel R. Editorial: The Embodied Brain: Computational Mechanisms of Integrated Sensorimotor Interactions With a Dynamic Environment. Front Comput Neurosci 2020; 14:53. [PMID: 32625074 PMCID: PMC7314992 DOI: 10.3389/fncom.2020.00053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2020] [Accepted: 05/15/2020] [Indexed: 11/13/2022] Open
Affiliation(s)
- Mario Senden
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands
| | - Judith Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands; Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, Netherlands
| | - Florian Röhrbein
- Institut für Informatik VI, Technische Universität München, Munich, Germany
| | - Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Barcelona, Spain
| | - Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (M-BIC), Maastricht University, Maastricht, Netherlands; Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, Netherlands
| |
Collapse
|
33
|
Fang Y, Yu Z, Chen F. Noise Helps Optimization Escape From Saddle Points in the Synaptic Plasticity. Front Neurosci 2020; 14:343. [PMID: 32410937 PMCID: PMC7201302 DOI: 10.3389/fnins.2020.00343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Accepted: 03/23/2020] [Indexed: 11/20/2022] Open
Abstract
Numerous experimental studies suggest that noise is inherent in the human brain. However, the functional importance of noise remains unknown. In particular, from a computational perspective, such stochasticity is potentially harmful to brain function. In machine learning, a large number of saddle points are surrounded by high-error plateaus and give the illusion of a local minimum. As a result, being trapped at saddle points can dramatically impair learning, and adding noise can overcome such saddle-point problems in high-dimensional optimization, especially under the strict saddle condition. Motivated by these arguments, we propose a biologically plausible noise structure and demonstrate that noise can efficiently improve the optimization performance of spiking neural networks based on stochastic gradient descent. The strict saddle condition for synaptic plasticity is deduced, and under such conditions, noise can help optimization escape from saddle points on high-dimensional domains. The theoretical results explain the stochasticity of synapses and guide us on how to make use of noise. In addition, we provide biological interpretations of the proposed noise structures from two perspectives: one based on the free-energy principle in neuroscience and another based on observations from in vivo experiments. Our simulation results show that, in the learning and test phases, the accuracy of synaptic sampling with noise is almost 20% higher than without noise on a synthetic dataset, and the gain in accuracy from noise is at least 10% on the MNIST and CIFAR-10 datasets. Our study provides a new learning framework for the brain and sheds new light on deep noisy spiking neural networks.
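The saddle-point argument can be illustrated in two dimensions: at a strict saddle of f(x, y) = (x² - y²)/2, gradient descent initialized on the stable manifold never leaves it, whereas adding isotropic noise lets the iterate escape along the negative-curvature direction (a generic demonstration of the claim, not the paper's spiking-network setting).

```python
import numpy as np

def grad(p):                      # gradient of f(x, y) = 0.5 * (x**2 - y**2)
    return np.array([p[0], -p[1]])

rng = np.random.default_rng(5)
for sigma in (0.0, 0.01):         # noiseless vs. noisy gradient descent
    p = np.array([1.0, 0.0])      # starts on the saddle's stable manifold
    for _ in range(200):
        p = p - 0.1 * grad(p) + sigma * rng.normal(size=2)
    print(f"sigma={sigma}: final point {p}, f={(p[0]**2 - p[1]**2) / 2:.3g}")
```

With sigma = 0 the iterate converges to the saddle at the origin; with even tiny noise, the unstable y-direction grows geometrically and the iterate escapes to lower function values.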
Collapse
Affiliation(s)
- Ying Fang
- Department of Automation, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
- Beijing Key Laboratory of Security in Big Data Processing and Application, Beijing, China
| | - Zhaofei Yu
- National Engineering Laboratory for Video Technology, School of Electronics Engineering and Computer Science, Peking University, Beijing, China
| | - Feng Chen
- Department of Automation, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
- Beijing Key Laboratory of Security in Big Data Processing and Application, Beijing, China
| |
Collapse
|
34
|
Sweeney Y, Clopath C. Population coupling predicts the plasticity of stimulus responses in cortical circuits. eLife 2020; 9:e56053. [PMID: 32314959 PMCID: PMC7224697 DOI: 10.7554/elife.56053] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 04/16/2020] [Indexed: 12/31/2022] Open
Abstract
Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron's population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable 'backbone' of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.
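Population coupling itself is straightforward to compute with the usual definition, the correlation of each cell's activity with the summed activity of the rest of the population; a sketch on synthetic data with a planted coupling strength:

```python
import numpy as np

rng = np.random.default_rng(6)
n_neurons, n_bins = 50, 5000
shared = rng.normal(size=n_bins)                 # population-wide signal
mix = rng.uniform(0, 1, n_neurons)               # per-cell coupling strength
rates = mix[:, None] * shared + rng.normal(size=(n_neurons, n_bins))

def population_coupling(rates):
    pc = np.empty(len(rates))
    for i in range(len(rates)):
        rest = rates.sum(axis=0) - rates[i]      # population rate without cell i
        pc[i] = np.corrcoef(rates[i], rest)[0, 1]
    return pc

pc = population_coupling(rates)
print("coupling vs. planted strength r =", np.corrcoef(pc, mix)[0, 1])
```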
Collapse
Affiliation(s)
- Yann Sweeney
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
| |
Collapse
|
35
|
Lee C, Sarwar SS, Panda P, Srinivasan G, Roy K. Enabling Spike-Based Backpropagation for Training Deep Neural Network Architectures. Front Neurosci 2020; 14:119. [PMID: 32180697 PMCID: PMC7059737 DOI: 10.3389/fnins.2020.00119] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Accepted: 01/30/2020] [Indexed: 12/24/2022] Open
Abstract
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to get around this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. It also remains a difficult problem to directly train deep SNNs using input spike events, due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of leaky integrate-and-fire (LIF) neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning on deep networks (VGG and residual architectures), achieving the best classification accuracies on MNIST, SVHN, and CIFAR-10 among SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference in the spiking domain.
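For reference, the neuron model whose discontinuity the approximate derivative must handle is the leaky integrate-and-fire unit; a generic textbook-style simulation (not the paper's exact parameterization):

```python
import numpy as np

def lif(inputs, tau=20.0, v_th=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: returns membrane trace and spike times."""
    v, vs, spikes = 0.0, [], []
    decay = np.exp(-dt / tau)               # leak factor per time step
    for t, i_t in enumerate(inputs):
        v = decay * v + i_t                 # leaky integration of input current
        if v >= v_th:                       # threshold crossing -> spike
            spikes.append(t)
            v = 0.0                         # hard reset
        vs.append(v)
    return np.array(vs), spikes

rng = np.random.default_rng(7)
trace, spikes = lif(rng.uniform(0, 0.15, size=200))
print(f"{len(spikes)} spikes, first at steps {spikes[:5]}")
```

The spike condition `v >= v_th` is a step function with zero derivative almost everywhere, which is exactly why training needs a smooth stand-in for its gradient.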
Collapse
|
36
|
Meier F, Dang-Nhu R, Steger A. Adaptive Tuning Curve Widths Improve Sample Efficient Learning. Front Comput Neurosci 2020; 14:12. [PMID: 32132915 PMCID: PMC7041413 DOI: 10.3389/fncom.2020.00012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Accepted: 01/29/2019] [Indexed: 11/13/2022] Open
Abstract
Natural brains perform remarkably well in learning new tasks from a small number of samples, whereas sample-efficient learning is still a major open problem in machine learning. Here, we ask how the neural coding scheme affects sample efficiency, and make first progress on this question by proposing and analyzing a learning algorithm that uses a simple reinforce-type plasticity mechanism and does not require any gradients to learn low-dimensional mappings. It harnesses three biologically plausible mechanisms, namely population codes with bell-shaped tuning curves, continuous attractor mechanisms, and probabilistic synapses, to achieve sample-efficient learning. We show both theoretically and by simulations that population codes with broadly tuned neurons lead to high sample efficiency, whereas codes with sharply tuned neurons account for high final precision. Moreover, a dynamic adaptation of the tuning width during learning gives rise to both high sample efficiency and high final precision. We prove a sample-efficiency guarantee for our algorithm that lies within a logarithmic factor of the information-theoretic optimum. Our simulations show that, for low-dimensional mappings, our learning algorithm achieves sample efficiency comparable to multi-layer perceptrons trained by gradient descent, although it does not use any gradients. Furthermore, it achieves competitive sample efficiency in low-dimensional reinforcement learning tasks. From a machine learning perspective, these findings may inspire novel approaches to improving sample efficiency. From a neuroscience perspective, they suggest sample efficiency as a yet unstudied functional role of adaptive tuning-curve width.
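The encoding step is easy to make concrete: a population of bell-shaped (Gaussian) tuning curves with one shared, adjustable width and a population-vector readout. Broad curves recruit many neurons per sample, sharp curves localize the estimate (a sketch with illustrative parameters, not the paper's algorithm):

```python
import numpy as np

def population_response(x, centers, width):
    """Bell-shaped (Gaussian) tuning curves with a shared, adjustable width."""
    return np.exp(-0.5 * ((x - centers) / width) ** 2)

def decode(resp, centers):
    """Population-vector readout: response-weighted mean of preferred values."""
    return (resp @ centers) / resp.sum()

centers = np.linspace(0, 1, 20)                  # preferred stimuli
for width in (0.3, 0.05):                        # broad early, sharp late
    r = population_response(0.42, centers, width)
    print(f"width={width}: {np.sum(r > 0.1):2d} active cells, "
          f"decoded {decode(r, centers):.3f}")
```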
Collapse
Affiliation(s)
- Florian Meier
- Department of Computer Science, ETH Zürich, Zurich, Switzerland
| | | | | |
Collapse
|
37
|
Jordan J, Petrovici MA, Breitwieser O, Schemmel J, Meier K, Diesmann M, Tetzlaff T. Deterministic networks for probabilistic computing. Sci Rep 2019; 9:18303. [PMID: 31797943 PMCID: PMC6893033 DOI: 10.1038/s41598-019-54137-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 11/06/2019] [Indexed: 01/13/2023] Open
Abstract
Neuronal network models of high-level brain functions such as memory recall and reasoning often rely on the presence of some form of noise. The majority of these models assumes that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In vivo, synaptic background input has been suggested to serve as the main source of noise in biological neuronal networks. However, the finiteness of the number of such noise sources constitutes a challenge to this idea. Here, we show that shared-noise correlations resulting from a finite number of independent noise sources can substantially impair the performance of stochastic network models. We demonstrate that this problem is naturally overcome by replacing the ensemble of independent noise sources by a deterministic recurrent neuronal network. By virtue of inhibitory feedback, such networks can generate small residual spatial correlations in their activity which, counter to intuition, suppress the detrimental effect of shared input. We exploit this mechanism to show that a single recurrent network of a few hundred neurons can serve as a natural noise source for a large ensemble of functional networks performing probabilistic computations, each comprising thousands of units.
Collapse
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany.
- Department of Physiology, University of Bern, Bern, Switzerland.
| | - Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
| | - Oliver Breitwieser
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
| | - Johannes Schemmel
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
| | - Karlheinz Meier
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| | - Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
| |
Collapse
|
38
|
Kaiser J, Hoff M, Konle A, Vasquez Tieck JC, Kappel D, Reichard D, Subramoney A, Legenstein R, Roennau A, Maass W, Dillmann R. Embodied Synaptic Plasticity With Online Reinforcement Learning. Front Neurorobot 2019; 13:81. [PMID: 31632262 PMCID: PMC6786305 DOI: 10.3389/fnbot.2019.00081] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Accepted: 09/13/2019] [Indexed: 01/02/2023] Open
Abstract
The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain, whose purpose is to control a body in closed loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from both. The resulting framework makes it possible to evaluate the validity of biologically plausible plasticity models in closed-loop robotic environments. We demonstrate this framework by evaluating Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning policies for both tasks within hours of simulated time. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing recent deep reinforcement learning techniques that could increase the functionality of SPORE on visuomotor tasks.
Collapse
Affiliation(s)
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | - Michael Hoff
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Andreas Konle
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | | | - David Kappel
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Bernstein Center for Computational Neuroscience, III Physikalisches Institut-Biophysik, Georg-August Universität, Göttingen, Germany
- Technische Universität Dresden, Chair of Highly Parallel VLSI Systems and Neuromorphic Circuits, Dresden, Germany
| | - Daniel Reichard
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | - Anand Subramoney
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Robert Legenstein
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Arne Roennau
- FZI Research Center for Information Technology, Karlsruhe, Germany
| | - Wolfgang Maass
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
| | - Rüdiger Dillmann
- FZI Research Center for Information Technology, Karlsruhe, Germany
| |
Collapse
|
39
|
Rule ME, O'Leary T, Harvey CD. Causes and consequences of representational drift. Curr Opin Neurobiol 2019; 58:141-147. [PMID: 31569062 PMCID: PMC7385530 DOI: 10.1016/j.conb.2019.08.005] [Citation(s) in RCA: 99] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Revised: 08/13/2019] [Accepted: 08/27/2019] [Indexed: 01/27/2023]
Abstract
The nervous system learns new associations while maintaining memories over long periods, exhibiting a balance between flexibility and stability. Recent experiments reveal that neuronal representations of learned sensorimotor tasks continually change over days and weeks, even after animals have achieved expert behavioral performance. How is learned information stored to allow consistent behavior despite ongoing changes in neuronal activity? What functions could ongoing reconfiguration serve? We highlight recent experimental evidence for such representational drift in sensorimotor systems, and discuss how this fits into a framework of distributed population codes. We identify recent theoretical work that suggests computational roles for drift and argue that the recurrent and distributed nature of sensorimotor representations permits drift while limiting disruptive effects. We propose that representational drift may create error signals between interconnected brain regions that can be used to keep neural codes consistent in the presence of continual change. These concepts suggest experimental and theoretical approaches to studying both learning and maintenance of distributed and adaptive population codes.
Collapse
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge CB21PZ, United Kingdom.
| | | |
Collapse
|
40
|
Fang Y, Yu Z, Liu JK, Chen F. A unified neural circuit of causal inference and multisensory integration. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.05.067] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
41
|
Yan Y, Kappel D, Neumarker F, Partzsch J, Vogginger B, Hoppner S, Furber S, Maass W, Legenstein R, Mayr C. Efficient Reward-Based Structural Plasticity on a SpiNNaker 2 Prototype. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2019; 13:579-591. [PMID: 30932847 DOI: 10.1109/tbcas.2019.2906401] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Advances in neuroscience uncover the mechanisms employed by the brain to efficiently solve complex learning tasks with very limited resources. However, this efficiency is often lost when one tries to port these findings to a silicon substrate, since brain-inspired algorithms often make extensive use of complex functions, such as random number generators, that are expensive to compute on standard general-purpose hardware. The prototype chip of the second-generation SpiNNaker system is designed to overcome this problem: low-power advanced RISC machine (ARM) processors equipped with a random number generator and an exponential-function accelerator enable the efficient execution of brain-inspired algorithms. We implement the recently introduced reward-based synaptic sampling model, which employs structural plasticity to learn a function or task. Numerical simulation of the model requires updating the synapse variables at each time step, including an explorative random term. To the best of our knowledge, this is the most complex synapse model implemented so far on the SpiNNaker system. By making efficient use of the hardware accelerators and of numerical optimizations, the computation time of one plasticity update is reduced by a factor of 2. This, combined with fitting the model into the local static random access memory (SRAM), leads to a 62% energy reduction compared with the case without accelerators that uses external dynamic random access memory (DRAM). The model implementation is integrated into the SpiNNaker software framework, allowing for scalability onto larger systems. The hardware-software system presented in this paper paves the way for power-efficient mobile and biomedical applications of biologically plausible brain-inspired algorithms.
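The per-synapse update that the accelerators target has the generic form of an Euler-Maruyama step of a stochastic differential equation, needing one Gaussian random number and one exponential per synapse per time step; a sketch of such a reward-based synaptic-sampling update (simplified, not the exact model equations):

```python
import numpy as np

rng = np.random.default_rng(8)

def synaptic_sampling_step(theta, grad_logpost, lr=1e-3, temperature=0.1):
    """One Euler-Maruyama step of reward-based synaptic sampling (generic form).

    d(theta) = lr * d(log posterior)/d(theta) + sqrt(2 * lr * T) * dW
    Each synapse draws a Gaussian random number per step, and mapping the
    parameter to a weight uses an exponential -- the two operations that
    dedicated hardware accelerators can speed up.
    """
    noise = rng.normal(size=theta.shape)
    theta = theta + lr * grad_logpost + np.sqrt(2 * lr * temperature) * noise
    w = np.where(theta > 0, np.exp(theta - 1.0), 0.0)  # retracted if theta <= 0
    return theta, w

theta = rng.normal(0.5, 0.5, size=10)
theta, w = synaptic_sampling_step(theta, grad_logpost=np.ones(10))
print("active synapses:", int((w > 0).sum()), "of", len(w))
```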
Collapse
|
42
|
Medial Prefrontal Cortex Population Activity Is Plastic Irrespective of Learning. J Neurosci 2019; 39:3470-3483. [PMID: 30814311 DOI: 10.1523/jneurosci.1370-17.2019] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2017] [Revised: 01/09/2019] [Accepted: 01/11/2019] [Indexed: 11/21/2022] Open
Abstract
The prefrontal cortex (PFC) is thought to learn the relationships between actions and their outcomes. But little is known about what changes to population activity in PFC are specific to learning these relationships. Here we characterize the plasticity of population activity in the medial PFC (mPFC) of male rats learning rules on a Y-maze. First, we show that the population always changes its patterns of joint activity between the periods of sleep either side of a training session on the maze, regardless of successful rule learning during training. Next, by comparing the structure of population activity in sleep and training, we show that this population plasticity differs between learning and nonlearning sessions. In learning sessions, the changes in population activity in post-training sleep incorporate the changes to the population activity during training on the maze. In nonlearning sessions, the changes in sleep and training are unrelated. Finally, we show evidence that the nonlearning and learning forms of population plasticity are driven by different neuron-level changes, with the nonlearning form entirely accounted for by independent changes to the excitability of individual neurons, and the learning form also including changes to firing rate couplings between neurons. Collectively, our results suggest two different forms of population plasticity in mPFC during the learning of action-outcome relationships: one a persistent change in population activity structure decoupled from overt rule-learning, and the other a directional change driven by feedback during behavior.SIGNIFICANCE STATEMENT The PFC is thought to represent our knowledge about what action is worth doing in which context. But we do not know how the activity of neurons in PFC collectively changes when learning which actions are relevant. Here we show, in a trial-and-error task, that population activity in PFC is persistently changing, regardless of learning. Only during episodes of clear learning of relevant actions are the accompanying changes to population activity carried forward into sleep, suggesting a long-lasting form of neural plasticity. Our results suggest that representations of relevant actions in PFC are acquired by reward imposing a direction onto ongoing population plasticity.
Collapse
|
43
|
Dong M, Vicario DS. Neural Correlate of Transition Violation and Deviance Detection in the Songbird Auditory Forebrain. Front Syst Neurosci 2018; 12:46. [PMID: 30356811 PMCID: PMC6190688 DOI: 10.3389/fnsys.2018.00046] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Accepted: 09/18/2018] [Indexed: 12/21/2022] Open
Abstract
Deviants are stimuli that violate one's prediction about incoming stimuli. Studying deviance detection helps us understand how the nervous system learns temporal patterns between stimuli and forms predictions about the future. Detecting deviant stimuli is also critical for animals' survival in natural environments filled with complex sounds and patterns. Using natural songbird vocalizations as stimuli, we recorded multi-unit and single-unit activity from the zebra finch auditory forebrain while presenting rare repeated stimuli after regular alternating stimuli (alternating oddball experiment) or a rare deviant among multiple different common stimuli (context oddball experiment). The alternating oddball experiment showed that neurons were sensitive to rare repetitions in regular alternations. In the absence of expectation, repetition suppresses neural responses to the second stimulus in the repetition. When repetition violates expectation, neural responses to the second stimulus in the repetition were stronger than expected. The context oddball experiment showed that a stimulus elicits stronger neural responses when presented infrequently as a deviant among multiple common stimuli. As the acoustic differences between deviant and common stimuli increase, the response enhancement also increases. Together, these results show that neural encoding of a stimulus depends not only on the acoustic features of the stimulus but also on the preceding stimuli and the transition patterns between them. They also imply that the classical oddball effect may result from a combination of repetition suppression and deviance enhancement. Classification analyses showed that the difficulty of decoding the stimulus responsible for the neural responses differed for deviants in different experimental conditions. These findings suggest that learning transition patterns and detecting deviants in natural sequences may depend on a hierarchy of neural mechanisms, which may be involved in more complex forms of auditory processing that depend on the transition patterns between stimuli, such as speech processing.
Collapse
Affiliation(s)
- Mingwen Dong
- Behavior and Systems Neuroscience, Psychology Department, Rutgers, the State University of New Jersey, New Brunswick, NJ, United States
| | - David S Vicario
- Behavior and Systems Neuroscience, Psychology Department, Rutgers, the State University of New Jersey, New Brunswick, NJ, United States
| |
Collapse
|
44
|
Llera-Montero M, Sacramento J, Costa RP. Computational roles of plastic probabilistic synapses. Curr Opin Neurobiol 2018; 54:90-97. [PMID: 30308457 DOI: 10.1016/j.conb.2018.09.002] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 07/02/2018] [Accepted: 09/06/2018] [Indexed: 11/18/2022]
Abstract
The probabilistic nature of synaptic transmission has remained enigmatic. However, recent developments have started to shed light on why the brain may rely on probabilistic synapses. Here, we start out by reviewing experimental evidence on the specificity and plasticity of synaptic response statistics. Next, we overview different computational perspectives on the function of plastic probabilistic synapses for constrained, statistical and deep learning. We highlight that all of these views require some form of optimisation of probabilistic synapses, which has recently gained support from theoretical analysis of long-term synaptic plasticity experiments. Finally, we contrast these different computational views and propose avenues for future research. Overall, we argue that the time is ripe for a better understanding of the computational functions of probabilistic synapses.
Collapse
Affiliation(s)
- Milton Llera-Montero
- Computational Neuroscience Unit, Department of Computer Science, School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics, Faculty of Engineering, University of Bristol, United Kingdom; Bristol Neuroscience, University of Bristol, United Kingdom; School of Psychological Science, Faculty of Life Sciences, University of Bristol, United Kingdom
| | | | - Rui Ponte Costa
- Computational Neuroscience Unit, Department of Computer Science, School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics, Faculty of Engineering, University of Bristol, United Kingdom; Bristol Neuroscience, University of Bristol, United Kingdom; Department of Physiology, University of Bern, Switzerland; Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, United Kingdom.
| |
Collapse
|
45
|
Chance, long tails, and inference in a non-Gaussian, Bayesian theory of vocal learning in songbirds. Proc Natl Acad Sci U S A 2018; 115:E8538-E8546. [PMID: 30127024 DOI: 10.1073/pnas.1713020115] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Traditional theories of sensorimotor learning posit that animals use sensory error signals to find the optimal motor command in the face of Gaussian sensory and motor noise. However, most such theories cannot explain common behavioral observations, for example, that smaller sensory errors are more readily corrected than larger errors and large abrupt (but not gradually introduced) errors lead to weak learning. Here, we propose a theory of sensorimotor learning that explains these observations. The theory posits that the animal controls an entire probability distribution of motor commands rather than trying to produce a single optimal command and that learning arises via Bayesian inference when new sensory information becomes available. We test this theory using data from a songbird, the Bengalese finch, that is adapting the pitch (fundamental frequency) of its song following perturbations of auditory feedback using miniature headphones. We observe the distribution of the sung pitches to have long, non-Gaussian tails, which, within our theory, explains the observed dynamics of learning. Further, the theory makes surprising predictions about the dynamics of the shape of the pitch distribution, which we confirm experimentally.
Collapse
|
46
|
Bogdan PA, Rowley AGD, Rhodes O, Furber SB. Structural Plasticity on the SpiNNaker Many-Core Neuromorphic System. Front Neurosci 2018; 12:434. [PMID: 30034320 PMCID: PMC6043813 DOI: 10.3389/fnins.2018.00434] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2018] [Accepted: 06/11/2018] [Indexed: 01/15/2023] Open
Abstract
The structural organization of cortical areas is not random, with topographic maps commonplace in sensory processing centers. This topographical organization allows optimal wiring between neurons, multimodal sensory integration, and input dimensionality reduction. In this work, a model of topographic map formation is implemented on the SpiNNaker neuromorphic platform, running in real time using point neurons and making use of both synaptic rewiring and spike-timing-dependent plasticity (STDP). In agreement with Bamford et al. (2010), we demonstrate that synaptic rewiring refines an initially rough topographic map over and beyond the ability of STDP, and that input selectivity learnt through STDP is embedded into the network connectivity through rewiring. Moreover, we show that the presented model can be used to generate topographic maps between layers of neurons with minimal initial connectivity, and to stabilize mappings which would otherwise be unstable, through the inclusion of lateral inhibition.
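The STDP component used alongside rewiring is commonly implemented with exponentially decaying pre- and postsynaptic traces; a minimal pair-based sketch with generic parameters (not the paper's):

```python
import numpy as np

def stdp(pre_spikes, post_spikes, steps, a_plus=0.01, a_minus=0.012,
         tau=20.0, w=0.5):
    """Pair-based STDP with exponentially decaying eligibility traces."""
    x_pre = x_post = 0.0            # traces of recent pre/post activity
    decay = np.exp(-1.0 / tau)
    for t in range(steps):
        x_pre, x_post = x_pre * decay, x_post * decay
        if t in pre_spikes:
            x_pre += 1.0
            w -= a_minus * x_post   # pre after post -> depression
        if t in post_spikes:
            x_post += 1.0
            w += a_plus * x_pre     # post after pre -> potentiation
    return np.clip(w, 0.0, 1.0)

# Causal pairing (pre leads post by 5 steps) strengthens the synapse...
print(stdp(pre_spikes={10, 50, 90}, post_spikes={15, 55, 95}, steps=120))
# ...while the reversed ordering weakens it.
print(stdp(pre_spikes={15, 55, 95}, post_spikes={10, 50, 90}, steps=120))
```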
Collapse
Affiliation(s)
- Petruț A Bogdan
- School of Computer Science, University of Manchester, Manchester, United Kingdom
| | - Andrew G D Rowley
- School of Computer Science, University of Manchester, Manchester, United Kingdom
| | - Oliver Rhodes
- School of Computer Science, University of Manchester, Manchester, United Kingdom
| | - Steve B Furber
- School of Computer Science, University of Manchester, Manchester, United Kingdom
| |
Collapse
|
47
|
A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning. eNeuro 2018; 5:eN-TNC-0301-17. [PMID: 29696150 PMCID: PMC5913731 DOI: 10.1523/eneuro.0301-17.2018] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Revised: 03/22/2018] [Accepted: 03/26/2018] [Indexed: 11/21/2022] Open
Abstract
Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance is reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. At the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations.
Collapse
|
48
|
Synaptic Tenacity or Lack Thereof: Spontaneous Remodeling of Synapses. Trends Neurosci 2018; 41:89-99. [DOI: 10.1016/j.tins.2017.12.003] [Citation(s) in RCA: 57] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Revised: 11/22/2017] [Accepted: 12/04/2017] [Indexed: 11/18/2022]
|
49
|
Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception. Sci Rep 2017; 7:8722. [PMID: 28821729 PMCID: PMC5562918 DOI: 10.1038/s41598-017-06519-y] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2017] [Accepted: 06/19/2017] [Indexed: 11/09/2022] Open
Abstract
The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
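A one-dimensional sketch of the weightless particle dynamics described here (simplified; the gain is hand-set rather than learned by maximum likelihood as in the paper): each "particle" follows the prior dynamics and is corrected by a gain times its own residual from the observation, with no importance weights.

```python
import numpy as np

rng = np.random.default_rng(9)
T, n_particles, dt = 500, 64, 0.01
x_true = 0.0
particles = rng.normal(0, 1, n_particles)
err = []

for t in range(T):
    # Hidden state: Ornstein-Uhlenbeck drift plus noise.
    x_true += (-x_true) * dt + 0.5 * np.sqrt(dt) * rng.normal()
    y = x_true + 0.1 * rng.normal()               # noisy observation
    # Weightless filter step: prior dynamics + gain * residual + diffusion.
    gain = 5.0
    particles += (-particles) * dt \
        + gain * (y - particles) * dt \
        + 0.5 * np.sqrt(dt) * rng.normal(size=n_particles)
    err.append((particles.mean() - x_true) ** 2)

print("mean squared tracking error:", np.mean(err[100:]))
```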
Collapse
|
50
|
Dynamic Reorganization of Neuronal Activity Patterns in Parietal Cortex. Cell 2017; 170:986-999.e16. [PMID: 28823559 DOI: 10.1016/j.cell.2017.07.021] [Citation(s) in RCA: 211] [Impact Index Per Article: 30.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Revised: 05/28/2017] [Accepted: 07/13/2017] [Indexed: 01/03/2023]
Abstract
Neuronal representations change as associations are learned between sensory stimuli and behavioral actions. However, it is poorly understood whether representations for learned associations stabilize in cortical association areas or continue to change following learning. We tracked the activity of posterior parietal cortex neurons for a month as mice stably performed a virtual-navigation task. The relationship between cells' activity and task features was mostly stable on single days but underwent major reorganization over weeks. The neurons informative about task features (trial type and maze locations) changed across days. Despite changes in individual cells, the population activity had statistically similar properties each day and stable information for over a week. As mice learned additional associations, new activity patterns emerged in the neurons used for existing representations without greatly affecting the rate of change of these representations. We propose that dynamic neuronal activity patterns could balance plasticity for learning and stability for memory.
Collapse
|