1. Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. [PMID: 38776950; DOI: 10.1162/neco_a_01671]
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific, testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
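The REINFORCE algorithm that this review uses as its prototype can be sketched in a few lines; the two-armed bandit task, learning rate, and running-average baseline below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def reinforce_bernoulli(p_reward=(0.2, 0.8), lr=0.2, n_trials=2000, seed=0):
    """Two-armed bandit learned with REINFORCE: sample an action from a
    Bernoulli policy, then move the policy weight along r * dlog pi / dw
    (here with a running-average reward baseline for variance reduction)."""
    rng = np.random.default_rng(seed)
    w, baseline = 0.0, 0.0
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + np.exp(-w))           # P(choose arm 1)
        a = int(rng.random() < p1)              # sample action from the policy
        r = float(rng.random() < p_reward[a])   # stochastic binary reward
        # score function: dlog pi(a)/dw = a - p1 for this Bernoulli policy
        w += lr * (r - baseline) * (a - p1)
        baseline += 0.05 * (r - baseline)
    return 1.0 / (1.0 + np.exp(-w))

p_final = reinforce_bernoulli()  # should approach 1, since arm 1 pays more often
```

The update uses only a sampled action, a scalar reward, and a locally computable score function, which is why REINFORCE serves as a useful reference point for biologically plausible learning rules.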
Affiliation(s)
- Colin Bredenberg: Center for Neural Science, New York University, New York, NY 10003, USA; Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin: Center for Neural Science, New York University, New York, NY 10003, USA; Center for Data Science, New York University, New York, NY 10011, USA
2. Song Y, Shin W, Kim P, Jeong J. Neural representations for multi-context visuomotor adaptation and the impact of common representation on multi-task performance: a multivariate decoding approach. Front Hum Neurosci 2023; 17:1221944. [PMID: 37822708; PMCID: PMC10562562; DOI: 10.3389/fnhum.2023.1221944]
Abstract
The human brain's remarkable motor adaptability stems from the formation of context representations and the use of a common context representation (e.g., an invariant task structure across task contexts) derived from structural learning. However, direct evaluation of context representations and structural learning in sensorimotor tasks remains limited. This study aimed to rigorously distinguish neural representations of visual, movement, and context levels crucial for multi-context visuomotor adaptation and investigate the association between representation commonality across task contexts and adaptation performance using multivariate decoding analysis with fMRI data. Here, we focused on three distinct task contexts, two of which share a rotation structure (i.e., visuomotor rotation contexts with -90° and +90° rotations, in which the mouse cursor's movement was rotated 90 degrees counterclockwise and clockwise relative to the hand-movement direction, respectively) and the remaining one does not (i.e., mirror-reversal context where the horizontal movement of the computer mouse was inverted). This study found that visual representations (i.e., visual direction) were decoded in the occipital area, while movement representations (i.e., hand-movement direction) were decoded across various visuomotor-related regions. These findings are consistent with prior research and the widely recognized roles of those areas. Task-context representations (i.e., either -90° rotation, +90° rotation, or mirror-reversal) were also distinguishable in various brain regions. Notably, these regions largely overlapped with those encoding visual and movement representations. This overlap suggests a potential intricate dependency of encoding visual and movement directions on the context information. 
Moreover, we discovered that higher task performance is associated with task-context representation commonality, as evidenced by negative correlations between task performance and task-context-decoding accuracy in various brain regions, potentially supporting structural learning. Importantly, despite limited similarities between tasks (e.g., rotation and mirror-reversal contexts), such association was still observed, suggesting an efficient mechanism in the brain that extracts commonalities from different task contexts (such as visuomotor rotations or mirror-reversal) at multiple structural levels, from high-level abstractions to lower-level details. In summary, while illuminating the intricate interplay between visuomotor processing and context information, our study highlights the efficiency of learning mechanisms, thereby paving the way for future exploration of the brain's versatile motor ability.
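The multivariate decoding logic described above (train a classifier on activity patterns, report cross-validated accuracy) can be sketched with a toy nearest-centroid decoder on synthetic "voxel" data; the pattern sizes, noise levels, and number of contexts are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def decode_accuracy(patterns, labels, n_folds=5):
    """Cross-validated nearest-centroid decoding of condition labels."""
    n = len(labels)
    folds = np.array_split(np.arange(n), n_folds)
    correct = 0
    for test_idx in folds:
        train_mask = np.ones(n, bool)
        train_mask[test_idx] = False
        classes = np.unique(labels)
        centroids = np.stack([patterns[train_mask & (labels == c)].mean(0)
                              for c in classes])
        for i in test_idx:
            d = np.linalg.norm(centroids - patterns[i], axis=1)
            correct += classes[np.argmin(d)] == labels[i]
    return correct / n

# synthetic "voxel" patterns for 3 task contexts, 20 trials each
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(3), 20)
means = rng.normal(0.0, 1.0, size=(3, 50))       # context-specific pattern
patterns = means[labels] + rng.normal(0.0, 1.0, size=(60, 50))
acc = decode_accuracy(patterns, labels)
```

Above-chance accuracy on held-out trials is what licenses the claim that a region "represents" the decoded variable; in the study, lower context-decoding accuracy (more common representation) correlated with better performance.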
Affiliation(s)
- Youngjo Song: Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Wooree Shin: Department of Bio and Brain Engineering, College of Engineering, KAIST, Daejeon, South Korea; Program of Brain and Cognitive Engineering, College of Engineering, KAIST, Daejeon, South Korea
- Pyeongsoo Kim: Department of Bio and Brain Engineering, College of Engineering, KAIST, Daejeon, South Korea
- Jaeseung Jeong: Department of Brain and Cognitive Sciences, College of Life Science and Bioengineering, KAIST, Daejeon, South Korea
3. Heald JB, Wolpert DM, Lengyel M. The Computational and Neural Bases of Context-Dependent Learning. Annu Rev Neurosci 2023; 46:233-258. [PMID: 36972611; PMCID: PMC10348919; DOI: 10.1146/annurev-neuro-092322-100402]
Abstract
Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
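The core computation of contextual inference, weighing each context's prediction against a noisy observation to get a posterior over contexts, can be illustrated with a one-line Bayes rule; the two motor contexts, observation noise, and prior below are illustrative assumptions rather than the review's specific model:

```python
import numpy as np

def context_posterior(obs, context_means, sigma, prior):
    """Posterior over discrete contexts given one noisy observation.

    Each context c predicts observations ~ Normal(context_means[c], sigma);
    contextual inference weighs these likelihoods by the prior."""
    log_lik = -0.5 * ((obs - context_means) / sigma) ** 2
    log_post = np.log(prior) + log_lik
    log_post -= log_post.max()               # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# two motor contexts predicting opposite perturbations (+30 and -30)
means = np.array([30.0, -30.0])
prior = np.array([0.5, 0.5])
p_ambiguous = context_posterior(0.0, means, sigma=20.0, prior=prior)
p_clear = context_posterior(25.0, means, sigma=20.0, prior=prior)
```

An observation halfway between the contexts' predictions leaves the posterior split, which is exactly the uncertain-context regime the review argues learning must cope with.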
Affiliation(s)
- James B Heald: Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Daniel M Wolpert: Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Máté Lengyel: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
4. Giotis C, Serb A, Manouras V, Stathopoulos S, Prodromakis T. Palimpsest memories stored in memristive synapses. Sci Adv 2022; 8:eabn7920. [PMID: 35731877; PMCID: PMC9217086; DOI: 10.1126/sciadv.abn7920]
Abstract
Biological synapses store multiple memories on top of each other in a palimpsest fashion and at different time scales. Palimpsest consolidation is facilitated by the interaction of hidden biochemical processes governing synaptic efficacy during varying lifetimes. This arrangement allows idle memories to be temporarily overwritten without being forgotten, while previously unseen memories are used in the short term. While embedded artificial intelligence can greatly benefit from this functionality, a practical demonstration in hardware is missing. Here, we show how the intrinsic properties of metal-oxide volatile memristors emulate the processes supporting biological palimpsest consolidation. Our memristive synapses exhibit an expanded, doubled capacity and protect a consolidated memory while up to hundreds of uncorrelated short-term memories temporarily overwrite it, without requiring specialized instructions. We further demonstrate this technology in the context of visual working memory. This showcases how emerging memory technologies can efficiently expand the capabilities of artificial intelligence hardware toward more generalized learning memories.
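The palimpsest idea, a volatile fast component that is overwritten by short-term memories while relaxing back toward a consolidated slow component, can be caricatured with a two-variable synapse; the decay constant, pattern sizes, and relaxation scheme below are illustrative assumptions, not the memristor physics of the paper:

```python
import numpy as np

def run_palimpsest(consolidated, writes, decay=0.8):
    """Fast/slow synapse sketch: each short-term write overwrites the fast
    (volatile) weight, which then relaxes back toward the consolidated slow
    weight, so the long-term memory survives many overwrites."""
    fast = consolidated.astype(float).copy()
    for pattern in writes:
        fast = pattern.astype(float)          # volatile overwrite
        for _ in range(10):                   # relaxation between writes
            fast = decay * fast + (1.0 - decay) * consolidated
    return fast

rng = np.random.default_rng(0)
slow = np.sign(rng.normal(size=100))                         # consolidated +/-1 memory
shorts = [np.sign(rng.normal(size=100)) for _ in range(50)]  # 50 uncorrelated overwrites
recovered = run_palimpsest(slow, shorts)
overlap = np.mean(np.sign(recovered) == slow)
```

After fifty overwrites the sign of the fast weight still recovers the consolidated pattern, because each write decays away faster than the slow component it sits on top of.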
Affiliation(s)
- Christos Giotis: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Alexander Serb: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK; Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Edinburgh EH9 3FB, UK
- Vasileios Manouras: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Spyros Stathopoulos: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Themis Prodromakis: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK; Centre for Electronics Frontiers, School of Engineering, University of Edinburgh, Edinburgh EH9 3FB, UK
5. Korcsak-Gorzo A, Müller MG, Baumbach A, Leng L, Breitwieser OJ, van Albada SJ, Senn W, Meier K, Legenstein R, Petrovici MA. Cortical oscillations support sampling-based computations in spiking neural networks. PLoS Comput Biol 2022; 18:e1009753. [PMID: 35324886; PMCID: PMC8947809; DOI: 10.1371/journal.pcbi.1009753]
Abstract
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, but between themselves can be very dissimilar, switching between such states can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration. Rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
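The link drawn above between oscillations and simulated tempering can be illustrated outside any spiking model: a Metropolis sampler on a double-well energy mixes between modes far better when its temperature oscillates than when it is fixed. The energy function, temperature schedule, and step counts are illustrative assumptions, not the paper's network:

```python
import numpy as np

def metropolis(n_steps, temperature_fn, seed=0):
    """Random-walk Metropolis on the double-well energy E(x) = (x^2 - 4)^2.

    temperature_fn(t) gives the temperature at step t; an oscillating
    schedule plays the role the paper assigns to cortical oscillations."""
    rng = np.random.default_rng(seed)
    energy = lambda x: (x * x - 4.0) ** 2
    x, xs = -2.0, []
    for t in range(n_steps):
        T = temperature_fn(t)
        prop = x + rng.normal(0.0, 1.0)
        dE = energy(prop) - energy(x)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            x = prop
        xs.append(x)
    return np.array(xs)

def mode_switches(xs):
    signs = np.sign(xs[np.abs(xs) > 1.0])     # ignore the barrier region
    return int(np.sum(signs[1:] != signs[:-1]))

fixed = metropolis(20000, lambda t: 1.0)
tempered = metropolis(20000, lambda t: 4.5 + 3.5 * np.sin(2 * np.pi * t / 200))
n_fixed, n_tempered = mode_switches(fixed), mode_switches(tempered)
```

At fixed low temperature the sampler stays trapped in the starting well; periodic high-temperature phases let it hop the barrier, which is the mixing benefit the paper attributes to rhythmically modulated background activity.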
Affiliation(s)
- Agnes Korcsak-Gorzo: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Michael G. Müller: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Andreas Baumbach: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
- Luziwei Leng: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sacha J. van Albada: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
- Walter Senn: Department of Physiology, University of Bern, Bern, Switzerland
- Karlheinz Meier: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Robert Legenstein: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Mihai A. Petrovici: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
6. Sanders H, Wilson MA, Gershman SJ. Hippocampal remapping as hidden state inference. eLife 2020; 9:e51140. [PMID: 32515352; PMCID: PMC7282808; DOI: 10.7554/elife.51140]
Abstract
Cells in the hippocampus tuned to spatial location (place cells) typically change their tuning when an animal changes context, a phenomenon known as remapping. A fundamental challenge to understanding remapping is the fact that what counts as a "context change" has never been precisely defined. Furthermore, different remapping phenomena have been classified on the basis of how much the tuning changes after different types and degrees of context change, but the relationship between these variables is not clear. We address these ambiguities by formalizing remapping in terms of hidden state inference. According to this view, remapping does not directly reflect objective, observable properties of the environment, but rather subjective beliefs about the hidden state of the environment. We show how the hidden state framework can resolve a number of puzzles about the nature of remapping.
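Hidden state inference of the kind invoked above can be sketched as a forward (Bayes) filter over two hidden contexts with sticky transitions; the cue statistics and stickiness are illustrative assumptions, not the paper's specific model:

```python
import numpy as np

def filter_hidden_state(cues, p_stay=0.95, p_cue_given_state=0.8):
    """Forward filter over two hidden contexts given a stream of binary cues.

    Each cue favours its matching context with probability p_cue_given_state;
    contexts are sticky (p_stay). The filtered posterior plays the role of
    the animal's belief about which map currently applies."""
    lik = np.array([[p_cue_given_state, 1 - p_cue_given_state],
                    [1 - p_cue_given_state, p_cue_given_state]])
    T = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])
    belief, beliefs = np.array([0.5, 0.5]), []
    for c in cues:
        belief = T.T @ belief            # predict: contexts rarely switch
        belief = belief * lik[:, c]      # update: weigh by cue likelihood
        belief /= belief.sum()
        beliefs.append(belief.copy())
    return np.array(beliefs)

# 30 cues from environment A, then an abrupt switch to environment B
cues = np.array([0] * 30 + [1] * 30)
B = filter_hidden_state(cues)
```

Because the belief lags the cue switch by a few ambiguous observations, an abrupt cue change produces a transient period of uncertain, switchable beliefs, which is how this framework accounts for phenomena like flickering between maps.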
Affiliation(s)
- Honi Sanders: Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Matthew A Wilson: Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Samuel J Gershman: Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Department of Psychology, Harvard University, Cambridge, United States
7.
Abstract
In natural data, the class and intensity of stimuli are correlated. Current machine learning algorithms ignore this ubiquitous statistical property of stimuli, usually by requiring normalized inputs. From a biological perspective, it remains unclear how neural circuits may account for these dependencies in inference and learning. Here, we use a probabilistic framework to model class-specific intensity variations, and we derive approximate inference and online learning rules which reflect common hallmarks of neural computation. Concretely, we show that a neural circuit equipped with specific forms of synaptic and intrinsic plasticity (IP) can learn the class-specific features and intensities of stimuli simultaneously. Our model provides a normative interpretation of IP as a critical part of sensory learning and predicts that neurons can represent nontrivial input statistics in their excitabilities. Computationally, our approach yields improved statistical representations for realistic datasets in the visual and auditory domains. In particular, we demonstrate the utility of the model in estimating the contrastive stress of speech.
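The claim that a unit can learn a stimulus feature (via synaptic plasticity) and its typical intensity (via an excitability-like intrinsic parameter) at the same time can be sketched with two coupled online running averages; the input statistics, learning rate, and single-unit setup are illustrative assumptions, not the paper's full probabilistic model:

```python
import numpy as np

def learn_feature_and_intensity(X, lr=0.05):
    """One unit learns its preferred feature direction (synaptic weights)
    and the typical norm of its inputs (an intrinsic, excitability-like
    intensity parameter) from the same stream of stimuli."""
    w = X[0] / np.linalg.norm(X[0])
    intensity = np.linalg.norm(X[0])
    for x in X[1:]:
        u = x / np.linalg.norm(x)                     # direction carries class identity
        w += lr * (u - w)                             # synaptic (feature) learning
        w /= np.linalg.norm(w)
        intensity += lr * (np.linalg.norm(x) - intensity)  # IP-like intensity learning
    return w, intensity

rng = np.random.default_rng(1)
f_true = np.zeros(20)
f_true[3] = 1.0                                       # true feature direction
X = 5.0 * f_true + 0.1 * rng.normal(size=(400, 20))   # class-specific intensity 5
w, intensity = learn_feature_and_intensity(X)
```

Separating direction (feature) from norm (intensity) is the point: the weight update sees only normalized inputs, so the intensity statistic has to live in a separate, intrinsic variable, mirroring the paper's division of labor between synaptic and intrinsic plasticity.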
8. Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex. Neuron 2017; 92:530-543. [PMID: 27764674; PMCID: PMC5077700; DOI: 10.1016/j.neuron.2016.09.038]
Abstract
Neural responses in the visual cortex are variable, and there is now an abundance of data characterizing how the magnitude and structure of this variability depends on the stimulus. Current theories of cortical computation fail to account for these data; they either ignore variability altogether or only model its unstructured Poisson-like aspects. We develop a theory in which the cortex performs probabilistic inference such that population activity patterns represent statistical samples from the inferred probability distribution. Our main prediction is that perceptual uncertainty is directly encoded by the variability, rather than the average, of cortical responses. Through direct comparisons to previously published data as well as original data analyses, we show that a sampling-based probabilistic representation accounts for the structure of noise, signal, and spontaneous response variability and correlations in the primary visual cortex. These results suggest a novel role for neural variability in cortical dynamics and computations.
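The central prediction, that response variability (not the mean) encodes perceptual uncertainty, can be illustrated with a conjugate Gaussian toy model in which "responses" are samples from the posterior; the prior, observation noise levels, and sample counts are illustrative assumptions:

```python
import numpy as np

def posterior_samples(y, obs_var, prior_var=1.0, n=5000, seed=0):
    """Samples from the Gaussian posterior p(s | y) for prior s ~ N(0, prior_var)
    and likelihood y ~ N(s, obs_var). In a sampling-based code these would be
    successive population activity states, so across-time response variability
    directly reflects posterior uncertainty."""
    rng = np.random.default_rng(seed)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * y / obs_var
    return rng.normal(post_mean, np.sqrt(post_var), size=n)

high_contrast = posterior_samples(y=1.0, obs_var=0.1)   # reliable stimulus
low_contrast = posterior_samples(y=1.0, obs_var=2.0)    # unreliable stimulus
```

A less reliable (for example, lower-contrast) stimulus widens the posterior, so the sampled "responses" become more variable even though the stimulus value is unchanged, which is the signature the paper tests against V1 data.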
9. Mark S, Romani S, Jezek K, Tsodyks M. Theta-paced flickering between place-cell maps in the hippocampus: A model based on short-term synaptic plasticity. Hippocampus 2017; 27:959-970. [PMID: 28558154; PMCID: PMC5575492; DOI: 10.1002/hipo.22743]
Abstract
Hippocampal place cells represent different environments with distinct neural activity patterns. Following an abrupt switch between two familiar configurations of visual cues defining two environments, the hippocampal neural activity pattern switches almost immediately to the corresponding representation. Surprisingly, during a transient period following the switch to the new environment, occasional fast transitions between the two activity patterns (flickering) were observed (Jezek, Henriksen, Treves, Moser, & Moser, 2011). Here we show that an attractor neural network model of place cells with connections endowed with short-term synaptic plasticity can account for this phenomenon. A memory trace of the recent history of network activity is maintained in the state of the synapses, allowing the network to temporarily reactivate the representation of the previous environment in the absence of the corresponding sensory cues. The model predicts that the number of flickering events depends on the amplitude of the ongoing theta rhythm and the distance between the current position of the animal and its position at the time of cue switching. We test these predictions with new analysis of experimental data. These results suggest a potential role of short-term synaptic plasticity in recruiting the activity of different cell assemblies and in shaping hippocampal activity of behaving animals.
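The mechanism above hinges on short-term plasticity keeping a trace of recent activity in the synaptic state. A standard Tsodyks-Markram synapse shows this directly: after a presynaptic burst, utilization stays elevated and resources stay depleted for hundreds of milliseconds. The burst protocol and time constants below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

def tsodyks_markram(spike_times, t_end, dt=1.0, U=0.2, tau_rec=800.0, tau_fac=1000.0):
    """Tsodyks-Markram short-term plasticity: utilization u (facilitation)
    and resources x (depression). Both variables hold a trace of the recent
    spiking history long after the presynaptic activity stops."""
    spikes = set(spike_times)
    u, x = U, 1.0
    u_trace, x_trace = [], []
    for step in range(int(t_end / dt)):
        t = step * dt
        u += dt * (U - u) / tau_fac      # utilization decays to baseline
        x += dt * (1.0 - x) / tau_rec    # resources recover toward 1
        if t in spikes:
            u += U * (1.0 - u)           # facilitation jump on a spike
            x -= u * x                   # resource consumption on a spike
        u_trace.append(u)
        x_trace.append(x)
    return np.array(u_trace), np.array(x_trace)

# a 100 ms burst at 100 Hz, then 500 ms of silence (times in ms)
burst = [float(t) for t in range(0, 100, 10)]
u, x = tsodyks_markram(burst, t_end=600.0)
```

Half a second after the burst, both variables are still far from baseline; that lingering synaptic state is the "memory trace" the attractor model exploits to transiently reactivate the previous map.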
Affiliation(s)
- Shirley Mark: Department of Neurobiology, Weizmann Institute of Science, Rehovot, 76100, Israel
- Sandro Romani: HHMI Janelia Research Campus, Ashburn, Virginia, 20147, USA
- Karel Jezek: Biomedical Center, Faculty of Medicine in Pilsen, Charles University, Pilsen, 32300, Czech Republic; Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, 7491, Norway
- Misha Tsodyks: Department of Neurobiology, Weizmann Institute of Science, Rehovot, 76100, Israel
10. Aitchison L, Lengyel M. The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics. PLoS Comput Biol 2016; 12:e1005186. [PMID: 28027294; PMCID: PMC5189947; DOI: 10.1371/journal.pcbi.1005186]
Abstract
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. 
These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations.
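Hamiltonian Monte Carlo, the algorithm the paper maps onto excitatory-inhibitory dynamics, is easy to state in full: auxiliary momentum variables drive oscillatory leapfrog trajectories that sweep through state space before a Metropolis correction. The standard-normal target, step size, and trajectory length below are illustrative assumptions, not the paper's V1 circuit:

```python
import numpy as np

def hmc(x0, n_samples=2000, step=0.15, n_leapfrog=20, seed=0):
    """Minimal HMC for a standard 2-D Gaussian target. The momenta play the
    role the paper assigns to auxiliary (inhibitory) variables: they carry
    the state across large distances in oscillatory trajectories."""
    rng = np.random.default_rng(seed)
    U = lambda x: 0.5 * x @ x          # potential = -log p(x)
    gradU = lambda x: x
    x, samples = np.array(x0, float), []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)   # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * step * gradU(x_new)          # leapfrog integration
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new -= step * gradU(x_new)
        x_new += step * p_new
        p_new -= 0.5 * step * gradU(x_new)
        dH = (U(x_new) + 0.5 * p_new @ p_new) - (U(x) + 0.5 * p @ p)
        if dH <= 0.0 or rng.random() < np.exp(-dH):  # Metropolis correction
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

samples = hmc(x0=[3.0, -3.0])
```

Because trajectories follow Hamiltonian dynamics rather than a random walk, successive samples decorrelate quickly; that rapid traversal of state space is the order-of-magnitude speedup the paper attributes to oscillatory E-I dynamics.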
Affiliation(s)
- Laurence Aitchison: Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
- Máté Lengyel: Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Department of Cognitive Science, Central European University, Budapest, Hungary
11. Prince LY, Bacon TJ, Tigaret CM, Mellor JR. Neuromodulation of the Feedforward Dentate Gyrus-CA3 Microcircuit. Front Synaptic Neurosci 2016; 8:32. [PMID: 27799909; PMCID: PMC5065980; DOI: 10.3389/fnsyn.2016.00032]
Abstract
The feedforward dentate gyrus-CA3 microcircuit in the hippocampus is thought to activate ensembles of CA3 pyramidal cells and interneurons to encode and retrieve episodic memories. The creation of these CA3 ensembles depends on neuromodulatory input and synaptic plasticity within this microcircuit. Here we review the mechanisms by which the neuromodulators acetylcholine, noradrenaline, dopamine, and serotonin reconfigure this microcircuit and thereby infer the net effect of these modulators on the processes of episodic memory encoding and retrieval.
Affiliation(s)
- Luke Y Prince: Centre for Synaptic Plasticity, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
- Travis J Bacon: Centre for Synaptic Plasticity, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
- Cezar M Tigaret: Centre for Synaptic Plasticity, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
- Jack R Mellor: Centre for Synaptic Plasticity, School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
12. Computational principles of synaptic memory consolidation. Nat Neurosci 2016; 19:1697-1706. [PMID: 27694992; DOI: 10.1038/nn.4401]
Abstract
Memories are stored and retained through complex, coupled processes operating on multiple timescales. To understand the computational principles behind these intricate networks of interactions, we construct a broad class of synaptic models that efficiently harness biological complexity to preserve numerous memories by protecting them against the adverse effects of overwriting. The memory capacity scales almost linearly with the number of synapses, which is a substantial improvement over the square root scaling of previous models. This was achieved by combining multiple dynamical processes that initially store memories in fast variables and then progressively transfer them to slower variables. Notably, the interactions between fast and slow variables are bidirectional. The proposed models are robust to parameter perturbations and can explain several properties of biological memory, including delayed expression of synaptic modifications, metaplasticity, and spacing effects.
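The architecture described above, fast variables that receive plasticity events and bidirectionally coupled, progressively slower variables that absorb and protect them, can be caricatured with a short linear chain; the coupling constants, chain length, and write protocol are illustrative assumptions, not the paper's optimized model:

```python
import numpy as np

def run_cascade(inputs, n_vars=5, g=0.2):
    """Chain of synaptic variables in the spirit of multi-timescale
    consolidation models: plasticity drives the fast end (u[0]), and each
    variable relaxes bidirectionally toward its neighbours with couplings
    that shrink threefold per level, making deeper variables slower."""
    u = np.zeros(n_vars)
    couplings = g / 3.0 ** np.arange(n_vars - 1)   # g, g/3, g/9, ...
    history = []
    for x in inputs:
        du = np.zeros(n_vars)
        du[0] += g * (x - u[0])                    # plasticity hits the fast end
        for k in range(n_vars - 1):
            flow = couplings[k] * (u[k + 1] - u[k])
            du[k] += flow                          # bidirectional exchange
            du[k + 1] -= flow
        u = u + du
        history.append(u.copy())
    return np.array(history)

# write a memory (+1) for 200 steps, then let it decay for 800 steps
inputs = np.concatenate([np.ones(200), np.zeros(800)])
H = run_cascade(inputs)
```

During writing the fast variable charges first; during the decay phase the deep, slow variables hold their value longest, which is the qualitative mechanism behind the improved memory lifetimes described in the abstract.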
13.
Abstract
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards.

Significance statement: Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way.
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.
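The "extended iterative computations to estimate values online" that goal-directed choice requires can be illustrated, outside any neural implementation, by standard value iteration on a tiny sequential task; the three-state chain, rewards, and discount factor are illustrative assumptions, not the paper's tasks:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Exact value iteration: V(s) = max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ].
    This is the kind of iterative value estimation that goal-directed
    decision making requires (solved here exactly, not neurally)."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum('ast,t->as', P, V)  # action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# 3-state chain: action 0 advances toward a goal, action 1 stays put
P = np.zeros((2, 3, 3))
P[0, 0, 1] = P[0, 1, 2] = P[0, 2, 2] = 1.0   # "advance"
P[1, 0, 0] = P[1, 1, 1] = P[1, 2, 2] = 1.0   # "stay"
R = np.zeros((2, 3))
R[0, 1] = 1.0                                 # reward for stepping into the goal
V, policy = value_iteration(P, R)
```

Unlike the cached-value updates of habit learning, this computation must iterate over imagined future states before each choice, which is exactly the part the paper claims a spiking circuit can perform within behavioral decision times.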
14. Iigaya K. Adaptive learning and decision-making under uncertainty by metaplastic synapses guided by a surprise detection system. eLife 2016; 5:e18073. [PMID: 27504806; PMCID: PMC5008908; DOI: 10.7554/elife.18073]
Abstract
Recent experiments have shown that animals and humans have a remarkable ability to adapt their learning rate according to the volatility of the environment. Yet the neural mechanism responsible for such adaptive learning has remained unclear. To fill this gap, we investigated a biophysically inspired, metaplastic synaptic model within the context of a well-studied decision-making network, in which synapses can change their rate of plasticity in addition to their efficacy according to a reward-based learning rule. We found that our model, which assumes that synaptic plasticity is guided by a novel surprise detection system, captures a wide range of key experimental findings and performs as well as a Bayes optimal model, with remarkably little parameter tuning. Our results further demonstrate the computational power of synaptic plasticity, and provide insights into the circuit-level computation which underlies adaptive decision-making.
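The functional idea, a surprise signal that boosts the effective learning rate when the environment changes, can be sketched with a delta rule whose rate depends on whether recent errors exceed their running scale; the reward statistics, thresholds, and rates below are illustrative assumptions, not the paper's metaplastic synapse model:

```python
import numpy as np

def adaptive_tracker(rewards, lr_slow=0.02, lr_fast=0.5, tau=20.0):
    """Surprise-guided learning-rate sketch: track the reward rate with a slow
    learning rate, but switch to a fast rate whenever the current error is much
    larger than the running error scale (a crude surprise detection system)."""
    est, err_scale = 0.5, 0.1
    ests = []
    for r in rewards:
        err = r - est
        surprise = abs(err) > 3.0 * err_scale        # error far outside recent norm?
        est += (lr_fast if surprise else lr_slow) * err
        err_scale += (abs(err) - err_scale) / tau    # expected error magnitude
        ests.append(est)
    return np.array(ests)

# a stable environment, then an abrupt change in the mean reward
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.normal(0.2, 0.05, 300),
                          rng.normal(0.8, 0.05, 300)])
ests = adaptive_tracker(rewards)
```

In the stable phase the tracker integrates slowly (low effective learning rate, low estimation noise); after the change point the surprise flag fires and the estimate jumps within a few trials, reproducing the volatility-dependent learning rates described in the abstract.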
Collapse
Affiliation(s)
- Kiyohito Iigaya
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom; Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, United States; Department of Physics, Columbia University, New York, United States
| |
Collapse
|
15
|
Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci 2014; 6:8. [PMID: 24782758 PMCID: PMC3986567 DOI: 10.3389/fnsyn.2014.00008] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 03/20/2014] [Indexed: 12/28/2022] Open
Abstract
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling, in concert, several observed neural phenomena whose functional effects are only partially understood.
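The trace mechanism described above, low-pass-filtered spike activity used as a local probability estimate, with the weight given by a log-odds ratio, can be sketched in BCPNN style. The following is an illustrative toy, not the paper's spiking implementation: the trace rate, spike statistics, and regularizer are assumptions, and real BCPNN uses cascades of traces over several timescales rather than the single one shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 0.05                    # single trace update rate (toy value)
pi = pj = pij = 0.01          # running estimates of P(i), P(j), P(i,j)
eps = 1e-4                    # regularizer to keep the logs finite
for _ in range(2000):
    si = rng.random() < 0.3                   # presynaptic spike
    sj = rng.random() < (0.8 if si else 0.1)  # correlated postsynaptic spike
    pi += tau * (si - pi)                     # traces low-pass the spikes,
    pj += tau * (sj - pj)                     # locally estimating marginal
    pij += tau * (si * sj - pij)              # and joint firing probabilities
    # Hebbian-Bayesian weight: log-odds of joint vs. independent firing.
    w = np.log((pij + eps) / ((pi + eps) * (pj + eps)))
    bias = np.log(pj + eps)   # intrinsic (nonsynaptic) excitability term
```

Because the pre and post spikes are positively correlated here, the joint trace exceeds the product of the marginals and the weight settles at a positive value; the separate bias term is the model's nonsynaptic counterpart, carried by intrinsic currents rather than the synapse.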
Collapse
Affiliation(s)
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
| |
Collapse
|