1. Gontier C, Surace SC, Delvendahl I, Müller M, Pfister JP. Efficient sampling-based Bayesian Active Learning for synaptic characterization. PLoS Comput Biol 2023; 19:e1011342. PMID: 37603559; PMCID: PMC10470935; DOI: 10.1371/journal.pcbi.1011342.
Abstract
Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited as it requires performing high-dimensional integrations and optimizations in real time. Current methods are either too time consuming, or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology.
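The selection loop can be sketched with a sampling-based posterior over a single toy parameter. Everything below is an illustrative stand-in for the paper's method: a binomial "synapse" with one unknown release probability, probed by choosing how many sites to stimulate. The particle filter and mutual-information objective are the generic ingredients of sampling-based BAL; no names or constants are taken from the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def binom_pmf_matrix(n, ps):
    """pmf[k, i] = P(k releases out of n sites | release prob ps[i])."""
    ks = np.arange(n + 1)[:, None]
    combs = np.array([math.comb(n, j) for j in range(n + 1)], dtype=float)[:, None]
    return combs * ps[None, :] ** ks * (1.0 - ps[None, :]) ** (n - ks)

# Particle (sample-based) approximation of the posterior over the
# unknown release probability.
particles = rng.uniform(0.05, 0.95, size=2000)
weights = np.full(particles.size, 1.0 / particles.size)

def expected_info_gain(n):
    """Mutual information between the outcome of an n-site probe and p."""
    pmf = binom_pmf_matrix(n, particles)      # shape (n+1, n_particles)
    predictive = pmf @ weights                # marginal outcome distribution
    cond = sum(w * entropy(pmf[:, i]) for i, w in enumerate(weights))
    return entropy(predictive) - cond

p_true = 0.3
for trial in range(10):
    # 1) pick the stimulus expected to be most informative
    gains = {n: expected_info_gain(n) for n in range(1, 9)}
    n_star = max(gains, key=gains.get)
    # 2) run the "experiment"
    k_obs = rng.binomial(n_star, p_true)
    # 3) Bayesian update of the particle weights
    weights = weights * binom_pmf_matrix(n_star, particles)[k_obs]
    weights = weights / weights.sum()

estimate = float(weights @ particles)
```

The three-step loop (optimize expected information gain, stimulate, update) is the generic BAL recipe; the paper's contribution is making steps 1 and 3 fast enough for real-time experiments.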
Affiliation(s)
- Camille Gontier
- Department of Physiology, University of Bern, Bern, Switzerland
- Rehab Neural Engineering Labs, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Igor Delvendahl
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland
- Martin Müller
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland
- University Research Priority Program (URPP), Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zurich, Zurich, Switzerland
2. Conti D, Mora T. Nonequilibrium dynamics of adaptation in sensory systems. Phys Rev E 2022; 106:054404. PMID: 36559478; DOI: 10.1103/physreve.106.054404.
Abstract
Adaptation is used by biological sensory systems to respond to a wide range of environmental signals, by adapting their response properties to the statistics of the stimulus in order to maximize information transmission. We derive rules of optimal adaptation to changes in the mean and variance of a continuous stimulus in terms of Bayesian filters and map them onto stochastic equations that couple the state of the environment to an internal variable controlling the response function. We calculate numerical and exact results for the speed and accuracy of adaptation and its impact on information transmission. We find that, in the regime of efficient adaptation, the speed of adaptation scales sublinearly with the rate of change of the environment. Finally, we exploit the mathematical equivalence between adaptation and stochastic thermodynamics to quantitatively relate adaptation to the irreversibility of the adaptation time course, defined by the rate of entropy production. Our results suggest a means to empirically quantify adaptation in a model-free and nonparametric way.
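The paper's optimal-adaptation rules are Bayesian filters; the simplest concrete instance is a Kalman filter tracking a random-walk stimulus mean, with the posterior variance setting the adaptation speed. The variances below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

Q, R = 0.01, 1.0        # environment drift and observation noise variances
T = 5000

mu = np.cumsum(rng.normal(0.0, np.sqrt(Q), T))   # hidden stimulus mean
x = mu + rng.normal(0.0, np.sqrt(R), T)          # noisy observations

# Internal variable m tracks the posterior mean of mu; the posterior
# variance P sets the Kalman gain, i.e. the effective adaptation rate.
m, P = 0.0, 1.0
m_hist = np.empty(T)
for t in range(T):
    P += Q                      # prediction: uncertainty grows with drift
    K = P / (P + R)             # gain = adaptation speed
    m += K * (x[t] - m)         # move toward the new observation
    P *= 1.0 - K                # uncertainty shrinks after observing
    m_hist[t] = m

mse_filter = float(np.mean((m_hist - mu) ** 2))
mse_naive = float(np.mean((x - mu) ** 2))   # trusting each raw sample
```

At steady state P solves P = (P + Q)R/(P + Q + R), so slower environments (smaller Q) yield smaller gains: slower but more accurate adaptation.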
Affiliation(s)
- Daniele Conti
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
- Thierry Mora
- Laboratoire de Physique, École Normale Supérieure, CNRS, PSL Université, Sorbonne Université, Université de Paris, 75005 Paris, France
3. Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:66273. PMID: 34709176; PMCID: PMC8553337; DOI: 10.7554/elife.66273.
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called 'plasticity rules', is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.

Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal has been guided by human intuition, that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural, and reasonable, tendency to incrementally improve upon existing models rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on 'evolutionary algorithms'. These computer programs search for solutions to problems by mimicking the process of biological evolution, including the principle of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner, an example of reinforcement learning. Finally, in the third, 'supervised learning' scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity that solved the new task successfully. Using evolutionary algorithms to study how computers 'learn' will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
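A much-reduced sketch of the search: an elitist (1+λ)-style evolution strategy over the coefficients of a fixed rule template dw = lr*(a*err*pre + b*pre + c), scored by how well the trained neuron solves a toy supervised task. The paper evolves full symbolic expressions; the template, task, and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy supervised task: a linear neuron should reproduce target weights.
X = rng.normal(size=(200, 5))
w_target = rng.normal(size=5)
y = X @ w_target

def fitness(coeffs, lr=0.05, epochs=10):
    """Train with the candidate rule dw = lr*(a*err*pre + b*pre + c)
    and return the negative final error (higher is better)."""
    a, b, c = coeffs
    w = np.zeros(5)
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            err = y_i - w @ x_i
            w = w + lr * (a * err * x_i + b * x_i + c)
        if not np.isfinite(w).all() or np.abs(w).max() > 1e6:
            return -np.inf            # diverging rules are unfit
    return -float(np.mean((y - X @ w) ** 2))

parent = np.zeros(3)                  # start from the "null" rule
parent_fit = initial_fit = fitness(parent)
for gen in range(25):
    children = parent + rng.normal(0.0, 0.3, size=(6, 3))
    fits = [fitness(ch) for ch in children]
    best = int(np.argmax(fits))
    if fits[best] > parent_fit:       # elitist selection
        parent, parent_fit = children[best], fits[best]
```

Because the genome is just three readable coefficients, the winning rule stays interpretable; that interpretability is the paper's central design goal, achieved there with evolved symbolic expressions rather than a fixed template.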
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan
- RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
4. Sohn H, Narain D. Neural implementations of Bayesian inference. Curr Opin Neurobiol 2021; 70:121-129. PMID: 34678599; DOI: 10.1016/j.conb.2021.09.008.
Abstract
Bayesian inference has emerged as a general framework that captures how organisms make decisions under uncertainty. Recent experimental findings reveal disparate mechanisms for how the brain generates behaviors predicted by normative Bayesian theories. Here, we identify two broad classes of neural implementations for Bayesian inference: a modular class, where each probabilistic component of Bayesian computation is independently encoded, and a transform class, where uncertain measurements are converted to Bayesian estimates through latent processes. Many recent experimental neuroscience findings on probabilistic inference broadly fall into these classes. We identify potential avenues for synthesis across these two classes and the disparities that, at present, cannot be reconciled. We conclude that to distinguish among implementation hypotheses for Bayesian inference, we require greater engagement among theoretical and experimental neuroscientists in an effort that spans different scales of analysis, circuits, tasks, and species.
Affiliation(s)
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Devika Narain
- Department of Neuroscience, Erasmus University Medical Center, Rotterdam, 3015 CN, the Netherlands
5. Aitchison L, Jegminat J, Menendez JA, Pfister JP, Pouget A, Latham PE. Synaptic plasticity as Bayesian inference. Nat Neurosci 2021; 24:565-571. PMID: 33707754; DOI: 10.1038/s41593-021-00809-5.
Abstract
Learning, especially rapid learning, is critical for survival. However, learning is hard; a large number of synaptic weights must be set based on noisy, often ambiguous, sensory information. In such a high-noise regime, keeping track of probability distributions over weights is the optimal strategy. Here we hypothesize that synapses take that strategy; in essence, when they estimate weights, they include error bars. They then use that uncertainty to adjust their learning rates, with more uncertain weights having higher learning rates. We also make a second, independent, hypothesis: synapses communicate their uncertainty by linking it to variability in postsynaptic potential size, with more uncertainty leading to more variability. These two hypotheses cast synaptic plasticity as a problem of Bayesian inference, and thus provide a normative view of learning. They generalize known learning rules, offer an explanation for the large variability in the size of postsynaptic potentials and make falsifiable experimental predictions.
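The two hypotheses have a minimal Kalman-style realization for a single synapse: the weight is represented by a mean and a variance ("error bars"), the learning rate is the Kalman gain (hypothesis 1), and each postsynaptic potential is drawn with variance proportional to the weight uncertainty (hypothesis 2). All constants below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

T = 2000
drift, obs_noise = 1e-3, 0.5
w_true = 1.0 + np.cumsum(rng.normal(0.0, np.sqrt(drift), T))  # drifting weight

m, s2 = 0.0, 1.0      # the synapse's mean and variance over its weight
s2_start = s2
for t in range(T):
    x = rng.normal()                          # presynaptic activity
    s2 += drift                               # uncertainty grows over time
    # Hypothesis 2: PSP variability scales with weight uncertainty.
    psp = (m + np.sqrt(s2) * rng.normal()) * x
    y = w_true[t] * x + rng.normal(0.0, obs_noise)   # noisy feedback signal
    # Hypothesis 1: learning rate = Kalman gain, larger when uncertain.
    gain = s2 * x / (s2 * x ** 2 + obs_noise ** 2)
    m += gain * (y - m * x)
    s2 *= 1.0 - gain * x                      # confidence increases
```

Uncertain weights learn fast and fluctuate a lot; well-constrained weights do neither, which is the falsifiable link between learning rate and PSP variability.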
Affiliation(s)
- Laurence Aitchison
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Department of Computer Science, University of Bristol, Bristol, UK
- Jannes Jegminat
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland
- Department of Physiology, University of Bern, Bern, Switzerland
- Jorge Aurelio Menendez
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- CoMPLEX, University College London, London, UK
- Jean-Pascal Pfister
- Institute of Neuroinformatics, UZH/ETH Zurich, Zurich, Switzerland
- Department of Physiology, University of Bern, Bern, Switzerland
- Alexandre Pouget
- Gatsby Computational Neuroscience Unit, University College London, London, UK
- Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Peter E Latham
- Gatsby Computational Neuroscience Unit, University College London, London, UK
6. Presynaptic endoplasmic reticulum regulates short-term plasticity in hippocampal synapses. Commun Biol 2021; 4:241. PMID: 33623091; PMCID: PMC7902852; DOI: 10.1038/s42003-021-01761-7.
Abstract
Short-term plasticity (STP) preserves a brief history of synaptic activity that is communicated to the postsynaptic neuron. This is primarily regulated by a calcium signal initiated by voltage-dependent calcium channels in the presynaptic terminal. Imaging studies of CA3-CA1 synapses reveal the presence of another source of calcium, the endoplasmic reticulum (ER), in all presynaptic terminals. However, the precise role of the ER in modifying STP remains unexplored. We performed in silico experiments in synaptic geometries based on reconstructions of rat CA3-CA1 synapses to investigate the contribution of the ER. Our model predicts that the presynaptic ER is critical in generating the observed short-term plasticity profile of CA3-CA1 synapses and allows synapses with low release probability to operate more reliably. Blocking the ER lowers facilitation in a manner similar to what has previously been characterized in animal models of Alzheimer's disease, underscoring the important role played by presynaptic stores in normal function.
7. Nesse WH, Maler L, Longtin A. Enhanced Signal Detection by Adaptive Decorrelation of Interspike Intervals. Neural Comput 2020; 33:341-375. PMID: 33253034; DOI: 10.1162/neco_a_01347.
Abstract
Spike trains with negative interspike interval (ISI) correlations, in which long/short ISIs are more likely followed by short/long ISIs, are common in many neurons. They can be described by stochastic models with a spike-triggered adaptation variable. We analyze a phenomenon in these models where such statistically dependent ISI sequences arise in tandem with quasi-statistically independent and identically distributed (quasi-IID) adaptation variable sequences. The sequences of adaptation states and resulting ISIs are linked by a nonlinear decorrelating transformation. We establish general conditions on a family of stochastic spiking models that guarantee this quasi-IID property and establish bounds on the resulting baseline ISI correlations. Inputs that elicit weak firing rate changes in samples with many spikes are known to be more detectable when negative ISI correlations are present because they reduce spike count variance; this defines a variance-reduced firing rate coding benchmark. We performed a Fisher information analysis on these adapting models exhibiting ISI correlations to show that a spike pattern code based on the quasi-IID property achieves the upper bound of detection performance, surpassing rate codes with the same mean rate, including the variance-reduced rate code benchmark, by 20% to 30%. The information loss in rate codes arises because the benefits of reduced spike count variance cannot compensate for the lower firing rate gain due to adaptation. Since adaptation states have similar dynamics to synaptic responses, the quasi-IID decorrelation transformation of the spike train is plausibly implemented by downstream neurons through matched postsynaptic kinetics. This provides an explanation for observed coding performance in sensory systems that cannot be accounted for by rate coding, for example, at the detection threshold where rate changes can be insignificant.
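The basic phenomenon can be reproduced with a minimal spike-triggered adaptation model: each spike increments an adaptation variable that decays between spikes and lengthens the next interval. Constants below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

n_spikes = 5000
tau, jump = 20.0, 1.0                    # adaptation decay constant and per-spike kick
base, gain, noise_sd = 10.0, 50.0, 1.0   # ISI = base + gain*a + noise

a = 1.0
isis = np.empty(n_spikes)
for k in range(n_spikes):
    isis[k] = base + gain * a + noise_sd * rng.normal()
    a = (a + jump) * np.exp(-isis[k] / tau)   # spike kick, then decay

# A long interval lets the adaptation decay away, so the next interval
# tends to be short (and vice versa): negative lag-1 ISI correlation.
isi_corr = float(np.corrcoef(isis[:-1], isis[1:])[0, 1])
```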
Affiliation(s)
- William H Nesse
- Department of Mathematics, University of Utah, Salt Lake City, UT 84112, U.S.A.
- Leonard Maler
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- André Longtin
- Department of Physics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
8. Zhu JQ, Sanborn AN, Chater N. The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments. Psychol Rev 2020; 127:719-748. PMID: 32191073; PMCID: PMC7571263; DOI: 10.1037/rev0000190.
Abstract
Human probability judgments are systematically biased, in apparent tension with Bayesian models of cognition. Perhaps, however, the brain does not represent probabilities explicitly but instead approximates probabilistic calculations through a process of sampling, as used in computational probabilistic models in statistics. Naïve probability estimates can be obtained by calculating the relative frequency of an event within a sample, but these estimates tend to be extreme when the sample size is small. We propose instead that people use a generic prior to improve the accuracy of their probability estimates based on samples, and we call this model the Bayesian sampler. The Bayesian sampler trades off the coherence of probabilistic judgments for improved accuracy, and provides a single framework for explaining phenomena associated with diverse biases and heuristics such as conservatism and the conjunction fallacy. The approach turns out to provide a rational reinterpretation of "noise" in an important recent model of probability judgment, the probability theory plus noise model (Costello & Watts, 2014, 2016a, 2017; Costello & Watts, 2019; Costello, Watts, & Fisher, 2018), making equivalent average predictions for simple events, conjunctions, and disjunctions. The Bayesian sampler does, however, make distinct predictions for conditional probabilities and distributions of probability estimates. We show in two new experiments that this model better captures these mean judgments both qualitatively and quantitatively; which model best fits individual distributions of responses depends on the assumed size of the cognitive sample. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
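The Bayesian sampler's judgment is the posterior mean of a symmetric Beta(β, β) prior updated with a small mental sample: estimate = (k + β)/(N + 2β). A quick simulation (sample size and β are illustrative) shows the conservatism this produces:

```python
import numpy as np

rng = np.random.default_rng(5)

def bayesian_sampler(p_true, n_samples=5, beta=1.0, n_judgments=20000):
    """Judged probability: relative frequency in a small mental sample,
    regularized by a symmetric Beta(beta, beta) prior (posterior mean)."""
    k = rng.binomial(n_samples, p_true, size=n_judgments)
    return (k + beta) / (n_samples + 2.0 * beta)

low = float(bayesian_sampler(0.1).mean())    # judged well above the true 10%
high = float(bayesian_sampler(0.9).mean())   # judged well below the true 90%
```

The mean judgment is (N·p + β)/(N + 2β), so rare events are overestimated and common ones underestimated, and the distortion grows as the mental sample N shrinks.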
Affiliation(s)
- Nick Chater
- Warwick Business School, University of Warwick
9. Local Design Principles at Hippocampal Synapses Revealed by an Energy-Information Trade-Off. eNeuro 2020; 7:ENEURO.0521-19.2020. PMID: 32847867; PMCID: PMC7540928; DOI: 10.1523/eneuro.0521-19.2020.
Abstract
Synapses across different brain regions display distinct structure-function relationships. We investigated the interplay of fundamental design constraints that shape the transmission properties of the excitatory CA3-CA1 pyramidal cell connection, a prototypic synapse for studying the mechanisms of learning in the mammalian hippocampus. This small synapse is characterized by probabilistic release of transmitter, which is markedly facilitated in response to naturally occurring trains of action potentials. Based on a physiologically motivated computational model of the rat CA3 presynaptic terminal, we show how unreliability and short-term dynamics of vesicular release work together to regulate the trade-off of information transfer versus energy use. We propose that individual CA3-CA1 synapses are designed to operate near the maximum possible capacity of information transmission in an efficient manner. Experimental measurements reveal a wide range of vesicular release probabilities at hippocampal synapses, which may be a necessary consequence of long-term plasticity and homeostatic mechanisms that manifest as presynaptic modifications of the release probability. We show that the timescales and magnitude of short-term plasticity (STP) render synaptic information transfer nearly independent of differences in release probability. Thus, individual synapses transmit optimally while maintaining a heterogeneous distribution of presynaptic strengths indicative of synaptically encoded memory representations. Our results support the view that organizing principles that are evident on higher scales of neural organization percolate down to the design of an individual synapse.
10. Human group coordination in a sensorimotor task with neuron-like decision-making. Sci Rep 2020; 10:8226. PMID: 32427875; PMCID: PMC7237467; DOI: 10.1038/s41598-020-64091-4.
Abstract
The formation of cooperative groups of agents with limited information-processing capabilities to solve complex problems together is a fundamental building principle that cuts through multiple scales in biology from groups of cells to groups of humans. Here, we study an experimental paradigm where a group of humans is joined together to solve a common sensorimotor task that cannot be achieved by a single agent but relies on the cooperation of the group. In particular, each human acts as a neuron-like binary decision-maker that determines in each moment of time whether to be active or not. Inspired by the population vector method for movement decoding, each neuron-like decision-maker is assigned a preferred movement direction that the decision-maker is ignorant about. From the population vector reflecting the group activity, the movement of a cursor is determined, and the task for the group is to steer the cursor into a predefined target. As the preferred movement directions are unknown and players are not allowed to communicate, the group has to learn a control strategy on the fly from the shared visual feedback. Performance is analyzed by learning speed and accuracy, action synchronization, and group coherence. We study four different computational models of the observed behavior, including a perceptron model, a reinforcement learning model, a Bayesian inference model and a Thompson sampling model that efficiently approximates Bayes optimal behavior. The Bayes and especially the Thompson model excel in predicting the human group behavior compared to the other models, suggesting that internal models are crucial for adaptive coordination. We discuss benefits and limitations of our paradigm regarding a better understanding of distributed information processing.
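For readers unfamiliar with the best-fitting model, here is generic Beta-Bernoulli Thompson sampling on a two-action toy problem (not the paper's group task; reward probabilities are illustrative): the decision-maker samples a belief from its posterior and acts greedily on that sample.

```python
import numpy as np

rng = np.random.default_rng(6)

p_reward = np.array([0.3, 0.7])      # unknown to the agent
alpha = np.ones(2)                   # Beta posterior: successes + 1
beta = np.ones(2)                    # Beta posterior: failures + 1
counts = np.zeros(2, dtype=int)

for t in range(500):
    theta = rng.beta(alpha, beta)    # sample one belief per action
    action = int(np.argmax(theta))   # act greedily on the sampled belief
    reward = rng.random() < p_reward[action]
    alpha[action] += reward          # posterior update
    beta[action] += 1 - reward
    counts[action] += 1
```

Sampling (rather than taking the posterior mean) makes exploration automatic, which is why Thompson sampling efficiently approximates Bayes-optimal behavior.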
11. Bykowska O, Gontier C, Sax AL, Jia DW, Montero ML, Bird AD, Houghton C, Pfister JP, Costa RP. Model-Based Inference of Synaptic Transmission. Front Synaptic Neurosci 2019; 11:21. PMID: 31481887; PMCID: PMC6710341; DOI: 10.3389/fnsyn.2019.00021.
Abstract
Synaptic computation is believed to underlie many forms of animal behavior. A correct identification of synaptic transmission properties is thus crucial for a better understanding of how the brain processes information, stores memories and learns. Recently, a number of new statistical methods for inferring synaptic transmission parameters have been introduced. Here we review and contrast these developments, with a focus on methods aimed at inferring both synaptic release statistics and synaptic dynamics. Furthermore, based on recent proposals we discuss how such methods can be applied to data across different levels of investigation: from intracellular paired experiments to in vivo network-wide recordings. Overall, these developments open the window to reliably estimating synaptic parameters in behaving animals.
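A minimal instance of the model-based approach the review surveys: maximum-likelihood estimation of release probability under a binomial-quantal observation model. For brevity, N (release sites) and q (quantal size) are treated as known here, whereas the reviewed methods infer them jointly; all constants are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(7)

# Generative model: k ~ Binomial(N, p) vesicles released per stimulus,
# amplitude = k*q plus Gaussian recording noise.
N, p_true, q, sigma = 5, 0.4, 10.0, 1.0
k = rng.binomial(N, p_true, size=200)
amps = k * q + rng.normal(0.0, sigma, size=200)

def log_likelihood(p):
    """Mixture over the unobserved number of released vesicles."""
    ll = 0.0
    for a in amps:
        mix = sum(math.comb(N, j) * p ** j * (1.0 - p) ** (N - j)
                  * math.exp(-((a - j * q) ** 2) / (2.0 * sigma ** 2))
                  for j in range(N + 1))
        ll += math.log(mix / (sigma * math.sqrt(2.0 * math.pi)))
    return ll

grid = np.linspace(0.01, 0.99, 197)
p_hat = float(grid[np.argmax([log_likelihood(p) for p in grid])])
```

Treating the released count as a latent variable and maximizing the marginal likelihood is the common core of the statistical methods contrasted in the review.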
Affiliation(s)
- Ola Bykowska
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Camille Gontier
- Department of Physiology, University of Bern, Bern, Switzerland
- Anne-Lene Sax
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- David W. Jia
- Department of Physiology, Anatomy and Genetics, Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Milton Llera Montero
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- School of Psychological Science, Faculty of Life Sciences, University of Bristol, Bristol, United Kingdom
- Alex D. Bird
- Ernst Strungmann Institute for Neuroscience in Cooperation With Max Planck Society, Frankfurt, Germany
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Conor Houghton
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Jean-Pascal Pfister
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics and Neuroscience Center Zurich, University of Zurich/ETH Zurich, Zurich, Switzerland
- Rui Ponte Costa
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Department of Physiology, University of Bern, Bern, Switzerland
12. Castillo AE, Rossoni S, Niven JE. Matched Short-Term Depression and Recovery Encodes Interspike Interval at a Central Synapse. Sci Rep 2018; 8:13629. PMID: 30206296; PMCID: PMC6134063; DOI: 10.1038/s41598-018-31996-0.
Abstract
Reversible decreases in synaptic strength, known as short-term depression (STD), are widespread in neural circuits. Various computational roles have been attributed to STD but these tend to focus upon the initial depression rather than the subsequent recovery. We studied the role of STD and recovery at an excitatory synapse between the fast extensor tibiae (FETi) and flexor tibiae (flexor) motor neurons in the desert locust (Schistocerca gregaria) by making paired intracellular recordings in vivo. Over behaviorally relevant pre-synaptic spike frequencies, we found that this synapse undergoes matched frequency-dependent STD and recovery; higher frequency spikes that evoke stronger, faster STD also produce stronger, faster recovery. The precise matching of depression and recovery time constants at this synapse ensures that flexor excitatory post-synaptic potential (EPSP) amplitude encodes the presynaptic FETi interspike interval (ISI). Computational modelling shows that this precise matching enables the FETi-flexor synapse to linearly encode the ISI in the EPSP amplitude, a coding strategy that may be widespread in neural circuits.
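The basic encoding idea can be seen in the standard depression-only (Tsodyks-Markram-style) model, where a fraction U of available resources is released per spike and recovers with time constant tau_rec; the steady-state amplitude is then a monotonic function of the ISI. Note this generic model lacks the matched depression/recovery time constants the paper shows are needed for a linear ISI code; constants are illustrative.

```python
import numpy as np

U, tau_rec = 0.5, 100.0      # release fraction and recovery constant (ms)

def steady_state_epsp(isi_ms, n_spikes=200):
    """Drive the synapse at a fixed ISI and return the settled amplitude."""
    R = 1.0                                   # available resources
    for _ in range(n_spikes):
        R *= 1.0 - U                          # depletion at each spike
        R = 1.0 - (1.0 - R) * np.exp(-isi_ms / tau_rec)   # recovery
    return U * R                              # EPSP amplitude of the next spike

isis = np.arange(10.0, 201.0, 10.0)
amps = np.array([steady_state_epsp(isi) for isi in isis])
# amplitude grows monotonically with ISI: EPSP size encodes the interval
```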
Affiliation(s)
- Armando E Castillo
- School of Life Sciences and Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton, BN1 9QG, UK
- Centro de Neurociencias, Instituto de Investigaciones Científicas y Servicios de Alta Tecnología, Ciudad de Saber, Republic of Panama
- Sergio Rossoni
- School of Life Sciences and Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton, BN1 9QG, UK
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, CB2 3EJ, UK
- Jeremy E Niven
- School of Life Sciences and Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton, BN1 9QG, UK.
13. The Role of Short-Term Plasticity in Neuromorphic Learning: Learning from the Timing of Rate-Varying Events with Fatiguing Spike-Timing-Dependent Plasticity. IEEE Nanotechnology Magazine 2018. DOI: 10.1109/mnano.2018.2845479.
14.
Abstract
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
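The abstract describes an elastic quadratic penalty: after learning task A, each weight is anchored to its task-A value with a strength proportional to its (diagonal) Fisher information. A minimal linear-regression sketch of that idea, with illustrative tasks and constants rather than the paper's deep networks:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two conflicting linear-regression "tasks" sharing one 2-weight model.
XA = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])
yA = XA[:, 0] + 0.5 * rng.normal(size=500)   # task A: predict x0
XB = rng.normal(size=(500, 2))
yB = XB[:, 1]                                # task B: predict x1

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

def train(X, y, w, fisher=None, anchor=None, lam=2.0, lr=0.05, steps=500):
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        if fisher is not None:               # slow down important weights
            grad = grad + lam * fisher * (w - anchor)
        w = w - lr * grad
    return w

wA = train(XA, yA, np.zeros(2))              # learn task A first
# Diagonal Fisher estimate: mean squared per-sample gradient on task A.
per_sample = 2.0 * XA * (XA @ wA - yA)[:, None]
fisher = np.mean(per_sample ** 2, axis=0)

w_plain = train(XB, yB, wA.copy())                            # forgets A
w_ewc = train(XB, yB, wA.copy(), fisher=fisher, anchor=wA)    # protects A

forget_plain = mse(w_plain, XA, yA)
forget_ewc = mse(w_ewc, XA, yA)
```

Because task A leans on the first weight, its Fisher term is larger, so the penalty lets the second weight move for task B while holding the first near its task-A value.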
15. Mukunda CL, Narayanan R. Degeneracy in the regulation of short-term plasticity and synaptic filtering by presynaptic mechanisms. J Physiol 2017; 595:2611-2637. PMID: 28026868; DOI: 10.1113/jp273482.
Abstract
KEY POINTS We develop a new biophysically rooted, physiologically constrained conductance-based synaptic model to mechanistically account for short-term facilitation and depression, respectively through residual calcium and transmitter depletion kinetics. We address the specific question of how presynaptic components (including voltage-gated ion channels, pumps, buffers and release-handling mechanisms) and interactions among them define synaptic filtering and short-term plasticity profiles. Employing global sensitivity analyses (GSAs), we show that near-identical synaptic filters and short-term plasticity profiles could emerge from disparate presynaptic parametric combinations with weak pairwise correlations. Using virtual knockout models, a technique to address the question of channel-specific contributions within the GSA framework, we unveil the differential and variable impact of each ion channel on synaptic physiology. Our conclusions strengthen the argument that parametric and interactional complexity in biological systems should not be viewed from the limited curse-of-dimensionality standpoint, but from the evolutionarily advantageous perspective of providing functional robustness through degeneracy. ABSTRACT Information processing in neurons is known to emerge as a gestalt of pre- and post-synaptic filtering. However, the impact of presynaptic mechanisms on synaptic filters has not been quantitatively assessed. Here, we developed a biophysically rooted, conductance-based model synapse that was endowed with six different voltage-gated ion channels, calcium pumps, calcium buffer and neurotransmitter-replenishment mechanisms in the presynaptic terminal. We tuned our model to match the short-term plasticity profile and band-pass structure of Schaffer collateral synapses, and performed sensitivity analyses to demonstrate that presynaptic voltage-gated ion channels regulated synaptic filters through changes in excitability and associated calcium influx. 
These sensitivity analyses also revealed that calcium- and release-control mechanisms were effective regulators of synaptic filters, but accomplished this without changes in terminal excitability or calcium influx. Next, to perform global sensitivity analysis, we generated 7000 randomized models spanning 15 presynaptic parameters, and computed eight different physiological measurements in each of these models. We validated these models by applying experimentally obtained bounds on their measurements, and found 104 (∼1.5%) models to match the validation criteria for all eight measurements. Analysing these valid models, we demonstrate that analogous synaptic filters emerge from disparate combinations of presynaptic parameters exhibiting weak pairwise correlations. Finally, using virtual knockout models, we establish the variable and differential impact of different presynaptic channels on synaptic filters, underlining the critical importance of interactions among different presynaptic components in defining synaptic physiology. Our results have significant implications for protein-localization strategies required for physiological robustness and for degeneracy in long-term synaptic plasticity profiles.
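The global sensitivity analysis described in this abstract (random sampling of the parameter space, validation against experimental bounds, weak pairwise correlations among valid models) can be illustrated with a toy stand-in for the authors' conductance-based synapse. The Tsodyks-Markram-style model, parameter ranges, and ±20% bounds below are illustrative assumptions, not the study's 15-parameter model or its eight measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def psc_train(U, tau_rec, tau_fac, dts):
    """Toy Tsodyks-Markram-style synapse (illustrative stand-in for the
    paper's conductance-based model). Returns response amplitudes to a
    spike train with inter-spike intervals dts (in seconds)."""
    u, R, out = 0.0, 1.0, []
    for dt in dts:
        R = 1 - (1 - R) * np.exp(-dt / tau_rec)  # resource recovery
        u = u * np.exp(-dt / tau_fac)            # facilitation decay
        u = u + U * (1 - u)                      # facilitation jump at spike
        out.append(u * R)                        # response amplitude
        R = R * (1 - u)                          # vesicle depletion
    return np.array(out)

def measures(p):
    """Normalized short-term plasticity profile for a 50 Hz train."""
    train = psc_train(*p, dts=[1e9, 0.02, 0.02, 0.02])
    return train[1:] / train[0]

# reference model and +/-20% "experimental" bounds on each measurement
ref = measures((0.4, 0.5, 0.05))
lo_b, hi_b = 0.8 * ref, 1.2 * ref

# global sensitivity analysis: uniform random sampling of parameter space
N = 5000
samples = np.column_stack([rng.uniform(0.05, 0.95, N),   # U
                           rng.uniform(0.05, 2.00, N),   # tau_rec (s)
                           rng.uniform(0.01, 0.50, N)])  # tau_fac (s)
vals = np.array([measures(p) for p in samples])
valid = np.all((vals >= lo_b) & (vals <= hi_b), axis=1)

# degeneracy check: pairwise correlations among valid parameter sets
if valid.sum() > 1:
    corr = np.corrcoef(samples[valid].T)
```

As in the study, only a small fraction of sampled models typically passes validation, and weak off-diagonal entries of `corr` would indicate that disparate parameter combinations produce near-identical plasticity profiles.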
Collapse
Affiliation(s)
- Chinmayee L Mukunda
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
| | - Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
| |
Collapse
|
16
|
Aitchison L, Lengyel M. The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics. PLoS Comput Biol 2016; 12:e1005186. [PMID: 28027294 PMCID: PMC5189947 DOI: 10.1371/journal.pcbi.1005186] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2015] [Accepted: 10/06/2016] [Indexed: 12/19/2022] Open
Abstract
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. 
These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural oscillations that emerge in such excitatory-inhibitory networks: enhancing the efficiency of cortical computations. Our brain operates in the face of substantial uncertainty due to ambiguity in the inputs, and inherent unpredictability in the environment. Behavioural and neural evidence indicates that the brain often uses a close approximation of the optimal strategy, probabilistic inference, to interpret sensory inputs and make decisions under uncertainty. However, the circuit dynamics underlying such probabilistic computations are unknown. In particular, two fundamental properties of cortical responses, the presence of oscillations and transients, are difficult to reconcile with probabilistic inference. We show that excitatory-inhibitory neural networks are naturally suited to implement a particular inference algorithm, Hamiltonian Monte Carlo. Our network showed oscillations and transients like those found in the cortex and took advantage of these dynamical motifs to speed up inference by an order of magnitude.
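The Hamiltonian Monte Carlo algorithm that the circuit is proposed to implement can be sketched in a few lines. The 2-D Gaussian target below is a stand-in for the paper's V1 inference problem, and the step size and trajectory length are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def hmc_step(x, logp, grad, step=0.1, n_leap=20):
    """One HMC transition: resample momentum, integrate Hamiltonian
    dynamics with the leapfrog scheme, then accept/reject."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad(x_new)          # half step in momentum
    for _ in range(n_leap - 1):
        x_new += step * p_new                  # full step in position
        p_new += step * grad(x_new)            # full step in momentum
    x_new += step * p_new
    p_new += 0.5 * step * grad(x_new)          # final half step
    # Metropolis correction for discretization error
    dH = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < dH else x

# target: a 2-D Gaussian posterior (stand-in for the V1 inference problem)
prec = np.array([[2.0, 0.6], [0.6, 1.0]])      # precision matrix
logp = lambda x: -0.5 * x @ prec @ x
grad = lambda x: -prec @ x

x, samples = np.zeros(2), []
for _ in range(2000):
    x = hmc_step(x, logp, grad)
    samples.append(x)
samples = np.array(samples[500:])
emp_cov = np.cov(samples.T)                    # approximates inv(prec)
```

The momentum variable plays the role the paper assigns to inhibitory activity: its coupling to the sampled variable produces oscillatory trajectories that traverse a large volume of state space per transition, which is the source of the claimed speed-up over random-walk sampling.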
Collapse
Affiliation(s)
- Laurence Aitchison
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom
| | - Máté Lengyel
- Computational & Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Department of Cognitive Science, Central European University, Budapest, Hungary
| |
Collapse
|
17
|
Abstract
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level.
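The online value estimation problem that the spiking network is shown to solve can be stated compactly as value iteration on a small Markov decision process. The chain task below is a generic stand-in, not one of the paper's tasks, and this is the abstract computation rather than the authors' spiking implementation:

```python
import numpy as np

# Toy sequential task: a 5-state chain; action 0 steps left, action 1
# steps right; reward 1 is delivered on entering (or staying at) the
# rightmost state. A generic stand-in, not a task from the paper.
n_states, gamma = 5, 0.9

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

# Value iteration: the iterative online value-estimation computation
# whose solution the network's dynamics are claimed to approximate
V = np.zeros(n_states)
for _ in range(100):
    V = np.array([max(step(s, a)[1] + gamma * V[step(s, a)[0]]
                      for a in (0, 1)) for s in range(n_states)])

# greedy policy with respect to the converged values
policy = [int(np.argmax([step(s, a)[1] + gamma * V[step(s, a)[0]]
                         for a in (0, 1)])) for s in range(n_states)]
```

For this task the converged values increase monotonically toward the rewarded end, so the greedy policy moves right from every state.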
Collapse
|
18
|
Ferrati G, Martini FJ, Maravall M. Presynaptic Adenosine Receptor-Mediated Regulation of Diverse Thalamocortical Short-Term Plasticity in the Mouse Whisker Pathway. Front Neural Circuits 2016; 10:9. [PMID: 26941610 PMCID: PMC4763074 DOI: 10.3389/fncir.2016.00009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2015] [Accepted: 02/05/2016] [Indexed: 12/27/2022] Open
Abstract
Short-term synaptic plasticity (STP) sets the sensitivity of a synapse to incoming activity and determines the temporal patterns that it best transmits. In “driver” thalamocortical (TC) synaptic populations, STP is dominated by depression during stimulation from rest. However, during ongoing stimulation, lemniscal TC connections onto layer 4 neurons in mouse barrel cortex express variable STP. Each synapse responds to input trains with a distinct pattern of depression or facilitation around its mean steady-state response. As a result, in common with other synaptic populations, lemniscal TC synapses express diverse rather than uniform dynamics, allowing for a rich representation of temporally varying stimuli. Here, we show that this STP diversity is regulated presynaptically. Presynaptic adenosine receptors of the A1R type, but not kainate receptors (KARs), modulate STP behavior. Blocking the receptors does not eliminate diversity, indicating that diversity is related to heterogeneous expression of multiple mechanisms in the pathway from presynaptic calcium influx to neurotransmitter release.
Collapse
Affiliation(s)
- Giovanni Ferrati
- Instituto de Neurociencias de Alicante, UMH-CSIC, Sant Joan d'Alacant, Spain
| | | | - Miguel Maravall
- Instituto de Neurociencias de Alicante, UMH-CSIC, Sant Joan d'Alacant, Spain; School of Life Sciences, Sussex Neuroscience, University of Sussex, Brighton, UK
| |
Collapse
|
19
|
Emulating short-term synaptic dynamics with memristive devices. Sci Rep 2016; 6:18639. [PMID: 26725838 PMCID: PMC4698662 DOI: 10.1038/srep18639] [Citation(s) in RCA: 92] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2015] [Accepted: 11/19/2015] [Indexed: 11/12/2022] Open
Abstract
Neuromorphic architectures offer great promise for achieving computation capacities beyond conventional von Neumann machines. The essential elements for achieving this vision are highly scalable synaptic mimics that do not undermine biological fidelity. Here we demonstrate that single solid-state TiO2 memristors can exhibit non-associative plasticity phenomena observed in biological synapses, supported by their metastable memory state transition properties. We show that, contrary to conventional uses of solid-state memory, the existence of rate-limiting volatility is a key feature for capturing short-term synaptic dynamics. We also show how the temporal dynamics of our prototypes can be exploited to implement spatio-temporal computation, demonstrating the memristors' full potential for building biophysically realistic neural processing systems.
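The role of rate-limiting volatility can be illustrated with a minimal phenomenological model (not the paper's TiO2 device physics): each pulse increments an internal state that then decays, so the conductance ratio across a pulse pair depends on the inter-pulse interval, as in short-term facilitation:

```python
import numpy as np

def paired_pulse_ratio(dt, dw=0.3, tau=50e-3, g0=1.0):
    """Conductance ratio of two identical pulses separated by dt (s).
    Illustrative volatile-memory model, not the paper's TiO2 physics."""
    w = dw                        # first pulse sets the internal state
    g1 = g0 * (1 + w)
    w *= np.exp(-dt / tau)        # volatile decay between the pulses
    w += dw                       # second pulse
    g2 = g0 * (1 + w)
    return g2 / g1

short = paired_pulse_ratio(10e-3)     # pulses 10 ms apart
long_ = paired_pulse_ratio(500e-3)    # pulses 500 ms apart
# responses change only when pulses arrive within the decay window
```

Without the decay term the two responses would be identical regardless of timing, which is why volatility, normally a defect in solid-state memory, is what gives the device its temporal sensitivity.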
Collapse
|
20
|
Surace SC, Pfister JP. A Statistical Model for In Vivo Neuronal Dynamics. PLoS One 2015; 10:e0142435. [PMID: 26571371 PMCID: PMC4646699 DOI: 10.1371/journal.pone.0142435] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 10/21/2015] [Indexed: 11/19/2022] Open
Abstract
Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use for characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process in which the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, and arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize, and therefore precisely compare, various intracellular in vivo recordings from different animals and experimental conditions.
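A minimal sketch of this model class: an Ornstein-Uhlenbeck process stands in for the Gaussian-process subthreshold voltage (its autocovariance is exponential), and spikes are emitted with an intensity that depends exponentially on the voltage. All parameter values are illustrative, and the spike-history dependence of the full model is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

dt, T = 1e-3, 10.0
n = int(T / dt)
tau, mu, sigma = 20e-3, -65.0, 2.0   # OU time constant (s), mean and s.d. (mV)

# subthreshold voltage: Ornstein-Uhlenbeck process (a Gaussian process
# with exponential autocovariance and stationary s.d. sigma)
v = np.empty(n)
v[0] = mu
for t in range(1, n):
    v[t] = v[t-1] + dt * (mu - v[t-1]) / tau \
         + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()

# spike emission: intensity depends exponentially on the voltage
lam0, beta, v0 = 5.0, 0.5, -60.0     # base rate (Hz), gain, reference (mV)
lam = lam0 * np.exp(beta * (v - v0))
spikes = rng.uniform(size=n) < lam * dt   # Bernoulli approximation per bin
```

Fitting the full model would amount to estimating the autocovariance parameters of `v` and the nonlinearity parameters of `lam` from a recorded trace, rather than simulating them forward as done here.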
Collapse
Affiliation(s)
- Simone Carlo Surace
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
| | - Jean-Pascal Pfister
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
| |
Collapse
|
21
|
Steimer A, Schindler K. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States. PLoS One 2015. [PMID: 26203657 PMCID: PMC4512685 DOI: 10.1371/journal.pone.0132906] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Oscillations between high and low values of the membrane potential (UP and DOWN states respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon’s implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike’s preceding ISI. As we show, the EIF’s exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron’s ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved by increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
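The sampling scheme can be sketched by simulating a noise-driven EIF neuron and reading out each interspike interval as one sample. Parameter values below are generic textbook choices, not those analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential integrate-and-fire (EIF) neuron driven by noisy input;
# each interspike interval (ISI) is read out as one "sample".
dt = 0.05e-3                                   # time step (s)
tau_m, v_rest, v_T, d_T = 20e-3, -65.0, -55.0, 2.0
v_reset, v_spike = -70.0, -30.0

def isi_samples(I, sigma, n_spikes=100):
    """Simulate the EIF and return n_spikes interspike intervals."""
    v, t, t_last, isis = v_rest, 0.0, 0.0, []
    while len(isis) < n_spikes:
        t += dt
        drive = -(v - v_rest) + d_T * np.exp((v - v_T) / d_T) + I
        v += dt * drive / tau_m + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_spike:                       # spike: record ISI, reset
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return np.array(isis)

isis = isi_samples(I=12.0, sigma=10.0)         # input in mV-equivalent units
rate = 1.0 / isis.mean()
```

Each ISI is one analog sample; a downstream procedure based on random sampling, such as MCMC or message passing, could consume these values directly, as the abstract suggests.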
Collapse
Affiliation(s)
- Andreas Steimer
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| | - Kaspar Schindler
- Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
| |
Collapse
|
22
|
McElvain LE, Faulstich M, Jeanne JM, Moore JD, du Lac S. Implementation of linear sensory signaling via multiple coordinated mechanisms at central vestibular nerve synapses. Neuron 2015; 85:1132-44. [PMID: 25704949 DOI: 10.1016/j.neuron.2015.01.017] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2014] [Revised: 11/10/2014] [Accepted: 01/16/2015] [Indexed: 12/29/2022]
Abstract
Signal transfer in neural circuits is dynamically modified by the recent history of neuronal activity. Short-term plasticity endows synapses with nonlinear transmission properties, yet synapses in sensory and motor circuits are capable of signaling linearly over a wide range of presynaptic firing rates. How do such synapses achieve rate-invariant transmission despite history-dependent nonlinearities? Here, ultrastructural, biophysical, and computational analyses demonstrate that concerted molecular, anatomical, and physiological refinements are required for central vestibular nerve synapses to linearly transmit rate-coded sensory signals. Vestibular synapses operate in a physiological regime of steady-state depression imposed by tonic firing. Rate-invariant transmission relies on brief presynaptic action potentials that delimit calcium influx, large pools of rapidly mobilized vesicles, multiple low-probability release sites, robust postsynaptic receptor sensitivity, and efficient transmitter clearance. Broadband linear synaptic filtering of head motion signals is thus achieved by coordinately tuned synaptic machinery that maintains physiological operation within inherent cell biological limitations.
Collapse
Affiliation(s)
- Lauren E McElvain
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA; Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Av. Brasília, Doca de Pedrouços, Lisbon 1400-038, Portugal.
| | | | - James M Jeanne
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
| | - Jeffrey D Moore
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
| | - Sascha du Lac
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA; Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Howard Hughes Medical Institute, La Jolla, CA 92037, USA; Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA.
| |
Collapse
|
23
|
Esposito U, Giugliano M, Vasilaki E. Adaptation of short-term plasticity parameters via error-driven learning may explain the correlation between activity-dependent synaptic properties, connectivity motifs and target specificity. Front Comput Neurosci 2015; 8:175. [PMID: 25688203 PMCID: PMC4310301 DOI: 10.3389/fncom.2014.00175] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Accepted: 12/31/2014] [Indexed: 01/09/2023] Open
Abstract
The anatomical connectivity among neurons has been experimentally found to be largely non-random across brain areas. This means that certain connectivity motifs occur at a higher frequency than would be expected by chance. Of particular interest, short-term synaptic plasticity properties were found to colocalize with specific motifs: an over-expression of bidirectional motifs has been found in neuronal pairs where short-term facilitation dominates synaptic transmission among the neurons, whereas an over-expression of unidirectional motifs has been observed in neuronal pairs where short-term depression dominates. In previous work we found that, given a network with fixed short-term properties, the interaction between short- and long-term plasticity of synaptic transmission is sufficient for the emergence of specific motifs. Here, we introduce an error-driven learning mechanism for short-term plasticity that may explain how such observed correspondences develop from randomly initialized dynamic synapses. By allowing synapses to change their properties, neurons are able to adapt their own activity depending on an error signal. This results in more rich dynamics and also, provided that the learning mechanism is target-specific, leads to specialized groups of synapses projecting onto functionally different targets, qualitatively replicating the experimental results of Wang and collaborators.
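The idea of adapting short-term plasticity parameters to reduce an output error can be sketched with a depressing Tsodyks-Markram synapse and a finite-difference gradient step on the release probability. This is a schematic stand-in, not the authors' learning rule:

```python
import numpy as np

def response(U, dts, tau_rec=0.5):
    """Responses of a depressing Tsodyks-Markram synapse (release
    probability U, recovery time constant tau_rec) to a spike train."""
    R, out = 1.0, []
    for dt in dts:
        R = 1 - (1 - R) * np.exp(-dt / tau_rec)   # resource recovery
        out.append(U * R)                         # response amplitude
        R *= (1 - U)                              # depletion at spike
    return np.array(out)

dts = [1e9, 0.05, 0.05, 0.05]          # 20 Hz train from rest
target = response(0.7, dts)            # desired postsynaptic pattern

# error-driven adaptation of U via a finite-difference gradient step
U, eta, eps = 0.2, 0.05, 1e-4
for _ in range(500):
    err = lambda u: np.sum((response(u, dts) - target) ** 2)
    grad = (err(U + eps) - err(U - eps)) / (2 * eps)
    U -= eta * grad
# U converges toward the value that generated the target
```

With a target-specific error signal, different groups of synapses would be driven toward different dynamic regimes (facilitating vs. depressing), which is the mechanism the abstract proposes for the observed target specificity.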
Collapse
Affiliation(s)
- Umberto Esposito
- Department of Computer Science, University of Sheffield, Sheffield, UK
| | - Michele Giugliano
- Department of Computer Science, University of Sheffield, Sheffield, UK; Theoretical Neurobiology and Neuroengineering Laboratory, Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium; Laboratory of Neural Microcircuitry, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Eleni Vasilaki
- Department of Computer Science, University of Sheffield, Sheffield, UK; Theoretical Neurobiology and Neuroengineering Laboratory, Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium; INSIGNEO Institute for in Silico Medicine, University of Sheffield, Sheffield, UK
| |
Collapse
|
24
|
Ujfalussy BB, Makara JK, Branco T, Lengyel M. Dendritic nonlinearities are tuned for efficient spike-based computations in cortical circuits. eLife 2015; 4:e10056. [PMID: 26705334 PMCID: PMC4912838 DOI: 10.7554/elife.10056] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Accepted: 12/23/2015] [Indexed: 01/27/2023] Open
Abstract
Cortical neurons integrate thousands of synaptic inputs in their dendrites in highly nonlinear ways. It is unknown how these dendritic nonlinearities in individual cells contribute to computations at the level of neural circuits. Here, we show that dendritic nonlinearities are critical for the efficient integration of synaptic inputs in circuits performing analog computations with spiking neurons. We developed a theory that formalizes how the dendritic nonlinearity that is optimal for integrating a neuron's synaptic inputs depends on the statistics of its presynaptic activity patterns. Based on their in vivo presynaptic population statistics (firing rates, membrane potential fluctuations, and correlations due to ensemble dynamics), our theory accurately predicted the responses of two different types of cortical pyramidal cells to patterned stimulation by two-photon glutamate uncaging. These results reveal a new computational principle underlying dendritic integration in cortical neurons by suggesting a functional link between cellular and systems-level properties of cortical circuits.
Collapse
Affiliation(s)
- Balázs B Ujfalussy
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary; MRC Laboratory of Molecular Biology, Cambridge, United Kingdom; Lendület Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary
| | - Judit K Makara
- Lendület Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary; Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, United States
| | - Tiago Branco
- MRC Laboratory of Molecular Biology, Cambridge, United Kingdom; Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom; Department of Cognitive Science, Central European University, Budapest, Hungary
| |
Collapse
|
25
|
de Jong APH, Fioravante D. Translating neuronal activity at the synapse: presynaptic calcium sensors in short-term plasticity. Front Cell Neurosci 2014; 8:356. [PMID: 25400547 PMCID: PMC4212674 DOI: 10.3389/fncel.2014.00356] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2014] [Accepted: 10/09/2014] [Indexed: 01/03/2023] Open
Abstract
The complex manner in which patterns of presynaptic neural activity are translated into short-term plasticity (STP) suggests the existence of multiple presynaptic calcium (Ca(2+)) sensors, which regulate the amplitude and time-course of STP and are the focus of this review. We describe two canonical Ca(2+)-binding protein domains (C2 domains and EF-hands) and define criteria that need to be met for a protein to qualify as a Ca(2+) sensor mediating STP. With these criteria in mind, we discuss various forms of STP and identify established and putative Ca(2+) sensors. We find that despite the multitude of proposed sensors, only three are well established in STP: Munc13, protein kinase C (PKC) and synaptotagmin-7. For putative sensors, we pinpoint open questions and potential pitfalls. Finally, we discuss how the molecular properties and modes of action of Ca(2+) sensors can explain their differential involvement in STP and shape net synaptic output.
Collapse
Affiliation(s)
| | - Diasynou Fioravante
- Department of Neurobiology, Physiology and Behavior, Center for Neuroscience, University of California Davis, Davis, CA, USA
| |
Collapse
|
26
|
Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci 2014; 6:8. [PMID: 24782758 PMCID: PMC3986567 DOI: 10.3389/fnsyn.2014.00008] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 03/20/2014] [Indexed: 12/28/2022] Open
Abstract
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. 
The model provides a biophysical realization of Bayesian computation by reconciling, in concert, several observed neural phenomena whose individual functional effects are only partially understood.
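The trace mechanism can be sketched in BCPNN style: exponentially decaying traces estimate pre-, post-, and joint firing rates, and the weight is the log odds of co-activation relative to chance. Rates and time constants below are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

dt, tau, eps, rate = 1e-3, 1.0, 0.01, 20.0
r_i = r_j = r_ij = eps                 # trace initialization (Hz)
w_trace = []

for t in range(20000):
    # two 20 Hz Poisson neurons: perfectly correlated for the first
    # 10 s, then firing independently
    correlated = t < 10000
    s_i = rng.uniform() < rate * dt
    s_j = s_i if correlated else (rng.uniform() < rate * dt)
    # exponentially decaying traces = running rate estimates
    r_i += dt / tau * (s_i / dt - r_i)
    r_j += dt / tau * (s_j / dt - r_j)
    r_ij += dt / tau * (s_i * s_j / dt - r_ij)
    # weight = log odds of co-activation relative to chance
    w_trace.append(np.log((r_ij + eps) / ((r_i + eps) * (r_j + eps) * dt)))

w_mid, w_end = w_trace[9999], w_trace[-1]
# the weight is large during the correlated phase and relaxes toward
# zero once the neurons decorrelate
```

The same traces, coupled to a modulatory signal, would give the spike-based reinforcement learning variant mentioned in the abstract; here only the unsupervised case is sketched.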
Collapse
Affiliation(s)
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
| | - Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
| |
Collapse
|
27
|
Vardi R, Marmari H, Kanter I. Error correction and fast detectors implemented by ultrafast neuronal plasticity. Phys Rev E Stat Nonlin Soft Matter Phys 2014; 89:042712. [PMID: 24827283 DOI: 10.1103/physreve.89.042712] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Indexed: 06/03/2023]
Abstract
We experimentally show that the neuron functions as a precise time integrator, where the accumulated changes in neuronal response latencies, under complex and random stimulation patterns, are solely a function of a global quantity, the average time lag between stimulations. In contrast, momentary leaps in the neuronal response latency follow trends of consecutive stimulations, indicating ultrafast neuronal plasticity. On a circuit level, this ultrafast neuronal plasticity phenomenon implements error-correction mechanisms and fast detectors for misplaced stimulations. Additionally, at moderate (high) stimulation rates this phenomenon destabilizes (stabilizes) a periodic neuronal activity disrupted by misplaced stimulations.
Collapse
Affiliation(s)
- Roni Vardi
- Gonda Interdisciplinary Brain Research Center and the Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan 52900, Israel
| | - Hagar Marmari
- Gonda Interdisciplinary Brain Research Center and the Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan 52900, Israel
| | - Ido Kanter
- Gonda Interdisciplinary Brain Research Center and the Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat-Gan 52900, Israel and Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel
| |
Collapse
|
28
|
Abstract
To produce sensation, neuronal pathways must transmit and process stimulus patterns that unfold over time. This behavior is determined by short-term synaptic plasticity (STP), which shapes the temporal filtering properties of synapses in a pathway. We explored STP variability across thalamocortical (TC) synapses, measuring whole-cell responses to stimulation of TC fibers in layer 4 neurons of mouse barrel cortex in vitro. As expected, STP during stimulation from rest was dominated by depression. However, STP during ongoing stimulation was strikingly diverse across TC connections. Diversity took the form of variable tuning to the latest interstimulus interval: some connections responded weakly to shorter intervals, while other connections were facilitated. These behaviors did not cluster into categories but formed a continuum. Diverse tuning did not require disynaptic inhibition. Hence, monosynaptic excitatory lemniscal TC connections onto layer 4 do not behave uniformly during ongoing stimulation. Each connection responds differentially to particular stimulation intervals, enriching the ability of the pathway to convey complex, temporally fluctuating information.
29
Tang R, Dai J. Biophoton signal transmission and processing in the brain. JOURNAL OF PHOTOCHEMISTRY AND PHOTOBIOLOGY B-BIOLOGY 2013; 139:71-5. [PMID: 24461927 DOI: 10.1016/j.jphotobiol.2013.12.008] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2013] [Revised: 12/13/2013] [Accepted: 12/13/2013] [Indexed: 11/19/2022]
Abstract
The transmission and processing of neural information in the nervous system plays a key role in neural functions. It is well accepted that neural communication is mediated by bioelectricity and chemical molecules via processes called bioelectrical and chemical transmission, respectively. These traditional theories give valuable explanations for the basic functions of the nervous system, but it remains difficult to construct generally accepted concepts or principles from them that reasonably explain higher brain functions and mental activities, such as perception, learning and memory, emotion and consciousness. Therefore, many unanswered questions and debates over neural encoding and the mechanisms of neuronal networks remain. Cell-to-cell communication by biophotons, also called ultra-weak photon emissions, has been demonstrated in several plants, bacteria and certain animal cells. Recently, both experimental evidence and theoretical speculation have suggested that biophotons may play a role in neural signal transmission and processing, contributing to the understanding of the higher functions of the nervous system. In this paper, we review the relevant experimental findings and discuss the possible underlying mechanisms of biophoton signal transmission and processing in the nervous system.
Affiliation(s)
- Rendong Tang
- Wuhan Institute for Neuroscience and Neuroengineering, South-Central University for Nationalities, Wuhan 430074, China
- Jiapei Dai
- Wuhan Institute for Neuroscience and Neuroengineering, South-Central University for Nationalities, Wuhan 430074, China
30
Blackman AV, Abrahamsson T, Costa RP, Lalanne T, Sjöström PJ. Target-cell-specific short-term plasticity in local circuits. Front Synaptic Neurosci 2013; 5:11. [PMID: 24367330 PMCID: PMC3854841 DOI: 10.3389/fnsyn.2013.00011] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2013] [Accepted: 11/07/2013] [Indexed: 11/14/2022] Open
Abstract
Short-term plasticity (STP) denotes changes in synaptic strength that last up to tens of seconds. It is generally thought that STP impacts information transfer across synaptic connections and may thereby provide neurons with, for example, the ability to detect input coherence, to maintain stability and to promote synchronization. STP is due to a combination of mechanisms, including vesicle depletion and calcium accumulation in synaptic terminals. Different forms of STP exist, depending on many factors, including synapse type. Recent evidence shows that synapse dependence holds true even for connections that originate from a single presynaptic cell, which implies that postsynaptic target cell type can determine synaptic short-term dynamics. This arrangement is surprising, since STP itself is chiefly due to presynaptic mechanisms. Target-specific synaptic dynamics in addition imply that STP is not a bug resulting from synapses fatiguing when driven too hard, but rather a feature that is selectively implemented in the brain for specific functional purposes. As an example, target-specific STP results in sequential somatic and dendritic inhibition in neocortical and hippocampal excitatory cells during high-frequency firing. Recent studies also show that the Elfn1 gene specifically controls STP at some synapse types. In addition, presynaptic NMDA receptors have been implicated in synapse-specific control of synaptic dynamics during high-frequency activity. We argue that synapse-specific STP deserves considerable further study, both experimentally and theoretically, since its function is not well known. We propose that synapse-specific STP has to be understood in the context of the local circuit, which requires combining different scientific disciplines ranging from molecular biology through electrophysiology to computer modeling.
Affiliation(s)
- Arne V Blackman
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Therese Abrahamsson
- Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
- Rui Ponte Costa
- Neuroinformatics Doctoral Training Centre, School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Txomin Lalanne
- Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada; Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- P Jesper Sjöström
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK; Department of Neurology and Neurosurgery, Centre for Research in Neuroscience, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, QC, Canada
31
Ehrlich DE, Ryan SJ, Hazra R, Guo JD, Rainnie DG. Postnatal maturation of GABAergic transmission in the rat basolateral amygdala. J Neurophysiol 2013; 110:926-41. [PMID: 23719209 PMCID: PMC3742982 DOI: 10.1152/jn.01105.2012] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2012] [Accepted: 05/28/2013] [Indexed: 12/12/2022] Open
Abstract
Many psychiatric disorders, including anxiety and autism spectrum disorders, have early ages of onset and high incidence in juveniles. To better treat and prevent these disorders, it is important to first understand normal development of brain circuits that process emotion. Healthy and maladaptive emotional processing involve the basolateral amygdala (BLA), dysfunction of which has been implicated in numerous psychiatric disorders. Normal function of the adult BLA relies on a fine balance of glutamatergic excitation and GABAergic inhibition. Elsewhere in the brain GABAergic transmission changes throughout development, but little is known about the maturation of GABAergic transmission in the BLA. Here we used whole cell patch-clamp recording and single-cell RT-PCR to study GABAergic transmission in rat BLA principal neurons at postnatal day (P)7, P14, P21, P28, and P35. GABAA currents exhibited a significant twofold reduction in rise time and nearly 25% reduction in decay time constant between P7 and P28. This corresponded with a shift in expression of GABAA receptor subunit mRNA from the α2- to the α1-subunit. The reversal potential for GABAA receptors transitioned from depolarizing to hyperpolarizing with age, from around -55 mV at P7 to -70 mV by P21. There was a corresponding shift in expression of opposing chloride pumps that influence the reversal, from NKCC1 to KCC2. Finally, we observed short-term depression of GABAA postsynaptic currents in immature neurons that was significantly and gradually abolished by P28. These findings reveal that in the developing BLA GABAergic transmission is highly dynamic, reaching maturity at the end of the first postnatal month.
Affiliation(s)
- David E Ehrlich
- Department of Psychiatry and Behavioral Sciences, Emory University School of Medicine, Atlanta, Georgia 30329, USA
32
Fung CCA, Wang H, Lam K, Wong KYM, Wu S. Resolution enhancement in neural networks with dynamical synapses. Front Comput Neurosci 2013; 7:73. [PMID: 23781197 PMCID: PMC3677988 DOI: 10.3389/fncom.2013.00073] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2013] [Accepted: 05/15/2013] [Indexed: 11/29/2022] Open
Abstract
Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information extra to spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss its relevance to an experiment on transparent motions in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of objects are too close, the firing rate profile will be very similar to that with one direction. As the difference in the moving directions of objects is large enough, the neuronal system would respond in such a way that the network enhances the resolution in the moving directions of the objects. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We will demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information processing performances in general.
Affiliation(s)
- C C Alan Fung
- Department of Physics, The Hong Kong University of Science and Technology Hong Kong, China
33
Steimer A, Douglas R. Spike-based probabilistic inference in analog graphical models using interspike-interval coding. Neural Comput 2013; 25:2303-54. [PMID: 23663144 DOI: 10.1162/neco_a_00477] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Temporal spike codes play a crucial role in neural information processing. In particular, there is strong experimental evidence that interspike intervals (ISIs) are used for stimulus representation in neural systems. However, very few algorithmic principles exploit the benefits of such temporal codes for probabilistic inference of stimuli or decisions. Here, we describe and rigorously prove the functional properties of a spike-based processor that uses ISI distributions to perform probabilistic inference. The abstract processor architecture serves as a building block for more concrete, neural implementations of the belief-propagation (BP) algorithm in arbitrary graphical models (e.g., Bayesian networks and factor graphs). The distributed nature of graphical models matches well with the architectural and functional constraints imposed by biology. In our model, ISI distributions represent the BP messages exchanged between factor nodes, leading to the interpretation of a single spike as a random sample that follows such a distribution. We verify the abstract processor model by numerical simulation in full graphs, and demonstrate that it can be applied even in the presence of analog variables. As a particular example, we also show results of a concrete, neural implementation of the processor, although in principle our approach is more flexible and allows different neurobiological interpretations. Furthermore, electrophysiological data from area LIP during behavioral experiments are assessed in light of ISI coding, leading to concrete testable, quantitative predictions and a more accurate description of these data compared to hitherto existing models.
Affiliation(s)
- Andreas Steimer
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich 8057, Switzerland.
34
Hennig MH. Theoretical models of synaptic short term plasticity. Front Comput Neurosci 2013; 7:45. [PMID: 23626536 PMCID: PMC3630333 DOI: 10.3389/fncom.2013.00045] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2012] [Accepted: 04/04/2013] [Indexed: 11/13/2022] Open
Abstract
Short term plasticity is a highly abundant form of rapid, activity-dependent modulation of synaptic efficacy. A shared set of mechanisms can cause both depression and enhancement of the postsynaptic response at different synapses, with important consequences for information processing. Mathematical models have been extensively used to study the mechanisms and roles of short term plasticity. This review provides an overview of existing models and their biological basis, and of their main properties. Special attention will be given to slow processes such as calcium channel inactivation and the effect of activation of presynaptic autoreceptors.
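The review above surveys mathematical models of short-term plasticity. As an illustrative sketch (not code from the review itself), the widely used Tsodyks-Markram formalism captures both depression and facilitation with two variables per synapse; parameter names and values below are illustrative assumptions:

```python
import math

def tm_responses(spike_times, U=0.2, tau_d=0.2, tau_f=0.6, A=1.0):
    """Relative postsynaptic response for each presynaptic spike.

    Tsodyks-Markram-style sketch: u is the utilization (facilitation)
    variable, x the fraction of available resources (depression).
    Parameters are illustrative, not fitted to any dataset.
    """
    u, x = 0.0, 1.0
    prev = None
    responses = []
    for t in spike_times:
        if prev is not None:
            dt = t - prev
            u *= math.exp(-dt / tau_f)                   # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_d)  # resources recover
        u += U * (1.0 - u)          # spike-triggered rise in utilization
        responses.append(A * u * x)
        x *= (1.0 - u)              # spike-triggered resource depletion
        prev = t
    return responses

train = [i * 0.02 for i in range(5)]  # 50 Hz train
# Depression-dominated regime: responses shrink along the train.
dep = tm_responses(train, U=0.5, tau_d=0.5, tau_f=0.01)
# Facilitation-dominated regime: responses grow along the train.
fac = tm_responses(train, U=0.05, tau_d=0.05, tau_f=1.0)
```

Whether a synapse depresses or facilitates here depends only on the parameter regime, which is the sense in which a shared set of mechanisms can produce both behaviors.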
Affiliation(s)
- Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh Edinburgh, UK
35
The application of nonlinear Dynamic Causal Modelling for fMRI in subjects at high genetic risk of schizophrenia. Neuroimage 2013; 73:16-29. [PMID: 23384525 DOI: 10.1016/j.neuroimage.2013.01.063] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2012] [Revised: 01/17/2013] [Accepted: 01/22/2013] [Indexed: 01/22/2023] Open
Abstract
Nonlinear Dynamic Causal Modelling (DCM) for fMRI provides computational modelling of gating mechanisms at the neuronal population level. It allows for estimations of connection strengths with nonlinear modulation within task-dependent networks. This paper presents an application of nonlinear DCM in subjects at high familial risk of schizophrenia performing the Hayling Sentence Completion Task (HSCT). We analysed scans of 19 healthy controls and 46 subjects at high familial risk of schizophrenia, which included 26 high risk subjects without psychotic symptoms and 20 subjects with psychotic symptoms. The activity-dependent network consists of the intra parietal cortex (IPS), inferior frontal gyrus (IFG), middle temporal gyrus (MTG), anterior cingulate cortex (ACC) and the mediodorsal (MD) thalamus. The connections between the MD thalamus and the IFG were gated by the MD thalamus. We used DCM to investigate altered connection strength of these connections. Bayesian Model Selection (BMS) at the group and family level was used to compare the optimal bilinear and nonlinear models. Bayesian Model Averaging (BMA) was used to assess the connection strengths with the gating from the MD thalamus and the IFG. The nonlinear models provided the better explanation of the data. Furthermore, the BMA analysis showed significantly lower connection strength of the thalamocortical connection with nonlinear modulation from the MD thalamus in high risk subjects with psychotic symptoms and those who subsequently developed schizophrenia. These findings demonstrate that nonlinear DCM provides a method to investigate altered connectivity at the level of neural circuits. The reduced connection strength with thalamic gating may be a neurobiomarker implicated in the development of psychotic symptoms. This study suggests that nonlinear DCM could lead to new insights into functional and effective dysconnection at the network level in subjects at high familial risk.
36
Lerner I, Bentin S, Shriki O. Spreading activation in an attractor network with latching dynamics: automatic semantic priming revisited. Cogn Sci 2012; 36:1339-82. [PMID: 23094718 PMCID: PMC3490422 DOI: 10.1111/cogs.12007] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In this study, we implemented SA in an attractor neural network model with distributed representations and created a unified framework for the two approaches. Our models assume a synaptic depression mechanism leading to autonomous transitions between encoded memory patterns (latching dynamics), which account for the major characteristics of automatic semantic priming in humans. Using computer simulations, we demonstrated how findings that challenged attractor-based networks in the past, such as mediated and asymmetric priming, are a natural consequence of our present model's dynamics. Puzzling results regarding backward priming were also given a straightforward explanation. In addition, the current model addresses some of the differences between semantic and associative relatedness and explains how these differences interact with stimulus onset asynchrony in priming experiments.
Affiliation(s)
- Itamar Lerner
- Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Jerusalem, Israel
- Shlomo Bentin
- Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Oren Shriki
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, MD, USA
37
Abstract
This paper presents a review of Bayesian models of brain and behaviour. We first review the basic principles of Bayesian inference. This is followed by descriptions of sampling and variational methods for approximate inference, and forward and backward recursions in time for inference in dynamical models. The review of behavioural models covers work in visual processing, sensory integration, sensorimotor integration, and collective decision making. The review of brain models covers a range of spatial scales from synapses to neurons and population codes, but with an emphasis on models of cortical hierarchies. We describe a simple hierarchical model which provides a mathematical framework relating constructs in Bayesian inference to those in neural computation. We close by reviewing recent theoretical developments in Bayesian inference for planning and control.
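The basic principles this review starts from can be made concrete with a toy example (an illustration, not material from the review): exact conjugate Bayesian inference for a coin's bias under a uniform prior, compared against a simple sampling-based approximation of the kind the review covers under approximate inference:

```python
import random

random.seed(0)
heads, tails = 7, 3  # illustrative data

# Exact: uniform prior Beta(1,1) with a Bernoulli likelihood gives a
# Beta(1 + heads, 1 + tails) posterior; its mean is available in closed form.
exact_mean = (1 + heads) / (2 + heads + tails)

# Sampling-based approximation: draw bias candidates from the prior and
# weight each by its likelihood (self-normalized importance sampling).
samples = [random.random() for _ in range(200_000)]
weights = [p**heads * (1 - p)**tails for p in samples]
mc_mean = sum(p * w for p, w in zip(samples, weights)) / sum(weights)
```

With enough samples the Monte Carlo estimate approaches the exact posterior mean; in the hierarchical models the review describes, only such approximate schemes remain tractable.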
Affiliation(s)
- William Penny
- Wellcome Trust Centre for Neuroimaging, University College, London WC1N 3BG, UK
38
Naud R, Gerstner W. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram. PLoS Comput Biol 2012; 8:e1002711. [PMID: 23055914 PMCID: PMC3464223 DOI: 10.1371/journal.pcbi.1002711] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2012] [Accepted: 08/03/2012] [Indexed: 11/18/2022] Open
Abstract
The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a ‘quasi-renewal equation’ which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction. How can information be encoded and decoded in populations of adapting neurons? A quantitative answer to this question requires a mathematical expression relating neuronal activity to the external stimulus, and, conversely, stimulus to neuronal activity. 
Although widely used equations and models exist for the special problem of relating external stimulus to the action potentials of a single neuron, the analogous problem of relating the external stimulus to the activity of a population has proven more difficult. There is a bothersome gap between the dynamics of single adapting neurons and the dynamics of populations. Moreover, if we ignore the single neurons and describe directly the population dynamics, we are faced with the ambiguity of the adapting neural code. The neural code of adapting populations is ambiguous because it is possible to observe a range of population activities in response to a given instantaneous input. Somehow the ambiguity is resolved by the knowledge of the population history, but how precisely? In this article we use approximation methods to provide mathematical expressions that describe the encoding and decoding of external stimuli in adapting populations. The theory presented here helps to bridge the gap between the dynamics of single neurons and that of populations.
Affiliation(s)
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne-EPFL, Lausanne, Switzerland
39
Short-term plasticity constrains spatial organization of a hippocampal presynaptic terminal. Proc Natl Acad Sci U S A 2012; 109:14657-62. [PMID: 22908295 DOI: 10.1073/pnas.1211971109] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023] Open
Abstract
Although the CA3-CA1 synapse is critically important for learning and memory, experimental limitations have to date prevented direct determination of the structural features that determine the response plasticity. Specifically, the local calcium influx responsible for vesicular release and short-term synaptic facilitation strongly depends on the distance between the voltage-dependent calcium channels (VDCCs) and the presynaptic active zone. Estimates for this distance range over two orders of magnitude. Here, we use a biophysically detailed computational model of the presynaptic bouton and demonstrate that available experimental data provide sufficient constraints to uniquely reconstruct the presynaptic architecture. We predict that for a typical CA3-CA1 synapse, there are ~70 VDCCs located 300 nm from the active zone. This result is surprising, because structural studies on other synapses in the hippocampus report much tighter spatial coupling. We demonstrate that the unusual structure of this synapse reflects its functional role in short-term plasticity (STP).
40
Abstract
Sensory receptive fields (RFs) vary as a function of stimulus properties and measurement methods. Previous stimuli or surrounding stimuli facilitate, suppress, or change the selectivity of sensory neurons' responses. Here, we propose that these spatiotemporal contextual dependencies are signatures of efficient perceptual inference and can be explained by a single neural mechanism, input targeted divisive inhibition. To respond both selectively and reliably, sensory neurons should behave as active predictors rather than passive filters. In particular, they should remove input they can predict ("explain away") from the synaptic inputs to all other neurons. This implies that RFs are constantly and dynamically reshaped by the spatial and temporal context, while the true selectivity of sensory neurons resides in their "predictive field." This approach motivates a reinvestigation of sensory representations and particularly the role and specificity of surround suppression and adaptation in sensory areas.
41
Bourjaily MA, Miller P. Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations. J Neurophysiol 2012; 108:513-27. [PMID: 22457467 DOI: 10.1152/jn.00806.2011] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating.
Affiliation(s)
- Mark A Bourjaily
- Neuroscience Program, Brandeis University, Waltham, MA 02454-9110, USA
42
Lochmann T, Deneve S. Neural processing as causal inference. Curr Opin Neurobiol 2012; 21:774-81. [PMID: 21742484 DOI: 10.1016/j.conb.2011.05.018] [Citation(s) in RCA: 129] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2011] [Revised: 05/16/2011] [Accepted: 05/20/2011] [Indexed: 11/30/2022]
Abstract
Perception is about making sense, that is, understanding what events in the outside world caused the sensory observations. Consistent with this intuition, many aspects of human behavior confronting noise and ambiguity are well explained by principles of causal inference. Extending these insights, recent studies have applied the same powerful set of tools to perceptual processing at the neural level. According to these approaches, microscopic neural structures solve elementary probabilistic tasks and can be combined to construct hierarchical predictive models of the sensory input. This framework suggests that variability in neural responses reflects the inherent uncertainty associated with sensory interpretations and that sensory neurons are active predictors rather than passive filters of their inputs. Causal inference can account parsimoniously and quantitatively for non-linear dynamical properties in single synapses, single neurons and sensory receptive fields.
Affiliation(s)
- Timm Lochmann
- College of Computer, Mathematical, and Natural Sciences, University of Maryland, College Park, MD 20742, United States
43
Fung CCA, Wong KYM, Wang H, Wu S. Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility. Neural Comput 2012; 24:1147-85. [PMID: 22295986 DOI: 10.1162/neco_a_00269] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network that is initially stimulated to an active state decays to a silent state very slowly, on the timescale of STD rather than of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
Affiliation(s)
- C C Alan Fung
- Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China.
44
A diversity of synaptic filters are created by temporal summation of excitation and inhibition. J Neurosci 2011; 31:14721-34. [PMID: 21994388 DOI: 10.1523/jneurosci.1424-11.2011] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Temporal filtering is a fundamental operation of nervous systems. In peripheral sensory systems, the temporal pattern of spiking activity can encode various stimulus qualities, and temporal filtering allows postsynaptic neurons to detect behaviorally relevant stimulus features from these spike trains. Intrinsic excitability, short-term synaptic plasticity, and voltage-dependent dendritic conductances have all been identified as mechanisms that can establish temporal filtering behavior in single neurons. Here we show that synaptic integration of temporally summating excitation and inhibition can establish diverse temporal filters of presynaptic input. Mormyrid electric fish communicate by varying the intervals between electric organ discharges. The timing of each discharge is coded by peripheral receptors into precisely timed spikes. Within the midbrain posterior exterolateral nucleus, temporal filtering by individual neurons results in selective responses to a particular range of presynaptic interspike intervals. These neurons are diverse in their temporal filtering properties, reflecting the wide range of intervals that must be detected during natural communication behavior. By manipulating presynaptic spike timing with high temporal resolution, we demonstrate that tuning to behaviorally relevant patterns of presynaptic input is similar in vivo and in vitro. We reveal that GABAergic inhibition plays a critical role in establishing different temporal filtering properties. Further, our results demonstrate that temporal summation of excitation and inhibition establishes selective responses to high and low rates of synaptic input, respectively. Simple models of synaptic integration reveal that variation in these two competing influences provides a basic mechanism for generating diverse temporal filters of synaptic input.