1. Fernández Santoro EM, Karim A, Warnaar P, De Zeeuw CI, Badura A, Negrello M. Purkinje cell models: past, present and future. Front Comput Neurosci 2024; 18:1426653. PMID: 39049990; PMCID: PMC11266113; DOI: 10.3389/fncom.2024.1426653.
Abstract
The investigation of the dynamics of Purkinje cell (PC) activity is crucial to unravel the role of the cerebellum in motor control, learning and cognitive processes. Within the cerebellar cortex (CC), these neurons receive all the incoming sensory and motor information, transform it and generate the entire cerebellar output. The relatively homogeneous and repetitive structure of the CC, common to all vertebrate species, suggests a single computation mechanism shared across all PCs. While PC models have been developed since the 1970s, a comprehensive review of contemporary models is currently lacking. Here, we provide an overview of PC models, ranging from those focused on single-cell intracellular PC dynamics to complex models that include synaptic and extrasynaptic inputs. We review how PC models can reproduce physiological activity of the neuron, including firing patterns, current and multistable dynamics, plateau potentials, calcium signaling, intrinsic and synaptic plasticity and input/output computations. We consider models focusing both on somatic and on dendritic computations. Our review provides a critical performance analysis of PC models with respect to known physiological data. We expect our synthesis to be useful in guiding future development of computational models that capture real-life PC dynamics in the context of cerebellar computations.
Affiliation(s)
- Arun Karim
- Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
- Pascal Warnaar
- Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
- Netherlands Institute for Neuroscience, Royal Academy of Arts and Sciences, Amsterdam, Netherlands
- Chris I. De Zeeuw
- Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
- Netherlands Institute for Neuroscience, Royal Academy of Arts and Sciences, Amsterdam, Netherlands
- Mario Negrello
- Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
2. Xie M, Muscinelli SP, Decker Harris K, Litwin-Kumar A. Task-dependent optimal representations for cerebellar learning. eLife 2023; 12:e82914. PMID: 37671785; PMCID: PMC10541175; DOI: 10.7554/elife.82914.
Abstract
The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
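The sparse-versus-dense trade-off the authors study can be illustrated with a toy granule-layer model (our sketch, not the paper's code; the layer sizes, the quantile-based thresholding, and the 0.9 input correlation are illustrative assumptions): sparser codes decorrelate similar inputs, while denser codes preserve their similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mf, n_gc = 50, 1000                      # mossy fibers, granule cells

# Random mossy-fiber -> granule-cell expansion.
W = rng.normal(size=(n_gc, n_mf))

def granule_code(x, coding_level):
    """Binary granule code; a global-inhibition-like threshold sets the
    fraction of active cells (the coding level)."""
    h = W @ x
    theta = np.quantile(h, 1.0 - coding_level)
    return (h > theta).astype(float)

x1 = rng.normal(size=n_mf)                 # a stimulus
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n_mf)  # a similar stimulus

overlaps = {}
for f in (0.05, 0.5):                      # sparse vs dense coding level
    g1, g2 = granule_code(x1, f), granule_code(x2, f)
    overlaps[f] = float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

print(overlaps)  # the sparse code decorrelates the two inputs more than the dense one
```

For a discrimination task the decorrelated (sparse) code is the useful one; for a smooth input-output map the preserved similarity of the dense code is, which is the paper's point.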
Affiliation(s)
- Marjorie Xie
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Samuel P Muscinelli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, United States
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
3. Tamura K, Yamamoto Y, Kobayashi T, Kuriyama R, Yamazaki T. Discrimination and learning of temporal input sequences in a cerebellar Purkinje cell model. Front Cell Neurosci 2023; 17:1075005. PMID: 36816857; PMCID: PMC9932327; DOI: 10.3389/fncel.2023.1075005.
Abstract
Introduction: Temporal information processing is essential for sequential contraction of various muscles with the appropriate timing and amplitude for fast and smooth motor control. These functions depend on dynamics of neural circuits, which consist of simple neurons that accumulate incoming spikes and emit other spikes. However, recent studies indicate that individual neurons can perform complex information processing through the nonlinear dynamics of dendrites with complex shapes and ion channels. Although we have extensive evidence that cerebellar circuits play a vital role in motor control, studies investigating the computational ability of single Purkinje cells are few. Methods: We found, through computer simulations, that a Purkinje cell can discriminate a series of pulses in two directions (from dendrite tip to soma, and from soma to dendrite), as cortical pyramidal cells do. Such direction sensitivity was observed in all compartment types of dendrites (spiny, smooth, and main), although they have different sets of ion channels. Results: We found that the shortest and longest discriminable sequences lasted for 60 ms (6 pulses with 10 ms intervals) and 4,000 ms (20 pulses with 200 ms intervals), respectively, and that the ratio of discriminable sequences within the region of the interesting parameter space was, on average, 3.3% (spiny), 3.2% (smooth), and 1.0% (main). For the direction sensitivity, a T-type Ca2+ channel was necessary, in contrast with cortical pyramidal cells, which rely on N-methyl-D-aspartate receptors (NMDARs). Furthermore, we tested whether the stimulus direction can be reversed by learning, specifically by simulated long-term depression, and obtained positive results. Discussion: Our results show that individual Purkinje cells can perform more complex information processing than is conventionally assumed for a single neuron, and suggest that Purkinje cells act as sequence discriminators, a useful role in motor control and learning.
Affiliation(s)
- Kaaya Tamura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Yuki Yamamoto
- Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Taira Kobayashi
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
- Rin Kuriyama
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
4. Vijayan A, Diwakar S. A cerebellum inspired spiking neural network as a multi-model for pattern classification and robotic trajectory prediction. Front Neurosci 2022; 16:909146. DOI: 10.3389/fnins.2022.909146.
Abstract
Spiking neural networks were introduced to understand spatiotemporal information processing in neurons and have found application in pattern encoding, data discrimination, and classification. Bioinspired network architectures are considered for event-driven tasks, and scientists have explored different theories based on their architecture and functioning. Motor tasks, for example, have networks inspired by cerebellar architecture, where the granular layer recodes sparse representations of the mossy fiber (MF) inputs and plays a major role in motor learning. Using abstractions from cerebellar connections and learning rules of a deep learning network (DLN), patterns were discriminated within datasets, and the same algorithm was used for trajectory optimization. In the current work, a cerebellum-inspired spiking neural network was implemented with dynamics of cerebellar neurons and learning mechanisms attributed to the granular layer, Purkinje cell (PC) layer, and cerebellar nuclei, interconnected by excitatory and inhibitory synapses. The model's pattern discrimination capability was tested for two tasks on standard machine learning (ML) datasets and on following the trajectory of a low-cost sensor-free robotic articulator. Tuned for supervised learning, the cerebellum-inspired network algorithm produced more generalized models than data-specific precision models on smaller training datasets. The model showed an accuracy of 72%, comparable to standard ML algorithms such as MLP (78%), Dl4jMlpClassifier (64%), RBFNetwork (71.4%), and libSVM-linear (85.7%). The cerebellar model increased the network's capability and decreased storage requirements, enabling faster computations. Additionally, the network model could implicitly reconstruct the trajectory of a 6-degree-of-freedom (DOF) robotic arm with a low error rate by reconstructing the kinematic parameters. The variability between the actual and predicted trajectory points was ± 3 cm (while moving to a position in a cuboid space of 25 × 30 × 40 cm). Although only a few of the known types of plasticity in the cerebellum were implemented as learning rules, the network model showed a generalized processing capability for a range of signals, modulating the data through the interconnected neural populations. In addition to potential use in sensor-free or feed-forward controllers for robotic arms and as a generalized pattern classification algorithm, this model has implications for motor learning theory.
5. Boboeva V, Pezzotta A, Clopath C. Free recall scaling laws and short-term memory effects in a latching attractor network. Proc Natl Acad Sci U S A 2021; 118:e2026092118. PMID: 34873052; PMCID: PMC8670499; DOI: 10.1073/pnas.2026092118.
Abstract
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
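The autoassociative backbone of such a model can be sketched in a few lines (a bare Hopfield network with Hebbian couplings; the firing-rate adaptation and global inhibition that produce the latching transitions in the paper are omitted, and the network size, memory count, and corruption level below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10                                # neurons, stored memories (low load)
xi = rng.choice([-1.0, 1.0], size=(P, N))     # random binary patterns

# Hebbian (autoassociative) couplings, no self-connections.
J = xi.T @ xi / N
np.fill_diagonal(J, 0.0)

# Cue the network with a corrupted version of memory 0 (15% of bits flipped).
s = xi[0].copy()
flip = rng.choice(N, size=30, replace=False)
s[flip] *= -1.0

for _ in range(20):                           # iterate to the attractor
    s = np.where(J @ s >= 0, 1.0, -1.0)

overlap = float(s @ xi[0]) / N                # 1.0 means perfect recall
print(overlap)
```

In the paper's model the retrieved state does not persist: adaptation destabilizes it and the network latches to a correlated memory, which is what produces the recall sequences and their scaling laws.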
Affiliation(s)
- Vezha Boboeva
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom
- Alberto Pezzotta
- Developmental Dynamics Laboratory, The Francis Crick Institute, London NW1 1AT, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom
6. Ingrosso A. Optimal learning with excitatory and inhibitory synapses. PLoS Comput Biol 2020; 16:e1008536. PMID: 33370266; PMCID: PMC7793294; DOI: 10.1371/journal.pcbi.1008536.
Abstract
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
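A minimal sketch of the setting (our construction, not the paper's statistical-mechanics analysis; the sizes, the 80/20 excitatory/inhibitory split, and plain projected gradient descent are assumptions): learning a linear input-output map under sign constraints tends to drive some weights to exactly zero, the "silent synapses" of the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n_syn, n_assoc = 200, 60                      # synapses, stored associations (load 0.3)

signs = np.where(rng.random(n_syn) < 0.8, 1.0, -1.0)  # 80% excitatory, 20% inhibitory
X = rng.normal(size=(n_assoc, n_syn))          # analog input patterns
y = rng.normal(size=n_assoc)                   # analog target outputs

w = 0.01 * signs                               # start inside the feasible sign cone
lr = 0.01
for _ in range(3000):                          # projected gradient descent
    grad = X.T @ (X @ w - y) / n_assoc
    w -= lr * grad
    w[signs * w < 0] = 0.0                     # project back: no sign flips allowed

train_err = float(np.mean((X @ w - y) ** 2))
silent_frac = float(np.mean(w == 0.0))
print(train_err, silent_frac)
```

At a load of 0.3 associations per synapse, below the capacity of 0.5 reported in the abstract, the constrained problem is still solvable, and the projection step is what leaves a fraction of synapses pinned at zero.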
Affiliation(s)
- Alessandro Ingrosso
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
7. Straub I, Witter L, Eshra A, Hoidis M, Byczkowicz N, Maas S, Delvendahl I, Dorgans K, Savier E, Bechmann I, Krueger M, Isope P, Hallermann S. Gradients in the mammalian cerebellar cortex enable Fourier-like transformation and improve storing capacity. eLife 2020; 9:e51771. PMID: 32022688; PMCID: PMC7002074; DOI: 10.7554/elife.51771.
Abstract
Cerebellar granule cells (GCs) make up the majority of all neurons in the vertebrate brain, but heterogeneities among GCs and potential functional consequences are poorly understood. Here, we identified unexpected gradients in the biophysical properties of GCs in mice. GCs closer to the white matter (inner-zone GCs) had higher firing thresholds and could sustain firing with larger current inputs than GCs closer to the Purkinje cell layer (outer-zone GCs). Dynamic Clamp experiments showed that inner- and outer-zone GCs preferentially respond to high- and low-frequency mossy fiber inputs, respectively, enabling dispersion of the mossy fiber input into its frequency components as performed by a Fourier transformation. Furthermore, inner-zone GCs have faster axonal conduction velocity and elicit faster synaptic potentials in Purkinje cells. Neuronal network modeling revealed that these gradients improve spike-timing precision of Purkinje cells and decrease the number of GCs required to learn spike-sequences. Thus, our study uncovers biophysical gradients in the cerebellar cortex enabling a Fourier-like transformation of mossy fiber inputs.
Affiliation(s)
- Isabelle Straub
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Laurens Witter
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), VU University, Amsterdam, Netherlands
- Abdelmoneim Eshra
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Miriam Hoidis
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Niklas Byczkowicz
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Sebastian Maas
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Igor Delvendahl
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Kevin Dorgans
- Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Elise Savier
- Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Ingo Bechmann
- Institute of Anatomy, Medical Faculty, Leipzig University, Leipzig, Germany
- Martin Krueger
- Institute of Anatomy, Medical Faculty, Leipzig University, Leipzig, Germany
- Philippe Isope
- Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Stefan Hallermann
- Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
8. Robust Associative Learning Is Sufficient to Explain the Structural and Dynamical Properties of Local Cortical Circuits. J Neurosci 2019; 39:6888-6904. PMID: 31270161; DOI: 10.1523/jneurosci.3218-18.2019.
Abstract
The ability of neural networks to associate successive states of network activity lies at the basis of many cognitive functions. Hence, we hypothesized that many ubiquitous structural and dynamical properties of local cortical networks result from associative learning. To test this hypothesis, we trained recurrent networks of excitatory and inhibitory neurons on memories composed of varying numbers of associations and compared the resulting network properties with those observed experimentally. We show that, when the network is robustly loaded with the near-maximum amount of associations it can support, it develops properties that are consistent with the observed probabilities of excitatory and inhibitory connections, shapes of connection weight distributions, overexpression of specific 2- and 3-neuron motifs, distributions of connection numbers in clusters of 3-8 neurons, sustained, irregular, and asynchronous firing activity, and balance of excitation and inhibition. In addition, memories loaded into the network can be retrieved, even in the presence of noise that is comparable with the baseline variations in the postsynaptic potential. The confluence of these results suggests that many structural and dynamical properties of local cortical networks are simply a byproduct of associative learning. We predict that the overexpression of excitatory-excitatory bidirectional connections observed in many cortical systems must be accompanied by underexpression of bidirectionally connected inhibitory-excitatory neuron pairs.
Significance Statement: Many structural and dynamical properties of local cortical networks are ubiquitously present across areas and species. Because synaptic connectivity is shaped by experience, we wondered whether continual learning, rather than genetic control, is responsible for producing such features. To answer this question, we developed a biologically constrained recurrent network of excitatory and inhibitory neurons capable of learning predefined sequences of network states. Embedding such associative memories into the network revealed that, when individual neurons are robustly loaded with a near-maximum amount of memories they can support, the network develops many properties that are consistent with experimental observations. Our findings suggest that basic structural and dynamical properties of local networks in the brain are simply a byproduct of learning and memory storage.
9. Ben-Shushan N, Tsodyks M. Stabilizing patterns in time: neural network approach. PLoS Comput Biol 2017; 13:e1005861. PMID: 29232710; PMCID: PMC5741269; DOI: 10.1371/journal.pcbi.1005861.
Abstract
Recurrent and feedback networks are capable of holding dynamic memories. Nonetheless, training a network for that task is challenging, as one must contend with non-linear propagation of errors in the system: small deviations from the desired dynamics due to error or inherent noise might have a dramatic effect in the future. A method to cope with these difficulties is thus needed. In this work we focus on recurrent networks with linear activation functions and a binary output unit, and characterize their ability to reproduce a temporal sequence of actions over the output unit. We suggest casting the temporal learning problem as a perceptron problem. In the discrete case a finite margin appears, providing the network some robustness to noise, with which it performs perfectly (i.e., producing a desired sequence for an arbitrary number of cycles flawlessly). In the continuous case the margin approaches zero when the output unit changes its state, hence the network is only able to reproduce the sequence with slight jitters. Numerical simulations suggest that in the discrete-time case, the longest sequence that can be learned scales, at best, as the square root of the network size. A dramatic effect occurs when learning several short sequences in parallel: their total length can substantially exceed the length of the longest single sequence the network can learn. This model easily generalizes to an arbitrary number of output units, which boosts its performance, as we demonstrate with two practical examples of sequence learning. This work suggests a way to overcome stability problems in training recurrent networks and further quantifies the performance of a network under the specific learning scheme. The ability to learn and execute actions at fine temporal resolution is crucial, as many of our day-to-day actions require such temporal ordering (e.g., limb movement and speech). Indeed, generating stable time-varying outputs using neural networks has attracted a lot of attention over recent years. One of the core problems in such a task is solution stability; previously it was only possible to produce a sequence for a limited number of cycles. Here we propose a robust approach to the task of learning time-varying sequences.
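The paper's reduction can be sketched directly (a toy version with assumptions of ours: a fixed random orthogonal recurrent matrix, a short random target sequence, and the classical perceptron rule stand in for the authors' setup): finding readout weights that reproduce a binary output sequence over the visited network states is exactly a perceptron problem.

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 100, 25                                    # network size, sequence length

# Fixed linear recurrent dynamics; a random orthogonal matrix keeps the
# state trajectory well spread out.
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
x = rng.normal(size=n)
states = np.empty((T, n))
for t in range(T):
    states[t] = x / np.linalg.norm(x)
    x = Q @ x

targets = rng.choice([-1.0, 1.0], size=T)         # desired binary output sequence

# Perceptron learning on (state, target) pairs.
w = np.zeros(n)
for _ in range(2000):
    mistakes = 0
    for xt, st in zip(states, targets):
        if st * (w @ xt) <= 0:                    # wrong (or zero) output: update
            w += st * xt
            mistakes += 1
    if mistakes == 0:                             # sequence reproduced with a margin
        break

reproduced = np.where(states @ w >= 0, 1.0, -1.0)
print(np.array_equal(reproduced, targets))
```

The margin the perceptron achieves at convergence is what gives the discrete-time solution its robustness to noise in the paper's analysis.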
Affiliation(s)
- Nadav Ben-Shushan
- Department of Physics, The Weizmann Institute of Science, Rehovot, Israel
- Misha Tsodyks
- Department of Neurobiology, The Weizmann Institute of Science, Rehovot, Israel
10. Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. Proc Natl Acad Sci U S A 2017; 114:E9366-E9375. PMID: 29042519; DOI: 10.1073/pnas.1705841114.
Abstract
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly.
11. Aguilar C, Chossat P, Krupa M, Lavigne F. Latching dynamics in neural networks with synaptic depression. PLoS One 2017; 12:e0183710. PMID: 28846727; PMCID: PMC5573234; DOI: 10.1371/journal.pone.0183710.
Abstract
Prediction is the ability of the brain to quickly activate a target concept in response to a related stimulus (prime). Experiments point to the existence of an overlap between the populations of neurons coding for different stimuli, and other experiments show that prime-target relations arise in the process of long-term memory formation. The classical modelling paradigm is that long-term memories correspond to stable steady states of a Hopfield network with Hebbian connectivity. Experiments show that short-term synaptic depression plays an important role in the processing of memories. This leads naturally to a computational model of priming, called latching dynamics: a stable state (prime) can become unstable and the system may converge to another transiently stable steady state (target). Hopfield network models of latching dynamics have been studied by means of numerical simulation; however, the conditions for the existence of this dynamics have not been elucidated. In this work we use a combination of analytic and numerical approaches to confirm that latching dynamics can exist in the context of a symmetric Hebbian learning rule; however, it lacks robustness and imposes a number of biologically unrealistic restrictions on the model. In particular, our work shows that the symmetry of the Hebbian rule is not an obstruction to the existence of latching dynamics, but fine tuning of the parameters of the model is needed.
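The destabilizing mechanism can be caricatured with two states competing through winner-take-all inhibition plus an activity-dependent fatigue variable (our stand-in for the paper's synaptic depression; the self-excitation bonus and all constants are arbitrary): each state is transiently stable, fatigues while active, and the network then latches to the other.

```python
import numpy as np

tau, dt, steps = 100.0, 1.0, 2000
bonus = 0.3                      # self-excitation of the active state (hysteresis)
a = np.array([0.0, 0.0])         # fatigue variables (stand-in for depression)
k = 0                            # index of the currently active state
winners = []
for _ in range(steps):
    drive = 1.0 - a              # common excitation minus fatigue
    drive[k] += bonus            # the active assembly transiently supports itself
    k = int(np.argmax(drive))    # winner-take-all via strong mutual inhibition
    r = np.zeros(2)
    r[k] = 1.0
    a += dt * (r - a) / tau      # fatigue builds where active, recovers elsewhere
    winners.append(k)

switch_points = np.flatnonzero(np.diff(winners) != 0)
dwells = np.diff(switch_points)  # how long each state stayed "stable"
n_switches = int(switch_points.size)
print(n_switches, int(dwells.min()))
```

The long dwell times followed by abrupt transitions are the toy analogue of a prime state giving way to a target state; in the paper's Hopfield setting the two states are overlapping memory patterns rather than labeled units.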
Affiliation(s)
- Carlos Aguilar
- Bases, Corpus, Langage, UMR 7320 CNRS, Université de Nice - Sophia Antipolis, 06357 Nice, France
- Pascal Chossat
- Laboratoire J.A. Dieudonné, UMR CNRS-UNS 7351, Université de Nice - Sophia Antipolis, 06108 Nice, France
- MathNeuro team, Inria Sophia Antipolis, 06902 Valbonne-Sophia Antipolis, France
- Martin Krupa
- Laboratoire J.A. Dieudonné, UMR CNRS-UNS 7351, Université de Nice - Sophia Antipolis, 06108 Nice, France
- MathNeuro team, Inria Sophia Antipolis, 06902 Valbonne-Sophia Antipolis, France
- Department of Applied Mathematics, University College Cork, Cork, Ireland
- Frédéric Lavigne
- Bases, Corpus, Langage, UMR 7320 CNRS, Université de Nice - Sophia Antipolis, 06357 Nice, France
12. Titley HK, Brunel N, Hansel C. Toward a Neurocentric View of Learning. Neuron 2017; 95:19-32. PMID: 28683265; PMCID: PMC5519140; DOI: 10.1016/j.neuron.2017.05.021.
Abstract
Synaptic plasticity (e.g., long-term potentiation [LTP]) is considered the cellular correlate of learning. Recent optogenetic studies on memory engram formation assign a critical role in learning to suprathreshold activation of neurons and their integration into active engrams ("engram cells"). Here we review evidence that ensemble integration may result from LTP but also from cell-autonomous changes in membrane excitability. We propose that synaptic plasticity determines synaptic connectivity maps, whereas intrinsic plasticity, possibly separated in time, amplifies neuronal responsiveness and acutely drives engram integration. Our proposal marks a move away from an exclusively synaptocentric toward a non-exclusive, neurocentric view of learning.
Affiliation(s)
- Heather K Titley
- Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Nicolas Brunel
- Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA; Department of Statistics, University of Chicago, Chicago, IL 60637, USA
- Christian Hansel
- Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
13. Safaryan K, Maex R, Davey N, Adams R, Steuber V. Nonspecific synaptic plasticity improves the recognition of sparse patterns degraded by local noise. Sci Rep 2017; 7:46550. PMID: 28425471; PMCID: PMC5397845; DOI: 10.1038/srep46550.
Abstract
Many forms of synaptic plasticity require the local production of volatile or rapidly diffusing substances such as nitric oxide. The nonspecific plasticity these neuromodulators may induce at neighboring non-active synapses is thought to be detrimental for the specificity of memory storage. We show here that memory retrieval may benefit from this non-specific plasticity when the applied sparse binary input patterns are degraded by local noise. Simulations of a biophysically realistic model of a cerebellar Purkinje cell in a pattern recognition task show that, in the absence of noise, leakage of plasticity to adjacent synapses degrades the recognition of sparse static patterns. However, above a local noise level of 20%, the model with nonspecific plasticity outperforms the standard, specific model. The gain in performance is greatest when the spatial distribution of noise in the input matches the range of diffusion-induced plasticity. Hence non-specific plasticity may offer a benefit in noisy environments or when the pressure to generalize is strong.
Collapse
Affiliation(s)
- Karen Safaryan
- Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, AL10 9AB Hatfield, United Kingdom; Department of Physics and Astronomy, Knudsen Hall, University of California, Los Angeles, CA 90095-0001, USA
| | - Reinoud Maex
- Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, AL10 9AB Hatfield, United Kingdom; Department of Cognitive Sciences, École Normale Supérieure, rue d'Ulm 25, 75005 Paris, France
| | - Neil Davey
- Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, AL10 9AB Hatfield, United Kingdom
| | - Rod Adams
- Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, AL10 9AB Hatfield, United Kingdom
| | - Volker Steuber
- Centre for Computer Science and Informatics Research, University of Hertfordshire, College Lane, AL10 9AB Hatfield, United Kingdom
| |
Collapse
|
14
|
Buchin A, Rieubland S, Häusser M, Gutkin BS, Roth A. Inverse Stochastic Resonance in Cerebellar Purkinje Cells. PLoS Comput Biol 2016; 12:e1005000. [PMID: 27541958 PMCID: PMC4991839 DOI: 10.1371/journal.pcbi.1005000] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Accepted: 05/29/2016] [Indexed: 11/18/2022] Open
Abstract
Purkinje neurons play an important role in cerebellar computation since their axons are the only projection from the cerebellar cortex to deeper cerebellar structures. They have complex internal dynamics, which allow them to fire spontaneously, display bistability, and also to be involved in network phenomena such as high frequency oscillations and travelling waves. Purkinje cells exhibit type II excitability, which can be revealed by a discontinuity in their f-I curves. We show that this excitability mechanism allows Purkinje cells to be efficiently inhibited by noise of a particular variance, a phenomenon known as inverse stochastic resonance (ISR). While ISR has been described in theoretical models of single neurons, here we provide the first experimental evidence for this effect. We find that an adaptive exponential integrate-and-fire model fitted to the basic Purkinje cell characteristics using a modified dynamic IV method displays ISR and bistability between the resting state and a repetitive activity limit cycle. ISR allows the Purkinje cell to operate in different functional regimes: the all-or-none toggle or the linear filter mode, depending on the variance of the synaptic input. We propose that synaptic noise allows Purkinje cells to quickly switch between these functional regimes. Using mutual information analysis, we demonstrate that ISR can lead to a locally optimal information transfer between the input and output spike train of the Purkinje cell. These results provide the first experimental evidence for ISR and suggest a functional role for ISR in cerebellar information processing.

How neurons generate output spikes in response to various combinations of inputs is a central issue in contemporary neuroscience. Due to their large dendritic tree and complex intrinsic properties, cerebellar Purkinje cells are an important model system to study this input-output transformation. Here we examine how noise can change the parameters of this transformation.
In experiments we found that spike generation in Purkinje cells can be efficiently inhibited by noise of a particular amplitude. This effect is called inverse stochastic resonance (ISR) and has previously been described only in theoretical models of neurons. We explain the mechanism underlying ISR using a simple model matching the properties of experimentally characterized Purkinje cells. We found that ISR is present in Purkinje cells when the mean input current is near threshold for spike generation. ISR can be explained by the co-existence of resting and spiking solutions of the simple model. Changes of the input noise variance change the lifetime of these resting and spiking states, suggesting a mechanism for a tunable filter with long time constants implemented by a Purkinje cell population in the cerebellum. Finally, ISR leads to locally optimal information transfer from the input to the output of a Purkinje cell.
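A minimal simulation sketch of the adaptive exponential integrate-and-fire (aEIF) model class used in this study. The parameter values below are generic textbook values, not the dynamic I-V fit reported by the authors, and the noise is a simple Euler-Maruyama current injection.

```python
import numpy as np

def simulate_adex(i_mean, i_sigma, t_max=2.0, dt=1e-4, seed=0):
    """Euler simulation of an aEIF neuron driven by a noisy current;
    returns the number of spikes fired in t_max seconds.
    All parameters are illustrative, not the fitted Purkinje cell values."""
    rng = np.random.default_rng(seed)
    C, gL, EL, VT, DT = 200e-12, 10e-9, -70e-3, -50e-3, 2e-3       # membrane
    a, b, tau_w, Vreset, Vspike = 2e-9, 20e-12, 300e-3, -58e-3, 0e-3  # adaptation
    v, w, spikes = EL, 0.0, 0
    for _ in range(int(t_max / dt)):
        noise = i_sigma * np.sqrt(dt) * rng.standard_normal()
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w) * dt / C \
             + (i_mean * dt + noise) / C
        dw = (a * (v - EL) - w) * dt / tau_w
        v, w = v + dv, w + dw
        if v >= Vspike:                      # spike: reset and adapt
            v, w, spikes = Vreset, w + b, spikes + 1
    return spikes

n_spikes = simulate_adex(i_mean=400e-12, i_sigma=0.0)
```

Sweeping `i_sigma` while holding `i_mean` just below rheobase is the kind of protocol in which the firing rate can pass through a minimum at intermediate noise, the signature of ISR described above.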
Collapse
Affiliation(s)
- Anatoly Buchin
- Group for Neural Theory, Laboratoire des Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Institute of Physics, Nanotechnology and Telecommunications, Peter the Great St. Petersburg Polytechnic University, Saint Petersburg, Russia
- Center for Cognition and Decision Making, Department of Psychology, NRU Higher School of Economics, Moscow, Russia
| | - Sarah Rieubland
- Wolfson Institute for Biomedical Research and Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
| | - Michael Häusser
- Wolfson Institute for Biomedical Research and Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
| | - Boris S. Gutkin
- Group for Neural Theory, Laboratoire des Neurosciences Cognitives, École Normale Supérieure, Paris, France
- Center for Cognition and Decision Making, Department of Psychology, NRU Higher School of Economics, Moscow, Russia
| | - Arnd Roth
- Wolfson Institute for Biomedical Research and Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
| |
Collapse
|
15
|
Yang Z, Santamaria F. Purkinje cell intrinsic excitability increases after synaptic long term depression. J Neurophysiol 2016; 116:1208-17. [PMID: 27306677 DOI: 10.1152/jn.00369.2016] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2016] [Accepted: 06/07/2016] [Indexed: 11/22/2022] Open
Abstract
Coding in cerebellar Purkinje cells not only depends on synaptic plasticity but also on their intrinsic membrane excitability. We performed whole cell patch-clamp recordings of Purkinje cells in sagittal cerebellar slices in mice. We found that inducing long-term depression (LTD) in the parallel fiber to Purkinje cell synapses results in an increase in the gain of the firing rate response. This increase in excitability is accompanied by an increase in the input resistance and a decrease in the amplitude of the hyperpolarization-activated cyclic nucleotide-gated (HCN) channel-mediated voltage sag. Application of a HCN channel blocker prevents the increase in input resistance and excitability without blocking the expression of synaptic LTD. We conclude that the induction of parallel fiber-Purkinje cell LTD is accompanied by an increase in excitability of Purkinje cells through downregulation of the HCN-mediated h current. We suggest that HCN downregulation is linked to the biochemical pathway that sustains synaptic LTD. Given the diversity of information carried by the parallel fiber system, we suggest that changes in intrinsic excitability enhance the coding capacity of the Purkinje cell to specific input sources.
Collapse
Affiliation(s)
- Zhen Yang
- UTSA Neurosciences Institute and Department of Biology, University of Texas at San Antonio, San Antonio, Texas
| | - Fidel Santamaria
- UTSA Neurosciences Institute and Department of Biology, University of Texas at San Antonio, San Antonio, Texas
| |
Collapse
|
16
|
Givon-Mayo R, Haar S, Aminov Y, Simons E, Donchin O. Long Pauses in Cerebellar Interneurons in Anesthetized Animals. THE CEREBELLUM 2016; 16:293-305. [PMID: 27255704 DOI: 10.1007/s12311-016-0792-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Are long pauses in the firing of cerebellar interneurons (CINs) related to Purkinje cell (PC) pauses? If PC pauses affect the larger network, then we should find a close relationship between CIN pauses and those in PCs. We recorded activity of 241 cerebellar cortical neurons (206 CINs and 35 PCs) in three anesthetized cats. One fifth of the CINs and more than half of the PCs were identified as pausing. Pauses in CINs and PCs showed some differences: CIN mean pause length was shorter, and, after pauses, only CINs had a sustained reduction in their firing rate (FR). Almost all pausing CINs fell into the same cluster when we used different methods of clustering CINs by their spontaneous activity. The mean spontaneous firing rate of that cluster was approximately 53 Hz. We also examined cross-correlations in simultaneously recorded neurons. Of 39 cell pairs examined, 14 (35%) had cross-correlations significantly different from those expected by chance. Almost half of the pairs with two CINs showed statistically significant negative correlations. In contrast, PC/CIN pairs did not often show significant effects in the cross-correlation (12/15 pairs). However, for both CIN/CIN and PC/CIN pairs, pauses in one unit tended to correspond to a reduction in the firing rate of the adjacent unit. In our view, our results support the possibility that previously reported PC bistability is part of a larger network response and not merely a biophysical property of PCs. Any functional role for PC bistability should probably be sought in the context of the broader network.
Collapse
Affiliation(s)
- Ronit Givon-Mayo
- The Faculty of Health Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Physical Therapy Department, Ono Academic College, Kiryat Ono, Israel
| | - Shlomi Haar
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Brain and Cognitive Sciences, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva, 8410501, Israel
| | - Yoav Aminov
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva, 8410501, Israel
| | - Esther Simons
- Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands
| | - Opher Donchin
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, P.O.B. 653, Beer-Sheva, 8410501, Israel.
- Department of Neuroscience, Erasmus MC, Rotterdam, The Netherlands.
| |
Collapse
|
17
|
Brunel N. Is cortical connectivity optimized for storing information? Nat Neurosci 2016; 19:749-755. [PMID: 27065365 DOI: 10.1038/nn.4286] [Citation(s) in RCA: 71] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 03/14/2016] [Indexed: 12/13/2022]
Abstract
Cortical networks are thought to be shaped by experience-dependent synaptic plasticity. Theoretical studies have shown that synaptic plasticity allows a network to store a memory of patterns of activity such that they become attractors of the dynamics of the network. Here we study the properties of the excitatory synaptic connectivity in a network that maximizes the number of stored patterns of activity in a robust fashion. We show that the resulting synaptic connectivity matrix has the following properties: it is sparse, with a large fraction of zero synaptic weights ('potential' synapses); bidirectionally coupled pairs of neurons are over-represented in comparison to a random network; and bidirectionally connected pairs have stronger synapses on average than unidirectionally connected pairs. All these features reproduce quantitatively available data on connectivity in cortex. This suggests synaptic connectivity in cortex is optimized to store a large number of attractor states in a robust fashion.
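The over-representation of bidirectionally coupled pairs can be quantified by comparing the observed fraction of reciprocally connected pairs with the p² expected in a directed Erdős-Rényi network of the same connection probability. A hypothetical sketch of that comparison (not the paper's analysis code):

```python
import numpy as np

def bidirectional_overrepresentation(conn):
    """Ratio of observed to chance-level reciprocal pairs in a binary
    directed connectivity matrix (diagonal ignored); a ratio > 1 means
    bidirectional pairs are over-represented, as reported for cortex."""
    n = conn.shape[0]
    mask = ~np.eye(n, dtype=bool)
    p = conn[mask].mean()                  # directed connection probability
    pairs = conn & conn.T                  # reciprocally connected pairs
    obs = pairs[np.triu(mask)].mean()      # fraction of bidirectional pairs
    return obs / p ** 2

rng = np.random.default_rng(0)
ratio = bidirectional_overrepresentation(rng.random((200, 200)) < 0.2)
```

For an unstructured random network the ratio stays near 1; networks optimized for storage, like the ones studied here, push it above 1.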
Collapse
Affiliation(s)
- Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, Illinois, USA; Department of Neurobiology, The University of Chicago, Chicago, Illinois, USA
| |
Collapse
|
18
|
Alemi A, Baldassi C, Brunel N, Zecchina R. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks. PLoS Comput Biol 2015; 11:e1004439. [PMID: 26291608 PMCID: PMC4546407 DOI: 10.1371/journal.pcbi.1004439] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Accepted: 06/19/2015] [Indexed: 11/30/2022] Open
Abstract
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

Recurrent neural networks have been shown to be able to store memory patterns as fixed point attractors of the dynamics of the network. The prototypical learning rule for storing memories in attractor neural networks is Hebbian learning, which can store up to 0.138N uncorrelated patterns in a recurrent network of N neurons. This is very far from the maximal capacity 2N, which can be achieved by supervised rules, e.g. by the perceptron learning rule. However, these rules are problematic for neurons in the neocortex or the hippocampus, since they rely on the computation of a supervisory error signal for each neuron of the network. We show here that the total synaptic input received by a neuron during the presentation of a sufficiently strong stimulus contains implicit information about the error, which can be extracted by setting three thresholds on the total input, defining depression and potentiation regions. The resulting learning rule implements basic biological constraints, and our simulations show that a network implementing it gets very close to the maximal capacity, both in the dense and sparse regimes, across all values of storage robustness. The rule predicts that when the total synaptic input goes beyond a threshold, no potentiation should occur.
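The three-threshold rule itself is simple enough to state in a few lines. The sketch below follows the verbal description in the abstract; the threshold values, learning rate, and the clipping of weights at zero (for excitatory synapses) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def three_threshold_update(w, x, h, th_low, th_mid, th_high, lr=0.05):
    """One step of the three-threshold rule: synapses with active inputs
    (x == 1) are potentiated when the local field h lies in (th_mid, th_high),
    depressed when it lies in (th_low, th_mid), and left unchanged when h is
    below th_low or above th_high. Weights are clipped at zero (excitatory)."""
    if th_low < h < th_high:
        sign = 1.0 if h > th_mid else -1.0
        w = w + lr * sign * x
    return np.clip(w, 0.0, None)
```

The "no plasticity above the highest threshold" branch is the prediction highlighted at the end of the summary: once the total input is strong enough, potentiation stops.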
Collapse
Affiliation(s)
- Alireza Alemi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
| | - Carlo Baldassi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
| | - Nicolas Brunel
- Departments of Statistics and Neurobiology, University of Chicago, Chicago, Illinois, United States of America
| | - Riccardo Zecchina
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
| |
Collapse
|
19
|
De Zeeuw CI, Hoogland TM. Reappraisal of Bergmann glial cells as modulators of cerebellar circuit function. Front Cell Neurosci 2015; 9:246. [PMID: 26190972 PMCID: PMC4488625 DOI: 10.3389/fncel.2015.00246] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2015] [Accepted: 06/17/2015] [Indexed: 11/13/2022] Open
Abstract
Just as there is a huge morphological and functional diversity of neuron types specialized for specific aspects of information processing in the brain, astrocytes have equally distinct morphologies and functions that aid optimal functioning of the circuits in which they are embedded. One type of astrocyte, the Bergmann glial cell (BG) of the cerebellum, is a prime example of a highly diversified astrocyte type, the architecture of which is adapted to the cerebellar circuit and facilitates an impressive range of functions that optimize information processing in the adult brain. In this review we expand on the function of the BG in the cerebellum to highlight the importance of astrocytes not only in housekeeping functions, but also in contributing to plasticity and information processing in the cerebellum.
Collapse
Affiliation(s)
- Chris I De Zeeuw
- Cerebellar Coordination and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands; Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
| | - Tycho M Hoogland
- Cerebellar Coordination and Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands; Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
| |
Collapse
|
20
|
Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules. PLoS Comput Biol 2015; 11:e1004265. [PMID: 26046817 PMCID: PMC4457870 DOI: 10.1371/journal.pcbi.1004265] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2014] [Accepted: 04/05/2015] [Indexed: 11/22/2022] Open
Abstract
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well-known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.

Recent estimates point out that a large part of the energetic budget of the mammalian cortex is spent in transmitting signals between neurons across synapses. Despite this, studies of learning and memory do not usually take energy efficiency into account. In this work we address the canonical computational problem of storing memories with synaptic plasticity. However, instead of optimising solely for information capacity, we search for energy efficient solutions. This implies that the number of functional synapses needs to be small (sparse connectivity) while maintaining high information.
We suggest imbalanced plasticity, a learning regime where net depression is stronger than potentiation, as a simple and plausible means to learn more efficient neural circuits. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.
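The correspondence between imbalanced plasticity and L1 regularisation can be seen in a toy sketch: adding a constant depression bias to every update of a non-negative weight acts like the (sub)gradient of an L1 penalty and drives weak synapses to exactly zero. The per-synapse "task gradient" and all parameter values below are hypothetical.

```python
import numpy as np

def imbalanced_update(w, grad, lr=0.1, depression_bias=0.05):
    """One plasticity step in which depression outweighs potentiation by a
    constant bias; on non-negative weights this bias acts like the gradient
    of an L1 penalty, so weak synapses are driven to exactly zero."""
    return np.clip(w - lr * grad - lr * depression_bias, 0.0, None)

rng = np.random.default_rng(1)
targets = rng.uniform(0.0, 1.0, 1000)        # hypothetical per-synapse optima
w = rng.uniform(0.0, 1.0, 1000)
for _ in range(500):
    w = imbalanced_update(w, grad=0.1 * (w - targets))
silent_fraction = float(np.mean(w == 0.0))   # roughly half the synapses go silent
```

With `depression_bias=0.0` (balanced plasticity) the same dynamics leave essentially no weight pinned at zero, which is the contrast the abstract draws.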
Collapse
|
21
|
Lennon W, Hecht-Nielsen R, Yamazaki T. A spiking network model of cerebellar Purkinje cells and molecular layer interneurons exhibiting irregular firing. Front Comput Neurosci 2014; 8:157. [PMID: 25520646 PMCID: PMC4249458 DOI: 10.3389/fncom.2014.00157] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2014] [Accepted: 11/14/2014] [Indexed: 11/24/2022] Open
Abstract
While the anatomy of the cerebellar microcircuit is well-studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs)—the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function.
Collapse
Affiliation(s)
- William Lennon
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
| | - Robert Hecht-Nielsen
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
| | - Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications Chofu, Japan
| |
Collapse
|
22
|
Abstract
To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. To address the efficiency and fidelity with which neurons can perform such computations, we developed a theory to characterize the capacity of feedforward networks to generate desired spike sequences. We find the maximum number of desired output spikes a neuron can implement to be 0.1-0.3 per synapse. We further present a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences. We apply this framework to reconstruct synaptic weights from spiking activity and study the precision with which the temporal structure of ongoing behavior can be inferred from the spiking of premotor neurons. This work provides a powerful approach for characterizing the computational and learning capacities of single neurons and neuronal circuits.
Collapse
|
23
|
Optimal properties of analog perceptrons with excitatory weights. PLoS Comput Biol 2013; 9:e1002919. [PMID: 23436991 PMCID: PMC3578758 DOI: 10.1371/journal.pcbi.1002919] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2012] [Accepted: 12/27/2012] [Indexed: 11/19/2022] Open
Abstract
The cerebellum is a brain structure which has been traditionally devoted to supervised learning. According to this theory, plasticity at the Parallel Fiber (PF) to Purkinje Cell (PC) synapses is guided by the Climbing fibers (CF), which encode an 'error signal'. Purkinje cells have thus been modeled as perceptrons, learning input/output binary associations. At maximal capacity, a perceptron with excitatory weights expresses a large fraction of zero-weight synapses, in agreement with experimental findings. However, numerous experiments indicate that the firing rate of Purkinje cells varies in an analog, not binary, manner. In this paper, we study the perceptron with analog inputs and outputs. We show that the optimal input has a sparse binary distribution, in good agreement with the burst firing of the Granule cells. In addition, we show that the weight distribution consists of a large fraction of silent synapses, as in previously studied binary perceptron models, and as seen experimentally.
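A hypothetical sketch of the sign-constrained perceptron idea discussed above: the standard perceptron rule, with weights clipped at zero, leaves some synapses exactly silent. For simplicity this sketch uses binary outputs rather than the analog input/output setting studied in the paper, and all parameters are illustrative.

```python
import numpy as np

def train_excitatory_perceptron(patterns, labels, theta=1.0, lr=0.05, epochs=300):
    """Perceptron learning with a non-negativity (excitatory) constraint:
    updates that would drive a weight negative pin it at exactly zero,
    which is where the 'silent synapse' fraction comes from."""
    w = np.full(patterns.shape[1], theta / patterns.shape[1])
    for _ in range(epochs):
        for x, y in zip(patterns, labels):
            out = 1.0 if w @ x > theta else 0.0
            w = np.clip(w + lr * (y - out) * x, 0.0, None)
    return w

rng = np.random.default_rng(2)
patterns = (rng.random((40, 100)) < 0.2).astype(float)   # sparse binary inputs
labels = (rng.random(40) < 0.5).astype(float)
w = train_excitatory_perceptron(patterns, labels)
silent_fraction = float(np.mean(w == 0.0))
```

The sparse binary input patterns mirror the burst firing of granule cells that the abstract identifies as the optimal input distribution.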
Collapse
|