1
Fernández Santoro EM, Karim A, Warnaar P, De Zeeuw CI, Badura A, Negrello M. Purkinje cell models: past, present and future. Front Comput Neurosci 2024; 18:1426653. PMID: 39049990. PMCID: PMC11266113. DOI: 10.3389/fncom.2024.1426653.
Abstract
The investigation of the dynamics of Purkinje cell (PC) activity is crucial to unraveling the role of the cerebellum in motor control, learning, and cognitive processes. Within the cerebellar cortex (CC), these neurons receive all incoming sensory and motor information, transform it, and generate the entire cerebellar output. The relatively homogeneous and repetitive structure of the CC, common to all vertebrate species, suggests a single computational mechanism shared across all PCs. Although PC models have been developed since the 1970s, a comprehensive review of contemporary models is currently lacking. Here, we provide an overview of PC models, ranging from those focused on single-cell intracellular PC dynamics to complex models that include synaptic and extrasynaptic inputs. We review how PC models reproduce the physiological activity of the neuron, including firing patterns, current and multistable dynamics, plateau potentials, calcium signaling, intrinsic and synaptic plasticity, and input/output computations. We consider models focusing on both somatic and dendritic computations. Our review provides a critical performance analysis of PC models with respect to known physiological data. We expect our synthesis to be useful in guiding the future development of computational models that capture real-life PC dynamics in the context of cerebellar computations.
Affiliation(s)
- Arun Karim
  - Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
- Pascal Warnaar
  - Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
  - Netherlands Institute for Neuroscience, Royal Academy of Arts and Sciences, Amsterdam, Netherlands
- Chris I. De Zeeuw
  - Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
  - Netherlands Institute for Neuroscience, Royal Academy of Arts and Sciences, Amsterdam, Netherlands
- Mario Negrello
  - Department of Neuroscience, Erasmus MC, Rotterdam, Netherlands
2
Xie M, Muscinelli SP, Decker Harris K, Litwin-Kumar A. Task-dependent optimal representations for cerebellar learning. eLife 2023; 12:e82914. PMID: 37671785. PMCID: PMC10541175. DOI: 10.7554/elife.82914.
Abstract
The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
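The Marr–Albus recoding that this abstract revisits, a large granule cell population sparsifying mossy-fiber input, can be sketched as a random expansion followed by a threshold. This is a generic illustration, not the authors' model; the expansion ratio, the 10% coding level, and the correlated Gaussian inputs are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

n_mf, n_gc = 200, 1000  # mossy fibers -> many more granule cells
J = rng.normal(0, 1 / np.sqrt(n_mf), size=(n_gc, n_mf))  # random MF->GC weights

def granule_code(x, coding_level=0.1):
    """Expand input x and keep only the top fraction of granule cells active."""
    h = J @ x
    theta = np.quantile(h, 1 - coding_level)  # threshold sets the coding level
    return (h > theta).astype(float)

def overlap(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# two strongly correlated mossy-fiber input patterns
x1 = rng.normal(size=n_mf)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n_mf)
g1, g2 = granule_code(x1), granule_code(x2)

# the sparse expansion reduces pattern overlap, easing downstream discrimination
print("input overlap  :", round(overlap(x1, x2), 3))
print("granule overlap:", round(overlap(g1, g2), 3))
```

Raising `coding_level` toward the dense regimes discussed in the paper trades this pattern separation for the smoother representations that suit continuous input-output tasks.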
Affiliation(s)
- Marjorie Xie
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Samuel P Muscinelli
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Kameron Decker Harris
  - Department of Computer Science, Western Washington University, Bellingham, United States
- Ashok Litwin-Kumar
  - Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
3
Vijayan A, Diwakar S. A cerebellum inspired spiking neural network as a multi-model for pattern classification and robotic trajectory prediction. Front Neurosci 2022; 16:909146. DOI: 10.3389/fnins.2022.909146.
Abstract
Spiking neural networks were introduced to understand spatiotemporal information processing in neurons and have found applications in pattern encoding, data discrimination, and classification. Bioinspired network architectures are considered for event-driven tasks, and different theories based on their architecture and function have been examined. Motor tasks, for example, have been modeled with networks inspired by cerebellar architecture, in which the granular layer recodes sparse representations of mossy fiber (MF) inputs and plays a central role in motor learning. Using abstractions of cerebellar connectivity and the learning rules of deep learning networks (DLNs), patterns were discriminated within datasets, and the same algorithm was used for trajectory optimization. In the current work, a cerebellum-inspired spiking neural network was implemented, with the dynamics of cerebellar neurons and the learning mechanisms attributed to the granular layer, the Purkinje cell (PC) layer, and the cerebellar nuclei, interconnected by excitatory and inhibitory synapses. The model's pattern-discrimination capability was tested on two tasks with standard machine learning (ML) datasets and on following the trajectory of a low-cost, sensor-free robotic articulator. Tuned for supervised learning, the cerebellum-inspired network algorithm produced more generalized models than data-specific precision models on smaller training datasets. The model showed an accuracy of 72%, comparable to standard ML algorithms such as MLP (78%), Dl4jMlpClassifier (64%), RBFNetwork (71.4%), and libSVM-linear (85.7%). The cerebellar model increased the network's capability and decreased storage, enabling faster computations. Additionally, the network model could implicitly reconstruct the trajectory of a 6-degree-of-freedom (DOF) robotic arm with a low error rate by reconstructing the kinematic parameters. The variability between the actual and predicted trajectory points was ±3 cm (while moving to a position in a cuboid space of 25 × 30 × 40 cm). Although only a few learning rules were implemented among the known types of cerebellar plasticity, the network model showed a generalized processing capability for a range of signals, modulating the data through the interconnected neural populations. In addition to potential use in sensor-free or feedforward controllers for robotic arms and as a generalized pattern-classification algorithm, this model has implications for motor learning theory.
4
Schönsberg F, Roudi Y, Treves A. Efficiency of Local Learning Rules in Threshold-Linear Associative Networks. Phys Rev Lett 2021; 126:018301. PMID: 33480759. DOI: 10.1103/physrevlett.126.018301.
Abstract
We derive the Gardner storage capacity for associative networks of threshold-linear units and show that, with Hebbian learning, they can operate closer to the Gardner bound than binary networks, and even surpass it. This is largely achieved through a sparsification of the retrieved patterns, which we analyze for theoretical and empirical distributions of activity. As reaching the optimal capacity via nonlocal learning rules such as backpropagation requires slow and neurally implausible training procedures, our results indicate that one-shot self-organized Hebbian learning can be just as efficient.
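The one-shot Hebbian storage the abstract refers to can be sketched for threshold-linear units with a covariance rule. Network size, coding sparseness, and the retrieval threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

N, P, f = 1000, 10, 0.1  # units, stored patterns, coding sparseness
xi = (rng.random((P, N)) < f).astype(float)  # binary 0/1 memory patterns

# one-shot Hebbian covariance rule; no iterative training
W = (xi - f).T @ (xi - f) / N
np.fill_diagonal(W, 0.0)

def recall(cue, theta=0.03, steps=5):
    """Iterate threshold-linear (ReLU) dynamics from a cue."""
    r = cue.copy()
    for _ in range(steps):
        r = np.maximum(W @ r - theta, 0.0)
        r /= r.max() + 1e-12  # normalize the peak rate for comparison
    return r

# cue with a degraded version of pattern 0 (about 30% of active units deleted)
cue = xi[0] * (rng.random(N) > 0.3)
r = recall(cue)
retrieved = (r > 0.5).astype(float)
accuracy = (retrieved == xi[0]).mean()
print("retrieval accuracy:", accuracy)
```

At this low load (10 patterns in 1,000 units) the degraded cue converges back onto the stored pattern within a few iterations; the paper's question is how far such local rules can be pushed toward the Gardner bound.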
Affiliation(s)
- Yasser Roudi
  - Kavli Institute for Systems Neuroscience & Centre for Neural Computation, NTNU, Trondheim, Norway
- Alessandro Treves
  - SISSA, Scuola Internazionale Superiore di Studi Avanzati, Trieste, Italy
  - Kavli Institute for Systems Neuroscience & Centre for Neural Computation, NTNU, Trondheim, Norway
5
Ingrosso A. Optimal learning with excitatory and inhibitory synapses. PLoS Comput Biol 2020; 16:e1008536. PMID: 33370266. PMCID: PMC7793294. DOI: 10.1371/journal.pcbi.1008536.
Abstract
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
Affiliation(s)
- Alessandro Ingrosso
  - Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
6
Lynn MB, Lee KFH, Soares C, Naud R, Béïque JC. A Synthetic Likelihood Solution to the Silent Synapse Estimation Problem. Cell Rep 2020; 32:107916. PMID: 32697998. DOI: 10.1016/j.celrep.2020.107916.
Abstract
Functional features of synaptic populations are typically inferred from random electrophysiological sampling of small subsets of synapses. Are these samples unbiased? Here, we develop a biophysically constrained statistical framework to address this question and apply it to assess the performance of a widely used method based on a failure-rate analysis to quantify the occurrence of silent (AMPAR-lacking) synapses. We simulate this method in silico and find that it is characterized by strong and systematic biases, poor reliability, and weak statistical power. Key conclusions are validated by whole-cell recordings from hippocampal neurons. To address these shortcomings, we develop a simulator of the experimental protocol and use it to compute a synthetic likelihood. By maximizing the likelihood, we infer silent synapse fraction with no bias, low variance, and superior statistical power over alternatives. Together, this generalizable approach highlights how a simulator of experimental methodologies can substantially improve the estimation of physiological properties.
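The synthetic-likelihood idea, simulate the experimental protocol under candidate parameter values, fit a Gaussian to the simulated summary statistics, and maximize the resulting likelihood, can be sketched on a toy version of the failure-rate protocol. The simulator below (synapse counts, release probability, trial and cell numbers) is a deliberately crude stand-in for the paper's biophysically constrained one:

```python
import numpy as np

rng = np.random.default_rng(3)

def experiment(silent_frac, n_cells=20, n_syn=8, p_rel=0.3, n_trials=200):
    """Toy protocol: mean failure rates at -70 mV (silent synapses invisible)
    and at +40 mV (all synapses visible), averaged across recorded cells."""
    n_silent = rng.binomial(n_syn, silent_frac, size=n_cells)
    n_active = n_syn - n_silent
    f_hyp = rng.binomial(n_trials, (1 - p_rel) ** n_active) / n_trials
    f_dep = rng.binomial(n_trials, (1 - p_rel) ** n_syn, size=n_cells) / n_trials
    return np.array([f_hyp.mean(), f_dep.mean()])

observed = experiment(silent_frac=0.5)  # "data" with a known ground truth

def synthetic_loglik(theta, n_sims=200):
    """Gaussian synthetic likelihood of the observed summary statistics."""
    stats = np.array([experiment(theta) for _ in range(n_sims)])
    mu, var = stats.mean(axis=0), stats.var(axis=0) + 1e-9
    return -0.5 * np.sum((observed - mu) ** 2 / var + np.log(var))

grid = np.arange(0.0, 1.0, 0.1)
estimate = grid[np.argmax([synthetic_loglik(t) for t in grid])]
print("estimated silent fraction:", estimate)
```

The appeal, as in the paper, is that the estimate comes from comparing whole simulated distributions of the protocol's outcome rather than from inverting a closed-form failure-rate formula.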
Affiliation(s)
- Michael B Lynn
  - Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- Kevin F H Lee
  - Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- Cary Soares
  - Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- Richard Naud
  - Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
  - Centre for Neural Dynamics, University of Ottawa, Ottawa, ON K1H 8M5, Canada
  - University of Ottawa's Brain and Mind Research Institute, Ottawa, ON K1H 8M5, Canada
  - Department of Physics, STEM Complex, Room 336, 150 Louis Pasteur Private, University of Ottawa, Ottawa, ON K1N 6N5, Canada
- Jean-Claude Béïque
  - Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
  - Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON K1H 8M5, Canada
  - Centre for Neural Dynamics, University of Ottawa, Ottawa, ON K1H 8M5, Canada
  - University of Ottawa's Brain and Mind Research Institute, Ottawa, ON K1H 8M5, Canada
7
Straub I, Witter L, Eshra A, Hoidis M, Byczkowicz N, Maas S, Delvendahl I, Dorgans K, Savier E, Bechmann I, Krueger M, Isope P, Hallermann S. Gradients in the mammalian cerebellar cortex enable Fourier-like transformation and improve storing capacity. eLife 2020; 9:e51771. PMID: 32022688. PMCID: PMC7002074. DOI: 10.7554/elife.51771.
Abstract
Cerebellar granule cells (GCs) make up the majority of all neurons in the vertebrate brain, but heterogeneities among GCs and potential functional consequences are poorly understood. Here, we identified unexpected gradients in the biophysical properties of GCs in mice. GCs closer to the white matter (inner-zone GCs) had higher firing thresholds and could sustain firing with larger current inputs than GCs closer to the Purkinje cell layer (outer-zone GCs). Dynamic Clamp experiments showed that inner- and outer-zone GCs preferentially respond to high- and low-frequency mossy fiber inputs, respectively, enabling dispersion of the mossy fiber input into its frequency components as performed by a Fourier transformation. Furthermore, inner-zone GCs have faster axonal conduction velocity and elicit faster synaptic potentials in Purkinje cells. Neuronal network modeling revealed that these gradients improve spike-timing precision of Purkinje cells and decrease the number of GCs required to learn spike-sequences. Thus, our study uncovers biophysical gradients in the cerebellar cortex enabling a Fourier-like transformation of mossy fiber inputs.
Affiliation(s)
- Isabelle Straub
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Laurens Witter
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
  - Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), VU University, Amsterdam, Netherlands
- Abdelmoneim Eshra
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Miriam Hoidis
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Niklas Byczkowicz
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Sebastian Maas
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Igor Delvendahl
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
- Kevin Dorgans
  - Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Elise Savier
  - Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Ingo Bechmann
  - Institute of Anatomy, Medical Faculty, Leipzig University, Leipzig, Germany
- Martin Krueger
  - Institute of Anatomy, Medical Faculty, Leipzig University, Leipzig, Germany
- Philippe Isope
  - Institut des Neurosciences Cellulaires et Intégratives, CNRS, Université de Strasbourg, Strasbourg, France
- Stefan Hallermann
  - Carl-Ludwig-Institute for Physiology, Medical Faculty, Leipzig University, Leipzig, Germany
8
Bouvier G, Aljadeff J, Clopath C, Bimbard C, Ranft J, Blot A, Nadal JP, Brunel N, Hakim V, Barbour B. Cerebellar learning using perturbations. eLife 2018; 7:e31599. PMID: 30418871. PMCID: PMC6231762. DOI: 10.7554/elife.31599.
Abstract
The cerebellum aids the learning of fast, coordinated movements. According to current consensus, erroneously active parallel fibre synapses are depressed by complex spikes signalling movement errors. However, this theory cannot solve the credit assignment problem of processing a global movement evaluation into multiple cell-specific error signals. We identify a possible implementation of an algorithm solving this problem, whereby spontaneous complex spikes perturb ongoing movements, create eligibility traces and signal error changes guiding plasticity. Error changes are extracted by adaptively cancelling the average error. This framework, stochastic gradient descent with estimated global errors (SGDEGE), predicts synaptic plasticity rules that apparently contradict the current consensus but were supported by plasticity experiments in slices from mice under conditions designed to be physiological, highlighting the sensitivity of plasticity studies to experimental conditions. We analyse the algorithm's convergence and capacity. Finally, we suggest SGDEGE may also operate in the basal ganglia.
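The core loop of the SGDEGE scheme described above, perturb the output, compare the resulting error with an adaptively cancelled average, and update along the perturbation, can be caricatured by node perturbation on a linear readout. This is a minimal sketch under illustrative assumptions (scalar output, fixed perturbation amplitude, a simple running-average error estimate), not the authors' cerebellar implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

dim, lr, amp = 10, 0.01, 0.5
w_true = rng.normal(size=dim)  # target input-output mapping
w = np.zeros(dim)              # learned readout weights
e_bar = 0.0                    # adaptive estimate of the average error

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X = rng.normal(size=(500, dim))
y = X @ w_true
err0 = mse(w, X, y)

for step in range(4000):
    x = X[step % len(X)]
    delta = amp * rng.choice([-1.0, 1.0])  # spontaneous output perturbation
    e = (x @ w + delta - x @ w_true) ** 2  # error of the perturbed "movement"
    dE = e - e_bar                         # error change vs. cancelled average
    w -= lr * dE * delta * x               # update along perturbation x input
    e_bar += 0.1 * (e - e_bar)             # slow adaptive error cancellation

print("error before/after:", round(err0, 3), round(mse(w, X, y), 3))
```

In expectation the update `dE * delta * x` is proportional to the true error gradient, so the scalar (global) error signal suffices to assign credit to each weight, which is the credit-assignment point the abstract makes.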
Affiliation(s)
- Guy Bouvier
  - Institut de biologie de l’École normale supérieure (IBENS), École normale supérieure, CNRS, INSERM, PSL University, Paris, France
- Johnatan Aljadeff
  - Departments of Statistics and Neurobiology, University of Chicago, Chicago, United States
- Claudia Clopath
  - Department of Bioengineering, Imperial College London, London, United Kingdom
- Célian Bimbard
  - Institut de biologie de l’École normale supérieure (IBENS), École normale supérieure, CNRS, INSERM, PSL University, Paris, France
- Jonas Ranft
  - Institut de biologie de l’École normale supérieure (IBENS), École normale supérieure, CNRS, INSERM, PSL University, Paris, France
- Antonin Blot
  - Institut de biologie de l’École normale supérieure (IBENS), École normale supérieure, CNRS, INSERM, PSL University, Paris, France
- Jean-Pierre Nadal
  - Laboratoire de Physique Statistique, École normale supérieure, CNRS, PSL University, Sorbonne Université, Paris, France
  - Centre d’Analyse et de Mathématique Sociales, EHESS, CNRS, PSL University, Paris, France
- Nicolas Brunel
  - Departments of Statistics and Neurobiology, University of Chicago, Chicago, United States
- Vincent Hakim
  - Laboratoire de Physique Statistique, École normale supérieure, CNRS, PSL University, Sorbonne Université, Paris, France
- Boris Barbour
  - Institut de biologie de l’École normale supérieure (IBENS), École normale supérieure, CNRS, INSERM, PSL University, Paris, France
9
Abstract
Neurons integrate information from many neighbors when they process information. Inputs to a given neuron are thus indistinguishable from one another. Under the assumption that neurons maximize their information storage, indistinguishability is shown to place a strong constraint on the distribution of strengths between neurons. The distribution of individual synapse strengths is found to follow a modified Boltzmann distribution with strength proportional to [Formula: see text]. The model is shown to be consistent with experimental data from Caenorhabditis elegans connectivity and in vivo synaptic strength measurements. The [Formula: see text] dependence helps account for the observation of many zero or weak connections between neurons or sparsity of the neural network.
Affiliation(s)
- Joseph Snider
  - Institute for Neural Computation, University of California, San Diego, La Jolla, CA 90039, U.S.A.
10
Aguilar C, Chossat P, Krupa M, Lavigne F. Latching dynamics in neural networks with synaptic depression. PLoS One 2017; 12:e0183710. PMID: 28846727. PMCID: PMC5573234. DOI: 10.1371/journal.pone.0183710.
Abstract
Prediction is the ability of the brain to quickly activate a target concept in response to a related stimulus (prime). Experiments point to the existence of an overlap between the populations of neurons coding for different stimuli, and other experiments show that prime-target relations arise in the process of long-term memory formation. The classical modeling paradigm is that long-term memories correspond to stable steady states of a Hopfield network with Hebbian connectivity. Experiments show that short-term synaptic depression plays an important role in the processing of memories. This leads naturally to a computational model of priming, called latching dynamics: a stable state (prime) can become unstable, and the system may converge to another transiently stable steady state (target). Hopfield-network models of latching dynamics have been studied by means of numerical simulation; however, the conditions for the existence of these dynamics have not been elucidated. In this work, we use a combination of analytic and numerical approaches to confirm that latching dynamics can exist in the context of a symmetric Hebbian learning rule, but that they lack robustness and impose a number of biologically unrealistic restrictions on the model. In particular, our work shows that the symmetry of the Hebbian rule is not an obstruction to the existence of latching dynamics, but that fine-tuning of the model's parameters is needed.
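The first ingredient of latching, short-term synaptic depression destabilizing an otherwise stable Hebbian attractor, can be sketched in a rate model. The parameters below (depression strength `U`, recovery time, gain, threshold) are illustrative and deliberately simplified relative to the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(5)

N, f = 400, 0.2
xi1 = (rng.random(N) < f).astype(float)  # stored pattern (the "prime")

# Hebbian covariance weights for a single stored pattern
W = np.outer(xi1 - f, xi1 - f) / (N * f * (1 - f))
np.fill_diagonal(W, 0.0)

def run(U, T=300, dt=0.1, tau_rec=30.0, theta=0.3):
    """Rate dynamics with depressing synapses; U sets depression strength."""
    r, s = xi1.copy(), np.ones(N)  # firing rates and synaptic resources
    m = []
    for _ in range(T):
        h = W @ (s * r)                                 # depressed recurrent input
        r = 1.0 / (1.0 + np.exp(-8.0 * (h - theta)))    # sigmoidal rate function
        s += dt * ((1.0 - s) / tau_rec - U * s * r)     # resource depletion/recovery
        m.append(float(r @ xi1) / xi1.sum())            # overlap with stored pattern
    return np.array(m)

m_fixed = run(U=0.0)  # no depression: the pattern is a stable attractor
m_latch = run(U=0.5)  # with depression: the attractor destabilizes
print("final overlap without/with depression:",
      round(m_fixed[-1], 2), round(m_latch[-1], 2))
```

With `U = 0` the stored pattern is a fixed point; with depression the overlap collapses, which in multi-pattern versions of the model frees the network to latch onto a correlated successor state (the target).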
Affiliation(s)
- Carlos Aguilar
  - Bases, Corpus, Langage, UMR 7320 CNRS, Université de Nice - Sophia Antipolis, 06357 Nice, France
- Pascal Chossat
  - Laboratoire J.A. Dieudonné, UMR CNRS-UNS 7351, Université de Nice - Sophia Antipolis, 06108 Nice, France
  - MathNeuro team, Inria Sophia Antipolis, 06902 Valbonne-Sophia Antipolis, France
- Martin Krupa
  - Laboratoire J.A. Dieudonné, UMR CNRS-UNS 7351, Université de Nice - Sophia Antipolis, 06108 Nice, France
  - MathNeuro team, Inria Sophia Antipolis, 06902 Valbonne-Sophia Antipolis, France
  - Department of Applied Mathematics, University College Cork, Cork, Ireland
- Frédéric Lavigne
  - Bases, Corpus, Langage, UMR 7320 CNRS, Université de Nice - Sophia Antipolis, 06357 Nice, France
11
Titley HK, Brunel N, Hansel C. Toward a Neurocentric View of Learning. Neuron 2017; 95:19-32. PMID: 28683265. PMCID: PMC5519140. DOI: 10.1016/j.neuron.2017.05.021.
Abstract
Synaptic plasticity (e.g., long-term potentiation [LTP]) is considered the cellular correlate of learning. Recent optogenetic studies on memory engram formation assign a critical role in learning to suprathreshold activation of neurons and their integration into active engrams ("engram cells"). Here we review evidence that ensemble integration may result from LTP but also from cell-autonomous changes in membrane excitability. We propose that synaptic plasticity determines synaptic connectivity maps, whereas intrinsic plasticity, possibly separated in time, amplifies neuronal responsiveness and acutely drives engram integration. Our proposal marks a move away from an exclusively synaptocentric toward a non-exclusive, neurocentric view of learning.
Affiliation(s)
- Heather K Titley
  - Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Nicolas Brunel
  - Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
  - Department of Statistics, University of Chicago, Chicago, IL 60637, USA
- Christian Hansel
  - Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA
12
Synaptic plasticity in dendrites: complications and coping strategies. Curr Opin Neurobiol 2017; 43:177-186. PMID: 28453975. DOI: 10.1016/j.conb.2017.03.012.
Abstract
The elaborate morphology, nonlinear membrane mechanisms and spatiotemporally varying synaptic activation patterns of dendrites complicate the expression, compartmentalization and modulation of synaptic plasticity. To grapple with this complexity, we start with the observation that neurons in different brain areas face markedly different learning problems, and dendrites of different neuron types contribute to the cell's input-output function in markedly different ways. By committing to specific assumptions regarding a neuron's learning problem and its input-output function, specific inferences can be drawn regarding the synaptic plasticity mechanisms and outcomes that we 'ought' to expect for that neuron. Exploiting this assumption-driven approach can help both in interpreting existing experimental data and designing future experiments aimed at understanding the brain's myriad learning processes.
13
Yang Z, Santamaria F. Purkinje cell intrinsic excitability increases after synaptic long term depression. J Neurophysiol 2016; 116:1208-17. PMID: 27306677. DOI: 10.1152/jn.00369.2016.
Abstract
Coding in cerebellar Purkinje cells not only depends on synaptic plasticity but also on their intrinsic membrane excitability. We performed whole cell patch-clamp recordings of Purkinje cells in sagittal cerebellar slices in mice. We found that inducing long-term depression (LTD) in the parallel fiber to Purkinje cell synapses results in an increase in the gain of the firing rate response. This increase in excitability is accompanied by an increase in the input resistance and a decrease in the amplitude of the hyperpolarization-activated cyclic nucleotide-gated (HCN) channel-mediated voltage sag. Application of a HCN channel blocker prevents the increase in input resistance and excitability without blocking the expression of synaptic LTD. We conclude that the induction of parallel fiber-Purkinje cell LTD is accompanied by an increase in excitability of Purkinje cells through downregulation of the HCN-mediated h current. We suggest that HCN downregulation is linked to the biochemical pathway that sustains synaptic LTD. Given the diversity of information carried by the parallel fiber system, we suggest that changes in intrinsic excitability enhance the coding capacity of the Purkinje cell to specific input sources.
Affiliation(s)
- Zhen Yang
  - UTSA Neurosciences Institute and Department of Biology, University of Texas at San Antonio, San Antonio, Texas
- Fidel Santamaria
  - UTSA Neurosciences Institute and Department of Biology, University of Texas at San Antonio, San Antonio, Texas
14
Brunel N. Is cortical connectivity optimized for storing information? Nat Neurosci 2016; 19:749-755. PMID: 27065365. DOI: 10.1038/nn.4286.
Abstract
Cortical networks are thought to be shaped by experience-dependent synaptic plasticity. Theoretical studies have shown that synaptic plasticity allows a network to store a memory of patterns of activity such that they become attractors of the network's dynamics. Here we study the properties of the excitatory synaptic connectivity in a network that maximizes the number of stored patterns of activity in a robust fashion. We show that the resulting synaptic connectivity matrix has the following properties: it is sparse, with a large fraction of zero synaptic weights ('potential' synapses); bidirectionally coupled pairs of neurons are over-represented in comparison to a random network; and bidirectionally connected pairs have stronger synapses on average than unidirectionally connected pairs. All these features quantitatively reproduce available data on connectivity in cortex, suggesting that cortical synaptic connectivity is optimized to store a large number of attractor states in a robust fashion.
Affiliation(s)
- Nicolas Brunel
  - Department of Statistics, The University of Chicago, Chicago, Illinois, USA
  - Department of Neurobiology, The University of Chicago, Chicago, Illinois, USA
15
Bouvier G, Higgins D, Spolidoro M, Carrel D, Mathieu B, Léna C, Dieudonné S, Barbour B, Brunel N, Casado M. Burst-Dependent Bidirectional Plasticity in the Cerebellum Is Driven by Presynaptic NMDA Receptors. Cell Rep 2016; 15:104-116. PMID: 27052175. DOI: 10.1016/j.celrep.2016.03.004.
Abstract
Numerous studies have shown that cerebellar function is related to the plasticity at the synapses between parallel fibers and Purkinje cells. How specific input patterns determine plasticity outcomes, as well as the biophysics underlying plasticity of these synapses, remain unclear. Here, we characterize the patterns of activity that lead to postsynaptically expressed LTP using both in vivo and in vitro experiments. Similar to the requirements of LTD, we find that high-frequency bursts are necessary to trigger LTP and that this burst-dependent plasticity depends on presynaptic NMDA receptors and nitric oxide (NO) signaling. We provide direct evidence for calcium entry through presynaptic NMDA receptors in a subpopulation of parallel fiber varicosities. Finally, we develop and experimentally verify a mechanistic plasticity model based on NO and calcium signaling. The model reproduces plasticity outcomes from data and predicts the effect of arbitrary patterns of synaptic inputs on Purkinje cells, thereby providing a unified description of plasticity.
Affiliation(s)
- Guy Bouvier
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- David Higgins
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
  - Departments of Statistics and Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Maria Spolidoro
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Damien Carrel
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Benjamin Mathieu
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Clément Léna
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Stéphane Dieudonné
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Boris Barbour
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
- Nicolas Brunel
  - Departments of Statistics and Neurobiology, University of Chicago, Chicago, IL 60637, USA
- Mariano Casado
  - Ecole Normale Supérieure, Institut de Biologie de l'ENS (IBENS), Inserm U1024, CNRS UMR 8197, Paris 75005, France
16
Alemi A, Baldassi C, Brunel N, Zecchina R. A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks. PLoS Comput Biol 2015; 11:e1004439. [PMID: 26291608 PMCID: PMC4546407 DOI: 10.1371/journal.pcbi.1004439] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Accepted: 06/19/2015] [Indexed: 11/30/2022] Open
Abstract
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of its synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be stored are presented online as strong afferent currents, producing a bimodal distribution of the neurons' synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local field with respect to three thresholds. Above the highest threshold and below the lowest threshold, no plasticity occurs. Between these two thresholds, potentiation or depression occurs when the local field is above or below an intermediate threshold, respectively. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.

Recurrent neural networks can store memory patterns as fixed-point attractors of the network dynamics. The prototypical learning rule for storing memories in attractor neural networks is Hebbian learning, which can store up to 0.138N uncorrelated patterns in a recurrent network of N neurons. This is very far from the maximal capacity of 2N, which can be achieved by supervised rules, e.g., the perceptron learning rule. However, such rules are problematic for neurons in the neocortex or the hippocampus, since they rely on the computation of a supervisory error signal for each neuron of the network. We show here that the total synaptic input received by a neuron during the presentation of a sufficiently strong stimulus contains implicit information about the error, which can be extracted by setting three thresholds on the total input, defining depression and potentiation regions. The resulting learning rule respects basic biological constraints, and our simulations show that a network implementing it gets very close to the maximal capacity, in both the dense and sparse regimes, across all values of storage robustness. The rule predicts that when the total synaptic input goes beyond a threshold, no potentiation should occur.
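The three-threshold rule described above lends itself to a compact sketch. Parameter names, the learning rate, and the weight bounds below are illustrative assumptions; the published rule operates on binary neurons during pattern presentation:

```python
import numpy as np

def three_threshold_update(w, x, h, theta_low, theta_mid, theta_high,
                           lr=0.01, w_max=1.0):
    """One plasticity step for a neuron with local field h during a stimulus.

    Only synapses with active inputs (x > 0) are eligible. Outside the
    window (theta_low, theta_high) nothing changes; inside it, the
    intermediate threshold selects potentiation (h above) or depression
    (h below). Parameter names and bounds are illustrative assumptions.
    """
    w = w.copy()
    active = x > 0
    if theta_low < h < theta_high:          # plastic zone
        if h > theta_mid:                   # potentiate active synapses
            w[active] = np.minimum(w[active] + lr, w_max)
        else:                               # depress active synapses
            w[active] = np.maximum(w[active] - lr, 0.0)
    return w                                # outside the zone: no change
```

Note how a field above `theta_high` leaves the weights untouched, matching the abstract's prediction that sufficiently strong total input blocks further potentiation.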
Affiliation(s)
- Alireza Alemi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
- Carlo Baldassi
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
- Nicolas Brunel
- Departments of Statistics and Neurobiology, University of Chicago, Chicago, Illinois, United States of America
- Riccardo Zecchina
- Human Genetics Foundation (HuGeF), Turin, Italy
- DISAT, Politecnico di Torino, Turin, Italy
17
Spanne A, Jörntell H. Questioning the role of sparse coding in the brain. Trends Neurosci 2015; 38:417-27. [PMID: 26093844 DOI: 10.1016/j.tins.2015.05.005] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2015] [Revised: 05/20/2015] [Accepted: 05/24/2015] [Indexed: 01/27/2023]
Abstract
Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, and a near-consensus has developed that it has only beneficial properties, some of which are unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and that both the value of sparseness as a measure and its experimental support are questionable. Furthermore, silent synapses and inhibitory interneurons can permit the learning speed and memory capacity previously ascribed to sparse coding alone. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates the learning of models of a complex and high-dimensional reality.
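To make "sparseness as a measure" concrete, the widely used Treves-Rolls index is a typical example. This is a standard measure from the sparse-coding literature, included here only for illustration, not something introduced by this paper:

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls sparseness of a firing-rate vector.

    Computes (mean rate)^2 / (mean of squared rates). The value lies in
    (0, 1]: it equals 1 for a uniform (dense) code and approaches 1/n
    when a single unit out of n carries all the activity.
    """
    r = np.asarray(rates, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)
```

Critiques like the one above turn partly on how sensitive such indices are to the recorded sample of neurons and stimuli, which is one reason the measure's experimental support is contested.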
Affiliation(s)
- Anton Spanne
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Biomedical Center F10, Tornavägen 10, 221 84 Lund, Sweden
- Henrik Jörntell
- Neural Basis of Sensorimotor Control, Department of Experimental Medical Science, Biomedical Center F10, Tornavägen 10, 221 84 Lund, Sweden
18
Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules. PLoS Comput Biol 2015; 11:e1004265. [PMID: 26046817 PMCID: PMC4457870 DOI: 10.1371/journal.pcbi.1004265] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2014] [Accepted: 04/05/2015] [Indexed: 11/22/2022] Open
Abstract
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule for excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level, and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.

Recent estimates indicate that a large part of the energetic budget of the mammalian cortex is spent transmitting signals between neurons across synapses. Despite this, studies of learning and memory do not usually take energy efficiency into account. In this work we address the canonical computational problem of storing memories with synaptic plasticity. However, instead of optimising solely for information capacity, we search for energy-efficient solutions, which implies that the number of functional synapses must be small (sparse connectivity) while information remains high. We suggest imbalanced plasticity, a learning regime where net depression is stronger than potentiation, as a simple and plausible means to learn more efficient neural circuits. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum.
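A minimal sketch of the imbalanced-plasticity idea, with hypothetical parameter names of our own: depression, scaled by an imbalance factor greater than one, outweighs potentiation, and clipping weights at zero silences synapses, mimicking an L1 penalty:

```python
import numpy as np

def imbalanced_update(w, x, post, lr=0.05, imbalance=1.5):
    """One Hebbian-style step where net depression outweighs potentiation.

    Active presynaptic inputs (x == 1) are potentiated when the
    postsynaptic unit is active (post == 1) and depressed, scaled by
    `imbalance` > 1, when it is silent. Clipping at zero drives weak
    weights to silence. A toy sketch of the idea, not the paper's rule.
    """
    dw = lr * x * (post - imbalance * (1 - post))
    return np.maximum(w + dw, 0.0)
```

With `imbalance` at 1 the rule is balanced; raising it above 1 biases updates toward depression, so repeated application pushes a growing fraction of weights to exactly zero, which is the sparse-connectivity regime the abstract describes.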