1
Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024; 20:e1012220. PMID: 38950068. DOI: 10.1371/journal.pcbi.1012220.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse only has access to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule that co-exists with ongoing neural dynamics.
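A minimal rate-based sketch of the two ingredients this abstract combines (a fully local Hebbian update plus a local stabilizing term that keeps weights bounded) is given below. It illustrates locality only, not the authors' spiking model; the network size, rate dynamics, and the Oja-style normalization standing in for their self-tuning mechanism are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
W = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))   # recurrent weights
np.fill_diagonal(W, 0.0)
r = rng.random(N)                                # firing rates
eta, dt = 1e-3, 0.1

for t in range(5000):
    r = r + dt * (-r + np.tanh(W @ r))           # leaky rate dynamics
    # Fully local update: synapse W[i, j] uses only r[i] (post), r[j] (pre)
    # and its own current value (the Oja-style decay keeps weights bounded,
    # a stand-in for the paper's self-tuning toward the instability line).
    W += eta * (np.outer(r, r) - (r ** 2)[:, None] * W)
    np.fill_diagonal(W, 0.0)
```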
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
- Giancarlo La Camera
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
2
Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. bioRxiv [Preprint] 2024:2023.12.07.570692. PMID: 38106233. PMCID: PMC10723399. DOI: 10.1101/2023.12.07.570692.
Abstract
Preprint of the journal article in entry 1; the abstract is identical to the published version above.
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
- Giancarlo La Camera
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
3
Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. PMID: 38509348. PMCID: PMC11089004. DOI: 10.1038/s41593-024-01597-4.
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
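A toy version of the gating idea described here might look as follows: the excitatory Hebbian update is multiplied by a factor that collapses when local inhibitory input is strong. This is a sketch of the gating concept only; the sigmoid gate, its threshold, and all parameter values are illustrative, not the authors' full co-dependent framework.

```python
import numpy as np

def codependent_update(w_exc, pre, post, inh, eta=1e-2, theta=0.5, k=10.0):
    """Hebbian excitatory update gated by local inhibition:
    gate ~ 1 when inhibition is weak, ~ 0 when it is strong."""
    gate = 1.0 / (1.0 + np.exp(k * (inh - theta)))
    return w_exc + eta * gate * pre * post

w = 0.5
w = codependent_update(w, pre=1.0, post=0.8, inh=0.1)  # weak inhibition: LTP proceeds
w = codependent_update(w, pre=1.0, post=0.8, inh=1.0)  # strong inhibition: plasticity gated off
```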
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.
- Biozentrum, University of Basel, Basel, Switzerland.
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria
4
Tomko M, Benuskova L, Jedlicka P. A voltage-based Event-Timing-Dependent Plasticity rule accounts for LTP subthreshold and suprathreshold for dendritic spikes in CA1 pyramidal neurons. J Comput Neurosci 2024; 52:125-131. PMID: 38470534. PMCID: PMC11035391. DOI: 10.1007/s10827-024-00868-0.
Abstract
Long-term potentiation (LTP) is a synaptic mechanism involved in learning and memory. Experiments have shown that dendritic sodium spikes (Na-dSpikes) are required for LTP in the distal apical dendrites of CA1 pyramidal cells. On the other hand, LTP in perisomatic dendrites can be induced by synaptic input patterns that can be both subthreshold and suprathreshold for Na-dSpikes. It is unclear whether these results can be explained by one unifying plasticity mechanism. Here, we show in biophysically and morphologically realistic compartmental models of the CA1 pyramidal cell that these forms of LTP can be fully accounted for by a simple plasticity rule. We call it the voltage-based Event-Timing-Dependent Plasticity (ETDP) rule. The presynaptic event is the presynaptic spike or release of glutamate. The postsynaptic event is the local depolarization that exceeds a certain plasticity threshold. Our model reproduced the experimentally observed LTP in a variety of protocols, including local pharmacological inhibition of dendritic spikes by tetrodotoxin (TTX). In summary, we have provided a validation of the voltage-based ETDP, suggesting that this simple plasticity rule can be used to model even complex spatiotemporal patterns of long-term synaptic plasticity in neuronal dendrites.
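The rule's core, a presynaptic event paired with local depolarization above a plasticity threshold, can be sketched in a few lines. The threshold value, learning rate, and the absence of a depression term below are simplifying assumptions, not the fitted model from the paper.

```python
def etdp_step(w, pre_event, v_local, v_theta=-55.0, eta=5e-3):
    """Voltage-based ETDP sketch: potentiate only when a presynaptic
    event coincides with local voltage (mV) above the plasticity
    threshold; subthreshold pairings leave the weight unchanged."""
    drive = max(v_local - v_theta, 0.0)
    return w + eta * pre_event * drive

w = 1.0
w = etdp_step(w, pre_event=1, v_local=-40.0)  # suprathreshold: LTP
w = etdp_step(w, pre_event=1, v_local=-70.0)  # subthreshold: no change
```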
Affiliation(s)
- Matus Tomko
- Centre of Biosciences, Institute of Molecular Physiology and Genetics, Slovak Academy of Sciences, Dubravska cesta 9, Bratislava, 840 05, Slovakia.
- Faculty of Medicine, Institute of Medical Physics and Biophysics, Comenius University Bratislava, Bratislava, Slovakia.
- Lubica Benuskova
- Faculty of Mathematics, Physics and Informatics, Centre for Cognitive Science, Department of Applied Informatics, Comenius University Bratislava, Bratislava, Slovakia
- Peter Jedlicka
- Faculty of Medicine, ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Justus Liebig University Giessen, Giessen, Germany
- Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, Frankfurt/Main, Germany
5
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang (Camb) 2024; 5:225-247. PMID: 38645618. PMCID: PMC11025648. DOI: 10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
6
Hansel C. Contiguity in perception: origins in cellular associative computations. Trends Neurosci 2024; 47:170-180. PMID: 38310022. PMCID: PMC10939850. DOI: 10.1016/j.tins.2024.01.001.
Abstract
Our brains are good at detecting and learning associative structures; according to some linguistic theories, this capacity even constitutes a prerequisite for the development of syntax and compositionality in language and verbalized thought. I will argue that the search for associative motifs in input patterns is an evolutionarily old brain function that enables contiguity in sensory perception and orientation in time and space. It has its origins in an elementary material property of cells that is particularly evident at chemical synapses: input-assigned calcium influx that activates calcium sensor proteins involved in memory storage. This machinery for the detection and learning of associative motifs generates knowledge about input relationships and integrates this knowledge into existing networks through updates in connectivity patterns.
Affiliation(s)
- Christian Hansel
- Department of Neurobiology, University of Chicago, Chicago, IL 60637, USA.
7
Li G, McLaughlin DW, Peskin CS. A biochemical description of postsynaptic plasticity-with timescales ranging from milliseconds to seconds. Proc Natl Acad Sci U S A 2024; 121:e2311709121. PMID: 38324573. PMCID: PMC10873618. DOI: 10.1073/pnas.2311709121.
Abstract
Synaptic plasticity [long-term potentiation/depression (LTP/D)] is a cellular mechanism underlying learning. Two distinct types of early LTP/D (E-LTP/D), acting on very different time scales, have been observed experimentally: spike-timing-dependent plasticity (STDP), on time scales of tens of ms, and behavioral time scale synaptic plasticity (BTSP), on time scales of seconds. BTSP is a candidate for a mechanism underlying rapid learning of spatial location by place cells. Here, a computational model of the induction of E-LTP/D at a spine head of a synapse of a hippocampal pyramidal neuron is developed. The single-compartment model represents two interacting biochemical pathways for the activation (phosphorylation) of the kinase (CaMKII) with a phosphatase, with ion inflow through channels (NMDAR, CaV1, Na). The biochemical reactions are represented by a deterministic system of differential equations, with a detailed description of the activation of CaMKII that includes the opening of the compact state of CaMKII. This single model captures realistic responses (temporal profiles with the differing timescales) of STDP and BTSP and their asymmetries. The simulations distinguish several mechanisms underlying STDP vs. BTSP, including i) the flow of Ca2+ through NMDAR vs. CaV1 channels, and ii) the origin of several time scales in the activation of CaMKII. The model also realizes a priming mechanism for E-LTP that is induced by Ca2+ flow through CaV1.3 channels. Once in the spine head, this small additional Ca2+ opens the compact state of CaMKII, placing CaMKII ready for subsequent induction of LTP.
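As a caricature of the kinase pathway described here, one can integrate a single activation variable driven by a calcium transient. The Hill-like Ca^4 dependence and the rate constants below are illustrative stand-ins for the paper's detailed CaMKII/phosphatase system, not its actual equations.

```python
import numpy as np

def camkii_activation(ca, dt=1e-3, k_act=50.0, k_deact=1.0):
    """Toy kinase activation: da/dt = k_act * ca^4 * (1 - a) - k_deact * a."""
    a, out = 0.0, np.empty(len(ca))
    for i, c in enumerate(ca):
        a += dt * (k_act * c ** 4 * (1.0 - a) - k_deact * a)
        out[i] = a
    return out

t = np.arange(0.0, 2.0, 1e-3)
ca = 0.05 + 0.5 * np.exp(-((t - 0.5) ** 2) / 0.01)  # brief Ca2+ transient
a = camkii_activation(ca)  # slow activation that outlasts the fast Ca2+ signal
```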
Affiliation(s)
- Guanchun Li
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
- David W. McLaughlin
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
- Center for Neural Science, Department of Neural Science, New York University, New York, NY 10012
- Institute of Mathematical Science, Mathematics Department, New York University-Shanghai, Shanghai 200122, China
- Neuroscience Institute of New York University Langone Health, New York University, New York, NY 10016
- Charles S. Peskin
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
- Center for Neural Science, Department of Neural Science, New York University, New York, NY 10012
8
Kim HR, Long M, Sekerková G, Maes A, Kennedy A, Martina M. Hypernegative GABAA Reversal Potential in Pyramidal Cells Contributes to Medial Prefrontal Cortex Deactivation in a Mouse Model of Neuropathic Pain. J Pain 2024; 25:522-532. PMID: 37793537. PMCID: PMC10841847. DOI: 10.1016/j.jpain.2023.09.021.
Abstract
Deactivation of the medial prefrontal cortex (mPFC) has been broadly reported in both neuropathic pain models and human chronic pain patients. Several cellular mechanisms may contribute to the inhibition of mPFC activity, including enhanced GABAergic inhibition. The functional effect of GABAA (γ-aminobutyric acid type A) receptor activation depends on the concentration of intracellular chloride in the postsynaptic neuron, which is mainly regulated by the activity of Na-K-2Cl cotransporter isoform 1 (NKCC1) and K-Cl cotransporter isoform 2 (KCC2), two cation-chloride cotransporters that import and extrude chloride, respectively. Recent work has shown that the NKCC1-KCC2 ratio is affected in numerous pathological conditions, and we hypothesized that it may contribute to the alteration of mPFC function in neuropathic pain. We used quantitative in situ hybridization to assess the level of expression of NKCC1 and KCC2 in the mPFC of a mouse model of neuropathic pain (spared nerve injury), and we found that the KCC2 transcript is increased in the mPFC of spared nerve injury mice while NKCC1 is not affected. Perforated patch recordings further showed that this results in a hypernegative reversal potential of the GABAA current in pyramidal neurons of the mPFC. Computational simulations suggested that this change in GABAA reversal potential is sufficient to significantly reduce the overall activity of the cortical network. Thus, our results identify a novel pathological modulation of GABAA function and a new mechanism by which mPFC function is inhibited in neuropathic pain. Our data also help explain previous findings showing that activation of mPFC interneurons has a proalgesic effect in neuropathic, but not in control, conditions. PERSPECTIVE: Chronic pain is associated with the presence of depolarizing GABAA current in the spinal cord, suggesting that pharmacological NKCC1 antagonism has analgesic effects. However, our results show that in neuropathic pain, GABAA current is actually hyperinhibitory in the mPFC, where it contributes to the mPFC functional deactivation. This suggests caution in the use of NKCC1 antagonism to treat pain.
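The direction of the reported effect follows from the chloride Nernst potential: stronger KCC2 extrusion lowers intracellular chloride and drives E_GABA more negative, increasing the inhibitory driving force. The concentrations below are illustrative values chosen to show the sign of the effect, not measurements from the study.

```python
import numpy as np

def e_cl(cl_in_mM, cl_out_mM, temp_c=37.0):
    """Nernst potential (mV) for chloride (z = -1)."""
    rt_f = 8.314 * (273.15 + temp_c) / 96485.0 * 1000.0  # RT/F in mV
    return -rt_f * np.log(cl_out_mM / cl_in_mM)

e_control = e_cl(7.0, 130.0)  # ~ -78 mV
e_sni     = e_cl(4.0, 130.0)  # ~ -93 mV: hypernegative after KCC2 upregulation

v = -70.0   # membrane potential, mV
g = 1.0     # GABA-A conductance, arbitrary units
print(g * (v - e_control), g * (v - e_sni))  # larger inhibitory driving force in SNI
```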
Affiliation(s)
- Haram R Kim
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Manzhao Long
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Gabriella Sekerková
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Ann Kennedy
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Marco Martina
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
9
de Brito CSN, Gerstner W. Learning what matters: Synaptic plasticity with invariance to second-order input correlations. PLoS Comput Biol 2024; 20:e1011844. PMID: 38346073. PMCID: PMC10890752. DOI: 10.1371/journal.pcbi.1011844.
Abstract
Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input. Going beyond classical Hebbian learning, our learning objective explains the functional form of observed excitatory plasticity mechanisms, showing how Hebbian long-term depression (LTD) cancels the sensitivity to second-order correlations so that receptive fields become aligned with features hidden in higher-order statistics. Invariance to second-order correlations enhances the versatility of biologically realistic learning models, supporting optimal decoding from noisy inputs and sparse population coding from spatially correlated stimuli. In a spiking model with triplet spike-timing-dependent plasticity (STDP), we show that individual neurons can learn localized oriented receptive fields, circumventing the need for input preprocessing, such as whitening, or population-level lateral inhibition. The theory advances our understanding of local unsupervised learning in cortical circuits, offers new interpretations of the Bienenstock-Cooper-Munro and triplet STDP models, and assigns a specific functional role to synaptic LTD mechanisms in pyramidal neurons.
Affiliation(s)
- Carlos Stein Naves de Brito
- École Polytechnique Fédérale de Lausanne, EPFL, Lausanne, Switzerland
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Wulfram Gerstner
- École Polytechnique Fédérale de Lausanne, EPFL, Lausanne, Switzerland
10
Friedenberger Z, Harkin E, Tóth K, Naud R. Silences, spikes and bursts: Three-part knot of the neural code. J Physiol 2023; 601:5165-5193. PMID: 37889516. DOI: 10.1113/jp281510.
Abstract
When a neuron breaks silence, it can emit action potentials in a number of patterns. Some responses are so sudden and intense that electrophysiologists felt the need to single them out, labelling action potentials emitted at a particularly high frequency with a metonym - bursts. Is there more to bursts than a figure of speech? After all, sudden bouts of high-frequency firing are expected to occur whenever inputs surge. The burst coding hypothesis advances that the neural code has three syllables: silences, spikes and bursts. We review evidence supporting this ternary code in terms of devoted mechanisms for burst generation, synaptic transmission and synaptic plasticity. We also review the learning and attention theories for which such a triad is beneficial.
Affiliation(s)
- Zachary Friedenberger
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Emerson Harkin
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Katalin Tóth
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Richard Naud
- Brain and Mind Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Centre for Neural Dynamics and Artificial Intelligence, Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
11
Halvagal MS, Zenke F. The combination of Hebbian and predictive plasticity learns invariant object representations in deep sensory networks. Nat Neurosci 2023; 26:1906-1915. PMID: 37828226. PMCID: PMC10620089. DOI: 10.1038/s41593-023-01460-y.
Abstract
Recognition of objects from sensory stimuli is essential for survival. To that end, sensory networks in the brain must form object representations invariant to stimulus changes, such as size, orientation and context. Although Hebbian plasticity is known to shape sensory networks, it fails to create invariant object representations in computational models, raising the question of how the brain achieves such processing. In the present study, we show that combining Hebbian plasticity with a predictive form of plasticity leads to invariant representations in deep neural network models. We derive a local learning rule that generalizes to spiking neural networks and naturally accounts for several experimentally observed properties of synaptic plasticity, including metaplasticity and spike-timing-dependent plasticity. Finally, our model accurately captures neuronal selectivity changes observed in the primate inferotemporal cortex in response to altered visual experience. Thus, we provide a plausible normative theory emphasizing the importance of predictive plasticity mechanisms for successful representational learning.
Affiliation(s)
- Manu Srinath Halvagal
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Faculty of Science, University of Basel, Basel, Switzerland
- Friedemann Zenke
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland.
- Faculty of Science, University of Basel, Basel, Switzerland.
12
O'Donnell C. Nonlinear slow-timescale mechanisms in synaptic plasticity. Curr Opin Neurobiol 2023; 82:102778. PMID: 37657186. DOI: 10.1016/j.conb.2023.102778.
Abstract
Learning and memory rely on synapses changing their strengths in response to neural activity. However, there is a substantial gap between the timescales of neural electrical dynamics (1-100 ms) and organism behaviour during learning (seconds-minutes). What mechanisms bridge this timescale gap? What are the implications for theories of brain learning? Here I first cover experimental evidence for slow-timescale factors in plasticity induction. Then I review possible underlying cellular and synaptic mechanisms, and insights from recent computational models that incorporate such slow-timescale variables. I conclude that future progress in understanding brain learning across timescales will require both experimental and computational modelling studies that map out the nonlinearities implemented by both fast and slow plasticity mechanisms at synapses, and crucially, their joint interactions.
Affiliation(s)
- Cian O'Donnell
- School of Computing, Engineering, and Intelligent Systems, Magee Campus, Ulster University, Derry/Londonderry, UK
- School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, UK
13
Saponati M, Vinck M. Sequence anticipation and spike-timing-dependent plasticity emerge from a predictive learning rule. Nat Commun 2023; 14:4985. PMID: 37604825. PMCID: PMC10442404. DOI: 10.1038/s41467-023-40651-w.
Abstract
Intelligent behavior depends on the brain's ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, which provides a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory signalling and recall in a recurrent network. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
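A heavily simplified caricature of the anticipation idea (synapses strengthened according to how well their activity precedes the neuron's later input drive) is sketched below. The paper derives its rule from a low-rank model of membrane-potential dynamics, which this toy does not attempt to reproduce; the sequence structure, delay, and delta-rule form are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_in, delay, eta = 5000, 20, 3, 1e-3

X = np.zeros((T, n_in))           # repeating sequence: channel k becomes
for t in range(T):                # active a few steps after channel k-1
    X[t, (t // delay) % n_in] = 1.0

w = rng.normal(0.0, 0.1, n_in)
for t in range(T - delay):
    # Anticipatory toy update: inputs whose activity precedes stronger
    # future drive are potentiated relative to the current drive.
    w += eta * X[t] * (X[t + delay] @ w - X[t] @ w)
```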
Affiliation(s)
- Matteo Saponati
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
- IMPRS for Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt am Main, Germany.
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, 6525, Nijmegen, The Netherlands.
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, 6525, Nijmegen, The Netherlands.
14
Rodrigues YE, Tigaret CM, Marie H, O'Donnell C, Veltz R. A stochastic model of hippocampal synaptic plasticity with geometrical readout of enzyme dynamics. eLife 2023; 12:e80152. PMID: 37589251. PMCID: PMC10435238. DOI: 10.7554/eLife.80152.
Abstract
Discovering the rules of synaptic plasticity is an important step for understanding brain learning. Existing plasticity models are either (1) top-down and interpretable, but not flexible enough to account for experimental data, or (2) bottom-up and biologically realistic, but too intricate to interpret and hard to fit to data. To avoid the shortcomings of these approaches, we present a new plasticity rule based on a geometrical readout mechanism that flexibly maps synaptic enzyme dynamics to predict plasticity outcomes. We apply this readout to a multi-timescale model of hippocampal synaptic plasticity induction that includes electrical dynamics, calcium, CaMKII and calcineurin, and accurate representation of intrinsic noise sources. Using a single set of model parameters, we demonstrate the robustness of this plasticity rule by reproducing nine published ex vivo experiments covering various spike-timing and frequency-dependent plasticity induction protocols, animal ages, and experimental conditions. Our model also predicts that in vivo-like spike timing irregularity strongly shapes plasticity outcome. This geometrical readout modelling approach can be readily applied to other excitatory or inhibitory synapses to discover their synaptic plasticity rules.
Affiliation(s)
- Yuri Elias Rodrigues
- Université Côte d’Azur, Nice, France
- Institut de Pharmacologie Moléculaire et Cellulaire (IPMC), CNRS, Valbonne, France
- Inria Center of University Côte d’Azur (Inria), Sophia Antipolis, France
- Cezar M Tigaret
- Neuroscience and Mental Health Research Innovation Institute, Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, United Kingdom
- Hélène Marie
- Université Côte d’Azur, Nice, France
- Institut de Pharmacologie Moléculaire et Cellulaire (IPMC), CNRS, Valbonne, France
- Cian O'Donnell
- School of Computing, Engineering, and Intelligent Systems, Magee Campus, Ulster University, Londonderry, United Kingdom
- School of Computer Science, Electrical and Electronic Engineering, and Engineering Mathematics, University of Bristol, Bristol, United Kingdom
- Romain Veltz
- Inria Center of University Côte d’Azur (Inria), Sophia Antipolis, France
15
Soleimani G, Nitsche MA, Bergmann TO, Towhidkhah F, Violante IR, Lorenz R, Kuplicki R, Tsuchiyagaito A, Mulyana B, Mayeli A, Ghobadi-Azbari P, Mosayebi-Samani M, Zilverstand A, Paulus MP, Bikson M, Ekhtiari H. Closing the loop between brain and electrical stimulation: towards precision neuromodulation treatments. Transl Psychiatry 2023; 13:279. PMID: 37582922. PMCID: PMC10427701. DOI: 10.1038/s41398-023-02565-5.
Abstract
One of the most critical challenges in using noninvasive brain stimulation (NIBS) techniques for the treatment of psychiatric and neurologic disorders is inter- and intra-individual variability in response to NIBS. Response variations in previous findings suggest that the one-size-fits-all approach does not seem to be the most appropriate option for enhancing stimulation outcomes. While there is a growing body of evidence for the feasibility and effectiveness of individualized NIBS approaches, the optimal way to achieve this is yet to be determined. Transcranial electrical stimulation (tES) is one of the NIBS techniques showing promising results in modulating treatment outcomes in several psychiatric and neurologic disorders, but it faces the same challenge of individual optimization. With new computational and methodological advances, tES can be integrated with real-time functional magnetic resonance imaging (rtfMRI) to establish closed-loop tES-fMRI for individually optimized neuromodulation. Closed-loop tES-fMRI systems aim to optimize stimulation parameters by minimizing the difference between the modeled current brain state and the desired state, so as to maximize the expected clinical outcome. The methodological space for optimizing closed-loop tES-fMRI for clinical applications includes (1) stimulation vs. data acquisition timing, (2) fMRI context (task-based or resting-state), (3) inherent brain oscillations, (4) dose-response function, (5) brain target trait and state, and (6) optimization algorithm. Closed-loop tES-fMRI technology has several advantages over non-individualized or open-loop systems to reshape the future of neuromodulation with objective optimization in a clinically relevant context, such as drug cue reactivity for substance use disorder, considering both inter- and intra-individual variations. Using multi-level brain and behavior measures as input and desired outcomes to individualize stimulation parameters provides a framework for designing personalized tES protocols in precision psychiatry.
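The optimization loop at the heart of such systems can be summarized generically: propose stimulation parameters, measure the brain-state readout, keep what improves it. Everything below (the peaked dose-response stand-in, parameter range, and greedy hill-climbing update) is a hypothetical skeleton for illustration, not any published tES-fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def measure_outcome(amplitude_mA):
    """Stand-in for an rtfMRI readout of the targeted brain state,
    with an (assumed) peaked dose-response plus measurement noise."""
    return -(amplitude_mA - 1.3) ** 2 + rng.normal(0.0, 0.05)

amp, best = 0.5, -np.inf
for trial in range(50):
    cand = np.clip(amp + rng.normal(0.0, 0.2), 0.0, 2.0)  # safety-limited range
    out = measure_outcome(cand)
    if out > best:                                        # greedy closed-loop update
        amp, best = cand, out
print(f"selected amplitude: {amp:.2f} mA")
```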
Affiliation(s)
- Ghazaleh Soleimani
- Department of Psychiatry & Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Michael A Nitsche
- Department of Psychology and Neuroscience, Leibniz Research Center for Working Environment and Human Factors, Dortmund, Germany
- Bielefeld University, University Hospital OWL, Protestant Hospital of Bethel Foundation, University Clinic of Psychiatry and Psychotherapy, and University Clinic of Child and Adolescent Psychiatry and Psychotherapy, Bielefeld, Germany
- Til Ole Bergmann
- Neuroimaging Center, Focus Program Translational Neuroscience, Johannes Gutenberg University Medical Center Mainz, Mainz, Germany
- Leibniz Institute for Resilience Research, Mainz, Germany
- Farzad Towhidkhah
- Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Ines R Violante
- School of Psychology, Faculty of Health and Medical Sciences, University of Surrey, Guildford, UK
- Romy Lorenz
- Department of Psychology, Stanford University, Stanford, CA, USA
- MRC CBU, University of Cambridge, Cambridge, UK
- Department of Neurophysics, MPI, Leipzig, Germany
- Beni Mulyana
- Laureate Institute for Brain Research, Tulsa, OK, USA
- School of Electrical and Computer Engineering, University of Oklahoma, Tulsa, OK, USA
- Ahmad Mayeli
- University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Peyman Ghobadi-Azbari
- Department of Biomedical Engineering, Shahed University, Tehran, Iran
- Iranian National Center for Addiction Studies, Tehran University of Medical Sciences, Tehran, Iran
- Mohsen Mosayebi-Samani
- Department of Psychology and Neuroscience, Leibniz Research Center for Working Environment and Human Factors, Dortmund, Germany
- Anna Zilverstand
- Department of Psychiatry & Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Hamed Ekhtiari
- Department of Psychiatry & Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA.
- Laureate Institute for Brain Research, Tulsa, OK, USA.
16
Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023; 13:12939. PMID: 37558704. PMCID: PMC10412617. DOI: 10.1038/s41598-023-39108-3.
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
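The inverse-CDF idea is easy to state concretely: estimate the cumulative distribution of stimuli, then map uniform noise through its inverse so that states are visited in proportion to their learned probability. The sketch below uses plain inverse-transform sampling as a stand-in for the paper's spiking implementation; the four-stimulus environment is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
stimuli = rng.choice(4, size=10_000, p=[0.1, 0.2, 0.3, 0.4])  # environment

values, counts = np.unique(stimuli, return_counts=True)
cdf = np.cumsum(counts) / counts.sum()        # "learned" empirical CDF

u = rng.random(5_000)                         # uniform noise in ...
samples = values[np.searchsorted(cdf, u)]     # ... prior-distributed states out
# Dwell statistics of the sampled states now mirror the stimulus probabilities.
```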
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
- Department of Bioengineering, Imperial College London, London, UK.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
17
Isomura T, Kotani K, Jimbo Y, Friston KJ. Experimental validation of the free-energy principle with in vitro neural networks. Nat Commun 2023; 14:4547. PMID: 37550277. PMCID: PMC10406890. DOI: 10.1038/s41467-023-40141-z.
Abstract
Empirical applications of the free-energy principle are not straightforward because they entail a commitment to a particular process theory, especially at the cellular and synaptic levels. Using a recently established reverse engineering technique, we confirm the quantitative predictions of the free-energy principle using in vitro networks of rat cortical neurons that perform causal inference. Upon receiving electrical stimuli generated by mixing two hidden sources, neurons self-organised to selectively encode the two sources. Pharmacological up- and downregulation of network excitability disrupted the ensuing inference, consistent with changes in prior beliefs about hidden sources. As predicted, changes in effective synaptic connectivity reduced variational free energy, where the connection strengths encoded parameters of the generative model. In short, we show that variational free energy minimisation can quantitatively predict the self-organisation of neuronal networks, in terms of their responses and plasticity. These results demonstrate the applicability of the free-energy principle to in vitro neural networks and establish its predictive validity in this setting.
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan.
- Kiyoshi Kotani
- Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, 153-8904, Japan
- Yasuhiko Jimbo
- Department of Precision Engineering, School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London, WC1N 3AR, UK
- VERSES AI Research Lab, Los Angeles, CA, 90016, USA
18
Gallinaro JV, Scholl B, Clopath C. Synaptic weights that correlate with presynaptic selectivity increase decoding performance. PLoS Comput Biol 2023; 19:e1011362. PMID: 37549193. PMCID: PMC10434873. DOI: 10.1371/journal.pcbi.1011362.
Abstract
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity but are independent of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner that is comparable to maximum likelihood inference.
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
19
Akella S, Bastos AM, Miller EK, Principe JC. Measurable fields-to-spike causality and its dependence on cortical layer and area. bioRxiv [Preprint] 2023:2023.01.17.524451. PMID: 37577637. PMCID: PMC10418085. DOI: 10.1101/2023.01.17.524451.
Abstract
Distinct dynamics in different cortical layers are apparent in neuronal and local field potential (LFP) patterns, yet their associations in the context of laminar processing have been sparingly analyzed. Here, we study the laminar organization of spike-field causal flow within and across visual (V4) and frontal areas (PFC) of monkeys performing a visual task. Using an event-based quantification of LFPs and a directed information estimator, we found area and frequency specificity in the laminar organization of spike-field causal connectivity. Gamma bursts (40-80 Hz) in the superficial layers of V4 largely drove intralaminar spiking. These gamma influences also fed forward up the cortical hierarchy to modulate laminar spiking in PFC. In PFC, the direction of intralaminar information flow was from spikes → fields where these influences dually controlled top-down and bottom-up processing. Our results, enabled by innovative methodologies, emphasize the complexities of spike-field causal interactions amongst multiple brain areas and behavior.
Affiliation(s)
- Shailaja Akella
- Allen Institute, Seattle, WA, United States
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- André M. Bastos
- Department of Psychology and Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
- Earl K. Miller
- The Picower Institute for Learning and Memory, MIT, Cambridge, MA, United States
- Jose C. Principe
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
20
Malakasis N, Chavlis S, Poirazi P. Synaptic turnover promotes efficient learning in bio-realistic spiking neural networks. bioRxiv [Preprint] 2023:2023.05.22.541722. PMID: 37292929. PMCID: PMC10245885. DOI: 10.1101/2023.05.22.541722.
Abstract
While artificial machine learning systems achieve superhuman performance in specific tasks such as language processing and image and video recognition, they do so using extremely large datasets and huge amounts of power. The brain, on the other hand, remains superior in several cognitively challenging tasks while operating with the energy of a small lightbulb. We use a biologically constrained spiking neural network model to explore how the neural tissue achieves such high efficiency and assess its learning capacity on discrimination tasks. We found that synaptic turnover, a form of structural plasticity, which is the ability of the brain to form and eliminate synapses continuously, increases both the speed and the performance of our network on all tasks tested. Moreover, it allows accurate learning using a smaller number of examples. Importantly, these improvements are most significant under conditions of resource scarcity, such as when the number of trainable parameters is halved and when the task difficulty is increased. Our findings provide new insights into the mechanisms that underlie efficient learning in the brain and can inspire the development of more efficient and flexible machine learning algorithms.
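Synaptic turnover in this sense can be sketched as a structural-plasticity step interleaved with ordinary weight updates: prune the weakest synapses and regrow the same number at random vacant sites. The sparsity level, turnover fraction, and nascent weight below are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n_post, n_pre, n_syn = 50, 100, 500

W = np.zeros((n_post, n_pre))
idx = rng.choice(W.size, size=n_syn, replace=False)
W.ravel()[idx] = 0.1 * rng.random(n_syn)          # sparse initial wiring

def turnover(W, rng, frac=0.05, w_init=0.01):
    """Remove the weakest existing synapses, regrow as many elsewhere."""
    w = W.ravel()                                  # view into W
    active = np.flatnonzero(w)
    k = max(1, int(frac * active.size))
    w[active[np.argsort(w[active])[:k]]] = 0.0     # prune the weakest
    vacant = np.flatnonzero(w == 0.0)
    w[rng.choice(vacant, size=k, replace=False)] = w_init  # nascent synapses start weak
    return W

W = turnover(W, rng)   # interleave with weight updates during training
```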
Affiliation(s)
- Nikos Malakasis
- School of Medicine, University of Crete, Heraklion 70013, Greece
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion 70013, Greece
21
Pancholi R, Sun-Yan A, Peron S. Microstimulation of sensory cortex engages natural sensory representations. Curr Biol 2023; 33:1765-1777.e5. PMID: 37130521. PMCID: PMC10246453. DOI: 10.1016/j.cub.2023.03.085.
Abstract
Cortical activity patterns occupy a small subset of possible network states. If this is due to intrinsic network properties, microstimulation of sensory cortex should evoke activity patterns resembling those observed during natural sensory input. Here, we use optical microstimulation of virally transfected layer 2/3 pyramidal neurons in the mouse primary vibrissal somatosensory cortex to compare artificially evoked activity with natural activity evoked by whisker touch and movement ("whisking"). We find that photostimulation engages touch- but not whisking-responsive neurons more than expected by chance. Neurons that respond to photostimulation and touch or to touch alone exhibit higher spontaneous pairwise correlations than purely photoresponsive neurons. Exposure to several days of simultaneous touch and optogenetic stimulation raises both overlap and spontaneous activity correlations among touch and photoresponsive neurons. We thus find that cortical microstimulation engages existing cortical representations and that repeated co-presentation of natural and artificial stimulation enhances this effect.
Affiliation(s)
- Ravi Pancholi
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA
- Andrew Sun-Yan
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA
- Simon Peron
- Center for Neural Science, New York University, 4 Washington Pl., Rm. 621, New York, NY 10003, USA.
22
Bergoin R, Torcini A, Deco G, Quoy M, Zamora-López G. Inhibitory neurons control the consolidation of neural assemblies via adaptation to selective stimuli. Sci Rep 2023; 13:6949. PMID: 37117236. PMCID: PMC10147639. DOI: 10.1038/s41598-023-34165-0.
Abstract
Brain circuits display modular architecture at different scales of organization. Such neural assemblies are typically associated with functional specialization, but the mechanisms leading to their emergence and consolidation remain elusive. In this paper we investigate the role of inhibition in structuring new neural assemblies driven by the entrainment to various inputs. In particular, we focus on the role of partially synchronized dynamics for the creation and maintenance of structural modules in neural circuits by considering a network of excitatory and inhibitory θ-neurons with plastic Hebbian synapses. The learning process consists of an entrainment to temporally alternating stimuli that are applied to separate regions of the network. This entrainment leads to the emergence of modular structures. Contrary to common practice in artificial neural networks, where the acquired weights are typically frozen after the learning session, we allow for synaptic adaptation even after the learning phase. We find that the presence of inhibitory neurons in the network is crucial for the emergence and the post-learning consolidation of the modular structures. Indeed, networks made of purely excitatory neurons or of neurons not respecting Dale's principle are unable to form or maintain the modular architecture induced by the stimuli. We also demonstrate that the number of inhibitory neurons in the network is directly related to the maximal number of neural assemblies that can be consolidated, supporting the idea that inhibition has a direct impact on the memory capacity of the neural network.
Affiliation(s)
- Raphaël Bergoin
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, 6 Av. du Ponceau, 95000, Cergy-Pontoise, France.
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain.
- Alessandro Torcini
- Laboratoire de Physique Théorique et Modélisation, UMR 8089, CY Cergy Paris Université, CNRS, 2 Av. Adolphe Chauvin, 95032, Cergy-Pontoise, France
- Gustavo Deco
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluis Companys 23, 08010, Barcelona, Spain
- Mathias Quoy
- ETIS, UMR 8051, ENSEA, CY Cergy Paris Université, CNRS, 6 Av. du Ponceau, 95000, Cergy-Pontoise, France
- IPAL, CNRS, 1 Fusionopolis Way #21-01 Connexis (South Tower), Singapore, 138632, Singapore
- Gorka Zamora-López
- Center for Brain and Cognition, Department of Information and Communications Technologies, Pompeu Fabra University, Carrer Ramón Trias i Fargas 25-27, 08005, Barcelona, Spain
23
Wilmes KA, Clopath C. Dendrites help mitigate the plasticity-stability dilemma. Sci Rep 2023; 13:6543. PMID: 37085642. PMCID: PMC10121616. DOI: 10.1038/s41598-023-32410-0.
Abstract
With Hebbian learning 'who fires together wires together', well-known problems arise. Hebbian plasticity can cause unstable network dynamics and overwrite stored memories. Because the known homeostatic plasticity mechanisms tend to be too slow to combat unstable dynamics, it has been proposed that plasticity must be highly gated and synaptic strengths limited. While solving the issue of stability, gating and limiting plasticity does not solve the stability-plasticity dilemma. We propose that dendrites enable both stable network dynamics and considerable synaptic changes, as they allow the gating of plasticity in a compartment-specific manner. We investigate how gating plasticity influences network stability in plastic balanced spiking networks of neurons with dendrites. We compare how different ways to gate plasticity, namely via modulating excitability, learning rate, and inhibition increase stability. We investigate how dendritic versus perisomatic gating allows for different amounts of weight changes in stable networks. We suggest that the compartmentalisation of pyramidal cells enables dendritic synaptic changes while maintaining stability. We show that the coupling between dendrite and soma is critical for the plasticity-stability trade-off. Finally, we show that spatially restricted plasticity additionally improves stability.
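A two-compartment caricature of the proposal: the Hebbian drive comes from the dendritic compartment, and the update is applied only when that branch's plasticity gate is open, so somatic output (and hence network stability) is largely decoupled from ongoing weight changes. The gate is left abstract here; in the paper it is studied concretely via excitability, learning rate, and inhibition.

```python
def gated_dendritic_update(w, pre, v_dend, gate_open, eta=1e-2):
    """Plasticity driven by dendritic activity but applied only
    when the branch-specific gate is open (illustrative sketch)."""
    return w + eta * pre * v_dend * float(gate_open)

w = 0.3
w = gated_dendritic_update(w, pre=1.0, v_dend=0.8, gate_open=True)   # weights change
w = gated_dendritic_update(w, pre=1.0, v_dend=0.8, gate_open=False)  # stable: no change
```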
Collapse
Affiliation(s)
- Katharina A Wilmes
- Imperial College London, London, United Kingdom.
- University of Bern, Bern, Switzerland.
| | | |
Collapse
|
24
|
Chen S, Yang Q, Lim S. Efficient inference of synaptic plasticity rule with Gaussian process regression. iScience 2023; 26:106182. [PMID: 36879810 PMCID: PMC9985048 DOI: 10.1016/j.isci.2023.106182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 01/24/2023] [Accepted: 02/07/2023] [Indexed: 02/16/2023] Open
Abstract
Finding the form of synaptic plasticity is critical to understanding its functions underlying learning and memory. We investigated an efficient method to infer synaptic plasticity rules in various experimental settings. We considered biologically plausible models fitting a wide range of in-vitro studies and examined the recovery of their firing-rate dependence from sparse and noisy data. Among the methods assuming low-rankness or smoothness of plasticity rules, Gaussian process regression (GPR), a nonparametric Bayesian approach, performs best. GPR performs well whether changes in synaptic weights are measured directly or inferred from changes in neural activity as indirect observables of synaptic plasticity, two settings that lead to different inference problems. GPR can also recover multiple plasticity rules simultaneously and performs robustly under various plasticity rules and noise levels. Such flexibility and efficiency, particularly in the low-sampling regime, make GPR suitable for recent experimental developments and for inferring a broader class of plasticity models.
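The GPR approach itself is easy to sketch with standard tools. Below, a synthetic BCM-like ground-truth rule (an assumption for illustration, not one of the paper's fitted models) is recovered from sparse, noisy measurements of weight changes using scikit-learn's GaussianProcessRegressor; the RBF kernel encodes the smoothness prior and the WhiteKernel absorbs measurement noise.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Ground-truth rule (assumed for illustration): BCM-like dependence of the
# weight change on presynaptic rate x and postsynaptic rate y.
def true_rule(x, y):
    return x * y * (y - 10.0) / 100.0      # LTD below y = 10 Hz, LTP above

rng = np.random.default_rng(2)
n = 40                                      # sparse sampling, as in slice data
X = rng.uniform(0, 20, size=(n, 2))         # (pre rate, post rate) pairs
dw = true_rule(X[:, 0], X[:, 1]) + rng.normal(0, 0.1, n)   # noisy measurements

# Nonparametric Bayesian fit: smoothness prior via an RBF kernel,
# measurement noise via a WhiteKernel.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(0.01),
                               normalize_y=True).fit(X, dw)

# Query the inferred rule and report predictive uncertainty
grid = np.array([[5.0, 5.0], [5.0, 15.0], [15.0, 15.0]])
mean, std = gpr.predict(grid, return_std=True)
for (xp, yp), m, s in zip(grid, mean, std):
    print(f"pre={xp:4.1f} Hz, post={yp:4.1f} Hz: dw = {m:+.2f} ± {s:.2f} "
          f"(true {true_rule(xp, yp):+.2f})")
```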
Collapse
Affiliation(s)
- Shirui Chen
- Department of Applied Mathematics, University of Washington, Lewis Hall 201, Box 353925, Seattle, WA 98195-3925, USA
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
| | - Qixin Yang
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, The Suzanne and Charles Goodman Brain Sciences Building, Edmond J. Safra Campus, Jerusalem, 9190401, Israel
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
| | - Sukbin Lim
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China
| |
Collapse
|
25
|
Dainauskas JJ, Marie H, Migliore M, Saudargiene A. GluN2B-NMDAR subunit contribution on synaptic plasticity: A phenomenological model for CA3-CA1 synapses. Front Synaptic Neurosci 2023; 15:1113957. [PMID: 37008680 PMCID: PMC10050887 DOI: 10.3389/fnsyn.2023.1113957] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/13/2023] [Indexed: 03/17/2023] Open
Abstract
Synaptic plasticity is believed to be a key mechanism underlying learning and memory. We developed a phenomenological N-methyl-D-aspartate (NMDA) receptor-based voltage-dependent synaptic plasticity model for synaptic modifications at hippocampal CA3-CA1 synapses on a hippocampal CA1 pyramidal neuron. The model incorporates the GluN2A-NMDA and GluN2B-NMDA receptor subunit-based functions and accounts for the dependence of synaptic strength on the postsynaptic NMDA receptor composition and functioning, without explicitly modeling NMDA receptor-mediated intracellular calcium, a local trigger of synaptic plasticity. We embedded the model into a two-compartmental model of a hippocampal CA1 pyramidal cell and validated it against experimental data on spike-timing-dependent plasticity (STDP) and on high- and low-frequency stimulation. The developed model predicts altered learning rules in synapses formed on the apical dendrites of the detailed compartmental model of a CA1 pyramidal neuron in the presence of GluN2B-NMDA receptor hypofunction and can be used in hippocampal networks to model learning in health and disease.
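A heavily simplified sketch of a voltage-dependent rule with separate GluN2A-like and GluN2B-like terms is given below; the sigmoid thresholds, gains, soft-bounding scheme, and the f_2b hypofunction parameter are invented for illustration and are not the published model. Scaling down the GluN2B-like term mimics the hypofunction scenario discussed above.

```python
import numpy as np

def sigmoid(v, v_half, slope):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / slope))

def dw(v_post, w, f_2b=1.0, lr=0.01):
    """Per-event weight change as a function of local postsynaptic voltage.

    f_2b scales the GluN2B-like depression term; f_2b < 1 mimics GluN2B
    hypofunction and shifts the rule toward potentiation.
    """
    ltp = 1.5 * sigmoid(v_post, -20.0, 3.0)          # GluN2A-like, high threshold
    ltd = 0.8 * f_2b * sigmoid(v_post, -45.0, 3.0)   # GluN2B-like, low threshold
    return lr * (ltp * (1.0 - w) - ltd * w)          # soft-bounded in [0, 1]

for label, f in [("control", 1.0), ("GluN2B hypofunction", 0.4)]:
    w = 0.5
    for v in np.random.default_rng(3).uniform(-70.0, 0.0, 500):
        w = float(np.clip(w + dw(v, w, f), 0.0, 1.0))
    print(f"{label}: final w = {w:.2f}")
```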
Collapse
Affiliation(s)
- Justinas J. Dainauskas
- Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
| | - Hélène Marie
- Université Côte d'Azur, Centre National de la Recherche Scientifique (CNRS) UMR 7275, Institut de Pharmacologie Moléculaire et Cellulaire (IPMC), Valbonne, France
| | - Michele Migliore
- Institute of Biophysics, National Research Council, Palermo, Italy
| | - Ausra Saudargiene
- Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
- *Correspondence: Ausra Saudargiene
| |
Collapse
|
26
|
Halter M, Bégon-Lours L, Sousa M, Popoff Y, Drechsler U, Bragaglia V, Offrein BJ. A multi-timescale synaptic weight based on ferroelectric hafnium zirconium oxide. COMMUNICATIONS MATERIALS 2023; 4:14. [PMID: 36843629 PMCID: PMC9936949 DOI: 10.1038/s43246-023-00342-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 02/06/2023] [Indexed: 06/18/2023]
Abstract
Brain-inspired computing emerged as a forefront technology to harness the growing amount of data generated in an increasingly connected society. The complex dynamics involving short- and long-term memory are key to the undisputed performance of biological neural networks. Here, we report on sub-µm-sized artificial synaptic weights exploiting a combination of a ferroelectric space charge effect and oxidation state modulation in the oxide channel of a ferroelectric field-effect transistor. They lead to a quasi-continuous resistance tuning of the synapse by a factor of 60 and a fine-grained weight update of more than 200 resistance values. We leverage a fast, saturating ferroelectric effect and a slow, ionic drift and diffusion process to engineer a multi-timescale artificial synapse. Our device demonstrates an endurance of more than 10¹⁰ cycles, a ferroelectric retention of more than 10 years, and various types of volatility behavior on distinct timescales, making it well suited for neuromorphic and cognitive computing.
Collapse
Affiliation(s)
- Mattia Halter
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
- ETH Zurich - Integrated Systems Laboratory, CH-8092 Zurich, Switzerland
- Present Address: Lumiphase AG, CH-8712 Stäfa, Switzerland
| | - Laura Bégon-Lours
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
| | - Marilyne Sousa
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
| | - Youri Popoff
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
- ETH Zurich - Integrated Systems Laboratory, CH-8092 Zurich, Switzerland
| | - Ute Drechsler
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
| | - Valeria Bragaglia
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
| | - Bert Jan Offrein
- IBM Research Europe - Zurich Research Laboratory, CH-8803 Rüschlikon, Switzerland
| |
Collapse
|
27
|
Thiele M, Berner R, Tass PA, Schöll E, Yanchuk S. Asymmetric adaptivity induces recurrent synchronization in complex networks. CHAOS (WOODBURY, N.Y.) 2023; 33:023123. [PMID: 36859232 DOI: 10.1063/5.0128102] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 01/18/2023] [Indexed: 06/18/2023]
Abstract
Rhythmic activities that alternate between coherent and incoherent phases are ubiquitous in chemical, ecological, climate, and neural systems. Despite their importance, the general mechanisms for their emergence are poorly understood. To fill this gap, we present a framework for describing the emergence of recurrent synchronization in complex networks with adaptive interactions. This phenomenon is manifested at the macroscopic level by temporal episodes of coherent and incoherent dynamics that alternate recurrently. At the same time, the dynamics of the individual nodes do not change qualitatively. We identify asymmetric adaptation rules and temporal separation between the adaptation and the dynamics of individual nodes as key features for the emergence of recurrent synchronization. Our results suggest that asymmetric adaptation might be a fundamental ingredient for recurrent synchronization phenomena as seen in pattern generators, e.g., in neuronal systems.
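The two key ingredients identified above, an asymmetric adaptation rule and time-scale separation between node dynamics and adaptation, are straightforward to set up in a toy adaptive phase-oscillator network. The sketch below is illustrative only: whether coherent and incoherent episodes actually alternate depends on the parameter choices, which here are assumptions rather than values from the paper; the code simply assembles the named ingredients and tracks the Kuramoto order parameter over time.

```python
import numpy as np

# Phase oscillators with slow, asymmetric adaptive coupling (illustrative
# parameters; eps << 1 gives the time-scale separation named above).
rng = np.random.default_rng(4)
N = 50
omega = rng.normal(0.0, 0.1, N)            # natural frequencies
phi = rng.uniform(0, 2 * np.pi, N)
kappa = rng.uniform(0.5, 1.0, (N, N))      # adaptive coupling weights
dt, eps = 0.05, 0.01

def order_parameter(phi):
    return np.abs(np.exp(1j * phi).mean())

for step in range(20000):
    dphi = np.subtract.outer(phi, phi)     # dphi[i, j] = phi_i - phi_j
    phi += dt * (omega + (kappa * np.sin(-dphi)).mean(axis=1))
    # Asymmetric adaptation: the sin term breaks the i <-> j symmetry,
    # so kappa[i, j] and kappa[j, i] evolve differently.
    kappa += dt * eps * (np.cos(dphi) - 0.5 * np.sin(dphi) - kappa)
    if step % 2000 == 0:
        print(f"t = {step * dt:7.1f}  R = {order_parameter(phi):.2f}")
```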
Collapse
Affiliation(s)
- Max Thiele
- Institut für Theoretische Physik, Technische Universität Berlin, 10623 Berlin, Germany
| | - Rico Berner
- Institut für Theoretische Physik, Technische Universität Berlin, 10623 Berlin, Germany
| | - Peter A Tass
- Department of Neurosurgery, Stanford University, Stanford, California 94305, USA
| | - Eckehard Schöll
- Institut für Theoretische Physik, Technische Universität Berlin, 10623 Berlin, Germany
| | - Serhiy Yanchuk
- Potsdam Institute for Climate Impact Research, 14473 Potsdam, Germany
| |
Collapse
|
28
|
Garnier Artiñano T, Andalibi V, Atula I, Maestri M, Vanni S. Biophysical parameters control signal transfer in spiking network. Front Comput Neurosci 2023; 17:1011814. [PMID: 36761840 PMCID: PMC9905747 DOI: 10.3389/fncom.2023.1011814] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Introduction: Information transmission and representation in both natural and artificial networks depends on the connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and postsynaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on their model to explore how basic membrane properties and synaptic delays affect information transfer.
Methods: The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive exponential integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three parameter regimes, comprising either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime with output units optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error.
Results: Biophysical parameters had a major impact on the information transfer metrics. Classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall, and particularly low reconstruction error, depended more on higher firing rates in LIF units. In AdEx units, firing rates were lower and less information was transferred, but interestingly the highest information transmission rates no longer overlapped with the highest firing rates.
Discussion: Our findings can be considered in light of the predictive coding theory of the cerebral cortex and may suggest information transfer qualities as a phenomenological property of biological cells.
Collapse
Affiliation(s)
- Tomás Garnier Artiñano
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
| | - Vafa Andalibi
- Department of Computer Science, Indiana University Bloomington, Bloomington, IN, United States
| | - Iiris Atula
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
| | - Matteo Maestri
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Simo Vanni
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
- *Correspondence: Simo Vanni
| |
Collapse
|
29
|
Lee JH, Choe Y, Ardid S, Abbasi-Asl R, McCarthy M, Hu B. Editorial: Functional microcircuits in the brain and in artificial intelligent systems. Front Comput Neurosci 2023; 17:1135507. [PMID: 36761841 PMCID: PMC9904200 DOI: 10.3389/fncom.2023.1135507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Accepted: 01/10/2023] [Indexed: 01/26/2023] Open
Affiliation(s)
- Jung H. Lee
- Pacific Northwest National Laboratory, Seattle, WA, United States
- *Correspondence: Jung H. Lee
| | - Yoonsuck Choe
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States
| | - Salva Ardid
- Department of Applied Physics and Institut d'Investigació per a la Gestió Integrada de Zones Costaneres (IGIC), Universitat Politècnica de València, Gandia, Spain
| | - Reza Abbasi-Asl
- Department of Neurology, University of California, San Francisco, San Francisco, CA, United States
| | - Michelle McCarthy
- Department of Mathematics and Statistics, Boston University, Boston, MA, United States
| | - Brian Hu
- Kitware, Inc., Arlington, VA, United States
| |
Collapse
|
30
|
Mikulasch FA, Rudelt L, Wibral M, Priesemann V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci 2023; 46:45-59. [PMID: 36577388 DOI: 10.1016/j.tins.2022.09.007] [Citation(s) in RCA: 19] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 09/28/2022] [Accepted: 09/28/2022] [Indexed: 11/19/2022]
Abstract
Top-down feedback in cortex is critical for guiding sensory processing, a role that has been prominently formalized in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive, and it remains unclear how hPC can be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and on predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, in which prediction errors are computed not in separate units but locally, in dendritic compartments. We then discuss the correspondence of this model to experimentally observed connectivity patterns, plasticity, and dynamics in cortex.
Collapse
Affiliation(s)
- Fabian A Mikulasch
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.
| | - Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Michael Wibral
- Göttingen Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
| | - Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany; Department of Physics, Georg-August University, Göttingen, Germany
| |
Collapse
|
31
|
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780 PMCID: PMC9700774 DOI: 10.1038/s41386-022-01374-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/09/2022]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity-defined broadly as the adaptive control of plasticity-forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
Collapse
Affiliation(s)
- Daniel N Scott
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
| | - Michael J Frank
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
| |
Collapse
|
32
|
Miehl C, Gjorgjieva J. Stability and learning in excitatory synapses by nonlinear inhibitory plasticity. PLoS Comput Biol 2022; 18:e1010682. [PMID: 36459503 PMCID: PMC9718420 DOI: 10.1371/journal.pcbi.1010682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 10/25/2022] [Indexed: 12/03/2022] Open
Abstract
Synaptic changes are hypothesized to underlie learning and memory formation in the brain. But Hebbian synaptic plasticity of excitatory synapses on its own is unstable, leading to either unlimited growth of synaptic strengths or silencing of neuronal activity without additional homeostatic mechanisms. To control excitatory synaptic strengths, we propose a novel form of synaptic plasticity at inhibitory synapses. Using computational modeling, we suggest two key features of inhibitory plasticity: dominance of inhibition over excitation, and a nonlinear dependence on the firing rate of postsynaptic excitatory neurons whereby inhibitory synaptic strengths change with the same sign (potentiate or depress) as excitatory synaptic strengths. We demonstrate that the stable synaptic strengths realized by this novel inhibitory plasticity model affect excitatory/inhibitory weight ratios in agreement with experimental results. Applying a disinhibitory signal can gate plasticity and lead to the generation of receptive fields and strong bidirectional connectivity in a recurrent network. Hence, a novel form of nonlinear inhibitory plasticity can simultaneously stabilize excitatory synaptic strengths and enable learning upon disinhibition.
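A one-E-unit, one-I-unit rate model suffices to illustrate the proposed mechanism: excitatory and inhibitory weights change with the same sign, determined by whether the postsynaptic excitatory rate sits above or below a crossover rate, and the inhibitory plasticity is made fast enough to win the race against the Hebbian instability. All constants below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# One excitatory (E) and one inhibitory (I) rate unit with plastic E and I
# synapses onto E. Both weights change with the same sign, set by whether the
# postsynaptic E rate is above or below a crossover rate rho; inhibitory
# plasticity is faster (eta_I > eta_E) so it stabilizes the Hebbian term.
rho = 5.0                                  # postsynaptic rate separating LTD/LTP
x = 10.0                                   # feedforward input drive
w_EE, w_EI = 1.0, 0.5                      # plastic weights onto E
dt, eta_E, eta_I = 0.01, 0.05, 0.25

r_E = 1.0
for step in range(50000):
    r_I = 2.0 * r_E                        # I tracks E activity
    r_E += dt * (-r_E + max(0.0, w_EE * x - w_EI * r_I + 1.0))
    hebb = r_E - rho                       # shared sign of both weight changes
    w_EE += dt * eta_E * x * hebb          # Hebbian, unstable on its own
    w_EI += dt * eta_I * r_I * hebb        # nonlinear inhibitory plasticity
    w_EE, w_EI = max(w_EE, 0.0), max(w_EI, 0.0)

print(f"steady state: r_E = {r_E:.2f} (crossover rho = {rho}), "
      f"w_EE = {w_EE:.2f}, w_EI = {w_EI:.2f}, E/I weight ratio = {w_EE / w_EI:.2f}")
```

In this toy the excitatory rate settles near the crossover rate, with the weights landing on a line of fixed points, so the final E/I weight ratio depends on the initial condition.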
Collapse
Affiliation(s)
- Christoph Miehl
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
- * E-mail: (CM); (JG)
| | - Julijana Gjorgjieva
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- School of Life Sciences, Technical University of Munich, Freising, Germany
- * E-mail: (CM); (JG)
| |
Collapse
|
33
|
Ratas I, Pyragas K. Interplay of different synchronization modes and synaptic plasticity in a system of class I neurons. Sci Rep 2022; 12:19631. [PMID: 36385488 PMCID: PMC9668974 DOI: 10.1038/s41598-022-24001-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 11/08/2022] [Indexed: 11/17/2022] Open
Abstract
We analyze the effect of spike-timing-dependent plasticity (STDP) on a system of pulse-coupled class I neurons. Our analysis begins with a system of two mutually connected quadratic integrate-and-fire (QIF) neurons, which are canonical representatives of class I neurons. Along with various asymptotic modes previously observed in other neuronal models with plastic synapses, we found a stable synchronous mode characterized by a unidirectional link from the slower neuron to the faster neuron. In this frequency-locked mode, the faster neuron emits multiple spikes per cycle of the slower neuron. We analytically obtain the Arnold tongues for this mode with and without STDP. We also consider larger plastic networks of QIF neurons and show that the detected mode can manifest itself in such a way that slow neurons become pacemakers. As a result, slow and fast neurons can form large synchronous clusters that generate low-frequency oscillations. We demonstrate the generality of the results obtained with two connected QIF neurons using the Wang-Buzsáki and Morris-Lecar biophysically plausible class I neuron models.
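The basic setup, two pulse-coupled QIF neurons with different excitabilities and pair-based STDP, can be sketched as below. The coupling amplitude, STDP window, and weight bounds are illustrative assumptions, and the sketch makes no attempt to reproduce the paper's Arnold-tongue analysis.

```python
import numpy as np

# Two pulse-coupled quadratic integrate-and-fire (QIF) neurons with
# pair-based STDP (illustrative constants).
dt, T = 1e-4, 20.0
v_peak, v_reset = 100.0, -100.0
eta = np.array([2.0, 8.0])                 # neuron 0 slow, neuron 1 fast
v = np.zeros(2)
w = np.array([[0.0, 1.0],                  # w[i, j]: synapse from j to i
              [1.0, 0.0]])
x = np.zeros(2)                            # STDP traces, one per neuron
tau_stdp, A_plus, A_minus = 0.5, 0.05, 0.06

t, spikes = 0.0, []
while t < T:
    v = v + dt * (v * v + eta)             # QIF intrinsic dynamics
    x *= np.exp(-dt / tau_stdp)
    for j in np.flatnonzero(v >= v_peak):
        v[j] = v_reset
        spikes.append((t, j))
        v += 10.0 * w[:, j] * (np.arange(2) != j)   # excitatory pulse coupling
        # pre-before-post potentiates w[j, :]; post-before-pre depresses w[:, j]
        w[j, :] += A_plus * x
        w[:, j] -= A_minus * x
        x[j] += 1.0
    np.clip(w, 0.0, 5.0, out=w)
    np.fill_diagonal(w, 0.0)
    t += dt

print("final weights:\n", np.round(w, 3))
print("spike counts (slow, fast):", np.bincount([j for _, j in spikes], minlength=2))
```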
Collapse
Affiliation(s)
- Irmantas Ratas
- Center for Physical Sciences and Technology, 10257 Vilnius, Lithuania
| | - Kestutis Pyragas
- Center for Physical Sciences and Technology, 10257 Vilnius, Lithuania
| |
Collapse
|
34
|
Garg N, Balafrej I, Stewart TC, Portal JM, Bocquet M, Querlioz D, Drouin D, Rouat J, Beilliard Y, Alibart F. Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential. Front Neurosci 2022; 16:983950. [PMID: 36340782 PMCID: PMC9634260 DOI: 10.3389/fnins.2022.983950] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/05/2022] [Indexed: 11/27/2022] Open
Abstract
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance on the spike of the postsynaptic neuron only, which halves the number of updates with respect to standard spike-timing-dependent plasticity (STDP). The update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence requires no additional memory for storage. Moreover, the update is regularized on the synaptic weight and prevents weights from exploding or vanishing upon repeated stimulation. Rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule for spatial pattern recognition tasks. Future work will consider more complicated tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
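The core of the rule as described, an update triggered only by postsynaptic spikes, driven by the presynaptic membrane potential, with multiplicative soft bounds, fits in a few lines; the specific functional shape and constants below are assumptions, not the published VDSP equations.

```python
import numpy as np

# Sketch of a VDSP-style update (shapes and constants are assumptions):
# on each postsynaptic spike, the weight moves up or down according to the
# presynaptic membrane potential, with multiplicative soft bounds that
# prevent weight explosion or vanishing.
def vdsp_update(w, v_pre, v_rest=-65.0, v_thresh=-55.0, lr=0.01):
    drive = (v_pre - v_rest) / (v_thresh - v_rest)   # ~1 if pre recently active
    if drive > 0.5:
        return w + lr * (1.0 - w) * drive            # potentiation, bounded at 1
    return w - lr * w * (0.5 - drive)                # depression, bounded at 0

rng = np.random.default_rng(5)
w = 0.5
for _ in range(2000):                                # postsynaptic spike times
    v_pre = rng.uniform(-75.0, -50.0)                # pre potential at post spike
    w = vdsp_update(w, v_pre)
print(f"weight after 2000 postsynaptic spikes: {w:.3f}")
```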
Collapse
Affiliation(s)
- Nikhil Garg
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- *Correspondence: Nikhil Garg
| | - Ismael Balafrej
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
| | - Terrence C. Stewart
- National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, ON, Canada
| | - Jean-Michel Portal
- Aix-Marseille Université, Université de Toulon, CNRS, IM2NP, Marseille, France
| | - Marc Bocquet
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
| | - Damien Querlioz
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
| | - Dominique Drouin
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Jean Rouat
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
| | - Yann Beilliard
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
| | - Fabien Alibart
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- *Correspondence: Fabien Alibart
| |
Collapse
|
35
|
Oberländer J, Bouhadjar Y, Morrison A. Learning and replaying spatiotemporal sequences: A replication study. Front Integr Neurosci 2022; 16:974177. [PMID: 36310714 PMCID: PMC9614051 DOI: 10.3389/fnint.2022.974177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 08/17/2022] [Indexed: 11/13/2022] Open
Abstract
Learning and replaying spatiotemporal sequences are fundamental computations performed by the brain and specifically the neocortex. These features are critical for a wide variety of cognitive functions, including sensory perception and the execution of motor and language skills. Although several computational models demonstrate this capability, many are either hard to reconcile with biological findings or have limited functionality. To address this gap, a recent study proposed a biologically plausible model based on a spiking recurrent neural network supplemented with read-out neurons. After learning, the recurrent network develops precise switching dynamics by successively activating and deactivating small groups of neurons. The read-out neurons are trained to respond to particular groups and can thereby reproduce the learned sequence. For the model to serve as the basis for further research, it is important to determine its replicability. In this Brief Report, we give a detailed description of the model and identify missing details, inconsistencies or errors in or between the original paper and its reference implementation. We re-implement the full model in the neural simulator NEST in conjunction with the NESTML modeling language and confirm the main findings of the original work.
Collapse
Affiliation(s)
- Jette Oberländer
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Department of Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
| | - Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Jülich Research Centre and JARA, Peter Grünberg Institute (PGI-7, 10), Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- *Correspondence: Younes Bouhadjar
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
- Department of Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
36
|
Hu B, Guan ZH, Chen G, Chen CLP. Neuroscience and Network Dynamics Toward Brain-Inspired Intelligence. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:10214-10227. [PMID: 33909581 DOI: 10.1109/tcyb.2021.3071110] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
This article surveys the interdisciplinary research of neuroscience, network science, and dynamic systems, with emphasis on the emergence of brain-inspired intelligence. To replicate brain intelligence, a practical way is to reconstruct cortical networks with dynamic activities that nourish the brain functions, instead of using only artificial computing networks. The survey provides a complex network and spatiotemporal dynamics (abbr. network dynamics) perspective for understanding the brain and cortical networks and, furthermore, develops integrated approaches of neuroscience and network dynamics toward building brain-inspired intelligence with learning and resilience functions. Presented are fundamental concepts and principles of complex networks, neuroscience, and hybrid dynamic systems, as well as relevant studies about the brain and intelligence. Other promising research directions, such as brain science, data science, quantum information science, and machine behavior, are also briefly discussed with a view toward future applications.
Collapse
|
37
|
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723 DOI: 10.1113/jp282750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 08/22/2022] [Indexed: 11/08/2022] Open
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
[Abstract figure legend: Assembly formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.]
Collapse
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany
- School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| |
Collapse
|
38
|
Mizusaki BEP, Li SSY, Costa RP, Sjöström PJ. Pre- and postsynaptically expressed spike-timing-dependent plasticity contribute differentially to neuronal learning. PLoS Comput Biol 2022; 18:e1009409. [PMID: 35700188 PMCID: PMC9236267 DOI: 10.1371/journal.pcbi.1009409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 06/27/2022] [Accepted: 05/11/2022] [Indexed: 11/18/2022] Open
Abstract
A plethora of experimental studies have shown that long-term synaptic plasticity can be expressed pre- or postsynaptically depending on a range of factors such as developmental stage, synapse type, and activity patterns. The functional consequences of this diversity are not clear, although it is understood that whereas postsynaptic expression of plasticity predominantly affects synaptic response amplitude, presynaptic expression alters both synaptic response amplitude and short-term dynamics. In most models of neuronal learning, long-term synaptic plasticity is implemented as changes in connective weights. The consideration of long-term plasticity as a fixed change in amplitude corresponds more closely to post- than to presynaptic expression, which means theoretical outcomes based on this choice of implementation may have a postsynaptic bias. To explore the functional implications of the diversity of expression of long-term synaptic plasticity, we adapted a model of long-term plasticity, more specifically spike-timing-dependent plasticity (STDP), such that it was expressed either independently pre- or postsynaptically, or in a mixture of both ways. We compared pair-based standard STDP models and a biologically tuned triplet STDP model, and investigated the outcomes in a minimal setting, using two different learning schemes: in the first, inputs were triggered at different latencies, and in the second a subset of inputs were temporally correlated. We found that presynaptic changes adjusted the speed of learning, while postsynaptic expression was more efficient at regulating spike timing and frequency. When combining both expression loci, postsynaptic changes amplified the response range, while presynaptic plasticity allowed control over postsynaptic firing rates, potentially providing a form of activity homeostasis. Our findings highlight how the seemingly innocuous choice of implementing synaptic plasticity by single weight modification may unwittingly introduce a postsynaptic bias in modelling outcomes. We conclude that pre- and postsynaptically expressed plasticity are not interchangeable, but enable complementary functions.
Differences between the functional properties of pre- and postsynaptically expressed long-term plasticity have not yet been explored in much detail. In this paper, we used minimalist models of STDP with different expression loci, in search of fundamental functional consequences. Biologically, presynaptic expression acts mostly on neurotransmitter release, thereby altering short-term synaptic dynamics, whereas postsynaptic expression affects mainly synaptic gain. We compared models where plasticity was expressed only presynaptically, only postsynaptically, or in both ways. We found that postsynaptic plasticity had a bigger impact on response times, while both pre- and postsynaptic plasticity were similarly capable of detecting correlated inputs. A model with biologically tuned expression of plasticity achieved the same outcome over a range of frequencies. Also, postsynaptic spiking frequency was not directly affected by presynaptic plasticity alone; however, in combination with a postsynaptic component, it helped restrain positive feedback, contributing to activity homeostasis. In conclusion, the expression locus may determine affinity for distinct coding schemes while also helping to keep activity within bounds. Our findings highlight the importance of carefully implementing the expression of plasticity in biological modelling, since the locus of expression may affect functional outcomes in simulations.
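The pre/post distinction drawn above is easy to see in a Tsodyks-Markram-style synapse model, where postsynaptic expression scales the response amplitude while presynaptic expression scales release probability and thereby also reshapes short-term dynamics. The sketch below uses generic illustrative parameters, not those of the paper; note how the paired-pulse ratio changes only under the pre-side manipulation.

```python
import numpy as np

# Tsodyks-Markram synapse: postsynaptic LTP scales amplitude A (pure gain);
# presynaptic LTP scales release probability U (gain + short-term dynamics).
def psp_train(A, U, n_spikes=5, isi=0.05, tau_rec=0.5):
    R, psps = 1.0, []
    for _ in range(n_spikes):
        psps.append(A * U * R)                         # response to this spike
        R = R - U * R                                  # deplete resources...
        R = 1.0 - (1.0 - R) * np.exp(-isi / tau_rec)   # ...then recover
    return np.array(psps)

base = psp_train(A=1.0, U=0.2)
post_ltp = psp_train(A=1.5, U=0.2)         # postsynaptic expression
pre_ltp = psp_train(A=1.0, U=0.3)          # presynaptic expression

for name, tr in [("baseline", base), ("post-LTP", post_ltp), ("pre-LTP", pre_ltp)]:
    print(f"{name:9s} PSPs: {np.round(tr, 3)}  paired-pulse ratio: {tr[1] / tr[0]:.2f}")
```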
Collapse
Affiliation(s)
- Beatriz Eymi Pimentel Mizusaki
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
- Instituto de Física, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
| | - Sally Si Ying Li
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
| | - Rui Ponte Costa
- Computational Neuroscience Unit, Department of Computer Science, SCEEM, Faculty of Engineering, University of Bristol, Bristol, United Kingdom
- Department of Physiology, University of Bern, Bern, Switzerland
- Centre for Neural Circuits and Behaviour, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| | - Per Jesper Sjöström
- Centre for Research in Neuroscience, Brain Repair and Integrative Neuroscience Programme, Departments of Medicine, Neurology and Neurosurgery, The Research Institute of the McGill University Health Centre, Montreal General Hospital, Montreal, Quebec, Canada
| |
Collapse
|
39
|
Chindemi G, Abdellah M, Amsalem O, Benavides-Piccione R, Delattre V, Doron M, Ecker A, Jaquier AT, King J, Kumbhar P, Monney C, Perin R, Rössert C, Tuncel AM, Van Geit W, DeFelipe J, Graupner M, Segev I, Markram H, Muller EB. A calcium-based plasticity model for predicting long-term potentiation and depression in the neocortex. Nat Commun 2022; 13:3038. [PMID: 35650191 PMCID: PMC9160074 DOI: 10.1038/s41467-022-30214-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Accepted: 04/19/2022] [Indexed: 01/14/2023] Open
Abstract
Pyramidal cells (PCs) form the backbone of the layered structure of the neocortex, and plasticity of their synapses is thought to underlie learning in the brain. However, such long-term synaptic changes have been experimentally characterized between only a few types of PCs, posing a significant barrier for studying neocortical learning mechanisms. Here we introduce a model of synaptic plasticity based on data-constrained postsynaptic calcium dynamics, and show in a neocortical microcircuit model that a single parameter set is sufficient to unify the available experimental findings on long-term potentiation (LTP) and long-term depression (LTD) of PC connections. In particular, we find that the diverse plasticity outcomes across the different PC types can be explained by cell-type-specific synaptic physiology, cell morphology and innervation patterns, without requiring type-specific plasticity. Generalizing the model to in vivo extracellular calcium concentrations, we predict qualitatively different plasticity dynamics from those observed in vitro. This work provides a first comprehensive null model for LTP/LTD between neocortical PC types in vivo, and an open framework for further developing models of cortical synaptic plasticity.
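A minimal calcium-threshold model of the kind this work builds on (a Graupner-Brunel-style scheme, with invented constants rather than the paper's fitted neocortical parameters) can be sketched as follows: pre- and postsynaptic spikes add calcium, and the weight drifts up while calcium exceeds a potentiation threshold and down while it exceeds a lower depression threshold.

```python
import numpy as np

dt = 1e-3
tau_ca = 0.05                              # calcium decay time constant (s)
C_pre, C_post = 0.6, 1.0                   # calcium influx per pre/post spike
theta_d, theta_p = 0.7, 1.2                # depression / potentiation thresholds
gamma_d, gamma_p = 0.5, 3.0                # drift rates while above threshold

def run_pairing(delta_t, n_pairs=60, pair_interval=1.0):
    """Repeated pre/post pairings with pre -> post lag delta_t (seconds)."""
    events = []
    for k in range(n_pairs):
        events += [(k * pair_interval, C_pre),
                   (k * pair_interval + delta_t, C_post)]
    events.sort()
    w, ca, t, ei = 0.5, 0.0, 0.0, 0
    while t < n_pairs * pair_interval:
        while ei < len(events) and events[ei][0] <= t:
            ca += events[ei][1]
            ei += 1
        ca *= np.exp(-dt / tau_ca)
        dwdt = gamma_p * (ca > theta_p) * (1.0 - w) - gamma_d * (ca > theta_d) * w
        w = float(np.clip(w + dt * dwdt, 0.0, 1.0))
        t += dt
    return w

for delta_t in (0.01, 0.10):               # tight vs loose pre -> post pairing
    print(f"pre->post lag {delta_t * 1000:4.0f} ms: w = {run_pairing(delta_t):.2f}")
```

With these toy constants, tightly paired spikes drive calcium above the potentiation threshold (net LTP), while loosely paired spikes only cross the depression threshold (net LTD).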
Collapse
Affiliation(s)
- Giuseppe Chindemi
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland.
| | - Marwan Abdellah
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Oren Amsalem
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel
- Division of Endocrinology, Diabetes and Metabolism, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, 02215, USA
| | - Ruth Benavides-Piccione
- Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
| | - Vincent Delattre
- Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Michael Doron
- Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
| | - András Ecker
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Aurélien T Jaquier
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - James King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Pramod Kumbhar
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Caitlin Monney
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Rodrigo Perin
- Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Christian Rössert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Anil M Tuncel
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Werner Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
| | - Javier DeFelipe
- Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
| | - Michael Graupner
- Université de Paris, SPPIN - Saints-Pères Paris Institute for the Neurosciences, CNRS, Paris, France
| | - Idan Segev
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel
- Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
| | - Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Eilif B Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Department of Neurosciences, Faculty of Medicine, Université de Montréal, Montréal, QC, Canada
- CHU Sainte-Justine Research Center, Montréal, QC, Canada
- Quebec Artificial Intelligence Institute (Mila), Montréal, Canada
| |
Collapse
|
40
|
Li KT, He X, Zhou G, Yang J, Li T, Hu H, Ji D, Zhou C, Ma H. Rational designing of oscillatory rhythmicity for memory rescue in plasticity-impaired learning networks. Cell Rep 2022; 39:110678. [PMID: 35417714 DOI: 10.1016/j.celrep.2022.110678] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 01/19/2022] [Accepted: 03/22/2022] [Indexed: 12/15/2022] Open
Abstract
In the brain, oscillatory strength embedded in network rhythmicity is important for processing experiences, and this process is disrupted in certain psychiatric disorders. The use of rhythmic network stimuli can change these oscillations and has shown promise in terms of improving cognitive function, although the underlying mechanisms are poorly understood. Here, we combine a two-layer learning model with experiments in genetically modified mice that provide precise control of experience-driven oscillations by manipulating long-term potentiation of excitatory synapses onto inhibitory interneurons (LTPE→I). We find that, in the absence of LTPE→I, impaired network dynamics and memory are rescued by activating inhibitory neurons to augment the power in theta and gamma frequencies, which prevents network overexcitation with less inhibitory rebound. In contrast, increasing either theta or gamma power alone was less effective. Thus, inducing network changes at dual frequencies is involved in memory encoding, indicating a potentially feasible strategy for optimizing network-stimulating therapies.
Collapse
Affiliation(s)
- Kwan Tung Li
- Department of Physics, Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
| | - Xingzhi He
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
| | - Guangjun Zhou
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
| | - Jing Yang
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
| | - Tao Li
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
| | - Hailan Hu
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China; Research Units for Emotion and Emotion disorders, Chinese Academy of Medical Sciences, Beijing 100730, China
| | - Daoyun Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX 77030, USA
| | - Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies, Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China; Department of Physics, Zhejiang University, Hangzhou 310027, China.
| | - Huan Ma
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310058, China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Research and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China; Research Units for Emotion and Emotion disorders, Chinese Academy of Medical Sciences, Beijing 100730, China.
| |
Collapse
|
41
|
Prediction-error neurons in circuits with multiple neuron types: Formation, refinement, and functional implications. Proc Natl Acad Sci U S A 2022; 119:e2115699119. [PMID: 35320037 PMCID: PMC9060484 DOI: 10.1073/pnas.2115699119] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023] Open
Abstract
An influential idea in neuroscience is that neural circuits do not only passively process sensory information but rather actively compare it with predictions thereof. A core element of this comparison is prediction-error neurons, the activity of which changes only upon mismatches between actual and predicted sensory stimuli. While it has been shown that these prediction-error neurons come in different variants, it is largely unresolved how they are simultaneously formed and shaped by highly interconnected neural networks. By using a computational model, we study the circuit-level mechanisms that give rise to different variants of prediction-error neurons. Our results shed light on the formation, refinement, and robustness of prediction-error circuits, an important step toward a better understanding of predictive processing.
Predictable sensory stimuli do not evoke significant responses in a subset of cortical excitatory neurons. Some of those neurons, however, change their activity upon mismatches between actual and predicted stimuli. Different variants of these prediction-error neurons exist, and they differ in their responses to unexpected sensory stimuli. However, it is unclear how these variants can develop and coexist in the same recurrent network, and how they are simultaneously shaped by the astonishing diversity of inhibitory interneurons. Here, we study these questions in a computational network model with three types of inhibitory interneurons. We find that balancing excitation and inhibition in multiple pathways gives rise to heterogeneous prediction-error circuits. Dependent on the network's initial connectivity and distribution of actual and predicted sensory inputs, these circuits can form different variants of prediction-error neurons that are robust to network perturbations and generalize to stimuli not seen during learning. These variants can be learned simultaneously via homeostatic inhibitory plasticity with low baseline firing rates. Finally, we demonstrate that prediction-error neurons can support biased perception, we illustrate a number of functional implications, and we discuss testable predictions.
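The homeostatic inhibitory plasticity invoked above can be illustrated with the widely used rule in which inhibitory weights grow whenever the postsynaptic rate exceeds a low target and shrink otherwise (a Vogels-et-al.-style rule). The single-pathway circuit below is a toy assumption, not the paper's network with three interneuron types.

```python
import numpy as np

# Homeostatic inhibitory plasticity balancing a fixed excitatory pathway.
rng = np.random.default_rng(6)
n_in = 50
w_exc = rng.uniform(0.5, 1.5, n_in)        # fixed excitatory weights
w_inh = np.zeros(n_in)                     # plastic inhibitory weights
rho0, eta = 1.0, 5e-4                      # low target rate, learning rate

for trial in range(5000):
    x = rng.poisson(5.0, n_in).astype(float)        # presynaptic activity
    post = max(0.0, float(w_exc @ x - w_inh @ x))   # postsynaptic rate
    w_inh += eta * x * (post - rho0)                # grow when post > target
    np.clip(w_inh, 0.0, None, out=w_inh)

x = rng.poisson(5.0, n_in).astype(float)
print("postsynaptic rate after learning:",
      round(max(0.0, float(w_exc @ x - w_inh @ x)), 2))
print("mean E weight:", round(w_exc.mean(), 2),
      " mean I weight:", round(w_inh.mean(), 2))
```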
Collapse
|
42
|
Adaptive erasure of spurious sequences in sensory cortical circuits. Neuron 2022; 110:1857-1868.e5. [PMID: 35358415 PMCID: PMC9616807 DOI: 10.1016/j.neuron.2022.03.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 11/12/2021] [Accepted: 03/07/2022] [Indexed: 12/02/2022]
Abstract
Sequential activity reflecting previously experienced temporal sequences is considered a hallmark of learning across cortical areas. However, it is unknown how cortical circuits avoid the converse problem: producing spurious sequences that do not reflect sequences in their inputs. We develop methods to quantify and study sequentiality in neural responses. We show that recurrent circuit responses generally include spurious sequences, which are specifically prevented in circuits that obey two widely known features of cortical microcircuit organization: Dale's law and Hebbian connectivity. In particular, spike-timing-dependent plasticity in excitation-inhibition networks leads to an adaptive erasure of spurious sequences. We tested our theory in multielectrode recordings from the visual cortex of awake ferrets. Although responses to natural stimuli were largely non-sequential, responses to artificial stimuli initially included spurious sequences, which diminished over extended exposure. These results reveal an unexpected role for Hebbian experience-dependent plasticity and Dale's law in sensory cortical circuits.
Highlights:
- Recurrent circuits generate spurious sequences without sequential inputs
- A principled measure of total sequentiality in population responses is developed
- Theory predicts that Hebbian plasticity should abolish spurious sequences
- Spurious sequences in the visual cortex diminish with experience
Collapse
|
43
|
Chen H, Xie L, Wang Y, Zhang H. Postsynaptic Potential Energy as Determinant of Synaptic Plasticity. Front Comput Neurosci 2022; 16:804604. [PMID: 35250524 PMCID: PMC8891168 DOI: 10.3389/fncom.2022.804604] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 01/13/2022] [Indexed: 02/06/2023] Open
Abstract
Metabolic energy can be used as a unifying principle to control neuronal activity. However, whether and how metabolic energy alone can determine the outcome of synaptic plasticity remains unclear. This study proposes a computational model of synaptic plasticity that is completely determined by energy. A simple quantitative relationship between synaptic plasticity and postsynaptic potential energy is established: synaptic weight is directly proportional to the difference between the baseline potential energy and the suprathreshold potential energy, and is constrained by the maximum energy supply. Results show that the energy constraint improves the performance of synaptic plasticity and avoids the need to impose hard bounds on synaptic weights. With the same set of model parameters, our model can reproduce several classical experiments in homo- and heterosynaptic plasticity. The proposed model can explain the interaction mechanism of Hebbian and homeostatic plasticity at the cellular level, with homeostatic synaptic plasticity coexisting at different time scales: operating on a long time scale, it is caused by heterosynaptic plasticity, while on the same time scale as Hebbian synaptic plasticity it is caused by the constraint on energy supply.
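The stated weight-energy relationship can be written down directly, although the paper's actual definition of potential energy is not reproduced here; the quadratic energy function, the proportionality constant, and the energy cap below are all assumptions made for illustration.

```python
import numpy as np

# Toy reading of the stated relationship: weight proportional to the
# difference between baseline and suprathreshold postsynaptic potential
# energy, capped by a maximum energy supply. The quadratic "energy" and all
# constants are assumptions, not the paper's definitions.
v_rest, v_thresh = -65.0, -50.0

def potential_energy(v_samples):
    # energy of the postsynaptic potential relative to rest (arbitrary units)
    return float(np.sum((v_samples - v_rest) ** 2) * 1e-3)

def synaptic_weight(v_trace, k=0.05, E_max=5.0):
    E_base = potential_energy(v_trace[v_trace < v_thresh])
    E_supra = potential_energy(v_trace[v_trace >= v_thresh])
    E_supra = min(E_supra, E_max)          # constrained by maximum energy supply
    return k * (E_base - E_supra), E_base, E_supra

rng = np.random.default_rng(7)
quiet = v_rest + rng.normal(0.0, 2.0, 1000)             # almost all subthreshold
active = v_rest + np.abs(rng.normal(0.0, 12.0, 1000))   # frequent suprathreshold excursions
for name, trace in [("quiet", quiet), ("active", active)]:
    w, eb, es = synaptic_weight(trace)
    print(f"{name:6s} trace: E_base = {eb:6.1f}, E_supra (capped) = {es:4.1f}, w = {w:5.2f}")
```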
Collapse
Affiliation(s)
- Huanwen Chen
- School of Automation, Central South University, Changsha, China
- *Correspondence: Huanwen Chen
| | - Lijuan Xie
- Institute of Physiology and Psychology, School of Marxism, Changsha University of Science and Technology, Changsha, China
| | - Yijun Wang
- School of Automation, Central South University, Changsha, China
| | - Hang Zhang
- School of Automation, Central South University, Changsha, China
| |
Collapse
|
44
|
Isomura T. Active inference leads to Bayesian neurophysiology. Neurosci Res 2021; 175:38-45. [PMID: 34968557 DOI: 10.1016/j.neures.2021.12.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 12/13/2021] [Accepted: 12/14/2021] [Indexed: 01/20/2023]
Abstract
The neuronal substrates that implement the free-energy principle and ensuing active inference at the neuron and synapse level have not been fully elucidated. This Review considers possible neuronal substrates underlying the principle. First, the foundations of the free-energy principle are introduced, and then its ability to empirically explain various brain functions and psychological and biological phenomena in terms of Bayesian inference is described. Mathematically, the dynamics of neural activity and plasticity that minimise a cost function can be cast as performing Bayesian inference that minimises variational free energy. This equivalence licenses the adoption of the free-energy principle as a universal characterisation of neural networks. Further, the neural network structure itself represents a generative model under which an agent operates. A virtue of this perspective is that it enables the formal association of neural network properties with prior beliefs that regulate inference and learning. The possible neuronal substrates that implement prior beliefs and how to empirically examine the theory are discussed. This perspective renders brain activity explainable, leading to a deeper understanding of the neuronal mechanisms underlying basic psychology and psychiatric disorders in terms of an implicit generative model.
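For reference, the standard definition of variational free energy used throughout this literature (conventional notation; the Review's own notation may differ) is:

```latex
F(q) = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o)
```

Because the KL divergence is nonnegative, F upper-bounds the surprise −ln p(o); minimizing F with respect to q therefore both approximates the Bayesian posterior p(s | o) and implicitly minimizes surprise, which is the equivalence the abstract invokes.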
Collapse
Affiliation(s)
- Takuya Isomura
- Brain Intelligence Theory Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan.
| |
Collapse
|
45
|
Milstein AD, Li Y, Bittner KC, Grienberger C, Soltesz I, Magee JC, Romani S. Bidirectional synaptic plasticity rapidly modifies hippocampal representations. eLife 2021; 10:e73046. [PMID: 34882093 PMCID: PMC8776257 DOI: 10.7554/elife.73046] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 12/08/2021] [Indexed: 11/13/2022] Open
Abstract
Learning requires neural adaptations thought to be mediated by activity-dependent synaptic plasticity. A relatively non-standard form of synaptic plasticity driven by dendritic calcium spikes, or plateau potentials, has been reported to underlie place field formation in rodent hippocampal CA1 neurons. Here, we found that this behavioral timescale synaptic plasticity (BTSP) can also reshape existing place fields via bidirectional synaptic weight changes that depend on the temporal proximity of plateau potentials to pre-existing place fields. When evoked near an existing place field, plateau potentials induced less synaptic potentiation and more depression, suggesting BTSP might depend inversely on postsynaptic activation. However, manipulations of place cell membrane potential and computational modeling indicated that this anti-correlation actually results from a dependence on current synaptic weight such that weak inputs potentiate and strong inputs depress. A network model implementing this bidirectional synaptic learning rule suggested that BTSP enables population activity, rather than pairwise neuronal correlations, to drive neural adaptations to experience.
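A minimal sketch of the kind of weight-dependent bidirectional rule the abstract describes: a plateau potential gates an update that moves each synapse toward a target set by its eligibility, so currently weak inputs potentiate and currently strong inputs depress. The paper's actual kinetics differ; the names and functional form here are assumptions.

```python
import numpy as np

def btsp_like_update(w, eligibility, plateau, eta=0.1, w_max=1.0):
    """Weight-dependent bidirectional rule in the spirit of the
    abstract: a plateau potential (scalar gate in [0, 1]) moves each
    synapse toward a target set by its eligibility trace, so weak
    synapses potentiate and strong synapses depress. Illustrative
    sketch only, not the paper's exact kinetics.

    w, eligibility: arrays over input synapses
    """
    target = w_max * eligibility
    return w + eta * plateau * eligibility * (target - w)
```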
Collapse
Affiliation(s)
- Aaron D Milstein
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, United States
| | - Yiding Li
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
| | - Katie C Bittner
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
| | | | - Ivan Soltesz
- Department of Neurosurgery and Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, United States
| | - Jeffrey C Magee
- Howard Hughes Medical Institute, Baylor College of Medicine, Houston, United States
| | - Sandro Romani
- Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, United States
| |
Collapse
|
46
|
Larisch R, Gönner L, Teichmann M, Hamker FH. Sensory coding and contrast invariance emerge from the control of plastic inhibition over emergent selectivity. PLoS Comput Biol 2021; 17:e1009566. [PMID: 34843455 PMCID: PMC8629393 DOI: 10.1371/journal.pcbi.1009566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 10/15/2021] [Indexed: 11/18/2022] Open
Abstract
Visual stimuli are represented by a highly efficient code in the primary visual cortex, but the development of this code is still unclear. Two distinct factors control coding efficiency: representational efficiency, which is determined by neuronal tuning diversity, and metabolic efficiency, which is influenced by neuronal gain. How these determinants of coding efficiency are shaped during development, supported by excitatory and inhibitory plasticity, is only partially understood. We investigate a fully plastic spiking network of the primary visual cortex, building on phenomenological plasticity rules. Our results suggest that inhibitory plasticity is key to the emergence of tuning diversity and accurate input encoding. We show that inhibitory feedback (random and specific) increases metabolic efficiency by implementing a gain control mechanism. Interestingly, this leads to the spontaneous emergence of contrast-invariant tuning curves. Our findings highlight (1) that interneuron plasticity is key to the development of tuning diversity and (2) that efficient sensory representations are an emergent property of the resulting network.
Synaptic plasticity is crucial for the development of efficient input representations in the different sensory cortices, such as the primary visual cortex. Efficient visual representation is determined by two factors: representational efficiency, i.e., how many different input features can be represented, and metabolic efficiency, i.e., how many spikes are required to represent a specific feature. Previous research has pointed out the importance of plasticity at excitatory synapses for achieving high representational efficiency, and of feedback inhibition as a gain control mechanism for controlling metabolic efficiency. However, it is only partially understood how the influence of inhibitory plasticity on excitatory plasticity can lead to an efficient representation. Using a spiking neural network, we show that plasticity at feed-forward and feedback inhibitory synapses is necessary for the emergence of well-distributed neuronal selectivity, improving representational efficiency. Further, the emergent balance between excitatory and inhibitory currents improves metabolic efficiency and leads to contrast-invariant tuning as an inherent network property. Extending previous work, our simulation results highlight the importance of plasticity at inhibitory synapses.
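The kind of plastic inhibition the abstract builds on is commonly modeled in the style of Vogels et al. (2011), where inhibition is potentiated by pre/post coincidence and depressed by lone presynaptic spikes, steering postsynaptic activity toward a target rate. A hedged sketch of that generic rule (not the paper's exact rule):

```python
def inhibitory_stdp(w_inh, pre_trace, post_trace, pre_spike, post_spike,
                    rho0=0.1, eta=1e-3):
    """Homeostatic inhibitory STDP in the style of Vogels et al. (2011):
    a presynaptic spike depresses inhibition when the postsynaptic
    trace is below the target rho0, and a postsynaptic spike
    potentiates inhibition in proportion to the presynaptic trace.
    The net effect steers postsynaptic activity toward the target.
    Offered only as an example of plastic inhibition implementing
    gain control, not as the paper's exact rule."""
    dw = 0.0
    if pre_spike:
        dw += eta * (post_trace - rho0)
    if post_spike:
        dw += eta * pre_trace
    return max(w_inh + dw, 0.0)   # inhibitory weights stay nonnegative
```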
Collapse
Affiliation(s)
- René Larisch
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- * E-mail: (RL); (FHH)
| | - Lorenz Gönner
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Faculty of Psychology, Lifespan Developmental Neuroscience, TU Dresden, Dresden, Germany
| | - Michael Teichmann
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
| | - Fred H. Hamker
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Bernstein Center Computational Neuroscience, Berlin, Germany
- * E-mail: (RL); (FHH)
| |
Collapse
|
47
|
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:e2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/11/2021] [Indexed: 11/18/2022] Open
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood as a whole, since their individual parts may constantly change.
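A toy illustration of the drift mechanism, assuming a fixed assembly size: each step a few members drop out via synaptic turnover and compensation recruits replacements, so membership turns over completely while the assembly persists as a unit. The rates and mechanism here are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def drift_step(assembly, pool, p_out=0.02):
    """One step of toy assembly drift: a few members drop out
    (spontaneous synaptic turnover) and the same number of outside
    neurons are recruited (activity-dependent compensation), so the
    assembly keeps its size and role while its membership gradually
    turns over. Assumes the pool is larger than the assembly."""
    assembly = set(assembly)
    leavers = {n for n in assembly if rng.random() < p_out}
    candidates = np.array(sorted(set(pool) - assembly))
    joiners = rng.choice(candidates, size=len(leavers), replace=False)
    return (assembly - leavers) | {int(j) for j in joiners}

# Iterating drift_step(assembly, range(500)) eventually exchanges every
# member while the assembly persists as a functional unit.
```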
Collapse
|
48
|
Sadeh S, Clopath C. Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks. Sci Adv 2021; 7:eabg8411. [PMID: 34731002 PMCID: PMC8565910 DOI: 10.1126/sciadv.abg8411] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 09/14/2021] [Indexed: 05/20/2023]
Abstract
Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of these assemblies feasible, yet how various parameters of induction can be optimized is not clear. Here, we studied this question in large-scale cortical network models with excitatory-inhibitory balance. We found that the background network in which assemblies are embedded can strongly modulate their dynamics and formation. Networks with dominant excitatory interactions enabled a fast formation of assemblies, but this was accompanied by recruitment of other non-perturbed neurons, leading to some degree of nonspecific induction. On the other hand, networks with strong excitatory-inhibitory interactions ensured that the formation of assemblies remained constrained to the perturbed neurons, but slowed down the induction. Our results suggest that these two regimes can be suitable for computational and cognitive tasks with different trade-offs between speed and specificity.
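The speed/specificity trade-off can be illustrated with a toy rate network in which a perturbed subset is repeatedly driven, recurrent excitatory weights grow via a Hebbian rule, and a global inhibitory gain `g_inh` stabilizes activity; all parameters are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_induction(g_inh, n=200, n_pert=20, steps=500, eta=5e-4, seed=1):
    """Toy rate network probing the speed/specificity trade-off: a
    perturbed subset receives external drive, recurrent excitatory
    weights grow via a Hebbian rule, and a global inhibitory gain
    g_inh subtracts the mean input. Returns the mean weight within
    the perturbed group and the mean 'spillover' weight onto the
    rest. Illustrative sketch, not the paper's model."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 0.01, (n, n))
    np.fill_diagonal(W, 0.0)
    drive = np.zeros(n)
    drive[:n_pert] = 1.0
    for _ in range(steps):
        inp = W @ drive + drive
        r = np.maximum(inp - g_inh * inp.mean(), 0.0)   # global inhibition
        W += eta * np.outer(r, drive)                   # Hebbian: post x pre
        np.fill_diagonal(W, 0.0)
        np.clip(W, 0.0, 0.5, out=W)
    return W[:n_pert, :n_pert].mean(), W[n_pert:, :n_pert].mean()
```

Comparing `simulate_induction(g_inh=0.2)` with `simulate_induction(g_inh=1.0)` qualitatively reproduces the abstract's trade-off: weak inhibition yields fast assembly growth with spillover onto non-perturbed neurons, while strong inhibition yields slower but specific induction.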
Collapse
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London SW7 2AZ, UK
| | | |
Collapse
|
49
|
Gallinaro JV, Clopath C. Memories in a network with excitatory and inhibitory plasticity are encoded in the spiking irregularity. PLoS Comput Biol 2021; 17:e1009593. [PMID: 34762644 PMCID: PMC8610285 DOI: 10.1371/journal.pcbi.1009593] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/23/2021] [Accepted: 10/26/2021] [Indexed: 11/19/2022] Open
Abstract
Cell assemblies are thought to be the substrate of memory in the brain. Theoretical studies have previously shown that assemblies can be formed in networks with multiple types of plasticity. But how exactly they are formed and how they encode information are yet to be fully understood. One possibility is that memories are stored in silent assemblies. Here we used a computational model to study the formation of silent assemblies in a network of spiking neurons with excitatory and inhibitory plasticity. We found that even though the formed assemblies were silent in terms of mean firing rate, they had an increased coefficient of variation of inter-spike intervals. We also found that this spiking irregularity could be read out with the support of short-term plasticity, and that it could contribute to the longevity of memories.
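The irregularity measure in question is the coefficient of variation (CV) of inter-spike intervals, which is straightforward to compute from spike times; a minimal helper (the readout via short-term plasticity is not sketched here):

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals, the
    irregularity measure in which the abstract's 'silent' assemblies
    leave their trace: CV is about 1 for Poisson-like spiking and
    larger for burstier, more irregular trains."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if isi.size < 2:
        return float("nan")
    return float(isi.std() / isi.mean())
```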
Collapse
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
| | - Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
| |
Collapse
|
50
|
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. [PMID: 34709176 PMCID: PMC8553337 DOI: 10.7554/elife.66273] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 08/19/2021] [Indexed: 11/25/2022] Open
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.
Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models.
The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully.
Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
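The paper evolves full symbolic expressions with Cartesian genetic programming; as a drastically simplified sketch of the same idea, one can search over coefficients of a fixed library of plasticity terms with a (1+λ) evolution strategy on a toy correlation-detection task. Everything here (the term library, task, and scores) is an assumption for illustration:

```python
import numpy as np

def apply_rule(theta, pre, post, w, eta=0.05):
    """Candidate plasticity rule: a weighted sum of a small library
    of terms (pre*post, pre, post, w). The paper evolves full
    symbolic expressions; this fixed-library coefficient search is
    a deliberate simplification."""
    a, b, c, d = theta
    return w + eta * (a * pre * post + b * pre + c * post + d * w)

def fitness(theta, trials=200, seed=1):
    """Toy correlation-detection task: the postsynaptic signal follows
    input 0 only, so a good rule should end with w[0] > w[1]."""
    rng = np.random.default_rng(seed)   # fixed seed: deterministic scores
    w = np.array([0.5, 0.5])
    for _ in range(trials):
        pre = rng.binomial(1, 0.5, size=2).astype(float)
        post = pre[0]
        w = np.clip(apply_rule(theta, pre, post, w), 0.0, 1.0)
    return w[0] - w[1]

# (1 + lambda) evolution strategy over the rule coefficients
rng = np.random.default_rng(0)
best = rng.normal(0.0, 0.5, size=4)
for _ in range(100):
    offspring = [best + rng.normal(0.0, 0.2, size=4) for _ in range(8)]
    best = max([best] + offspring, key=fitness)
print("evolved coefficients:", best, "fitness:", fitness(best))
```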
Collapse
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan
- RIKEN Center for Brain Science, Tokyo, Japan
| | - Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
Collapse
|