1. Kappel D, Tetzlaff C. Synapses learn to utilize stochastic pre-synaptic release for the prediction of postsynaptic dynamics. PLoS Comput Biol 2024; 20:e1012531. [PMID: 39495714] [PMCID: PMC11534197] [DOI: 10.1371/journal.pcbi.1012531]
Abstract
Synapses in the brain are highly noisy, which leads to a large trial-by-trial variability. Given how costly synapses are in terms of energy consumption, these high levels of noise are surprising. Here we propose that synapses use noise to represent uncertainties about the somatic activity of the postsynaptic neuron. To show this, we developed a mathematical framework in which the synapse as a whole interacts with the soma of the postsynaptic neuron in a similar way to an agent that is situated and behaves in an uncertain, dynamic environment. This framework suggests that synapses use an implicit internal model of the somatic membrane dynamics that is updated by a synaptic learning rule resembling experimentally well-established LTP/LTD mechanisms. In addition, this approach entails that a synapse utilizes its inherently noisy synaptic release to also encode its uncertainty about the state of the somatic potential. Although each synapse strives to predict the somatic dynamics of its postsynaptic neuron, we show that the emergent dynamics of many synapses in a neuronal network resolve different learning problems such as pattern classification or closed-loop control in a dynamic environment. In this way, synapses coordinate themselves to represent and utilize uncertainties on the network level in behaviorally ambiguous situations.
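A minimal numerical sketch of this idea (the toy somatic signal, constants, and variable names below are invented for illustration, not taken from the authors' model): the synapse keeps a running estimate of the somatic potential and of its own prediction error variance, and lets that variance set its release probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy somatic potential the synapse tries to predict: a noisy sinusoidal drive.
T = 5000
u_soma = np.sin(np.linspace(0, 20 * np.pi, T)) + 0.3 * rng.standard_normal(T)

w_mean, w_var = 0.0, 1.0    # synaptic estimates of somatic mean and uncertainty
eta = 0.01                  # learning rate (hypothetical constant)
releases = np.zeros(T)

for t in range(T):
    # Release probability grows with the estimated uncertainty (sigmoidal squash).
    p_release = 1.0 / (1.0 + np.exp(-w_var))
    releases[t] = rng.random() < p_release
    # The prediction error drives an LTP/LTD-like update of the mean estimate...
    err = u_soma[t] - w_mean
    w_mean += eta * err
    # ...and the squared error updates the uncertainty estimate.
    w_var += eta * (err ** 2 - w_var)

print(f"mean estimate {w_mean:.3f}, uncertainty {w_var:.3f}, "
      f"release rate {releases.mean():.2f}")
```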
Affiliation(s)
- David Kappel: III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany; Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany
- Christian Tetzlaff: III. Physikalisches Institut – Biophysik, Georg-August Universität, Göttingen, Germany; Group of Computational Synaptic Physiology, Department for Neuro- and Sensory Physiology, University Medical Center Göttingen, Göttingen, Germany
2. Bhasin BJ, Raymond JL, Goldman MS. Synaptic weight dynamics underlying memory consolidation: Implications for learning rules, circuit organization, and circuit function. Proc Natl Acad Sci U S A 2024; 121:e2406010121. [PMID: 39365821] [PMCID: PMC11474072] [DOI: 10.1073/pnas.2406010121]
Abstract
Systems consolidation is a common feature of learning and memory systems, in which a long-term memory initially stored in one brain region becomes persistently stored in another region. We studied the dynamics of systems consolidation in simple circuit architectures with two sites of plasticity, one in an early-learning and one in a late-learning brain area. We show that the synaptic dynamics of the circuit during consolidation of an analog memory can be understood as a temporal integration process, by which transient changes in activity driven by plasticity in the early-learning area are accumulated into persistent synaptic changes at the late-learning site. This simple principle naturally leads to a speed-accuracy tradeoff in systems consolidation and provides insight into how the circuit mitigates the stability-plasticity dilemma of storing new memories while preserving core features of older ones. Furthermore, it imposes two constraints on the circuit. First, the plasticity rule at the late-learning site must stably support a continuum of possible outputs for a given input. We show that this is readily achieved by heterosynaptic but not standard Hebbian rules. Second, to turn off the consolidation process and prevent erroneous changes at the late-learning site, neural activity in the early-learning area must be reset to its baseline activity. We provide two biologically plausible implementations of this reset that suggest functional roles for core elements of the cerebellar circuit in stabilizing consolidation.
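The temporal-integration principle lends itself to a toy two-site model (a sketch under assumed rates and constants, not the authors' simulation): a fast but decaying early-site weight is slowly accumulated into a persistent late-site weight, and the slow rate sets the speed-accuracy tradeoff.

```python
# Two-site consolidation as temporal integration (illustrative constants):
# w_early adapts quickly toward the target and decays; w_late slowly
# integrates the transient early-site contribution and retains it.
target = 1.0
w_early, w_late = 0.0, 0.0
fast, slow, decay = 0.05, 0.005, 0.01   # invented rate constants

for step in range(4000):
    error = target - (w_early + w_late)
    w_early += fast * error - decay * w_early   # fast, transient learning
    w_late += slow * w_early                    # slow integration at the late site

print(f"early: {w_early:.3f} (transient), late: {w_late:.3f} (persistent)")
# Shrinking `slow` makes consolidation slower but filters transients more strongly.
```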
Affiliation(s)
- Brandon J. Bhasin: Department of Bioengineering, Stanford University, Stanford, CA 94305; Center for Neuroscience, University of California, Davis, CA 95616
- Jennifer L. Raymond: Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305
- Mark S. Goldman: Center for Neuroscience, University of California, Davis, CA 95616; Department of Neurobiology, Physiology, and Behavior, University of California, Davis, CA 95616; Department of Ophthalmology and Vision Science, University of California, Davis, CA 95616
3. Brucklacher M, Pezzulo G, Mannella F, Galati G, Pennartz CMA. Learning to segment self-generated from externally caused optic flow through sensorimotor mismatch circuits. Neural Netw 2024; 181:106716. [PMID: 39383679] [DOI: 10.1016/j.neunet.2024.106716]
Abstract
Efficient sensory detection requires the capacity to ignore task-irrelevant information, for example when optic flow patterns created by egomotion need to be disentangled from object perception. To investigate how this is achieved in the visual system, predictive coding with sensorimotor mismatch detection is an attractive starting point. Indeed, experimental evidence for sensorimotor mismatch signals in early visual areas exists, but it is not understood how they are integrated into cortical networks that perform input segmentation and categorization. Our model advances a biologically plausible solution by extending predictive coding models with the ability to distinguish self-generated from externally caused optic flow. We first show that a simple three-neuron circuit produces experience-dependent sensorimotor mismatch responses, in agreement with calcium imaging data from mice. This microcircuit is then integrated into a neural network with two generative streams. The motor-to-visual stream consists of parallel microcircuits between motor and visual areas and learns to spatially predict optic flow resulting from self-motion. The second stream bidirectionally connects a motion-selective higher visual area (mHVA) to V1, assigning a crucial role to the abundant feedback connections to V1: the maintenance of a generative model of externally caused optic flow. In the model, area mHVA learns to segment moving objects from the background and facilitates object categorization. Based on shared neurocomputational principles across species, the model also maps onto primate visual cortex. Our work extends Hebbian predictive coding to sensorimotor settings, in which the agent actively moves and learns to predict the consequences of its own movements.
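A rough single-unit reduction of the experience-dependent mismatch computation (gains and the learning rate are invented; the paper's circuit uses three neurons and spatially structured flow):

```python
import numpy as np

rng = np.random.default_rng(1)

# A motor-derived inhibitory prediction is subtracted from visual flow; a
# rectified mismatch response is emitted when flow exceeds what self-motion
# predicts. The prediction weight is learned from closed-loop experience.
w, eta = 0.1, 0.02
for _ in range(2000):
    speed = rng.uniform(0.0, 1.0)             # locomotion signal
    flow = 2.0 * speed                        # self-generated optic flow
    w += eta * speed * (flow - w * speed)     # Hebbian-style update

# After learning, self-generated flow is cancelled; external flow is not.
speed = 0.5
print("self-generated flow:", max(2.0 * speed - w * speed, 0.0))         # ~0
print("external object flow:", max(2.0 * speed + 0.8 - w * speed, 0.0))  # ~0.8
```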
Affiliation(s)
- Matthias Brucklacher: Cognitive and Systems Neuroscience, University of Amsterdam, 1098 XH Amsterdam, Netherlands
- Giovanni Pezzulo: Institute of Cognitive Sciences and Technologies, National Research Council, 00196 Rome, Italy
- Francesco Mannella: Institute of Cognitive Sciences and Technologies, National Research Council, 00196 Rome, Italy
- Gaspare Galati: Brain Imaging Laboratory, Department of Psychology, Sapienza University, 00185 Rome, Italy
- Cyriel M. A. Pennartz: Cognitive and Systems Neuroscience, University of Amsterdam, 1098 XH Amsterdam, Netherlands
4. Keller GB, Sterzer P. Predictive Processing: A Circuit Approach to Psychosis. Annu Rev Neurosci 2024; 47:85-101. [PMID: 38424472] [DOI: 10.1146/annurev-neuro-100223-121214]
Abstract
Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In recent years, substantial advances have been made in our understanding of the neuronal circuitry that underlies predictive processing in cortex. In this review, we summarize these findings and how they might relate to psychosis and to observed cell type-specific effects of antipsychotic drugs. We argue that quantifying the effects of antipsychotic drugs on specific neuronal circuit elements is a promising approach to understanding not only the mechanism of action of antipsychotic drugs but also psychosis. Finally, we outline some of the key experiments that should be done. The aims of this review are to provide an overview of the current circuit-based approaches to psychosis and to encourage further research in this direction.
Affiliation(s)
- Georg B. Keller: Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland; Faculty of Natural Science, University of Basel, Basel, Switzerland
- Philipp Sterzer: Department of Psychiatry, University of Basel, Basel, Switzerland
5. Liao Z, Gonzalez KC, Li DM, Yang CM, Holder D, McClain NE, Zhang G, Evans SW, Chavarha M, Simko J, Makinson CD, Lin MZ, Losonczy A, Negrean A. Functional architecture of intracellular oscillations in hippocampal dendrites. Nat Commun 2024; 15:6295. [PMID: 39060234] [PMCID: PMC11282248] [DOI: 10.1038/s41467-024-50546-z]
Abstract
Fast electrical signaling in dendrites is central to neural computations that support adaptive behaviors. Conventional techniques lack temporal and spatial resolution and the ability to track underlying membrane potential dynamics present across the complex three-dimensional dendritic arbor in vivo. Here, we perform fast two-photon imaging of dendritic and somatic membrane potential dynamics in single pyramidal cells in the CA1 region of the mouse hippocampus during awake behavior. We study the dynamics of subthreshold membrane potential and suprathreshold dendritic events throughout the dendritic arbor in vivo by combining voltage imaging with simultaneous local field potential recording, post hoc morphological reconstruction, and a spatial navigation task. We systematically quantify the modulation of local event rates by locomotion in distinct dendritic regions, report an advancing gradient of dendritic theta phase along the basal-tuft axis, and describe a predominant hyperpolarization of the dendritic arbor during sharp-wave ripples. Finally, we find that spatial tuning of dendritic representations dynamically reorganizes following place field formation. Our data reveal how the organization of electrical signaling in dendrites maps onto the anatomy of the dendritic tree across behavior, oscillatory network, and functional cell states.
Affiliation(s)
- Zhenrui Liao: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Kevin C. Gonzalez: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Deborah M. Li: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Catalina M. Yang: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Donald Holder: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Natalie E. McClain: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Guofeng Zhang: Department of Neurobiology, Stanford University, Stanford, USA
- Stephen W. Evans: Department of Neurobiology, Stanford University, Stanford, USA; The Boulder Creek Research Institute, Los Altos, USA
- Mariya Chavarha: Department of Bioengineering, Stanford University, Stanford, USA
- Jane Simko: Department of Neuroscience, Columbia University, New York, USA; Department of Neurology, Columbia University, New York, USA
- Christopher D. Makinson: Department of Neuroscience, Columbia University, New York, USA; Department of Neurology, Columbia University, New York, USA
- Michael Z. Lin: Department of Neurobiology, Stanford University, Stanford, USA; Department of Bioengineering, Stanford University, Stanford, USA
- Attila Losonczy: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA; Kavli Institute for Brain Science, Columbia University, New York, USA
- Adrian Negrean: Department of Neuroscience, Columbia University, New York, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA; Allen Institute for Neural Dynamics, Seattle, USA
6. Bhasin BJ, Raymond JL, Goldman MS. Synaptic weight dynamics underlying memory consolidation: implications for learning rules, circuit organization, and circuit function. bioRxiv [Preprint] 2024:2024.03.20.586036. [PMID: 38585936] [PMCID: PMC10996481] [DOI: 10.1101/2024.03.20.586036]
Abstract
Systems consolidation is a common feature of learning and memory systems, in which a long-term memory initially stored in one brain region becomes persistently stored in another region. We studied the dynamics of systems consolidation in simple circuit architectures with two sites of plasticity, one in an early-learning and one in a late-learning brain area. We show that the synaptic dynamics of the circuit during consolidation of an analog memory can be understood as a temporal integration process, by which transient changes in activity driven by plasticity in the early-learning area are accumulated into persistent synaptic changes at the late-learning site. This simple principle naturally leads to a speed-accuracy tradeoff in systems consolidation and provides insight into how the circuit mitigates the stability-plasticity dilemma of storing new memories while preserving core features of older ones. Furthermore, it imposes two constraints on the circuit. First, the plasticity rule at the late-learning site must stably support a continuum of possible outputs for a given input. We show that this is readily achieved by heterosynaptic but not standard Hebbian rules. Second, to turn off the consolidation process and prevent erroneous changes at the late-learning site, neural activity in the early-learning area must be reset to its baseline activity. We propose two biologically plausible implementations for this reset that suggest novel roles for core elements of the cerebellar circuit.
Significance Statement: How are memories transformed over time? We propose a simple organizing principle for how long-term memories are moved from an initial to a final site of storage. We show that successful transfer occurs when the late site of memory storage is endowed with synaptic plasticity rules that stably accumulate changes in activity occurring at the early site of memory storage. We instantiate this principle in a simple computational model that is representative of brain circuits underlying a variety of behaviors. The model suggests how a neural circuit can store new memories while preserving core features of older ones, and suggests novel roles for core elements of the cerebellar circuit.
7. Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. [PMID: 38776950] [DOI: 10.1162/neco_a_01671]
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity, and yields specific testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
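Because the review treats REINFORCE as its worked prototype, a minimal version may be useful (the task, constants, and baseline scheme below are illustrative choices, not the review's):

```python
import numpy as np

rng = np.random.default_rng(2)

# A stochastic binary neuron adjusts its weights in proportion to
# (reward - baseline) times the log-likelihood gradient of its own spike.
w = np.zeros(2)
baseline, eta = 0.0, 0.1

for trial in range(5000):
    x = rng.integers(0, 2, size=2).astype(float)    # random binary input
    p = 1.0 / (1.0 + np.exp(-(w @ x)))              # spike probability
    spike = float(rng.random() < p)
    reward = 1.0 if spike == x[0] else 0.0          # reward for matching x[0]
    eligibility = (spike - p) * x                   # d log p(spike) / dw
    w += eta * (reward - baseline) * eligibility
    baseline += 0.01 * (reward - baseline)          # running-average baseline

print("learned weights:", w)    # w[0] should end up clearly positive
```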
Affiliation(s)
- Colin Bredenberg: Center for Neural Science, New York University, New York, NY 10003, USA; Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin: Center for Neural Science, New York University, New York, NY 10003, USA; Center for Data Science, New York University, New York, NY 10011, USA
8. Jordan J, Sacramento J, Wybo WAM, Petrovici MA, Senn W. Conductance-based dendrites perform Bayes-optimal cue integration. PLoS Comput Biol 2024; 20:e1012047. [PMID: 38865345] [PMCID: PMC11168673] [DOI: 10.1371/journal.pcbi.1012047]
Abstract
A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel Bayesian view on the dynamics of conductance-based neurons and synapses, which suggests that they are naturally equipped to optimally perform information integration. In our approach, apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule that allows neurons to learn desired target distributions and to weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings at the system and single-cell levels related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
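The central identity can be worked out in a few lines (conductances and reversal potentials below are invented numbers): identifying each compartment's variance with its inverse conductance makes the conductance-weighted somatic steady state equal to the Gaussian posterior mean.

```python
# Conductance-based Bayesian cue integration at the soma (a worked toy example).
g_apical, E_apical = 2.0, -60.0   # prior: apical conductance and mean (mV)
g_basal, E_basal = 6.0, -50.0     # likelihood: basal conductance and mean (mV)

# Steady-state somatic potential of a conductance-based point soma:
u_soma = (g_apical * E_apical + g_basal * E_basal) / (g_apical + g_basal)

# Gaussian posterior mean, identifying variance with inverse conductance:
var_a, var_b = 1.0 / g_apical, 1.0 / g_basal
u_post = (E_apical / var_a + E_basal / var_b) / (1.0 / var_a + 1.0 / var_b)

assert abs(u_soma - u_post) < 1e-12
print(f"somatic potential = posterior mean = {u_soma:.2f} mV")  # -52.50 mV
```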
Affiliation(s)
- Jakob Jordan: Department of Physiology, University of Bern, Bern, Switzerland; Electrical Engineering, Yale University, New Haven, Connecticut, United States of America
- João Sacramento: Department of Physiology, University of Bern, Bern, Switzerland; Institute of Neuroinformatics, UZH / ETH Zurich, Zurich, Switzerland
- Willem A. M. Wybo: Department of Physiology, University of Bern, Bern, Switzerland; Institute of Neuroscience and Medicine, Forschungszentrum Jülich, Jülich, Germany
- Walter Senn: Department of Physiology, University of Bern, Bern, Switzerland
9. Storm JF, Klink PC, Aru J, Senn W, Goebel R, Pigorini A, Avanzini P, Vanduffel W, Roelfsema PR, Massimini M, Larkum ME, Pennartz CMA. An integrative, multiscale view on neural theories of consciousness. Neuron 2024; 112:1531-1552. [PMID: 38447578] [DOI: 10.1016/j.neuron.2024.02.004]
Abstract
How is conscious experience related to material brain processes? A variety of theories aiming to answer this age-old question have emerged from the recent surge in consciousness research, and some are now hotly debated. Although most researchers have so far focused on the development and validation of their preferred theory in relative isolation, this article, written by a group of scientists representing different theories, takes an alternative approach. Noting that various theories often try to explain different aspects or mechanistic levels of consciousness, we argue that the theories do not necessarily contradict each other. Instead, several of them may converge on fundamental neuronal mechanisms and be partly compatible and complementary, so that multiple theories can simultaneously contribute to our understanding. Here, we consider unifying, integration-oriented approaches that have so far been largely neglected, seeking to combine valuable elements from various theories.
Affiliation(s)
- Johan F. Storm: The Brain Signaling Group, Division of Physiology, IMB, Faculty of Medicine, University of Oslo, Domus Medica, Sognsvannsveien 9, Blindern, 0317 Oslo, Norway
- P. Christiaan Klink: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, the Netherlands; Experimental Psychology, Helmholtz Institute, Utrecht University, 3584 CS Utrecht, the Netherlands; Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris 75012, France
- Jaan Aru: Institute of Computer Science, University of Tartu, Tartu, Estonia
- Walter Senn: Department of Physiology, University of Bern, Bern, Switzerland
- Rainer Goebel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 EV Maastricht, the Netherlands
- Andrea Pigorini: Department of Biomedical, Surgical and Dental Sciences, Università degli Studi di Milano, Milan 20122, Italy
- Pietro Avanzini: Istituto di Neuroscienze, Consiglio Nazionale delle Ricerche, 43125 Parma, Italy
- Wim Vanduffel: Department of Neurosciences, Laboratory of Neuro and Psychophysiology, KU Leuven Medical School, 3000 Leuven, Belgium; Leuven Brain Institute, KU Leuven, 3000 Leuven, Belgium; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, Boston, MA 02144, USA
- Pieter R. Roelfsema: Department of Vision and Cognition, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, 1105 BA Amsterdam, the Netherlands; Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris 75012, France; Department of Integrative Neurophysiology, VU University, De Boelelaan 1085, 1081 HV Amsterdam, the Netherlands; Department of Neurosurgery, Academisch Medisch Centrum, Postbus 22660, 1100 DD Amsterdam, the Netherlands
- Marcello Massimini: Department of Biomedical and Clinical Sciences "L. Sacco", Università degli Studi di Milano, Milan 20157, Italy; Istituto di Ricovero e Cura a Carattere Scientifico, Fondazione Don Carlo Gnocchi, Milan 20122, Italy; Azrieli Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON M5G 1M1, Canada
- Matthew E. Larkum: Institute of Biology, Humboldt University Berlin, Berlin, Germany; Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin, Germany
- Cyriel M. A. Pennartz: Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, Sciencepark 904, Amsterdam 1098 XH, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
10. Webb B. Beyond prediction error: 25 years of modeling the associations formed in the insect mushroom body. Learn Mem 2024; 31:a053824. [PMID: 38862164] [PMCID: PMC11199945] [DOI: 10.1101/lm.053824.123]
Abstract
The insect mushroom body has gained increasing attention as a system in which the computational basis of neural learning circuits can be unraveled. We now understand in detail the key locations in this circuit where synaptic associations are formed between sensory patterns and values, leading to actions. However, the actual learning rule (or rules) implemented by neural activity and leading to synaptic change is still an open question. Here, I survey the diversity of answers that have been offered in computational models of this system over the past decades, including the recurring assumption, in line with top-down theories of associative learning, that the core function is to reduce prediction error. However, I will argue, a more bottom-up approach may ultimately reveal a richer algorithmic capacity in this still enigmatic brain neuropil.
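For concreteness, one rule that recurs in this modeling literature (an illustrative variant, not the review's endorsed answer) is dopamine-gated depression of Kenyon-cell-to-MBON synapses:

```python
import numpy as np

rng = np.random.default_rng(3)

# Coincidence of Kenyon-cell (KC) activity and a dopamine punishment signal
# depresses the synapses driving the approach-coding MBON.
n_kc = 100
w = np.ones(n_kc)                       # KC -> MBON weights
odor_A = rng.random(n_kc) < 0.1         # sparse KC code for odor A
odor_B = rng.random(n_kc) < 0.1         # sparse KC code for odor B

for _ in range(10):                     # pair odor A with punishment
    dopamine = 1.0
    w -= 0.2 * dopamine * odor_A        # depress only the active KC synapses
    w = np.clip(w, 0.0, 1.0)

print("MBON drive, odor A:", w[odor_A].sum())   # strongly reduced
print("MBON drive, odor B:", w[odor_B].sum())   # largely unchanged
```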
Affiliation(s)
- Barbara Webb: School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, United Kingdom
11. Lakshminarasimhan KJ, Xie M, Cohen JD, Sauerbrei BA, Hantman AW, Litwin-Kumar A, Escola S. Specific connectivity optimizes learning in thalamocortical loops. Cell Rep 2024; 43:114059. [PMID: 38602873] [PMCID: PMC11104520] [DOI: 10.1016/j.celrep.2024.114059]
Abstract
Thalamocortical loops have a central role in cognition and motor control, but precisely how they contribute to these processes is unclear. Recent studies showing evidence of plasticity in thalamocortical synapses indicate a role for the thalamus in shaping cortical dynamics through learning. Since signals undergo compression from the cortex to the thalamus, we hypothesized that the computational role of the thalamus depends critically on the structure of corticothalamic connectivity. To test this, we identified the optimal corticothalamic structure that promotes biologically plausible learning in thalamocortical synapses. We found that corticothalamic projections specialized to communicate an efference copy of the cortical output benefit motor control, while communicating the modes of highest variance is optimal for working memory tasks. We analyzed neural recordings from mice performing grasping and delayed discrimination tasks and found corticothalamic communication consistent with these predictions. These results suggest that the thalamus orchestrates cortical dynamics in a functionally precise manner through structured connectivity.
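The two compression schemes contrasted here can be sketched side by side (dimensions and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# The thalamus receives a low-dimensional projection of cortical activity,
# either through the cortical readout weights (an efference copy of the
# cortical output) or through the top principal components (the modes of
# highest variance).
n_cortex, n_thal, T = 50, 5, 1000
X = rng.standard_normal((T, n_cortex)) @ rng.standard_normal((n_cortex, n_cortex))

# Scheme 1: efference copy, projecting through the cortical output weights.
W_out = rng.standard_normal((n_cortex, n_thal))
thal_efference = X @ W_out

# Scheme 2: highest-variance modes, projecting onto top principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
thal_pca = Xc @ Vt[:n_thal].T

print("thalamic input shapes:", thal_efference.shape, thal_pca.shape)
print("variance captured by top modes:", float((S[:n_thal]**2).sum() / (S**2).sum()))
```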
Affiliation(s)
- Marjorie Xie: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Jeremy D. Cohen: Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Britton A. Sauerbrei: Department of Neurosciences, Case Western Reserve University, Cleveland, OH 44106, USA
- Adam W. Hantman: Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Ashok Litwin-Kumar: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Sean Escola: Department of Psychiatry, Columbia University, New York, NY 10032, USA
12. Lee K, Dora S, Mejias JF, Bohte SM, Pennartz CMA. Predictive coding with spiking neurons and feedforward gist signaling. Front Comput Neurosci 2024; 18:1338280. [PMID: 38680678] [PMCID: PMC11045951] [DOI: 10.3389/fncom.2024.1338280]
Abstract
Predictive coding (PC) is an influential theory in neuroscience, which suggests the existence of a cortical architecture that is constantly generating and updating predictive representations of sensory inputs. Owing to its hierarchical and generative nature, PC has inspired many computational models of perception in the literature. However, the biological plausibility of existing models has not been sufficiently explored due to their use of artificial neurons that approximate neural activity with firing rates in the continuous time domain and propagate signals synchronously. Therefore, we developed a spiking neural network for predictive coding (SNN-PC), in which neurons communicate using event-driven and asynchronous spikes. Adopting the hierarchical structure and Hebbian learning algorithms from previous PC neural network models, SNN-PC introduces two novel features: (1) a fast feedforward sweep from the input to higher areas, which generates a spatially reduced and abstract representation of input (i.e., a neural code for the gist of a scene) and provides a neurobiological alternative to an arbitrary choice of priors; and (2) a separation of positive and negative error-computing neurons, which counters the biological implausibility of a bi-directional error neuron with a very high baseline firing rate. After training with the MNIST handwritten digit dataset, SNN-PC developed hierarchical internal representations and was able to reconstruct samples it had not seen during training. SNN-PC suggests biologically plausible mechanisms by which the brain may perform perceptual inference and learning in an unsupervised manner. In addition, it may be used in neuromorphic applications that can utilize its energy-efficient, event-driven, local learning, and parallel information processing nature.
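The separation of error-computing neurons can be illustrated with a rate-based caricature of the spiking mechanism (function and variable names are mine):

```python
import numpy as np

# A bidirectional error signal e = input - prediction is carried by two
# rectified, non-negative populations instead of one unit that would need
# a high baseline firing rate to signal both signs.
def split_error(inputs: np.ndarray, predictions: np.ndarray):
    e = inputs - predictions
    e_pos = np.maximum(e, 0.0)     # fires when input exceeds prediction
    e_neg = np.maximum(-e, 0.0)    # fires when prediction exceeds input
    return e_pos, e_neg

x = np.array([1.0, 0.2, 0.5])
pred = np.array([0.6, 0.6, 0.5])
e_pos, e_neg = split_error(x, pred)
print(e_pos, e_neg)                # [0.4 0.  0. ] [0.  0.4 0. ]
# The full signed error is recovered downstream as e_pos - e_neg.
```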
Affiliation(s)
- Kwangjun Lee: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Shirin Dora: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands; Department of Computer Science, School of Science, Loughborough University, Loughborough, United Kingdom
- Jorge F. Mejias: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Sander M. Bohte: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands; Machine Learning Group, Centre of Mathematics and Computer Science, Amsterdam, Netherlands
- Cyriel M. A. Pennartz: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
13. Millidge B, Tang M, Osanlouy M, Harper NS, Bogacz R. Predictive coding networks for temporal prediction. PLoS Comput Biol 2024; 20:e1011183. [PMID: 38557984] [PMCID: PMC11008833] [DOI: 10.1371/journal.pcbi.1011183]
Abstract
One of the key problems the brain faces is inferring the state of the world from a sequence of dynamically changing stimuli, and it is not yet clear how the sensory system achieves this task. A well-established computational framework for describing perceptual processes in the brain is provided by the theory of predictive coding. Although the original proposals of predictive coding discussed temporal prediction, later work developing this theory mostly focused on static stimuli, and key questions on the neural implementation and computational properties of temporal predictive coding networks remain open. Here, we address these questions and present a formulation of the temporal predictive coding model that can be naturally implemented in recurrent networks, in which activity dynamics rely only on local inputs to the neurons and learning utilises only local Hebbian plasticity. Additionally, we show that temporal predictive coding networks can approximate the performance of the Kalman filter in predicting the behaviour of linear systems, and behave as a variant of a Kalman filter that does not track its own subjective posterior variance. Importantly, temporal predictive coding networks can achieve similar accuracy to the Kalman filter without performing complex mathematical operations, employing only simple computations that can be implemented by biological networks. Moreover, when trained with natural dynamic inputs, we found that temporal predictive coding can produce Gabor-like, motion-sensitive receptive fields resembling those observed in real neurons in visual areas. In addition, we demonstrate how the model can be effectively generalized to nonlinear systems. Overall, the models presented in this paper show how biologically plausible circuits can predict future stimuli and may guide research on understanding specific neural circuits in brain areas involved in temporal prediction.
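A minimal temporal predictive coding estimator for a linear system might look as follows (step sizes and iteration counts are my choices, not the paper's); note that, unlike a full Kalman filter, no posterior covariance is tracked:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear system: x_{t+1} = A x_t + noise, observation y_t = C x_t + noise.
A = np.array([[0.99, 0.1], [-0.1, 0.99]])
C = np.array([[1.0, 0.0]])
x_true = np.array([1.0, 0.0])
x_hat = np.zeros(2)

for t in range(200):
    x_true = A @ x_true + 0.01 * rng.standard_normal(2)
    y = C @ x_true + 0.05 * rng.standard_normal(1)
    x = A @ x_hat                       # prior prediction from the last estimate
    for _ in range(20):                 # iterative inference (gradient steps)
        e_y = y - C @ x                 # sensory prediction error
        e_x = x - A @ x_hat             # temporal prediction error
        x += 0.1 * (C.T @ e_y - e_x)    # relax toward the error-minimising state
    x_hat = x

print("true state:", x_true, "estimate:", x_hat)
```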
Affiliation(s)
- Beren Millidge: MRC Brain Network Dynamics Unit, University of Oxford, Oxford, United Kingdom
- Mufeng Tang: MRC Brain Network Dynamics Unit, University of Oxford, Oxford, United Kingdom
- Mahyar Osanlouy: Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Nicol S. Harper: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Rafal Bogacz: MRC Brain Network Dynamics Unit, University of Oxford, Oxford, United Kingdom
14. Liao Z, Gonzalez KC, Li DM, Yang CM, Holder D, McClain NE, Zhang G, Evans SW, Chavarha M, Yi J, Makinson CD, Lin MZ, Losonczy A, Negrean A. Functional architecture of intracellular oscillations in hippocampal dendrites. bioRxiv [Preprint] 2024:2024.02.12.579750. [PMID: 38405778] [PMCID: PMC10888786] [DOI: 10.1101/2024.02.12.579750]
Abstract
Fast electrical signaling in dendrites is central to neural computations that support adaptive behaviors. Conventional techniques lack temporal and spatial resolution and the ability to track underlying membrane potential dynamics present across the complex three-dimensional dendritic arbor in vivo. Here, we perform fast two-photon imaging of dendritic and somatic membrane potential dynamics in single pyramidal cells in the CA1 region of the mouse hippocampus during awake behavior. We study the dynamics of subthreshold membrane potential and suprathreshold dendritic events throughout the dendritic arbor in vivo by combining voltage imaging with simultaneous local field potential recording, post hoc morphological reconstruction, and a spatial navigation task. We systematically quantify the modulation of local event rates by locomotion in distinct dendritic regions and report an advancing gradient of dendritic theta phase along the basal-tuft axis, then describe a predominant hyperpolarization of the dendritic arbor during sharp-wave ripples. Finally, we find spatial tuning of dendritic representations dynamically reorganizes following place field formation. Our data reveal how the organization of electrical signaling in dendrites maps onto the anatomy of the dendritic tree across behavior, oscillatory network, and functional cell states.
Affiliation(s)
- Zhenrui Liao: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Kevin C. Gonzalez: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Deborah M. Li: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Catalina M. Yang: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Donald Holder: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Natalie E. McClain: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Guofeng Zhang: Department of Neurobiology, Stanford University, Stanford, United States
- Stephen W. Evans: Department of Neurobiology, Stanford University, Stanford, United States
- Mariya Chavarha: Department of Bioengineering, Stanford University, Stanford, United States
- Jane Yi: Department of Neuroscience, Columbia University, New York, United States; Department of Neurology, Columbia University, New York, United States
- Christopher D. Makinson: Department of Neuroscience, Columbia University, New York, United States; Department of Neurology, Columbia University, New York, United States
- Michael Z. Lin: Department of Neurobiology, Stanford University, Stanford, United States; Department of Bioengineering, Stanford University, Stanford, United States
- Attila Losonczy: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States; Kavli Institute for Brain Science, Columbia University, New York, United States
- Adrian Negrean: Department of Neuroscience, Columbia University, New York, United States; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
15. Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. [PMID: 38019856] [PMCID: PMC10710097] [DOI: 10.1073/pnas.2220743120]
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the higher-level network produces the contextual signal for the lower-level network.
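A toy version of the burst-dependent, target-based update might read as follows (notation and constants are invented, and bursts are caricatured as rates):

```python
import numpy as np

rng = np.random.default_rng(6)

# Weights move so that the bursts generated by recurrent/basal drive match
# the target bursts triggered by the apical teaching signal.
n_in, n_out = 20, 5
W_teach = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # teaching pathway
W = np.zeros((n_out, n_in))                                   # recurrent weights
eta = 0.1

for _ in range(3000):
    x = rng.random(n_in)                    # presynaptic (basal) activity
    target_burst = np.tanh(W_teach @ x)     # bursts triggered by the teacher
    actual_burst = np.tanh(W @ x)           # bursts from recurrent drive alone
    W += eta * np.outer(target_burst - actual_burst, x)

print("mean burst mismatch:", float(np.abs(target_burst - actual_burst).mean()))
```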
Affiliation(s)
- Cristiano Capone: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, 00185 Rome, Italy
- Cosimo Lupo: Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, 00185 Rome, Italy
- Paolo Muratore: Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, 34136 Trieste, Italy
16. Makarov R, Pagkalos M, Poirazi P. Dendrites and efficiency: Optimizing performance and resource utilization. Curr Opin Neurobiol 2023; 83:102812. [PMID: 37980803] [DOI: 10.1016/j.conb.2023.102812]
Abstract
The brain is a highly efficient system that has evolved to optimize performance under limited resources. In this review, we highlight recent theoretical and experimental studies that support the view that dendrites make information processing and storage in the brain more efficient. This is achieved through the dynamic modulation of integration versus segregation of inputs and activity within a neuron. We argue that under conditions of limited energy and space, dendrites help biological networks to implement complex functions such as processing natural stimuli on behavioral timescales, performing the inference process on those stimuli in a context-specific manner, and storing the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies that balance the tradeoff between performance and resource utilization.
Affiliation(s)
- Roman Makarov: Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece; Department of Biology, University of Crete, Heraklion, 70013, Greece
- Michalis Pagkalos: Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece; Department of Biology, University of Crete, Heraklion, 70013, Greece
- Panayiota Poirazi: Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
17. Zahid U, Guo Q, Fountas Z. Predictive Coding as a Neuromorphic Alternative to Backpropagation: A Critical Evaluation. Neural Comput 2023; 35:1881-1909. [PMID: 37844326] [DOI: 10.1162/neco_a_01620]
Abstract
Backpropagation has rapidly become the workhorse credit assignment algorithm for modern deep learning methods. Recently, modified forms of predictive coding (PC), an algorithm with origins in computational neuroscience, have been shown to result in approximately or exactly equal parameter updates to those under backpropagation. Due to this connection, it has been suggested that PC can act as an alternative to backpropagation with desirable properties that may facilitate implementation in neuromorphic systems. Here, we explore these claims using the different contemporary PC variants proposed in the literature. We obtain time complexity bounds for these PC variants, which we show are lower bounded by backpropagation. We also present key properties of these variants that have implications for neurobiological plausibility and their interpretations, particularly from the perspective of standard PC as a variational Bayes algorithm for latent probabilistic models. Our findings shed new light on the connection between the two learning frameworks and suggest that in its current forms, PC may have more limited potential as a direct replacement of backpropagation than previously envisioned.
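The repeated inference loop that drives the time-complexity gap can be seen in a reduced two-layer example (my own step sizes and network sizes, not the paper's benchmark code):

```python
import numpy as np

rng = np.random.default_rng(7)

# Standard predictive coding: latent activities relax to equilibrium before
# each weight update, so every training step pays for many inference steps.
x = rng.standard_normal(4)              # input
y = rng.standard_normal(2)              # target
W1 = 0.5 * rng.standard_normal((3, 4))
W2 = 0.5 * rng.standard_normal((2, 3))

for epoch in range(200):
    z = np.tanh(W1 @ x)                 # initialise latents at feedforward values
    for _ in range(50):                 # inference loop: relax activities only
        e2 = y - W2 @ z                 # top-layer prediction error
        e1 = z - np.tanh(W1 @ x)        # hidden-layer prediction error
        z += 0.1 * (W2.T @ e2 - e1)
    # Learning: local, Hebbian-like updates using the equilibrium errors.
    W2 += 0.05 * np.outer(e2, z)
    W1 += 0.05 * np.outer(e1 * (1.0 - np.tanh(W1 @ x) ** 2), x)

print("squared error:", float(((y - W2 @ np.tanh(W1 @ x)) ** 2).sum()))
```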
Affiliation(s)
- Umais Zahid: Huawei Technologies R&D, London N19 3HT, UK
- Qinghai Guo: Huawei Technologies R&D, Shenzhen 518129, China
18. Zhang Y, He G, Ma L, Liu X, Hjorth JJJ, Kozlov A, He Y, Zhang S, Kotaleski JH, Tian Y, Grillner S, Du K, Huang T. A GPU-based computational framework that bridges neuron simulation and artificial intelligence. Nat Commun 2023; 14:5798. [PMID: 37723170] [PMCID: PMC10507119] [DOI: 10.1038/s41467-023-41553-7]
Abstract
Biophysically detailed multi-compartment models are powerful tools to explore the computational principles of the brain and also serve as a theoretical framework to generate algorithms for artificial intelligence (AI) systems. However, their expensive computational cost severely limits applications in both the neuroscience and AI fields. The major bottleneck in simulating detailed compartment models is the simulator's ability to solve large systems of linear equations. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method to markedly accelerate this process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method runs two to three orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We build a DeepDendrite framework, which integrates the DHS method and the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we provide a brief discussion on the potential of DeepDendrite for AI, specifically highlighting its ability to enable the efficient training of biophysically detailed models in typical image classification tasks.
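For orientation, the serial Hines elimination that DHS parallelizes looks roughly like this (a textbook-style sketch, not the paper's GPU implementation):

```python
import numpy as np

# For a compartmental tree numbered so that parent[i] < i, the symmetric
# system A v = b is solved in two O(N) sweeps without fill-in.
def hines_solve(parent, diag, off, b):
    """parent[i]: parent compartment of i (-1 for the root);
    off[i]: coupling between compartment i and parent[i]."""
    d, rhs = diag.copy(), b.copy()
    n = len(d)
    for i in range(n - 1, 0, -1):          # backward sweep: eliminate children
        p = parent[i]
        factor = off[i] / d[i]
        d[p] -= factor * off[i]
        rhs[p] -= factor * rhs[i]
    v = np.zeros(n)
    v[0] = rhs[0] / d[0]
    for i in range(1, n):                  # forward sweep: substitute down the tree
        v[i] = (rhs[i] - off[i] * v[parent[i]]) / d[i]
    return v

# Tiny 5-compartment Y-shaped neuron: 0 is the soma, branches 0-1-2 and 0-3-4.
parent = np.array([-1, 0, 1, 0, 3])
diag = np.full(5, 4.0)
off = np.array([0.0, -1.0, -1.0, -1.0, -1.0])
b = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
v = hines_solve(parent, diag, off, b)

# Cross-check against a dense solve of the same symmetric system.
A = np.diag(diag)
for i in range(1, 5):
    A[i, parent[i]] = A[parent[i], i] = off[i]
assert np.allclose(v, np.linalg.solve(A, b))
print(v)
```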
Affiliation(s)
- Yichen Zhang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Gan He: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Lei Ma: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China; Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
- Xiaofei Liu: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China; School of Information Science and Engineering, Yunnan University, Kunming, 650500, China
- J. J. Johannes Hjorth: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Alexander Kozlov: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden; Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Yutao He: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Shenjian Zhang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Jeanette Hellgren Kotaleski: Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden; Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Yonghong Tian: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China; School of Electrical and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
- Sten Grillner: Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Kai Du: Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
- Tiejun Huang: National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China; Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China; Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
19. Auksztulewicz R, Rajendran VG, Peng F, Schnupp JWH, Harper NS. Omission responses in local field potentials in rat auditory cortex. BMC Biol 2023; 21:130. [PMID: 37254137] [PMCID: PMC10230691] [DOI: 10.1186/s12915-023-01592-4]
Abstract
Background: Non-invasive recordings of gross neural activity in humans often show responses to omitted stimuli in steady trains of identical stimuli. This has been taken as evidence for the neural coding of prediction or prediction error. However, evidence for such omission responses from invasive recordings of cellular-scale responses in animal models is scarce. Here, we sought to characterise omission responses using extracellular recordings in the auditory cortex of anaesthetised rats. We profiled omission responses across local field potentials (LFP), analogue multiunit activity (AMUA), and single/multi-unit spiking activity, using stimuli that were fixed-rate trains of acoustic noise bursts where 5% of bursts were randomly omitted.
Results: Significant omission responses were observed in LFP and AMUA signals, but not in spiking activity. These omission responses had a lower amplitude and longer latency than burst-evoked sensory responses, and omission response amplitude increased as a function of the number of preceding bursts.
Conclusions: Together, our findings show that omission responses are most robustly observed in LFP and AMUA signals (relative to spiking activity). This has implications for models of cortical processing that require many neurons to encode prediction errors in their spike output.
Affiliation(s)
- Ryszard Auksztulewicz: Center for Cognitive Neuroscience Berlin, Free University Berlin, Berlin, Germany; Dept of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
- Fei Peng: Dept of Neuroscience, City University of Hong Kong, Hong Kong S.A.R.
20. Dainauskas JJ, Marie H, Migliore M, Saudargiene A. GluN2B-NMDAR subunit contribution on synaptic plasticity: A phenomenological model for CA3-CA1 synapses. Front Synaptic Neurosci 2023; 15:1113957. [PMID: 37008680] [PMCID: PMC10050887] [DOI: 10.3389/fnsyn.2023.1113957]
Abstract
Synaptic plasticity is believed to be a key mechanism underlying learning and memory. We developed a phenomenological N-methyl-D-aspartate (NMDA) receptor-based, voltage-dependent synaptic plasticity model for synaptic modifications at hippocampal CA3-CA1 synapses on a hippocampal CA1 pyramidal neuron. The model incorporates the GluN2A-NMDA and GluN2B-NMDA receptor subunit-based functions and accounts for the dependence of synaptic strength on the postsynaptic NMDA receptor composition and functioning, without explicitly modeling the NMDA receptor-mediated intracellular calcium, a local trigger of synaptic plasticity. We embedded the model into a two-compartment model of a hippocampal CA1 pyramidal cell and validated it against experimental data on spike-timing-dependent synaptic plasticity (STDP) and high- and low-frequency stimulation. The developed model predicts altered learning rules in synapses formed on the apical dendrites of the detailed compartmental model of a CA1 pyramidal neuron in the presence of GluN2B-NMDA receptor hypofunction, and can be used in hippocampal networks to model learning in health and disease.
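A caricature of such a subunit-weighted, voltage-dependent rule (all thresholds, gains, and the subunit mixing factor below are invented):

```python
def dw(v_post, pre_spike, glun2b=0.5,
       theta_ltp=-20.0, theta_ltd=-50.0, a_ltp=0.02, a_ltd=0.005):
    """Weight change for one presynaptic event (hypothetical parameters).

    Potentiation is driven by strong depolarisation via a GluN2A-weighted
    term; depression by moderate depolarisation via a GluN2B-weighted term."""
    if not pre_spike:
        return 0.0
    ltp = a_ltp * (1.0 - glun2b) * max(v_post - theta_ltp, 0.0)
    ltd = a_ltd * glun2b * max(min(v_post, theta_ltp) - theta_ltd, 0.0)
    return ltp - ltd

# GluN2B hypofunction (a lower glun2b factor) shifts the rule toward LTP.
for glun2b in (0.5, 0.1):
    print(glun2b, dw(-10.0, True, glun2b), dw(-35.0, True, glun2b))
```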
Affiliation(s)
- Justinas J. Dainauskas: Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania; Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Hélène Marie: Université Côte d'Azur, Centre National de la Recherche Scientifique (CNRS) UMR 7275, Institut de Pharmacologie Moléculaire et Cellulaire (IPMC), Valbonne, France
- Michele Migliore: Institute of Biophysics, National Research Council, Palermo, Italy
- Ausra Saudargiene: Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
21. Mikulasch FA, Rudelt L, Wibral M, Priesemann V. Where is the error? Hierarchical predictive coding through dendritic error computation. Trends Neurosci 2023; 46:45-59. [PMID: 36577388] [DOI: 10.1016/j.tins.2022.09.007]
Abstract
Top-down feedback in cortex is critical for guiding sensory processing, which has prominently been formalized in the theory of hierarchical predictive coding (hPC). However, experimental evidence for error units, which are central to the theory, is inconclusive, and it remains unclear how hPC can be implemented with spiking neurons. To address this, we connect hPC to existing work on efficient coding in balanced networks with lateral inhibition and on predictive computation at apical dendrites. Together, this work points to an efficient implementation of hPC with spiking neurons, in which prediction errors are computed not in separate units but locally, in dendritic compartments. We then discuss the correspondence of this model to experimentally observed connectivity patterns, plasticity, and dynamics in cortex.
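The proposal can be compressed into a few lines (my reduction; names and sizes are invented): the apical compartment, rather than a separate error unit, carries the prediction error that gates plasticity.

```python
import numpy as np

rng = np.random.default_rng(8)

# Each neuron's apical compartment compares top-down feedback with the
# somatic estimate; the resulting dendritic potential gates local plasticity.
n_in, n_hidden = 10, 4
W_ff = 0.1 * rng.standard_normal((n_hidden, n_in))   # feedforward weights
x = rng.random(n_in)
r = np.maximum(W_ff @ x, 0.0)          # somatic rates (lateral balance implied)
top_down = rng.random(n_hidden)        # feedback prediction from the area above

apical_error = top_down - r            # computed in the dendritic compartment
W_ff += 0.05 * np.outer(apical_error, x)   # dendritic error gates learning

print("apical (dendritic) error:", apical_error)
```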
Collapse
Affiliation(s)
- Fabian A Mikulasch
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.
| | - Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Michael Wibral
- Göttingen Campus Institute for Dynamics of Biological Networks, Georg-August University, Göttingen, Germany
| | - Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience (BCCN), Göttingen, Germany; Department of Physics, Georg-August University, Göttingen, Germany
| |
Collapse
|
22
|
Gao T, Deng B, Wang J, Yi G. Highly efficient neuromorphic learning system of spiking neural network with multi-compartment leaky integrate-and-fire neurons. Front Neurosci 2022; 16:929644. [PMID: 36248664 PMCID: PMC9554099 DOI: 10.3389/fnins.2022.929644] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Accepted: 09/07/2022] [Indexed: 11/13/2022] Open
Abstract
A spiking neural network (SNN) is considered a high-performance learning system that maps well onto digital circuits and offers higher efficiency due to the architecture and computation of spiking neurons. When implementing an SNN on a field-programmable gate array (FPGA), however, gradient back-propagation through layers consumes a surprisingly large amount of resources. In this paper, we aim to realize an efficient architecture of an SNN on the FPGA that reduces resource and power consumption. The multi-compartment leaky integrate-and-fire (MLIF) model is used to convert spike trains to the plateau potential in dendrites. We accumulate the potential in the apical dendrite during the training period; the average of this accumulated result is the dendritic plateau potential and is used to guide the updates of synaptic weights. Based on this architecture, the SNN is implemented efficiently on the FPGA. In the implementation of the neuromorphic learning system, a shift multiplier (shift MUL) module and a piecewise linear (PWL) algorithm replace multipliers and complex nonlinear functions to suit digital circuits. The neuromorphic learning system is constructed entirely with on-chip FPGA resources, without dataflow between on-chip and off-chip memories. Our neuromorphic learning system achieves higher resource utilization and power efficiency than previous on-chip learning systems.
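The resource-saving trick behind a shift multiplier can be shown in a few lines: if one operand is quantized to a signed power of two, multiplication reduces to a bit shift. The quantization policy below is our own illustrative assumption, not the paper's hardware design.

```python
import math

def shift_mul(x: int, w: float) -> int:
    """Approximate x * w with w quantized to the nearest signed power of
    two, so the product needs only a bit shift (w must be nonzero)."""
    sign = -1 if w < 0 else 1
    k = round(math.log2(abs(w)))          # nearest power-of-two exponent
    return sign * (x << k) if k >= 0 else sign * (x >> -k)

print(shift_mul(100, 0.25), 100 * 0.25)   # 25 25.0  (exact: 0.25 = 2**-2)
print(shift_mul(100, -2.0), 100 * -2.0)   # -200 -200.0
print(shift_mul(100, 0.3))                # 25: 0.3 is rounded to 2**-2
```

The last call shows the price of the trick: weights are snapped to powers of two, trading a small quantization error for the removal of hardware multipliers.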
Collapse
|
23
|
Mercier MS, Magloire V, Cornford JH, Kullmann DM. Long-term potentiation in neurogliaform interneurons modulates excitation-inhibition balance in the temporoammonic pathway. J Physiol 2022; 600:4001-4017. [PMID: 35876215 PMCID: PMC9540908 DOI: 10.1113/jp282753] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 07/19/2022] [Indexed: 11/08/2022] Open
Abstract
Apical dendrites of pyramidal neurons integrate information from higher-order cortex and thalamus, and gate signalling and plasticity at proximal synapses. In the hippocampus, neurogliaform cells and other interneurons located within stratum lacunosum-moleculare (SLM) mediate powerful inhibition of CA1 pyramidal neuron distal dendrites. Is the recruitment of such inhibition itself subject to use-dependent plasticity, and if so, what induction rules apply? Here we show that interneurons in mouse SLM exhibit Hebbian NMDA receptor-dependent long-term potentiation (LTP). Such plasticity can be induced by selective optogenetic stimulation of afferents in the temporoammonic pathway from the entorhinal cortex (EC), but not by equivalent stimulation of afferents from the thalamic nucleus reuniens. We further show that theta-burst patterns of afferent firing induce LTP in neurogliaform interneurons identified using neuron-derived neurotrophic factor (Ndnf)-Cre mice. Theta-burst activity of EC afferents led to an increase in disynaptic feed-forward inhibition, but not monosynaptic excitation, of CA1 pyramidal neurons. Activity-dependent synaptic plasticity in SLM interneurons thus alters the excitation-inhibition balance at EC inputs to the apical dendrites of pyramidal neurons, implying a dynamic role for these interneurons in gating CA1 dendritic computations. KEY POINTS: Electrogenic phenomena in distal dendrites of principal neurons in the hippocampus have a major role in gating synaptic plasticity at afferent synapses on proximal dendrites. Apical dendrites also receive powerful feed-forward inhibition, mediated in large part by neurogliaform neurons. Here we show that theta-burst activity in afferents from the entorhinal cortex (EC) induces 'Hebbian' long-term potentiation (LTP) at excitatory synapses recruiting these GABAergic cells. LTP in interneurons innervating apical dendrites increases disynaptic inhibition of principal neurons, thus shifting the excitation-inhibition balance in the temporoammonic (TA) pathway in favour of inhibition, with implications for computations and learning rules in proximal dendrites.
Collapse
Affiliation(s)
- Marion S. Mercier
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
| | - Vincent Magloire
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
| | - Jonathan H. Cornford
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
| | - Dimitri M. Kullmann
- UCL Queen Square Institute of Neurology, Department of Clinical and Experimental Epilepsy, University College London, London, UK
| |
Collapse
|
24
|
Capone C, Muratore P, Paolucci PS. Error-based or target-based? A unified framework for learning in recurrent spiking networks. PLoS Comput Biol 2022; 18:e1010221. [PMID: 35727852 PMCID: PMC9249234 DOI: 10.1371/journal.pcbi.1010221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 07/01/2022] [Accepted: 05/17/2022] [Indexed: 11/25/2022] Open
Abstract
The field of recurrent neural networks is over-populated by a variety of proposed learning rules and protocols. The scope of this work is to define a generalized framework, moving a step towards the unification of this fragmented scenario. In supervised learning, two opposite approaches stand out: error-based and target-based. This duality gave rise to a scientific debate on which learning framework is the most likely to be implemented in biological networks of neurons. Moreover, the existence of spikes raises the question of whether the coding of information is rate-based or spike-based. To face these questions, we propose a learning model with two main parameters: the rank of the feedback learning matrix R (the number of learning constraints) and the tolerance to spike timing τ⋆. We demonstrate that a low (high) rank R accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing promotes rate-based (spike-based) coding. We show that in a store-and-recall task, high ranks allow for lower MSE values, while low ranks enable faster convergence. Our framework naturally lends itself to Imitation Learning, and Behavioral Cloning in particular, and allows for efficiently solving relevant closed-loop tasks, investigating which parameters (R, τ⋆) are optimal for a specific task. We found that a high R is essential for tasks that require retaining memory for a long time, such as the button-and-food navigation task. This is not relevant for a motor task such as the 2D Bipedal Walker, where we find instead that precise spike-based coding enables optimal performance. Finally, we show that our theoretical formulation suggests protocols to estimate the rank of the feedback error in biological networks. We release a PyTorch implementation of our model supporting GPU parallelization.
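A minimal sketch of the rank parameter R: the output error is projected back to the neurons through a feedback matrix of rank R, so R controls how many independent constraints the learning signal imposes on the network. Network sizes and the random construction below are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# N recurrent neurons receive a learning signal derived from an
# out_dim-dimensional output error e through a feedback matrix B.
# rank(B) = 1 broadcasts essentially one scalar error (error-based
# flavor); full rank constrains every neuron separately (target-based).

N, out_dim = 50, 10

def feedback_matrix(R):
    """Random feedback matrix of rank at most R (B = U @ V)."""
    U = rng.normal(size=(N, R))
    V = rng.normal(size=(R, out_dim))
    return U @ V

e = rng.normal(size=out_dim)          # instantaneous output error
for R in (1, 5, 10):
    B = feedback_matrix(R)
    local_signal = B @ e              # per-neuron learning signal
    print(f"R={R}: rank(B)={np.linalg.matrix_rank(B)}, "
          f"signal shape {local_signal.shape}")
```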
Collapse
Affiliation(s)
| | - Paolo Muratore
- Cognitive Neuroscience, SISSA, Trieste, Italy
| | | |
Collapse
|
25
|
Vafidis P, Owald D, D'Albis T, Kempter R. Learning accurate path integration in ring attractor models of the head direction system. eLife 2022; 11:e69841. [PMID: 35723252 PMCID: PMC9286743 DOI: 10.7554/elife.69841] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 06/17/2022] [Indexed: 11/13/2022] Open
Abstract
Ring attractor models for angular path integration have received strong experimental support. To function as integrators, head direction circuits require precisely tuned connectivity, but it is currently unknown how such tuning could be achieved. Here, we propose a network model in which a local, biologically plausible learning rule adjusts synaptic efficacies during development, guided by supervisory allothetic cues. Applied to the Drosophila head direction system, the model learns to path-integrate accurately and develops a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading in flies, and where the network remaps to integrate with different gains in rodents. Our model predicts that path integration requires self-supervised learning during a developmental phase, and proposes a general framework to learn to path-integrate with gain-1 even in architectures that lack the physical topography of a ring.
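A reduced sketch of the learning problem: here the bump position of the ring attractor is summarised by a single heading estimate that a velocity input advances through a mis-tuned gain, and an allothetic (visual) cue supplies the supervisory error that tunes the gain toward 1. All dynamics and constants are simplifying assumptions of ours, not the published Drosophila model.

```python
import numpy as np

rng = np.random.default_rng(2)

gain, eta = 0.4, 0.5              # mis-tuned integration gain, learning rate
est = true = 0.0                  # estimated and true heading (rad)

for t in range(1000):
    v = 0.3 * rng.normal()                      # angular-velocity input
    true = (true + v) % (2 * np.pi)
    est = (est + gain * v) % (2 * np.pi)        # attractor shifts the bump
    err = np.angle(np.exp(1j * (true - est)))   # circular error from the cue
    gain += eta * err * v                       # local, velocity-gated update
    est = true                                  # strong cue pins the bump

print(f"learned integration gain: {gain:.3f} (target 1.0)")
```

Because the cue corrects the bump each step, the residual error correlates with the velocity exactly when the gain is off, driving the gain to 1; after this developmental phase the cue could be removed and integration would remain accurate.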
Collapse
Affiliation(s)
- Pantelis Vafidis
- Computation and Neural Systems, California Institute of Technology, Pasadena, United States
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - David Owald
- Institute of Neurophysiology, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- NeuroCure, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Einstein Center for Neurosciences, Berlin, Germany
| | - Tiziano D'Albis
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Richard Kempter
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Einstein Center for Neurosciences, Berlin, Germany
| |
Collapse
|
26
|
Asabuki T, Kokate P, Fukai T. Neural circuit mechanisms of hierarchical sequence learning tested on large-scale recording data. PLoS Comput Biol 2022; 18:e1010214. [PMID: 35727828 PMCID: PMC9249189 DOI: 10.1371/journal.pcbi.1010214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 07/01/2022] [Accepted: 05/16/2022] [Indexed: 11/24/2022] Open
Abstract
The brain performs various cognitive functions by learning the spatiotemporal salient features of the environment. This learning requires unsupervised segmentation of hierarchically organized spike sequences, but the underlying neural mechanism is only poorly understood. Here, we show that a recurrent gated network of neurons with dendrites can efficiently solve difficult segmentation tasks. In this model, multiplicative recurrent connections learn a context-dependent gating of dendro-somatic information transfers to minimize error in the prediction of somatic responses by the dendrites. Consequently, these connections filter out input features that are represented by the dendrites but are unnecessary in the given context. The model was tested on both synthetic and real neural data. In particular, it succeeded in segmenting multiple cell assemblies repeating in large-scale calcium imaging data containing thousands of cortical neurons. Our results suggest that recurrent gating of dendro-somatic signal transfers is crucial for cortical learning of context-dependent segmentation tasks.
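A toy sketch of context-dependent gating of dendro-somatic transfer: two contexts make different dendritic features predictive of the somatic response, and a multiplicative gate per (context, feature) pair is learned by minimising the somatic prediction error. Dimensions, the linear readout, and the learning rate are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

n_feat, n_ctx = 8, 2
relevant = rng.random((n_ctx, n_feat)) < 0.5   # which features matter per context
G = np.ones((n_ctx, n_feat))                   # multiplicative dendro-somatic gates
eta = 0.05

for t in range(10000):
    c = rng.integers(n_ctx)                    # current context
    d = rng.normal(size=n_feat)                # dendritic feature activities
    soma = d[relevant[c]].sum()                # "true" somatic response
    pred = (G[c] * d).sum()                    # gated dendritic prediction
    G[c] += eta * (soma - pred) * d            # error-driven gate learning

print(np.round(G, 2))                          # gates ~1 where relevant, ~0 elsewhere
print(relevant.astype(int))
```

The gates converge to an indicator of the features that are predictive in each context, which is the filtering role the abstract assigns to the multiplicative recurrent connections.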
Collapse
Affiliation(s)
- Toshitake Asabuki
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan
| | - Prajakta Kokate
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan
| | - Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Onna-son, Okinawa, Japan
| |
Collapse
|
27
|
Dellaferrera G, Asabuki T, Fukai T. Modeling the Repetition-Based Recovering of Acoustic and Visual Sources With Dendritic Neurons. Front Neurosci 2022; 16:855753. [PMID: 35573290 PMCID: PMC9097820 DOI: 10.3389/fnins.2022.855753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2022] [Accepted: 03/31/2022] [Indexed: 11/13/2022] Open
Abstract
In natural auditory environments, acoustic signals originate from the temporal superimposition of different sound sources. The problem of inferring individual sources from ambiguous mixtures of sounds is known as blind source decomposition. Experiments on humans have demonstrated that the auditory system can identify sound sources as repeating patterns embedded in the acoustic input. Source repetition produces temporal regularities that can be detected and used for segregation. Specifically, listeners can identify sounds occurring more than once across different mixtures, but not sounds heard only in a single mixture. However, whether such a behavior can be computationally modeled has not yet been explored. Here, we propose a biologically inspired computational model to perform blind source separation on sequences of mixtures of acoustic stimuli. Our method relies on a somatodendritic neuron model trained with a Hebbian-like learning rule which was originally conceived to detect spatio-temporal patterns recurring in synaptic inputs. We show that the segregation capabilities of our model are reminiscent of the features of human performance in a variety of experimental settings involving synthesized sounds with naturalistic properties. Furthermore, we extend the study to investigate the properties of segregation on task settings not yet explored with human subjects, namely natural sounds and images. Overall, our work suggests that somatodendritic neuron models offer a promising neuro-inspired learning strategy to account for the characteristics of the brain segregation capabilities as well as to make predictions on yet untested experimental settings.
Collapse
Affiliation(s)
- Giorgia Dellaferrera
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
- Institute of Neuroinformatics, University of Zurich and Swiss Federal Institute of Technology Zurich (ETH), Zurich, Switzerland
| | - Toshitake Asabuki
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
| | - Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
| |
Collapse
|
28
|
Kreutzer E, Senn W, Petrovici MA. Natural-gradient learning for spiking neurons. eLife 2022; 11:e66526. [PMID: 35467527 PMCID: PMC9038192 DOI: 10.7554/elife.66526] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Accepted: 02/21/2022] [Indexed: 11/16/2022] Open
Abstract
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. The issues are solved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
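The parametrization issue can be made concrete with a deliberately tiny example, a didactic stand-in for the spiking-neuron derivation rather than the paper's rule: online estimation of a Bernoulli probability p, where the Fisher information is F(p) = 1/(p(1-p)).

```python
import numpy as np

rng = np.random.default_rng(3)

p_true = 0.9
p_euc = p_nat = 0.5
eta = 0.05

for _ in range(2000):
    x = float(rng.random() < p_true)          # Bernoulli observation
    # Euclidean gradient of the log-likelihood w.r.t. p: ill-scaled near
    # the boundaries, so the update must be clipped to stay in (0, 1).
    grad = (x - p_euc) / (p_euc * (1.0 - p_euc))
    p_euc = float(np.clip(p_euc + eta * grad, 1e-3, 1 - 1e-3))
    # Natural gradient: dividing by F(p) leaves the well-behaved,
    # parametrization-invariant step x - p.
    p_nat += eta * (x - p_nat)

print(f"Euclidean: {p_euc:.3f}   natural: {p_nat:.3f}   target: {p_true}")
```

The Euclidean update depends on how p is parametrized and becomes unstable near the boundary, while the natural-gradient update takes the same well-scaled step under any parametrization, which is the invariance the abstract exploits.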
Collapse
Affiliation(s)
- Elena Kreutzer
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
29
|
Yang S, Gao T, Wang J, Deng B, Azghadi MR, Lei T, Linares-Barranco B. SAM: A Unified Self-Adaptive Multicompartmental Spiking Neuron Model for Learning With Working Memory. Front Neurosci 2022; 16:850945. [PMID: 35527819 PMCID: PMC9074872 DOI: 10.3389/fnins.2022.850945] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 03/15/2022] [Indexed: 11/13/2022] Open
Abstract
Working memory is a fundamental feature of biological brains for perception, cognition, and learning. In addition, learning with working memory, which has been shown in conventional artificial intelligence systems through recurrent neural networks, is instrumental to advanced cognitive intelligence. However, it is hard to endow a simple neuron model with working memory, and to understand the biological mechanisms that have resulted in such a powerful ability at the neuronal level. This article presents a novel self-adaptive multicompartment spiking neuron model, referred to as SAM, for spike-based learning with working memory. SAM integrates four major biological principles: sparse coding, dendritic non-linearity, intrinsic self-adaptive dynamics, and spike-driven learning. We first describe SAM's design and explore the impact of critical parameters on its biological dynamics. We then use SAM to build spiking networks that accomplish several different tasks, including supervised learning of the MNIST dataset using sequential spatiotemporal encoding, noisy spike pattern classification, sparse coding during pattern classification, spatiotemporal feature detection, meta-learning with working memory applied to a navigation task and the MNIST classification task, and working memory for spatiotemporal learning. Our experimental results highlight the energy efficiency and robustness of SAM across this wide range of challenging tasks. We also explore the effects of model variations on SAM's working memory, hoping to offer insight into the biological mechanisms underlying working memory in the brain. The SAM model is the first attempt to integrate the capabilities of spike-driven learning and working memory in a unified single neuron with multiple timescale dynamics. Its competitive performance could potentially contribute to the development of efficient adaptive neuromorphic computing systems for various applications from robotics to edge computing.
Collapse
Affiliation(s)
- Shuangming Yang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Tian Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Bin Deng
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | | | - Tao Lei
- School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an, China
| | | |
Collapse
|
30
|
Otor Y, Achvat S, Cermak N, Benisty H, Abboud M, Barak O, Schiller Y, Poleg-Polsky A, Schiller J. Dynamic compartmental computations in tuft dendrites of layer 5 neurons during motor behavior. Science 2022; 376:267-275. [PMID: 35420959 DOI: 10.1126/science.abn1421] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Tuft dendrites of layer 5 pyramidal neurons form specialized compartments important for motor learning and performance, yet their computational capabilities remain unclear. Structural-functional mapping of the tuft tree from the motor cortex during motor tasks revealed two morphologically distinct populations of layer 5 pyramidal tract neurons (PTNs) that exhibit specific tuft computational properties. Early bifurcating and large nexus PTNs showed marked tuft functional compartmentalization, representing different motor variable combinations within and between their two tuft hemi-trees. By contrast, late bifurcating and smaller nexus PTNs showed synchronous tuft activation. Dendritic structure and dynamic recruitment of the N-methyl-d-aspartate (NMDA)-spiking mechanism explained the differential compartmentalization patterns. Our findings support a morphologically dependent framework for motor computations, in which independent amplification units can be combinatorically recruited to represent different motor sequences within the same tree.
Collapse
Affiliation(s)
- Yara Otor
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Shay Achvat
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Nathan Cermak
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Hadas Benisty
- Yale University School of Medicine, Bethany, CT, USA
| | - Maisan Abboud
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Omri Barak
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Yitzhak Schiller
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| | - Alon Poleg-Polsky
- Department of Physiology and Biophysics, University of Colorado School of Medicine, 12800 East 19th Avenue MS8307, Aurora, CO 80045, USA
| | - Jackie Schiller
- Department of Physiology, Technion Medical School, Bat-Galim, Haifa 31096, Israel
| |
Collapse
|
31
|
Deperrois N, Petrovici MA, Senn W, Jordan J. Learning cortical representations through perturbed and adversarial dreaming. eLife 2022; 11:e76384. [PMID: 35384841 PMCID: PMC9071267 DOI: 10.7554/elife.76384] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 03/07/2022] [Indexed: 11/24/2022] Open
Abstract
Humans and other animals learn to extract general concepts from sensory experience without extensive teaching. This ability is thought to be facilitated by offline states like sleep where previous experiences are systematically replayed. However, the characteristic creative nature of dreams suggests that learning semantic representations may go beyond merely replaying previous experiences. We support this hypothesis by implementing a cortical architecture inspired by generative adversarial networks (GANs). Learning in our model is organized across three different global brain states mimicking wakefulness, non-rapid eye movement (NREM), and REM sleep, optimizing different, but complementary, objective functions. We train the model on standard datasets of natural images and evaluate the quality of the learned representations. Our results suggest that generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations. The model provides a new computational perspective on sleep states, memory replay, and dreams, and suggests a cortical implementation of GANs.
Collapse
Affiliation(s)
| | | | - Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
| |
Collapse
|
32
|
Rosenbaum R. On the relationship between predictive coding and backpropagation. PLoS One 2022; 17:e0266102. [PMID: 35358258 PMCID: PMC8970408 DOI: 10.1371/journal.pone.0266102] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 03/14/2022] [Indexed: 11/19/2022] Open
Abstract
Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been proposed as a potentially more biologically realistic alternative to backpropagation for training neural networks. This manuscript reviews and extends recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. Implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning are discussed along with a repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models.
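A generic textbook-style predictive-coding scheme (not the Torch2PC API, whose exact interface we do not reproduce here) makes the reviewed relationship concrete: after relaxing the hidden activity with the output clamped to the target, the local predictive-coding weight updates align with the negative backpropagation gradients. Network sizes, the tanh nonlinearity, and all step sizes are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.tanh
df = lambda u: 1.0 - np.tanh(u) ** 2

n_in, n_hid, n_out = 4, 6, 3
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
x0 = rng.normal(size=n_in)                  # input
y = rng.normal(size=n_out)                  # target

# Backprop gradients for the loss 0.5 * ||W2 f(W1 x0) - y||^2
h = W1 @ x0
e_out = W2 @ f(h) - y
gW2 = np.outer(e_out, f(h))
gW1 = np.outer((W2.T @ e_out) * df(h), x0)

# Predictive coding: clamp the output to y, relax the hidden activity x1
# until prediction errors settle, then update weights from local errors.
x1 = h.copy()
for _ in range(300):
    eps1 = x1 - W1 @ x0                     # error at the hidden layer
    eps2 = y - W2 @ f(x1)                   # error at the (clamped) output
    x1 += 0.1 * (-eps1 + df(x1) * (W2.T @ eps2))

eps1, eps2 = x1 - W1 @ x0, y - W2 @ f(x1)
pcW1, pcW2 = np.outer(eps1, x0), np.outer(eps2, f(x1))

cos = lambda a, b: (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cos(PC step, -grad): W1 {cos(pcW1, -gW1):.3f}, W2 {cos(pcW2, -gW2):.3f}")
```

Both cosines come out close to 1, illustrating the approximate (and, in limiting cases discussed in the paper, exact) correspondence between the two algorithms.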
Collapse
Affiliation(s)
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, United States of America
| |
Collapse
|
33
|
Feldhoff F, Toepfer H, Harczos T, Klefenz F. Periodicity Pitch Perception Part III: Sensibility and Pachinko Volatility. Front Neurosci 2022; 16:736642. [PMID: 35356050 PMCID: PMC8959216 DOI: 10.3389/fnins.2022.736642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 02/07/2022] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic computer models are used to explain sensory perceptions. Auditory models generate cochleagrams, which resemble the spike distributions in the auditory nerve. Neuron ensembles along the auditory pathway transform sensory inputs step by step and at the end pitch is represented in auditory categorical spaces. In two previous articles in the series on periodicity pitch perception an extended auditory model had been successfully used for explaining periodicity pitch proved for various musical instrument generated tones and sung vowels. In this third part in the series the focus is on octopus cells as they are central sensitivity elements in auditory cognition processes. A powerful numerical model had been devised, in which auditory nerve fibers (ANFs) spike events are the inputs, triggering the impulse responses of the octopus cells. Efficient algorithms are developed and demonstrated to explain the behavior of octopus cells with a focus on a simple event-based hardware implementation of a layer of octopus neurons. The main finding is, that an octopus' cell model in a local receptive field fine-tunes to a specific trajectory by a spike-timing-dependent plasticity (STDP) learning rule with synaptic pre-activation and the dendritic back-propagating signal as post condition. Successful learning explains away the teacher and there is thus no need for a temporally precise control of plasticity that distinguishes between learning and retrieval phases. Pitch learning is cascaded: At first octopus cells respond individually by self-adjustment to specific trajectories in their local receptive fields, then unions of octopus cells are collectively learned for pitch discrimination. Pitch estimation by inter-spike intervals is shown exemplary using two input scenarios: a simple sinus tone and a sung vowel. The model evaluation indicates an improvement in pitch estimation on a fixed time-scale.
Collapse
Affiliation(s)
- Frank Feldhoff
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
| | - Hannes Toepfer
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
| | - Tamas Harczos
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
| | - Frank Klefenz
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
| |
Collapse
|
34
|
Larkum M. Are dendrites conceptually useful? Neuroscience 2022; 489:4-14. [DOI: 10.1016/j.neuroscience.2022.03.008] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 02/10/2022] [Accepted: 03/05/2022] [Indexed: 12/13/2022]
|
35
|
Jegminat J, Surace SC, Pfister JP. Learning as filtering: Implications for spike-based plasticity. PLoS Comput Biol 2022; 18:e1009721. [PMID: 35196324 PMCID: PMC8865661 DOI: 10.1371/journal.pcbi.1009721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 12/03/2021] [Indexed: 11/22/2022] Open
Abstract
Most normative models in computational neuroscience describe the task of learning as the optimisation of a cost function with respect to a set of parameters. However, learning as optimisation fails to account for a time-varying environment during the learning process, and the resulting point estimate in parameter space does not account for uncertainty. Here, we frame learning as filtering, i.e., a principled method for including time and parameter uncertainty: the task is to continuously estimate the uncertainty about the parameters to be learned. We derive the filtering-based learning rule for a spiking neuronal network, the Synaptic Filter, and show its computational and biological relevance. Computationally, filtering improves the weight estimation performance compared to a gradient learning rule with an optimal learning rate. The dynamics of the mean of the Synaptic Filter is consistent with spike-timing-dependent plasticity (STDP), while the dynamics of the variance makes novel predictions regarding spike-timing-dependent changes of EPSP variability. Moreover, the Synaptic Filter explains experimentally observed negative correlations between homo- and heterosynaptic plasticity, a finding that, like the time-varying environment, cannot be accounted for by learning as optimisation. Learning as filtering is thus a promising candidate framework for learning models.
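A sketch of the filtering idea for a single weight, under Gaussian and linear simplifications of our own rather than the published Synaptic Filter equations: the synapse tracks a mean and a variance, shrinking the variance with each informative observation and re-inflating it to track a drifting world.

```python
import numpy as np

rng = np.random.default_rng(5)

w_true, drift, obs_noise = 1.0, 0.01, 0.5
mu, var = 0.0, 1.0                               # belief about the weight

for t in range(2000):
    w_true += drift * rng.normal()               # slowly drifting target weight
    x = rng.normal()                             # presynaptic activity
    y = w_true * x + obs_noise * rng.normal()    # noisy postsynaptic observation
    var += drift ** 2                            # diffusion inflates uncertainty
    gain = var * x / (var * x ** 2 + obs_noise ** 2)
    mu += gain * (y - mu * x)                    # error-driven mean update
    var *= 1.0 - gain * x                        # observation shrinks uncertainty

print(f"estimate {mu:.2f} +/- {np.sqrt(var):.2f}, true {w_true:.2f}")
```

The learning rate (the Kalman-like gain) is not a free constant but is set by the current uncertainty, which is the qualitative sense in which filtering outperforms a fixed-rate gradient rule.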
Collapse
Affiliation(s)
- Jannes Jegminat
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics and Neuroscience Center Zurich, ETH and the University of Zurich, Zurich, Switzerland
| | | | - Jean-Pascal Pfister
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics and Neuroscience Center Zurich, ETH and the University of Zurich, Zurich, Switzerland
| |
Collapse
|
36
|
Geng HY, Arbuthnott G, Yung WH, Ke Y. Long-Range Monosynaptic Inputs Targeting Apical and Basal Dendrites of Primary Motor Cortex Deep Output Neurons. Cereb Cortex 2021; 32:3975-3989. [PMID: 34905771 DOI: 10.1093/cercor/bhab460] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Revised: 11/10/2021] [Accepted: 11/16/2021] [Indexed: 12/31/2022] Open
Abstract
The primary motor cortex (M1) integrates various long-range signals from other brain regions for the learning and execution of goal-directed movements. How the different inputs target the distinct apical and basal dendrites of M1 pyramidal neurons is crucial for understanding the functions of M1, but the detailed connectivity pattern is still largely unknown. Here, by combining Cre-dependent rabies virus tracing, layer-specific chemical retrograde tracing, optogenetic stimulation, and electrophysiological recording, we mapped all long-range monosynaptic inputs to M1 deep output neurons in layer 5 (L5) in mice. We revealed that most upstream areas innervate both dendritic compartments concurrently. These include the sensory cortices, higher motor cortices, sensory and motor thalamus, association cortices, as well as many subcortical nuclei. Furthermore, the dichotomous inputs arise mostly from spatially segregated neuronal subpopulations within an upstream nucleus, even in the case of an individual cortical layer. Therefore, these input areas could serve as both feedforward and feedback sources, albeit via different subpopulations. Taken together, our findings reveal a previously unknown and highly intricate synaptic input pattern of M1 L5 neurons, which implies that the dendritic computations carried out by these neurons during motor execution or learning are far more complicated than we currently understand.
Collapse
Affiliation(s)
- Hong-Yan Geng
- School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
| | - Gordon Arbuthnott
- Brain Mechanisms for Behaviour Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0485, Japan
| | - Wing-Ho Yung
- School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong; Gerald Choa Neuroscience Centre, The Chinese University of Hong Kong, Hong Kong
| | - Ya Ke
- School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong; Gerald Choa Neuroscience Centre, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
37
|
Remme MWH, Bergmann U, Alevi D, Schreiber S, Sprekeler H, Kempter R. Hebbian plasticity in parallel synaptic pathways: A circuit mechanism for systems memory consolidation. PLoS Comput Biol 2021; 17:e1009681. [PMID: 34874938 PMCID: PMC8683039 DOI: 10.1371/journal.pcbi.1009681] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Revised: 12/17/2021] [Accepted: 11/24/2021] [Indexed: 12/03/2022] Open
Abstract
Systems memory consolidation involves the transfer of memories across brain regions and the transformation of memory content. For example, declarative memories that transiently depend on the hippocampal formation are transformed into long-term memory traces in neocortical networks, and procedural memories are transformed within cortico-striatal networks. These consolidation processes are thought to rely on replay and repetition of recently acquired memories, but the cellular and network mechanisms that mediate the changes of memories are poorly understood. Here, we suggest that systems memory consolidation could arise from Hebbian plasticity in networks with parallel synaptic pathways: two ubiquitous features of neural circuits in the brain. We explore this hypothesis in the context of hippocampus-dependent memories. Using computational models and mathematical analyses, we illustrate how memories are transferred across circuits and discuss why their representations could change. The analyses suggest that Hebbian plasticity mediates consolidation by transferring a linear approximation of a previously acquired memory into a parallel pathway. Our modelling results are further in quantitative agreement with lesion studies in rodents. Moreover, a hierarchical iteration of the mechanism yields power-law forgetting, as observed in psychophysical studies in humans. The predicted circuit mechanism thus bridges spatial scales from single cells to cortical areas and time scales from milliseconds to years.
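The central claim, that plain Hebbian plasticity can copy a linear approximation of a memory into a parallel pathway during replay, can be sketched in a few lines. The sizes, learning rate, and the Hebbian-plus-decay form are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# An indirect ("hippocampal") route initially carries a learned linear map.
# Replayed activity plus a Hebbian rule with decay copies that map into a
# direct ("cortical") route, which then sustains the memory even if the
# indirect route is lesioned.

n_in, n_out = 30, 10
W_indirect = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)  # stored memory
W_direct = np.zeros((n_out, n_in))                           # consolidation target
eta = 0.01

for replay in range(5000):
    x = rng.normal(size=n_in)                        # replayed input pattern
    y = W_indirect @ x                               # output via indirect path
    W_direct += eta * (np.outer(y, x) - W_direct)    # Hebbian + decay

err = np.linalg.norm(W_direct - W_indirect) / np.linalg.norm(W_indirect)
print(f"relative mismatch after consolidation: {err:.3f}")
```

With whitened replay input, the expected Hebbian correlation equals the indirect map itself, so the direct pathway converges to a copy of the memory, the linear-approximation transfer described in the abstract.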
Collapse
Affiliation(s)
- Michiel W. H. Remme
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Urs Bergmann
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Denis Alevi
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| | - Susanne Schreiber
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
| | - Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
- Excellence Cluster Science of Intelligence, Berlin, Germany
| | - Richard Kempter
- Department of Biology, Institute for Theoretical Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Einstein Center for Neurosciences Berlin, Berlin, Germany
| |
Collapse
|
38
|
Fukai T, Asabuki T, Haga T. Neural mechanisms for learning hierarchical structures of information. Curr Opin Neurobiol 2021; 70:145-153. [PMID: 34808521 DOI: 10.1016/j.conb.2021.10.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Revised: 09/27/2021] [Accepted: 10/27/2021] [Indexed: 10/19/2022]
Abstract
Spatial and temporal information from the environment is often hierarchically organized, and so is the knowledge we form about the environment. Identifying the meaningful segments embedded in hierarchically structured information is crucial for cognitive functions, including visual, auditory, motor, memory, and language processing. Segmentation enables the grasping of the links between isolated entities, offering the basis for reasoning and thinking. Importantly, the brain learns such segmentation without external instructions. Here, we review the underlying computational mechanisms implemented at the single-cell and network levels. The network-level mechanism has an interesting similarity to machine-learning methods for graph segmentation. The brain possibly implements methods for the analysis of the hierarchical structures of the environment at multiple levels of its processing hierarchy.
Collapse
Affiliation(s)
- Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan.
| | - Toshitake Asabuki
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
| | - Tatsuya Haga
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
| |
Collapse
|
39
|
Niu LY, Wei Y, Long JY, Liu WB. High-Accuracy Spiking Neural Network for Objective Recognition Based on Proportional Attenuating Neuron. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10669-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
40
|
Vercruysse F, Naud R, Sprekeler H. Self-organization of a doubly asynchronous irregular network state for spikes and bursts. PLoS Comput Biol 2021; 17:e1009478. [PMID: 34748532 PMCID: PMC8575278 DOI: 10.1371/journal.pcbi.1009478] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Accepted: 09/24/2021] [Indexed: 11/21/2022] Open
Abstract
Cortical pyramidal cells (PCs) have a specialized dendritic mechanism for the generation of bursts, suggesting that these events play a special role in cortical information processing. In vivo, bursts occur at a low but consistent rate. Theory suggests that this network state increases the amount of information bursts convey. However, because burst activity relies on a threshold mechanism, it is rather sensitive to dendritic input levels. In spiking network models, network states in which bursts occur rarely are therefore typically not robust but require fine-tuning. Here, we show that this issue can be solved by a homeostatic inhibitory plasticity rule in dendrite-targeting interneurons that is consistent with experimental data. The suggested learning rule can be combined with other forms of inhibitory plasticity to self-organize a network state in which both spikes and bursts occur asynchronously and irregularly at low rate. Finally, we show that this network state creates the network conditions for a recently suggested multiplexed code and thereby indeed increases the amount of information encoded in bursts.
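The homeostatic idea reduces to a one-line rule: an inhibitory weight onto the dendrite grows whenever the burst rate exceeds a low target and shrinks otherwise. The burst model and all constants below are toy assumptions of ours, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(7)

target_rate = 0.02                 # desired burst probability per step
w_inh, eta = 0.0, 0.05
late_bursts = 0

for step in range(30000):
    drive = 2.0 + rng.normal()                  # fluctuating dendritic excitation
    burst = float(drive - w_inh > 3.0)          # threshold-like burst generation
    w_inh = max(w_inh + eta * (burst - target_rate), 0.0)  # homeostatic rule
    if step >= 25000:
        late_bursts += burst

print(f"w_inh = {w_inh:.2f}, late burst rate = {late_bursts / 5000:.3f} "
      f"(target {target_rate})")
```

Whatever the excitatory drive, the inhibitory weight settles where the threshold is crossed with the target probability, clamping bursts to a sparse regime without fine-tuning.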
Collapse
Affiliation(s)
- Filip Vercruysse
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
| | - Richard Naud
- Department of Physics, University of Ottawa, Ottawa, Canada
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
| | - Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
| |
Collapse
|
41
|
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. [PMID: 34709176 PMCID: PMC8553337 DOI: 10.7554/elife.66273] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Accepted: 08/19/2021] [Indexed: 11/25/2022] Open
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms. Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
Collapse
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan; RIKEN Center for Brain Science, Tokyo, Japan
| | - Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland; Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
| |
Collapse
|
42
|
A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021; 109:4001-4017.e10. [PMID: 34715026 PMCID: PMC8691952 DOI: 10.1016/j.neuron.2021.09.044] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 08/10/2021] [Accepted: 09/23/2021] [Indexed: 11/23/2022]
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
Collapse
|
43
|
Acharya J, Basu A, Legenstein R, Limbacher T, Poirazi P, Wu X. Dendritic Computing: Branching Deeper into Machine Learning. Neuroscience 2021; 489:275-289. [PMID: 34656706 DOI: 10.1016/j.neuroscience.2021.10.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 09/07/2021] [Accepted: 10/03/2021] [Indexed: 12/31/2022]
Abstract
In this paper, we discuss the nonlinear computational power provided by dendrites in biological and artificial neurons. We start by briefly presenting biological evidence about the types of dendritic nonlinearities, the respective plasticity rules, and their effect on biological learning as assessed by computational models. Four major computational implications are identified: improved expressivity, more efficient use of resources, utilization of internal learning signals, and enabling of continual learning. We then discuss examples of how dendritic computations have been used to solve real-world classification problems, with performance reported on well-known datasets used in machine learning. The works are categorized according to the three primary methods of plasticity used: structural plasticity, weight plasticity, or plasticity of synaptic delays. Finally, we show the recent trend of confluence between concepts of deep learning and dendritic computations and highlight some future research directions.
Collapse
Affiliation(s)
| | - Arindam Basu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong
| | - Robert Legenstein
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
| | - Thomas Limbacher
- Institute of Theoretical Computer Science, Graz University of Technology, Austria
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Greece
| | - Xundong Wu
- School of Computer Science, Hangzhou Dianzi University, China
| |
Collapse
|
44
|
Zhang X, Lu J, Wang Z, Wang R, Wei J, Shi T, Dou C, Wu Z, Zhu J, Shang D, Xing G, Chan M, Liu Q, Liu M. Hybrid memristor-CMOS neurons for in-situ learning in fully hardware memristive spiking neural networks. Sci Bull (Beijing) 2021; 66:1624-1633. [PMID: 36654296 DOI: 10.1016/j.scib.2021.04.014] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 03/03/2021] [Accepted: 03/26/2021] [Indexed: 02/03/2023]
Abstract
Spiking neural networks, inspired by the human brain and consisting of spiking neurons and plastic synapses, are a promising solution for highly efficient data processing in neuromorphic computing. Recently, memristor-based neurons and synapses have become intriguing candidates for building spiking neural networks in hardware, owing to the close resemblance between their device dynamics and their biological counterparts. However, the functionalities of memristor-based neurons are currently very limited, and a hardware demonstration of a fully memristor-based spiking neural network supporting in-situ learning is very challenging. Here, a hybrid spiking neuron combining a memristor with simple digital circuits is designed and implemented in hardware to enhance neuron functions. The hybrid neuron with memristive dynamics not only realizes the basic leaky integrate-and-fire neuron function but also enables in-situ tuning of the connected synaptic weights. Finally, a fully hardware spiking neural network with the hybrid neurons and memristive synapses is experimentally demonstrated for the first time, and in-situ Hebbian learning is achieved with this network. This work opens up a way towards the implementation of spiking neurons supporting in-situ learning for future neuromorphic computing systems.
Collapse
Affiliation(s)
- Xumeng Zhang
- Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Jian Lu
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
| | - Zhongrui Wang
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
| | - Rui Wang
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Jinsong Wei
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
| | - Tuo Shi
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; Zhejiang Laboratory, Hangzhou 311122, China
| | - Chunmeng Dou
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Zuheng Wu
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Jiaxue Zhu
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Dashan Shang
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Guozhong Xing
- Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Mansun Chan
- Department of Electronic and Computer Engineering, the Hong Kong University of Science and Technology, Hong Kong, China
| | - Qi Liu
- Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China.
| | - Ming Liu
- Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Key Laboratory of Microelectronic Devices & Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
| |
Collapse
|
45
|
Schubert F, Gros C. Nonlinear Dendritic Coincidence Detection for Supervised Learning. Front Comput Neurosci 2021; 15:718020. [PMID: 34421566 PMCID: PMC8372750 DOI: 10.3389/fncom.2021.718020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 07/13/2021] [Indexed: 11/25/2022] Open
Abstract
Cortical pyramidal neurons have a complex dendritic anatomy whose function is an active field of research. In particular, the segregation between the soma and the apical dendritic tree is believed to play an active role in processing feed-forward sensory information and top-down or feedback signals. In this work, we use a simple two-compartment model that accounts for the nonlinear interactions between basal and apical input streams and show that standard unsupervised Hebbian learning rules in the basal compartment allow the neuron to align the feed-forward basal input with the top-down target signal received by the apical compartment. We show that this learning process, termed coincidence detection, is robust against strong distractors in the basal input space and demonstrate its effectiveness in a linear classification task.
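The alignment mechanism can be illustrated in a few lines, under the assumption that coincident apical input multiplicatively boosts the somatic output that gates a Hebbian basal update; the specific gain function and the weight normalization below are simplifying choices, not the paper's exact two-compartment model:

```python
# Sketch: Hebbian learning in the basal compartment aligns the basal weights
# with the input direction signalled by the apical (top-down) input.
import numpy as np

rng = np.random.default_rng(1)
n = 20
target = rng.standard_normal(n)          # direction signalled top-down
w = rng.standard_normal(n) * 0.1         # basal weights

for step in range(2000):
    x = rng.standard_normal(n)           # basal input (includes distractors)
    apical = np.tanh(target @ x)         # top-down signal: coincidence with target
    basal = np.tanh(w @ x)
    y = basal * (1.0 + 2.0 * max(apical, 0.0))  # coincident apical input boosts gain
    w += 0.005 * y * x                   # Hebbian update in the basal compartment
    w /= np.linalg.norm(w)               # normalization keeps the weights bounded

alignment = (w @ target) / np.linalg.norm(target)
print(f"cosine alignment with target direction: {alignment:.2f}")
```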
Collapse
Affiliation(s)
- Fabian Schubert
- Institute for Theoretical Physics, Goethe University Frankfurt am Main, Frankfurt am Main, Germany
| | - Claudius Gros
- Institute for Theoretical Physics, Goethe University Frankfurt am Main, Frankfurt am Main, Germany
| |
Collapse
|
46
|
Kaiser J, Billaudelle S, Müller E, Tetzlaff C, Schemmel J, Schmitt S. Emulating Dendritic Computing Paradigms on Analog Neuromorphic Hardware. Neuroscience 2021; 489:290-300. [PMID: 34428499 DOI: 10.1016/j.neuroscience.2021.08.013] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 08/10/2021] [Accepted: 08/11/2021] [Indexed: 10/20/2022]
Abstract
BrainScaleS-2 is an accelerated and highly configurable neuromorphic system with physical models of neurons and synapses. Beyond networks of spiking point neurons, it allows for the implementation of user-defined neuron morphologies. Both passive propagation of electric signals between compartments and active events such as dendritic spikes and plateau potentials can be emulated. In this paper, three multi-compartment neuron morphologies are chosen to demonstrate passive propagation of postsynaptic potentials, spatio-temporal coincidence detection of synaptic inputs in a dendritic branch, and the replication of the backpropagation-activated calcium (BAC) burst firing mechanism found in layer 5 pyramidal neurons of the neocortex.
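The first of these demonstrations, passive propagation between compartments, can be sketched numerically as two leaky compartments coupled by an axial conductance; the hardware emulates these dynamics in analog circuits, so the explicit Euler integration and all parameter values below are purely illustrative:

```python
# Sketch: passive propagation of a postsynaptic potential from a dendritic
# compartment to the soma via an axial coupling conductance.
import numpy as np

dt = 0.1          # time step (ms)
tau = 10.0        # membrane time constant (ms)
g_axial = 0.2     # inter-compartment (axial) coupling

v = np.zeros(2)   # [dendritic compartment, somatic compartment]
trace = []

for step in range(1000):
    t = step * dt
    i_syn = 1.0 if 10.0 <= t < 12.0 else 0.0   # brief synaptic input to the dendrite
    dv = np.empty(2)
    dv[0] = (-v[0] + i_syn) / tau + g_axial * (v[1] - v[0])
    dv[1] = (-v[1]) / tau + g_axial * (v[0] - v[1])
    v += dt * dv
    trace.append(v.copy())

# the somatic PSP is attenuated and delayed relative to the dendritic one
print("peak dendritic PSP:", max(p[0] for p in trace))
print("peak somatic PSP:  ", max(p[1] for p in trace))
```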
Collapse
Affiliation(s)
- Jakob Kaiser
- Heidelberg University, Kirchhoff-Institute for Physics, Germany
| | - Eric Müller
- Heidelberg University, Kirchhoff-Institute for Physics, Germany
| |
Collapse
|
47
|
Quantum superposition inspired spiking neural network. iScience 2021; 24:102880. [PMID: 34401664 PMCID: PMC8348858 DOI: 10.1016/j.isci.2021.102880] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 03/21/2021] [Accepted: 07/14/2021] [Indexed: 11/21/2022] Open
Abstract
Despite advances in artificial intelligence models, neural networks still cannot achieve human performance, partly due to differences in how information is encoded and processed compared with the human brain. Information in an artificial neural network (ANN) is represented using a statistical method and processed as a fitting function, enabling the handling of structural patterns in image, text, and speech processing. However, substantial changes to the statistical characteristics of the data, for example reversing the background of an image, dramatically reduce performance. Here, we propose a quantum superposition spiking neural network (QS-SNN), inspired by quantum mechanisms and by phenomena in the brain, which can handle reversal of image background color. The QS-SNN combines quantum theory with brain-inspired spiking neural network models from a computational perspective, resulting in more robust performance than traditional ANN models, especially when processing noisy inputs. The results presented here will inform future efforts to develop brain-inspired artificial intelligence.
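One way to picture the background-reversal invariance, assuming a simplified reading in which each pixel is encoded jointly with its intensity complement (the paper's actual scheme encodes the superposition in spike timing/phase, not in the deterministic rate code used here):

```python
# Sketch: encoding each pixel together with its complement makes the
# two-channel representation invariant to background/foreground reversal.
import numpy as np

def encode(image, max_spikes=20):
    """Deterministic rate code over two channels: image and its complement."""
    pos = np.round(image * max_spikes).astype(int)          # image-driven channel
    neg = np.round((1.0 - image) * max_spikes).astype(int)  # complement channel
    return np.stack([pos, neg])

image = np.random.default_rng(3).random(16)
reversed_image = 1.0 - image           # background/foreground reversal

a, b = encode(image), encode(reversed_image)
# reversal merely swaps the two channels; the unordered channel pair is invariant
print(np.array_equal(a, b[::-1]))      # True
```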
Collapse
|
48
|
Lipshutz D, Bahroun Y, Golkar S, Sengupta AM, Chklovskii DB. A Biologically Plausible Neural Network for Multichannel Canonical Correlation Analysis. Neural Comput 2021; 33:2309-2352. [PMID: 34412114 DOI: 10.1162/neco_a_01414] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 03/23/2021] [Indexed: 11/04/2022]
Abstract
Cortical pyramidal neurons receive inputs from multiple distinct neural populations and integrate these inputs in separate dendritic compartments. We explore the possibility that cortical microcircuits implement canonical correlation analysis (CCA), an unsupervised learning method that projects the inputs onto a common subspace so as to maximize the correlations between the projections. To this end, we seek a multichannel CCA algorithm that can be implemented in a biologically plausible neural network. For biological plausibility, we require that the network operate in the online setting and that its synaptic update rules be local. Starting from a novel CCA objective function, we derive an online optimization algorithm whose optimization steps can be implemented in a single-layer neural network with multicompartmental neurons and local non-Hebbian learning rules. We also derive an extension of our online CCA algorithm with adaptive output rank and output whitening. Interestingly, the extension maps onto a neural network whose architecture and synaptic updates resemble the neural circuitry and non-Hebbian plasticity observed in the cortex.
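A stripped-down version of the online idea, assuming a single canonical pair and omitting the whitening step that distinguishes CCA from plain cross-covariance maximization; in the paper's network the two projections are computed in separate dendritic compartments of one neuron, which is what makes an update of this form "local":

```python
# Sketch: online Hebbian-style extraction of a maximally correlated pair of
# projections from two input channels sharing a common latent source.
import numpy as np

rng = np.random.default_rng(7)
d = 5
latent = rng.standard_normal(10000)   # shared latent source
X = np.outer(latent, rng.standard_normal(d)) + 0.5 * rng.standard_normal((10000, d))
Y = np.outer(latent, rng.standard_normal(d)) + 0.5 * rng.standard_normal((10000, d))

wx = rng.standard_normal(d); wx /= np.linalg.norm(wx)
wy = rng.standard_normal(d); wy /= np.linalg.norm(wy)
eta = 0.01

for x, y in zip(X, Y):
    zx, zy = wx @ x, wy @ y          # compartmental projections
    wx += eta * zy * x               # each update uses only locally available signals
    wy += eta * zx * y
    wx /= np.linalg.norm(wx)         # normalization keeps the weights bounded
    wy /= np.linalg.norm(wy)

print("correlation of projections:", np.corrcoef(X @ wx, Y @ wy)[0, 1])
```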
Collapse
Affiliation(s)
- David Lipshutz
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Yanis Bahroun
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854 U.S.A.
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A., and Neuroscience Institute, NYU Medical Center, New York, NY 10016, U.S.A.
| |
Collapse
|
49
|
Harkin EF, Shen PR, Goel A, Richards BA, Naud R. Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Sub-cellular Computation. Neuroscience 2021; 489:200-215. [PMID: 34358629 DOI: 10.1016/j.neuroscience.2021.07.026] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 07/06/2021] [Accepted: 07/25/2021] [Indexed: 11/15/2022]
Abstract
Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables such as ion channels, current flow, and membrane capacitance. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
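A minimal sketch of one such unit, assuming a handful of parallel linear-nonlinear subunits whose summed output is fed back as a recurrent input on the next time step; subunit count, nonlinearities, and weights are arbitrary illustrative choices rather than a fit to any particular cell:

```python
# Sketch: a parallel, recurrent cascade (PRC) unit. Several parallel
# linear-nonlinear subunits are summed, and the summed output recurs as an
# additional input on the next time step.
import numpy as np

rng = np.random.default_rng(5)
n_in, n_sub = 4, 3
W_in = rng.standard_normal((n_sub, n_in))   # input weights per subunit
w_rec = rng.standard_normal(n_sub) * 0.3    # recurrent weight per subunit
nonlin = [np.tanh, lambda u: np.maximum(u, 0.0), np.tanh]  # mixed nonlinearities

y_prev = 0.0
for t in range(100):
    x = rng.standard_normal(n_in)
    # each subunit: linear combination of input plus recurrent feedback,
    # passed through its own nonlinearity
    u = W_in @ x + w_rec * y_prev
    y_prev = sum(f(ui) for f, ui in zip(nonlin, u))  # summed subunit outputs
print("final output:", y_prev)
```

Because each subunit is differentiable, a unit of this form can be dropped into a larger network and trained end-to-end, which is the tractability the abstract alludes to.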
Collapse
Affiliation(s)
- Emerson F Harkin
- uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Peter R Shen
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
| | - Anish Goel
- Lisgar Collegiate Institute, Ottawa, ON, Canada
| | - Blake A Richards
- Mila, Montréal, QC, Canada; Montreal Neurological Institute, Montréal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada; School of Computer Science, McGill University, Montréal, QC, Canada.
| | - Richard Naud
- uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Physics, University of Ottawa, Ottawa, ON, Canada.
| |
Collapse
|
50
|
Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. [PMID: 34177505 PMCID: PMC8222618 DOI: 10.3389/fninf.2021.609147] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 04/07/2021] [Indexed: 11/13/2022] Open
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but also in a continuous manner. In these cases, synapses require information on the full time course of the membrane potential to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders the scaling of simulations to realistic cortical network sizes and to the time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the membrane-potential information. Our results on the computational efficiency of archiving provide guidelines for designing learning rules that remain practically usable in large-scale networks.
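The archiving idea can be caricatured as follows, assuming a postsynaptic neuron that logs its membrane potential every time step and a synapse that consumes the logged trace only when a presynaptic spike arrives; the quadratic depolarization term is a stand-in for a Clopath-style voltage dependence, and nothing here reflects the actual NEST implementation:

```python
# Sketch: event-based synapse update from an archived membrane-potential trace.
# The neuron records V_m continuously; the synapse integrates the record lazily,
# in one batch per presynaptic spike. (History pruning is omitted for brevity.)
from collections import deque

class ArchivingNeuron:
    def __init__(self):
        self.v_history = deque()            # (time, V_m) pairs

    def step(self, t, v_m):
        self.v_history.append((t, v_m))     # archive one time step

    def read_since(self, t_last):
        return [(t, v) for t, v in self.v_history if t > t_last]

class EventBasedSynapse:
    def __init__(self, w=0.5, eta=1e-3, theta=-55.0):
        self.w, self.eta, self.theta = w, eta, theta
        self.t_last = -1.0                  # time of the last update event

    def on_pre_spike(self, t, post):
        # event-based update: consume the archived voltage trace in one go
        for t_v, v in post.read_since(self.t_last):
            self.w += self.eta * max(v - self.theta, 0.0) ** 2
        self.t_last = t

# usage: drive a toy voltage trace, then deliver one presynaptic spike
post = ArchivingNeuron()
syn = EventBasedSynapse()
for step in range(100):
    post.step(step * 0.1, -70.0 + 0.2 * step)   # slow depolarization
syn.on_pre_spike(10.0, post)
print("weight after event-based update:", round(syn.w, 4))
```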
Collapse
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
| | - Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
| | - Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
| | - Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
| |
Collapse
|