1
Jordan J, Sacramento J, Wybo WAM, Petrovici MA, Senn W. Conductance-based dendrites perform Bayes-optimal cue integration. PLoS Comput Biol 2024; 20:e1012047. PMID: 38865345; PMCID: PMC11168673; DOI: 10.1371/journal.pcbi.1012047. Received 12/06/2022; accepted 03/31/2024. Open access.
Abstract
A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, how the required computations are implemented in the biological substrate remains unclear. We propose a novel Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach, apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities: the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule that allows neurons to learn desired target distributions and to weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings at the systems and single-cell levels related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
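To make the cue-integration claim concrete, here is a minimal sketch (not the paper's code) of how conductance-based integration reproduces Gaussian cue combination: treating each dendritic input as a Gaussian over the somatic potential, with mean given by its effective reversal potential and precision given by its conductance, the steady state of the membrane equation equals the posterior mean. All numbers and variable names are illustrative.

```python
import numpy as np

# Two dendritic inputs, each parametrized by an effective reversal
# potential (mV) and a conductance; conductances act as precisions.
g_apical, E_apical = 4.0, -70.0   # apical dendrite: prior
g_basal,  E_basal  = 8.0, -55.0   # basal dendrite: likelihood

# Gaussian posterior: precisions add, the mean is precision-weighted.
post_precision = g_apical + g_basal
post_mean = (g_apical * E_apical + g_basal * E_basal) / post_precision

# Steady state of the conductance-based membrane equation
#   C du/dt = g_a (E_a - u) + g_b (E_b - u)
# is the conductance-weighted average of the reversal potentials.
u_steady = (g_apical * E_apical + g_basal * E_basal) / (g_apical + g_basal)

assert np.isclose(post_mean, u_steady)  # the soma encodes the posterior mean
print(f"posterior mean: {post_mean:.2f} mV, total conductance: {post_precision:.2f}")
```

Note how the more reliable cue (larger conductance) pulls the somatic potential toward its reversal potential, which is the signature of reliability-weighted integration the abstract describes.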
Affiliation(s)
- Jakob Jordan
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Electrical Engineering, Yale University, New Haven, Connecticut, United States of America
- João Sacramento
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Institute of Neuroinformatics, UZH / ETH Zurich, Zurich, Switzerland
- Willem A. M. Wybo
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Institute of Neuroscience and Medicine, Forschungszentrum Jülich, Jülich, Germany
- Walter Senn
  - Department of Physiology, University of Bern, Bern, Switzerland
2
Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Coherent noise enables probabilistic sequence replay in spiking neuronal networks. PLoS Comput Biol 2023; 19:e1010989. PMID: 37130121; PMCID: PMC10153753; DOI: 10.1371/journal.pcbi.1010989. Received 07/14/2022; accepted 03/02/2023. Open access.
Abstract
Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
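The averaging argument at the heart of the model is easy to verify numerically. The following minimal sketch (illustrative parameters, not the paper's simulation) compares how independent versus shared noise survives population averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, sigma = 100, 10_000, 1.0

# Independent noise: each neuron gets its own sample; the population
# average shrinks like sigma / sqrt(N), so recall stays deterministic.
independent = rng.normal(0.0, sigma, size=(n_trials, n_neurons)).mean(axis=1)

# Shared (perfectly correlated) noise: one sample drives all neurons,
# so the population average keeps the full amplitude sigma.
shared = rng.normal(0.0, sigma, size=(n_trials, 1)).repeat(n_neurons, axis=1).mean(axis=1)

print(f"std of population average, independent noise: {independent.std():.3f}")  # ~0.1
print(f"std of population average, shared noise:      {shared.std():.3f}")       # ~1.0
```

This is why correlated noise can randomize recall without requiring large noise amplitudes: it survives the population average that washes out uncorrelated fluctuations.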
Affiliation(s)
- Younes Bouhadjar
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Dirk J Wouters
  - Institute of Electronic Materials (IWE 2) and JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - Department of Physics, Faculty 1, and Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
3
Klassert R, Baumbach A, Petrovici MA, Gärttner M. Variational learning of quantum ground states on spiking neuromorphic hardware. iScience 2022; 25:104707. PMID: 35992070; PMCID: PMC9386107; DOI: 10.1016/j.isci.2022.104707. Received 01/13/2022; revised 05/05/2022; accepted 06/28/2022. Open access.
Abstract
Recent research has demonstrated the usefulness of neural networks as variational ansatz functions for quantum many-body states. However, high-dimensional sampling spaces and transient autocorrelations confront these approaches with a challenging computational bottleneck. Compared to conventional neural networks, physical model devices offer a fast, efficient and inherently parallel substrate capable of related forms of Markov chain Monte Carlo sampling. Here, we demonstrate the ability of a neuromorphic chip to represent the ground states of quantum spin models by variational energy minimization. We develop a training algorithm and apply it to the transverse field Ising model, showing good performance at moderate system sizes (N ≤ 10). A systematic hyperparameter study shows that performance depends on sample quality, which is limited by temporal parameter variations on the analog neuromorphic chip. Our work thus provides an important step towards harnessing the capabilities of neuromorphic hardware for tackling the curse of dimensionality in quantum many-body problems.
Highlights:
- Variational scheme for representing quantum ground states with neuromorphic hardware
- Accelerated physical system yields system-size-independent sample generation time
- Accurate learning of ground states across a quantum phase transition
- Detailed analysis of algorithmic and technical limitations
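As a sketch of the variational principle involved, the following estimates the transverse-field Ising energy from samples of |ψ|². A software Metropolis sampler stands in for the neuromorphic chip, a simple product ansatz stands in for the network used in the paper, and the gradient-based parameter update is omitted; all names and parameters are illustrative.

```python
import numpy as np

# Minimal variational Monte Carlo sketch for the 1D transverse-field Ising model,
#   H = -J * sum_i s_i s_{i+1} - h * sum_i sigma^x_i.

J, h, N = 1.0, 0.5, 8
rng = np.random.default_rng(1)
theta = rng.normal(0.0, 0.1, size=N)      # variational parameters

def log_psi(s, theta):
    # log-amplitude of the product ansatz psi(s) = prod_i exp(theta_i * s_i)
    return float(np.dot(theta, s))

def local_energy(s, theta):
    e = -J * np.sum(s[:-1] * s[1:])       # diagonal Ising term (open boundary)
    for i in range(N):                    # transverse field: single spin flips
        s_flip = s.copy()
        s_flip[i] *= -1
        e += -h * np.exp(log_psi(s_flip, theta) - log_psi(s, theta))
    return e

def sample(theta, n_samples=2000):
    # Metropolis sampling of |psi(s)|^2 via single spin flips
    s = rng.choice([-1, 1], size=N)
    out = []
    for _ in range(n_samples):
        i = rng.integers(N)
        s_new = s.copy()
        s_new[i] *= -1
        if rng.random() < np.exp(2 * (log_psi(s_new, theta) - log_psi(s, theta))):
            s = s_new
        out.append(s.copy())
    return out

E = np.mean([local_energy(s, theta) for s in sample(theta)])
print(f"variational energy estimate: {E:.3f}")
```

In the paper's setting the sample generation is delegated to the accelerated physical substrate, which is what makes the sampling step system-size independent.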
4
Korcsak-Gorzo A, Müller MG, Baumbach A, Leng L, Breitwieser OJ, van Albada SJ, Senn W, Meier K, Legenstein R, Petrovici MA. Cortical oscillations support sampling-based computations in spiking neural networks. PLoS Comput Biol 2022; 18:e1009753. PMID: 35324886; PMCID: PMC8947809; DOI: 10.1371/journal.pcbi.1009753. Received 04/06/2021; accepted 12/14/2021. Open access.
Abstract
Being permanently confronted with an uncertain world, brains have faced evolutionary pressure to represent this uncertainty in order to respond appropriately. Often, this requires visiting multiple interpretations of the available information or multiple solutions to an encountered problem. This gives rise to the so-called mixing problem: since all of these "valid" states represent powerful attractors, yet can be very dissimilar from one another, switching between them can be difficult. We propose that cortical oscillations can be effectively used to overcome this challenge. By acting as an effective temperature, background spiking activity modulates exploration; rhythmic changes induced by cortical oscillations can then be interpreted as a form of simulated tempering. We provide a rigorous mathematical discussion of this link and study some of its phenomenological implications in computer simulations. This identifies a new computational role of cortical oscillations and connects them to various phenomena in the brain, such as sampling-based probabilistic inference, memory replay, multisensory cue combination, and place cell flickering.
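The tempering intuition can be sketched in a few lines: a Metropolis sampler on a toy two-well energy switches modes far more often when its temperature oscillates, mimicking rhythmically modulated background activity. This is a conceptual illustration with arbitrary parameters, not the paper's spiking simulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def energy(x):
    return (x**2 - 4.0)**2 / 4.0   # two wells at x = -2 and x = +2

def metropolis(n_steps, temperature):
    x, trace = -2.0, []
    for t in range(n_steps):
        T = temperature(t)
        x_new = x + rng.normal(0.0, 0.5)
        if rng.random() < np.exp(-(energy(x_new) - energy(x)) / T):
            x = x_new
        trace.append(x)
    return np.array(trace)

n = 50_000
fixed = metropolis(n, lambda t: 0.3)                                        # constant low temperature
oscillating = metropolis(n, lambda t: 0.3 + 0.35 * (1 + np.sin(2 * np.pi * t / 1000)))

for name, trace in [("fixed T", fixed), ("oscillating T", oscillating)]:
    switches = np.sum(np.diff(np.sign(trace)) != 0)
    print(f"{name}: {switches} mode switches (sign changes)")
```

At the low fixed temperature the sampler remains trapped in one well (one "valid" interpretation); the oscillating temperature periodically flattens the landscape and lets the sampler hop between wells, which is exactly the mixing benefit attributed to cortical oscillations.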
Affiliation(s)
- Agnes Korcsak-Gorzo
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Michael G. Müller
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Andreas Baumbach
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
- Luziwei Leng
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sacha J. van Albada
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - Institute of Zoology, University of Cologne, Cologne, Germany
- Walter Senn
  - Department of Physiology, University of Bern, Bern, Switzerland
- Karlheinz Meier
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Robert Legenstein
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Mihai A. Petrovici
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
5
Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. PMID: 34709176; PMCID: PMC8553337; DOI: 10.7554/eLife.66273. Received 01/05/2021; accepted 08/19/2021. Open access.
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms.

Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms.

So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch.

Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models.

The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully.

Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
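As a toy illustration of the search principle (a deliberately simplified stand-in, not the genetic-programming setup used in the paper), one can evolve symbolic weight-update expressions for a single supervised synapse; the representation, operators and task below are all illustrative:

```python
import random

# Evolve a symbolic plasticity rule dw = ETA * f(pre, post, w, target) for a
# single weight on a toy supervised task, where a delta-like rule
# dw = ETA * pre * (target - post) would be a good solution.

random.seed(3)
ETA = 0.1
TERMINALS = ["pre", "post", "w", "target", "1.0"]
OPS = ["+", "-", "*"]

def random_expr(depth=2):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

def fitness(expr):
    # Train a single weight with the candidate rule; score by squared error.
    w, err = 0.0, 0.0
    for _ in range(200):
        pre = random.uniform(0, 1)
        target = 0.5 * pre                 # ground-truth input-output mapping
        post = w * pre
        try:
            w += ETA * eval(expr)          # apply the candidate plasticity rule
        except OverflowError:
            return float("inf")
        if not (-1e6 < w < 1e6):           # reject diverging rules (inf/NaN)
            return float("inf")
        err += (target - post) ** 2
    return err

def mutate(expr):
    if random.random() < 0.5:
        return random_expr()
    return f"({expr} {random.choice(OPS)} {random_expr(1)})"

population = [random_expr() for _ in range(50)]
for generation in range(30):
    population.sort(key=fitness)           # rank by task performance
    parents = population[:10]              # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = min(population, key=fitness)
print("best evolved rule: dw =", f"{ETA} * {best}")
```

Because the genome is a human-readable expression rather than a black-box function approximator, the discovered rule can be inspected and interpreted directly, which is the point the abstract emphasizes.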
Affiliation(s)
- Jakob Jordan
  - Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
  - Ascent Robotics, Tokyo, Japan
  - RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
  - Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
  - Department of Physiology, University of Bern, Bern, Switzerland
  - Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
6
7
Synthesis of recurrent neural dynamics for monotone inclusion with application to Bayesian inference. Neural Netw 2020; 131:231-241. PMID: 32818873; DOI: 10.1016/j.neunet.2020.07.037. Received 02/23/2020; revised 07/06/2020; accepted 07/31/2020.
Abstract
We propose a top-down approach to constructing recurrent neural circuit dynamics for the mathematical problem of monotone inclusion (MoI). MoI is a general optimization framework that encompasses a wide range of contemporary problems, including Bayesian inference and Markov decision making. We show that in a recurrent neural circuit/network with Poisson neurons, each neuron's firing curve can be understood as a proximal operator of a local objective function, while the overall circuit dynamics constitutes an operator-splitting system of ordinary differential equations whose equilibrium point corresponds to the solution of the MoI problem. Our analysis thus establishes that neural circuits are a substrate for solving a broad class of computational tasks. In this regard, we provide an explicit synthesis procedure for building neural circuits for specific MoI problems and demonstrate it for the specific case of Bayesian inference and sparse neural coding.
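As one concrete instance of this proximal-operator view, consider sparse coding: the proximal operator of the L1 penalty (soft thresholding) plays the role of the neuron's firing curve, and the recurrent dynamics settle at the LASSO solution. The sketch below follows the well-known locally competitive algorithm as an illustrative stand-in for the paper's general synthesis procedure; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n_inputs, n_neurons, lam, dt = 16, 32, 0.1, 0.05
Phi = rng.normal(size=(n_inputs, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)                 # unit-norm dictionary
# stimulus generated from a sparse set of active coefficients
stimulus = Phi @ (rng.random(n_neurons) * (rng.random(n_neurons) < 0.1))

def prox_l1(u, lam):
    # proximal operator of lam * ||.||_1: the soft-threshold "firing curve"
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.zeros(n_neurons)                            # membrane states
for _ in range(5000):
    a = prox_l1(u, lam)                            # firing rates
    # recurrent ODE whose equilibrium solves min ||s - Phi a||^2/2 + lam ||a||_1
    du = -u + Phi.T @ stimulus - (Phi.T @ Phi - np.eye(n_neurons)) @ a
    u += dt * du

a = prox_l1(u, lam)
residual = stimulus - Phi @ a
print(f"active neurons: {np.sum(a != 0)}, reconstruction error: {np.linalg.norm(residual):.4f}")
```

The design choice mirrors the abstract: the objective is split into local terms, each neuron applies the proximal operator of its own term, and the coupling matrix implements the shared quadratic term, so the circuit's fixed point is the optimization problem's solution.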
8
Kades L, Pawlowski JM. Discrete Langevin machine: Bridging the gap between thermodynamic and neuromorphic systems. Phys Rev E 2020; 101:063304. PMID: 32688507; DOI: 10.1103/PhysRevE.101.063304. Received 04/04/2020; accepted 04/08/2020.
Abstract
A formulation of Langevin dynamics for discrete systems is derived as a class of generic stochastic processes. The dynamics simplify for two-state systems and suggest a network architecture, which is implemented by the Langevin machine. The Langevin machine represents a promising approach for computing quantitatively exact results of Boltzmann-distributed systems with leaky integrate-and-fire (LIF) neurons. In addition to a detailed introduction of the dynamics, different simplified models of a neuromorphic hardware system are studied with respect to controlling emerging sources of error.
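A minimal software sketch of what such a machine must accomplish: sampling a Boltzmann distribution over two-state units with purely local stochastic updates. Standard Glauber/Gibbs dynamics stands in here for the paper's discrete Langevin update; weights and sizes are illustrative, and the small system size allows an exact check by enumeration.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 5
W = rng.normal(0.0, 0.5, size=(n, n))
W = (W + W.T) / 2                      # symmetric couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(0.0, 0.5, size=n)

def energy(z):
    return -0.5 * z @ W @ z - b @ z

z = rng.integers(0, 2, size=n).astype(float)
counts, n_steps = np.zeros(n), 200_000
for _ in range(n_steps):
    i = rng.integers(n)
    u = W[i] @ z + b[i]                              # local field
    z[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))  # logistic update
    counts += z

# Exact marginals by enumerating all 2^n states (feasible only for small n)
states = np.array([[(s >> k) & 1 for k in range(n)] for s in range(2**n)], dtype=float)
p = np.exp([-energy(s) for s in states])
p /= p.sum()
exact = p @ states

print("sampled marginals:", np.round(counts / n_steps, 3))
print("exact marginals:  ", np.round(exact, 3))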
Affiliation(s)
- Lukas Kades
  - Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany
- Jan M Pawlowski
  - Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany
9
Kungl AF, Schmitt S, Klähn J, Müller P, Baumbach A, Dold D, Kugele A, Müller E, Koke C, Kleider M, Mauch C, Breitwieser O, Leng L, Gürtler N, Güttler M, Husmann D, Husmann K, Hartel A, Karasenko V, Grübl A, Schemmel J, Meier K, Petrovici MA. Accelerated physical emulation of Bayesian inference in spiking neural networks. Front Neurosci 2019; 13:1201. PMID: 31798400; PMCID: PMC6868054; DOI: 10.3389/fnins.2019.01201. Received 07/10/2019; accepted 10/23/2019. Open access.
Abstract
The massive parallelism of biological information processing plays an important role in its superiority over human-engineered computing devices. In particular, it may hold the key to overcoming the von Neumann bottleneck that limits contemporary computer architectures. Physical-model neuromorphic devices seek to replicate not only this inherent parallelism, but also aspects of its microscopic dynamics in analog circuits emulating neurons and synapses. However, these machines require network models that are not only adept at solving particular tasks, but that can also cope with the inherent imperfections of analog substrates. We present a spiking network model that performs Bayesian inference through sampling on the BrainScaleS neuromorphic platform, where we use it for generative and discriminative computations on visual data. By illustrating its functionality on this platform, we implicitly demonstrate its robustness to various substrate-specific distortive effects, as well as its capability for accelerated computation. These results showcase the advantages of brain-inspired physical computation and provide important building blocks for large-scale neuromorphic applications.
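A simplified software sketch of the sampling principle such networks build on, in the spirit of neural sampling with refractory mechanisms: a neuron represents the state 1 while refractory after a spike, and its spike probability is a logistic function of its membrane potential. This is an approximation; the exact conditions under which refractory dynamics yield Boltzmann statistics are derived in the neural-sampling literature, and the analog substrate is replaced here by simulation. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

n, tau = 4, 10
W = rng.normal(0.0, 0.8, size=(n, n))
W = (W + W.T) / 2                          # symmetric couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(0.0, 0.5, size=n)

refrac = np.zeros(n, dtype=int)            # remaining refractory steps per neuron
n_steps = 100_000
z_trace = np.zeros((n_steps, n))

for t in range(n_steps):
    z = (refrac > 0).astype(float)         # a neuron codes 1 while refractory
    for i in range(n):
        if refrac[i] > 0:
            refrac[i] -= 1
        else:
            # membrane potential with a log(tau) offset that roughly compensates
            # for the tau-step on-time per spike (exact hazard is more involved)
            u = W[i] @ z + b[i] - np.log(tau)
            if rng.random() < 1.0 / (1.0 + np.exp(-u)):
                refrac[i] = tau            # spike: stay active for tau steps
    z_trace[t] = (refrac > 0).astype(float)

print("marginal activity per neuron:", np.round(z_trace.mean(axis=0), 3))
```

Clamping a subset of neurons on or off turns the same dynamics into conditional (discriminative) inference, while letting all neurons run freely yields samples from the learned generative distribution, which is how the paper uses the network on visual data.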
Affiliation(s)
- Akos F Kungl, Sebastian Schmitt, Johann Klähn, Paul Müller, Andreas Baumbach, Dominik Dold, Alexander Kugele, Eric Müller, Christoph Koke, Mitja Kleider, Christian Mauch, Oliver Breitwieser, Luziwei Leng, Nico Gürtler, Maurice Güttler, Dan Husmann, Kai Husmann, Andreas Hartel, Vitali Karasenko, Andreas Grübl, Johannes Schemmel, Karlheinz Meier: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mihai A Petrovici: Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland