1. Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. [PMID: 38648479; PMCID: PMC11067032; DOI: 10.1073/pnas.2312992121]
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from the chaotic dynamics of recurrently connected neural networks. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities through trial and error with a biologically plausible learning rule. Furthermore, the networks generalize their experience with stimulus-evoked samples to inference when part or all of the sensory information is withheld, which suggests a computational role for spontaneous activity as a representation of the priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may serve brain function as a Bayesian generative model.
Affiliation(s)
- Yu Terada
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093
- The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
2. Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. [PMID: 38658028; DOI: 10.1162/neco_a_01658]
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
3. Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00498-0]
4. Xiao ZC, Lin KK. Multilevel Monte Carlo for cortical circuit models. J Comput Neurosci 2022; 50:9-15. [PMID: 35000059; DOI: 10.1007/s10827-021-00807-3]
Abstract
Multilevel Monte Carlo (MLMC) methods aim to speed up computation of statistics from dynamical simulations. MLMC is easy to implement and is sometimes very effective, but its efficacy may depend on the underlying dynamics. We apply MLMC to networks of spiking neurons and assess its effectiveness on prototypical models of cortical circuitry under different conditions. We find that MLMC can be very efficient for computing reliable features, i.e., features of network dynamics that are reproducible upon repeated presentation of the same external forcing. In contrast, MLMC is less effective for complex, internally generated activity. Qualitative explanations are given using concepts from random dynamical systems theory.
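The multilevel idea can be sketched on a toy problem. Below, the terminal mean of a geometric Brownian motion is estimated with Euler-Maruyama paths: each level halves the time step, and the coarse and fine paths within a level share the same Brownian increments so that the level corrections have small variance. This is a generic MLMC sketch on a scalar SDE, not the spiking-network application of the paper; all parameter values are arbitrary.

```python
import math
import random

def mlmc_mean(rng, L, n, T, x0, a, b):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = a*X dt + b*X dW,
    discretized with Euler-Maruyama.

    Level l uses 2**l time steps; each correction term couples a fine path
    to a coarse path driven by the same Brownian increments.
    """
    est = 0.0
    for l in range(L + 1):
        nf = 2 ** l              # fine steps at this level
        dt = T / nf
        sd = math.sqrt(dt)
        level_sum = 0.0
        for _ in range(n):
            dW = [rng.gauss(0.0, sd) for _ in range(nf)]
            xf = x0
            for w in dW:         # fine Euler path
                xf *= 1.0 + a * dt + b * w
            if l == 0:
                level_sum += xf
            else:                # coupled coarse path: pairs of increments summed
                xc = x0
                for k in range(0, nf, 2):
                    xc *= 1.0 + a * (2 * dt) + b * (dW[k] + dW[k + 1])
                level_sum += xf - xc
        est += level_sum / n     # telescoping sum of level corrections
    return est

rng = random.Random(1)
est = mlmc_mean(rng, L=4, n=10000, T=1.0, x0=1.0, a=0.05, b=0.2)
print(est)  # should land close to the exact mean exp(a*T) ≈ 1.0513
```

The "reliable features" finding of the paper corresponds to the coupled difference `xf - xc` staying small, which is what makes the correction terms cheap to estimate.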
Affiliation(s)
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, USA
- Kevin K Lin
- School of Mathematical Sciences, University of Arizona, Tucson, Arizona, USA
5. Bernardi D, Doron G, Brecht M, Lindner B. A network model of the barrel cortex combined with a differentiator detector reproduces features of the behavioral response to single-neuron stimulation. PLoS Comput Biol 2021; 17:e1007831. [PMID: 33556070; PMCID: PMC7895413; DOI: 10.1371/journal.pcbi.1007831]
Abstract
The stimulation of a single neuron in the rat somatosensory cortex can elicit a behavioral response. The probability of a behavioral response does not depend appreciably on the duration or intensity of a constant stimulation, whereas the response probability increases significantly upon injection of an irregular current. Biological mechanisms that can potentially suppress a constant input signal are present in the dynamics of both neurons and synapses and seem ideal candidates to explain these experimental findings. Here, we study a large network of integrate-and-fire neurons with several salient features of neuronal populations in the rat barrel cortex. The model includes cellular spike-frequency adaptation, experimentally constrained numbers and types of chemical synapses endowed with short-term plasticity, and gap junctions. Numerical simulations of this model indicate that cellular and synaptic adaptation mechanisms alone may not suffice to account for the experimental results if the local network activity is read out by an integrator. However, a circuit that approximates a differentiator can detect the single-cell stimulation with a reliability that barely depends on the length or intensity of the stimulus, but that increases when an irregular signal is used. This finding is in accordance with the experimental results obtained for the stimulation of a regularly-spiking excitatory cell.

It is widely assumed that only a large group of neurons can encode a stimulus or control behavior. This tenet of neuroscience has been challenged by experiments in which stimulating a single cortical neuron had a measurable effect on an animal's behavior. Recently, theoretical studies have explored how a single-neuron stimulation could be detected in a large recurrent network. However, these studies missed essential biological mechanisms of cortical networks and are unable to explain more recent experiments in the barrel cortex. Here, to describe the stimulated brain area, we propose and study a network model endowed with many important biological features of the barrel cortex. Importantly, we also investigate different readout mechanisms, i.e., ways in which the stimulation effects can propagate to other brain areas. We show that a readout network which tracks rapid variations in the local network activity is in agreement with the experiments. Our model demonstrates a possible mechanism for how the stimulation of a single neuron translates into a signal at the population level, which is taken as a proxy of the animal's response. Our results illustrate the power of spiking neural networks to properly describe the effects of a single neuron's activity.
Affiliation(s)
- Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
- Center for Translational Neurophysiology of Speech and Communication, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- Guy Doron
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Michael Brecht
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Institut für Physik, Humboldt-Universität zu Berlin, Berlin, Germany
6. Baker C, Zhu V, Rosenbaum R. Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance. PLoS Comput Biol 2020; 16:e1008192. [PMID: 32946433; PMCID: PMC7526938; DOI: 10.1371/journal.pcbi.1008192]
Abstract
Balanced excitation and inhibition are widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state, and that these breaks in balance push the network into a "semi-balanced state" characterized by excess inhibition to some neurons but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Vicky Zhu
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, IN, USA
7. Nolte M, Reimann MW, King JG, Markram H, Muller EB. Cortical reliability amid noise and chaos. Nat Commun 2019; 10:3792. [PMID: 31439838; PMCID: PMC6706377; DOI: 10.1038/s41467-019-11633-8]
Abstract
Typical responses of cortical neurons to identical sensory stimuli appear highly variable. It has thus been proposed that the cortex primarily uses a rate code. However, other studies have argued for spike-time coding under certain conditions. The potential role of spike-time coding is directly limited by the internally generated variability of cortical circuits, which remains largely unexplored. Here, we quantify this internally generated variability using a biophysical model of rat neocortical microcircuitry with biologically realistic noise sources. We find that stochastic neurotransmitter release is a critical component of internally generated variability, causing rapidly diverging, chaotic recurrent network dynamics. Surprisingly, the same nonlinear recurrent network dynamics can transiently overcome the chaos in response to weak feed-forward thalamocortical inputs, and support reliable spike times with millisecond precision. Our model shows that the noisy and chaotic network dynamics of recurrent cortical microcircuitry are compatible with stimulus-evoked, millisecond spike-time reliability, resolving a long-standing debate.
Affiliation(s)
- Max Nolte
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
- Michael W Reimann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
- James G King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
- Laboratory of Neural Microcircuitry, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
- Eilif B Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
8. Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity. Proc Natl Acad Sci U S A 2017; 114:E9366-E9375. [PMID: 29042519; DOI: 10.1073/pnas.1705841114]
Abstract
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly.
9. How linear response shaped models of neural circuits and the quest for alternatives. Curr Opin Neurobiol 2017; 46:234-240. [DOI: 10.1016/j.conb.2017.09.001]
10. Bianchi FM, Livi L, Alippi C, Jenssen R. Multiplex visibility graphs to investigate recurrent neural network dynamics. Sci Rep 2017; 7:44037. [PMID: 28281563; PMCID: PMC5345088; DOI: 10.1038/srep44037]
Abstract
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activation with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
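The horizontal visibility construction at the core of this method is simple to state: two samples are linked when every sample strictly between them lies below both endpoints. A minimal quadratic-time sketch of that construction (the multiplex aggregation over neurons and the ESN tuning experiments of the paper are beyond this snippet):

```python
def horizontal_visibility_graph(series):
    """Return the edge set of the horizontal visibility graph of a time series.

    Nodes are time indices; i < j are linked iff every intermediate sample
    lies strictly below both endpoints (adjacent samples are always linked).
    """
    edges = set()
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

g = horizontal_visibility_graph([3, 1, 2, 4])
print(sorted(g))  # [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
```

Topological statistics of these per-neuron graphs, stacked into layers of a multiplex, are then used as the dynamical fingerprint that guides hyperparameter selection.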
Affiliation(s)
- Filippo Maria Bianchi
- Machine Learning Group, Department of Physics and Technology, University of Tromsø, 9019 Tromsø, Norway
- Lorenzo Livi
- Department of Computer Science, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, United Kingdom
- Cesare Alippi
- Department of Electronics, Information, and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Faculty of Informatics, Università della Svizzera Italiana, 6900 Lugano, Switzerland
- Robert Jenssen
- Machine Learning Group, Department of Physics and Technology, University of Tromsø, 9019 Tromsø, Norway