1
Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2024; 184:107079. [PMID: 39756119 DOI: 10.1016/j.neunet.2024.107079]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of the brain's chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks (RNNs), with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
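A minimal sketch may make the model class concrete: the classic random rate network used throughout this literature turns spontaneously chaotic once the coupling gain g exceeds 1. All values below (N, g, dt, T) are illustrative assumptions, not parameters from the review.

```python
import numpy as np

# Classic random rate network (illustrative parameters, not from the review).
# For gain g < 1 the activity decays to the fixed point x = 0; for g > 1 the
# same network self-generates irregular, chaotic fluctuations with no input.
rng = np.random.default_rng(0)
N, g, dt, T = 500, 1.5, 0.1, 2000
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # couplings with variance g^2 / N

x = rng.normal(0.0, 1.0, N)                   # random initial state
trace = np.empty(T)
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))           # dx/dt = -x + J * tanh(x)
    trace[t] = x[0]

print(trace[-5:])                             # ongoing activity without any stimulus
```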
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy.
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2
Çatal Y, Keskin K, Wolman A, Klar P, Smith D, Northoff G. Flexibility of intrinsic neural timescales during distinct behavioral states. Commun Biol 2024; 7:1667. [PMID: 39702547 DOI: 10.1038/s42003-024-07349-1]
Abstract
Recent neuroimaging studies demonstrate a heterogeneity of timescales prevalent in the brain's ongoing spontaneous activity, labeled intrinsic neural timescales (INT). At the same time, neural timescales also reflect stimulus- or task-related activity. How the INT of the brain's spontaneous activity relate to their involvement in task states, including behavior, remains unclear. To address this question, we combined calcium imaging data of spontaneously behaving mice and human electroencephalography (EEG) during rest and task states with computational modeling. We obtained four primary findings: (i) distinct behavioral states can be accurately predicted from INT, (ii) INT become longer during behavioral states compared to rest, (iii) the change in INT from rest to task correlates negatively with the variability of INT during rest, and (iv) neural mass modeling shows a key role of recurrent connections in mediating the rest-task change of INT. Extending current findings, our results show the dynamic nature of the brain's INT in reflecting continuous behavior through flexible rest-task modulation, possibly mediated by recurrent connections.
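As a concrete handle on the INT concept, the sketch below reads a timescale off the decay of a signal's autocorrelation function, a common estimator in this literature; the synthetic Ornstein-Uhlenbeck signal and the 1/e criterion are assumptions for illustration and may differ from the paper's exact method.

```python
import numpy as np

# Estimate an intrinsic neural timescale (INT) as the lag at which the
# autocorrelation of a signal first drops below 1/e. The signal here is a
# synthetic Ornstein-Uhlenbeck process with a known ground-truth timescale.
rng = np.random.default_rng(1)
dt, tau_true, T = 1.0, 20.0, 10000
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] - dt / tau_true * x[t - 1] + rng.normal(0.0, 1.0)

x -= x.mean()
acf = np.correlate(x, x, mode="full")[T - 1:] / (x.var() * T)  # lags 0..T-1
int_estimate = np.argmax(acf < 1.0 / np.e) * dt                # first sub-1/e lag
print(f"estimated INT ~ {int_estimate} (ground truth tau = {tau_true})")
```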
Affiliation(s)
- Yasir Çatal
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada.
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada.
- Kaan Keskin
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Department of Psychiatry, Ege University, Izmir, Turkey
- SoCAT Lab, Ege University, Izmir, Turkey
- Angelika Wolman
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Philipp Klar
- Faculty of Mathematics and Natural Sciences, Institute of Experimental Psychology, Heinrich Heine University of Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- David Smith
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Georg Northoff
- Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa, Ottawa, ON, Canada
- University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
3
Xiao G, Cai Y, Zhang Y, Xie J, Wu L, Xie H, Wu J, Dai Q. Mesoscale neuronal granular trial variability in vivo illustrated by nonlinear recurrent network in silico. Nat Commun 2024; 15:9894. [PMID: 39548098 PMCID: PMC11567969 DOI: 10.1038/s41467-024-54346-3]
Abstract
Large-scale neural recording with single-neuron resolution has revealed the functional complexity of neural systems. However, even under well-designed task conditions, the cortex-wide network exhibits highly dynamic trial variability, posing challenges to conventional trial-averaged analysis. To study mesoscale trial variability, we conducted a comparative study between fluorescence imaging of layer-2/3 neurons in vivo and network simulation in silico. We imaged the responses of up to 40,000 cortical neurons triggered by deep brain stimulation (DBS) and built an in silico network to reproduce the biological phenomena we observed in vivo. We proved the existence of ineluctable trial variability and found that it is influenced by input amplitude and range. Moreover, we demonstrated that a spatially heterogeneous coding community accounts for more reliable inter-trial coding despite single-unit trial variability. A deeper understanding of trial variability from the perspective of a dynamical system may lead to uncovering intellectual abilities such as parallel coding and creativity.
Affiliation(s)
- Guihua Xiao
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yeyi Cai
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jingyu Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Lifan Wu
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China
- Jiamin Wu
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
- Qionghai Dai
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing, China.
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China.
4
Wu S, Huang C, Snyder AC, Smith MA, Doiron B, Yu BM. Automated customization of large-scale spiking network models to neuronal population activity. Nat Comput Sci 2024; 4:690-705. [PMID: 39285002 DOI: 10.1038/s43588-024-00688-3]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet their activity's dependence on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models, thereby enabling deeper insight into how networks of neurons give rise to brain function.
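The customization loop described above can be illustrated schematically: propose parameters, simulate, score summary statistics of the simulated activity against target statistics, keep the best candidate. SNOPS itself uses Bayesian optimization over spiking-network parameters; the random search over a rate network below, with made-up targets, is only a simplified stand-in for that loop.

```python
import numpy as np

# Toy parameter-customization loop (not SNOPS itself): random search over the
# coupling gain of a rate network, scored against hypothetical target
# population statistics. All targets and search ranges are made up.
rng = np.random.default_rng(2)
N, T = 200, 1000
target = {"mean_rate": 0.3, "shared_var": 0.15}      # hypothetical targets

def simulate_and_score(g):
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))
    x = rng.normal(0.0, 1.0, N)
    rates = np.empty((T, N))
    for t in range(T):
        x += 0.1 * (-x + J @ np.tanh(x))
        rates[t] = np.tanh(x)
    cov = np.cov(rates.T)
    shared = np.linalg.eigvalsh(cov)[-1] / np.trace(cov)     # top-eigenvalue share
    stats = {"mean_rate": np.abs(rates).mean(), "shared_var": shared}
    return sum((stats[k] - target[k]) ** 2 for k in target)  # distance to targets

best_g = min(rng.uniform(0.5, 2.5, 20), key=simulate_and_score)
print(f"best coupling gain found: g ~ {best_g:.2f}")
```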
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam C Snyder
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Matthew A Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Brent Doiron
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron M Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA.
5
Cone I, Clopath C, Shouval HZ. Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time. Nat Commun 2024; 15:5856. [PMID: 38997276 PMCID: PMC11245539 DOI: 10.1038/s41467-024-50205-3]
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) learning, whereby certain units signal reward prediction errors (RPE). The TD algorithm has traditionally been mapped onto the dopaminergic system, as the firing properties of dopamine neurons can resemble RPEs. However, certain predictions of TD learning are inconsistent with experimental results, and previous implementations of the algorithm have made unscalable assumptions regarding stimulus-specific fixed temporal bases. We propose an alternate framework to describe dopamine signaling in the brain, FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, dopamine release is similar, but not identical, to RPE, leading to predictions that contrast with those of TD. While FLEX itself is a general theoretical framework, we describe a specific, biophysically plausible implementation whose results are consistent with a preponderance of both existing and reanalyzed experimental data.
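For contrast with FLEX, the sketch below implements the textbook TD(0) account the abstract refers to: states are time steps since cue onset (a fixed temporal basis), and after learning the prediction error migrates from reward delivery to cue onset. Timings and the learning rate are illustrative assumptions.

```python
import numpy as np

# Tabular TD(0) with a fixed temporal basis: state s = time since cue onset.
# Before the cue the agent sits in a null state of value 0, so the TD error
# at cue onset equals gamma * V[0].
n_states, n_trials, alpha, gamma = 10, 500, 0.1, 1.0
reward_state = 9                       # reward arrives 9 steps after the cue
V = np.zeros(n_states + 1)             # V[n_states] = post-trial value = 0

for trial in range(n_trials):
    for s in range(n_states):
        r = 1.0 if s == reward_state else 0.0
        delta = r + gamma * V[s + 1] - V[s]    # TD error, the RPE
        V[s] += alpha * delta

print(f"RPE at cue after learning:    {gamma * V[0]:.2f}")   # ~1, shifted to the cue
print(f"RPE at reward after learning: "
      f"{1.0 + gamma * V[reward_state + 1] - V[reward_state]:.2f}")  # ~0
```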
Affiliation(s)
- Ian Cone
- Department of Bioengineering, Imperial College London, London, UK
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA
- Applied Physics Program, Rice University, Houston, TX, USA
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
- Harel Z Shouval
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA.
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA.
6
Stroud JP, Duncan J, Lengyel M. The computational foundations of dynamic coding in working memory. Trends Cogn Sci 2024; 28:614-627. [PMID: 38580528 DOI: 10.1016/j.tics.2024.02.011]
Abstract
Working memory (WM) is a fundamental aspect of cognition. WM maintenance is classically thought to rely on stable patterns of neural activity. However, recent evidence shows that neural population activity during WM maintenance undergoes dynamic variations before settling into a stable pattern. Although this has been difficult to explain theoretically, neural network models optimized for WM typically also exhibit such dynamics. Here, we examine stable versus dynamic coding in neural data, classical models, and task-optimized networks. We review principled mathematical reasons why classical models do not exhibit dynamic coding, whereas task-optimized models naturally do. We suggest an update to our understanding of WM maintenance, in which dynamic coding is a fundamental computational feature rather than an epiphenomenon.
Affiliation(s)
- Jake P Stroud
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.
- John Duncan
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
7
Dubinin I, Effenberger F. Fading memory as inductive bias in residual recurrent networks. Neural Netw 2024; 173:106179. [PMID: 38387205 DOI: 10.1016/j.neunet.2024.106179]
Abstract
Residual connections have been proposed as an architecture-based inductive bias that mitigates the problem of exploding and vanishing gradients and increases task performance in both feed-forward and recurrent networks (RNNs) when trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading-memory properties. Here, we introduce weakly coupled residual recurrent networks (WCRNNs) in which residual connections result in well-defined Lyapunov exponents and allow for studying properties of fading memory. We investigate how the residual connections of WCRNNs influence their performance, network dynamics, and memory properties on a set of benchmark tasks. We show that several distinct forms of residual connections yield effective inductive biases that result in increased network expressivity. In particular, these are residual connections that (i) result in network dynamics in the proximity of the edge of chaos, (ii) allow networks to capitalize on characteristic spectral properties of the data, and (iii) result in heterogeneous memory properties. In addition, we demonstrate how our results can be extended to non-linear residuals, and we introduce a weakly coupled residual initialization scheme that can be used for Elman RNNs.
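The largest Lyapunov exponent mentioned above can be estimated for a generic discrete-time RNN by co-evolving a tangent vector through the network's Jacobian and averaging its log growth rate; the sketch below does this for a plain random network, not the WCRNNs of the paper, with illustrative parameters.

```python
import numpy as np

# Largest Lyapunov exponent of the map x_{t+1} = J tanh(x_t), estimated by
# propagating a tangent vector v through the Jacobian J diag(sech^2(x)) and
# renormalizing at each step. A positive exponent indicates chaos.
rng = np.random.default_rng(3)
N, g, steps = 200, 1.5, 2000
J = rng.normal(0.0, g / np.sqrt(N), (N, N))

x = rng.normal(0.0, 1.0, N)
v = rng.normal(0.0, 1.0, N)
v /= np.linalg.norm(v)
log_growth = 0.0
for t in range(steps):
    sech2 = 1.0 / np.cosh(x) ** 2        # tanh'(x) at the current state
    v = J @ (sech2 * v)                  # tangent dynamics
    x = J @ np.tanh(x)                   # state update
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)
    v /= norm                            # renormalize to avoid overflow

print(f"largest Lyapunov exponent ~ {log_growth / steps:.3f}")
```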
Affiliation(s)
- Igor Dubinin
- Ernst Strüngmann Institute, Deutschordenstraße 46, Frankfurt am Main, 60528, Germany; Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, Frankfurt am Main, 60438, Germany.
- Felix Effenberger
- Ernst Strüngmann Institute, Deutschordenstraße 46, Frankfurt am Main, 60528, Germany.
8
Goris RLT, Coen-Cagli R, Miller KD, Priebe NJ, Lengyel M. Response sub-additivity and variability quenching in visual cortex. Nat Rev Neurosci 2024; 25:237-252. [PMID: 38374462 PMCID: PMC11444047 DOI: 10.1038/s41583-024-00795-0]
Abstract
Sub-additivity and variability are ubiquitous response motifs in the primary visual cortex (V1). Response sub-additivity enables the construction of useful interpretations of the visual environment, whereas response variability indicates the factors that limit the precision with which the brain can do this. There is increasing evidence that experimental manipulations that elicit response sub-additivity often also quench response variability. Here, we provide an overview of these phenomena and suggest that they may have common origins. We discuss empirical findings and recent model-based insights into the functional operations, computational objectives and circuit mechanisms underlying V1 activity. These different modelling approaches all predict that response sub-additivity and variability quenching often co-occur. The phenomenology of these two response motifs, as well as many of the insights obtained about them in V1, generalize to other cortical areas. Thus, the connection between response sub-additivity and variability quenching may be a canonical motif across the cortex.
Affiliation(s)
- Robbe L T Goris
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA.
- Ruben Coen-Cagli
- Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx, NY, USA
- Dominick P. Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Kenneth D Miller
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Department of Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Swartz Program in Theoretical Neuroscience, Columbia University, New York, NY, USA
- Nicholas J Priebe
- Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
9
Churchland MM, Shenoy KV. Preparatory activity and the expansive null-space. Nat Rev Neurosci 2024; 25:213-236. [PMID: 38443626 DOI: 10.1038/s41583-024-00796-z]
Abstract
The study of the cortical control of movement experienced a conceptual shift over recent decades, as the basic currency of understanding shifted from single-neuron tuning towards population-level factors and their dynamics. This transition was informed by a maturing understanding of recurrent networks, where mechanism is often characterized in terms of population-level factors. By estimating factors from data, experimenters could test network-inspired hypotheses. Central to such hypotheses are 'output-null' factors that do not directly drive motor outputs yet are essential to the overall computation. In this Review, we highlight how the hypothesis of output-null factors was motivated by the venerable observation that motor-cortex neurons are active during movement preparation, well before movement begins. We discuss how output-null factors then became similarly central to understanding neural activity during movement. We discuss how this conceptual framework provided key analysis tools, making it possible for experimenters to address long-standing questions regarding motor control. We highlight an intriguing trend: as experimental and theoretical discoveries accumulate, the range of computational roles hypothesized to be subserved by output-null factors continues to expand.
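The output-null idea lends itself to a small worked example: given a readout matrix C from N neurons to M outputs, activity splits into an output-potent part (the row space of C) and an output-null part that is invisible downstream. Dimensions below are illustrative.

```python
import numpy as np

# Split population activity into output-potent and output-null components
# with respect to a hypothetical readout matrix C (M outputs, N neurons).
rng = np.random.default_rng(4)
N, M = 50, 3
C = rng.normal(0.0, 1.0, (M, N))             # hypothetical readout
x = rng.normal(0.0, 1.0, N)                  # population activity at one moment

P = C.T @ np.linalg.inv(C @ C.T) @ C         # projector onto the row space of C
x_potent = P @ x
x_null = x - x_potent                        # output-null component

print(np.allclose(C @ x, C @ x_potent))      # True: potent part carries the output
print(np.allclose(C @ x_null, 0.0))          # True: null part changes nothing downstream
```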
Affiliation(s)
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
10
Metzner C, Yamakou ME, Voelkl D, Schilling A, Krauss P. Quantifying and Maximizing the Information Flux in Recurrent Neural Networks. Neural Comput 2024; 36:351-384. [PMID: 38363658 DOI: 10.1162/neco_a_01651]
Abstract
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified by the mutual information I[x(t), x(t+1)] between subsequent system states x(t). Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that, in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
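The proxy quantity described above, the root-mean-square of pairwise Pearson correlations, is easy to compute for any simulated network; the binary stochastic dynamics below are a generic Boltzmann-machine-like stand-in with made-up parameters, not the authors' exact model.

```python
import numpy as np

# Simulate Glauber-style binary dynamics and compute the root-mean-square of
# the off-diagonal Pearson correlations, the proxy reported to track the
# information flux I[x(t), x(t+1)] in moderately coupled networks.
rng = np.random.default_rng(5)
N, T = 30, 5000
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

states = np.empty((T, N))
x = rng.choice([-1.0, 1.0], N)
for t in range(T):
    p = 1.0 / (1.0 + np.exp(-2.0 * (W @ x)))    # update probabilities
    x = np.where(rng.random(N) < p, 1.0, -1.0)
    states[t] = x

corr = np.corrcoef(states.T)                     # N x N correlation matrix
off_diag = corr[~np.eye(N, dtype=bool)]
print(f"RMS pairwise correlation: {np.sqrt(np.mean(off_diag ** 2)):.3f}")
```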
Affiliation(s)
- Claus Metzner
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Biophysics Lab, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Marius E Yamakou
- Department of Data Science, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Dennis Voelkl
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Patrick Krauss
- Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
11
Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023; 12:e86552. [PMID: 38084779 PMCID: PMC10810607 DOI: 10.7554/elife.86552]
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity are as yet unknown. Here, we show that neural circuits endowed with heterogeneous neural assemblies of different sizes naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type-specific connectivity, to explain how neural timescales depend on assembly size, and we show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies with a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Department of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, Mathematics, and Biology, University of Oregon, Eugene, United States
12
Wu S, Huang C, Snyder A, Smith M, Doiron B, Yu B. Automated customization of large-scale spiking network models to neuronal population activity. bioRxiv [Preprint] 2023:2023.09.21.558920. [PMID: 37790533 PMCID: PMC10542160 DOI: 10.1101/2023.09.21.558920]
Abstract
Understanding brain function is facilitated by constructing computational models that accurately reproduce aspects of brain activity. Networks of spiking neurons capture the underlying biophysics of neuronal circuits, yet the dependence of their activity on model parameters is notoriously complex. As a result, heuristic methods have been used to configure spiking network models, which can lead to an inability to discover activity regimes complex enough to match large-scale neuronal recordings. Here we propose an automatic procedure, Spiking Network Optimization using Population Statistics (SNOPS), to customize spiking network models that reproduce the population-wide covariability of large-scale neuronal recordings. We first confirmed that SNOPS accurately recovers simulated neural activity statistics. Then, we applied SNOPS to recordings in macaque visual and prefrontal cortices and discovered previously unknown limitations of spiking network models. Taken together, SNOPS can guide the development of network models and thereby enable deeper insight into how networks of neurons give rise to brain function.
Affiliation(s)
- Shenghao Wu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Chengcheng Huang
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA
- Adam Snyder
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Matthew Smith
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Brent Doiron
- Department of Neurobiology, University of Chicago, Chicago, IL, USA
- Department of Statistics, University of Chicago, Chicago, IL, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, USA
- Byron Yu
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
13
Clark DG, Abbott LF, Litwin-Kumar A. Dimension of Activity in Random Neural Networks. Phys Rev Lett 2023; 131:118401. [PMID: 37774280 DOI: 10.1103/physrevlett.131.118401]
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks-in particular, that they can generate chaotic activity-however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
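A standard way to quantify the "dimension of activity" the title refers to is the participation ratio of the covariance eigenvalues, D = (sum of eigenvalues)^2 / (sum of squared eigenvalues); an extensive but fractionally low dimension means D grows with N while D/N stays well below 1. The sketch below computes D for the classic i.i.d.-coupling chaotic network, with illustrative parameters.

```python
import numpy as np

# Participation-ratio dimension of chaotic rate-network activity.
rng = np.random.default_rng(6)
N, g, T, dt = 400, 2.0, 4000, 0.1
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
x = rng.normal(0.0, 1.0, N)
acts = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))
    acts[t] = np.tanh(x)

lam = np.linalg.eigvalsh(np.cov(acts[T // 2:].T))   # discard the transient
D = lam.sum() ** 2 / (lam ** 2).sum()
print(f"participation ratio D ~ {D:.1f} of N = {N} units (D/N ~ {D / N:.2f})")
```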
Affiliation(s)
- David G Clark
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- Ashok Litwin-Kumar
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
14
Guo L, Kumar A. Role of interneuron subtypes in controlling trial-by-trial output variability in the neocortex. Commun Biol 2023; 6:874. [PMID: 37620550 PMCID: PMC10449833 DOI: 10.1038/s42003-023-05231-0]
Abstract
Trial-by-trial variability is a ubiquitous property of neuronal activity in vivo which shapes the stimulus response. Computational models have revealed how local network structure and feedforward inputs shape the trial-by-trial variability. However, the role of input statistics and different interneuron subtypes in this process is less understood. To address this, we investigate the dynamics of stimulus response in a cortical microcircuit model with one excitatory and three inhibitory interneuron populations (PV, SST, VIP). Our findings demonstrate that the balance of inputs to different neuron populations and input covariances are the primary determinants of output trial-by-trial variability. The effect of input covariances is contingent on the input balances. In general, the network exhibits smaller output trial-by-trial variability in a PV-dominated regime than in an SST-dominated regime. Importantly, our work reveals mechanisms by which output trial-by-trial variability can be controlled in a context, state, and task-dependent manner.
Affiliation(s)
- Lihao Guo
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
- SciLifeLab, Stockholm, Sweden.
- Arvind Kumar
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
- SciLifeLab, Stockholm, Sweden.
15
Handy G, Borisyuk A. Investigating the ability of astrocytes to drive neural network synchrony. PLoS Comput Biol 2023; 19:e1011290. [PMID: 37556468 PMCID: PMC10441806 DOI: 10.1371/journal.pcbi.1011290]
Abstract
Recent experimental work has implicated astrocytes as a significant cell type underlying several neuronal processes in the mammalian brain, from encoding sensory information to neurological disorders. Despite this progress, it is still unclear how astrocytes communicate with and drive their neuronal neighbors. While previous computational modeling works have helped propose mechanisms responsible for driving these interactions, they have primarily focused on interactions at the synaptic level, with microscale models of calcium dynamics and neurotransmitter diffusion. Since it is computationally infeasible to include the intricate microscale details in a network-scale model, little computational work has been done to understand how astrocytes may be influencing spiking patterns and synchronization of large networks. We overcome this issue by first developing an "effective" astrocyte that can be easily implemented in already established network frameworks. We do this by showing that astrocyte proximity to a synapse makes synaptic transmission faster, weaker, and less reliable. Thus, our "effective" astrocytes can be incorporated by considering heterogeneous synaptic time constants, which are parametrized only by the degree of astrocytic proximity at that synapse. We then apply our framework to large networks of exponential integrate-and-fire neurons with various spatial structures. Depending on key parameters, such as the number of synapses ensheathed and the strength of this ensheathment, we show that astrocytes can push the network to a synchronous state and exhibit spatially correlated patterns.
Affiliation(s)
- Gregory Handy
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, Illinois, United States of America
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, Illinois, United States of America
- Alla Borisyuk
- Department of Mathematics, University of Utah, Salt Lake City, Utah, United States of America
16
Naik S, Adibpour P, Dubois J, Dehaene-Lambertz G, Battaglia D. Event-related variability is modulated by task and development. Neuroimage 2023; 276:120208. [PMID: 37268095 DOI: 10.1016/j.neuroimage.2023.120208]
Abstract
In carefully designed experimental paradigms, cognitive scientists interpret mean event-related potentials (ERP) in terms of cognitive operations. However, the huge signal variability from one trial to the next questions the representativeness of such mean events. We explored here whether this variability is unwanted noise or an informative part of the neural response. We took advantage of the rapid changes in the visual system during human infancy and analyzed the variability of visual responses to central and lateralized faces in 2- to 6-month-old infants compared to adults using high-density electroencephalography (EEG). We observed that neural trajectories of individual trials always remain very far from ERP components, only moderately bending their direction with a substantial temporal jitter across trials. However, single-trial trajectories displayed characteristic patterns of acceleration and deceleration when approaching ERP components, as if they were under the active influence of steering forces causing transient attraction and stabilization. These dynamic events could only partly be accounted for by induced microstate transitions or phase-reset phenomena. Importantly, these structured modulations of response variability, both between and within trials, had a rich sequential organization, which, in infants, was modulated by task difficulty and age. Our approaches to characterizing event-related variability (ERV) expand on classic ERP analyses and provide the first evidence for the functional role of ongoing neural variability in human infants.
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
- Parvaneh Adibpour
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France
- Jessica Dubois
- Cognitive Neuroimaging Unit U992, NeuroSpin Center, F-91190 Gif/Yvette, France; Université de Paris, NeuroDiderot, Inserm, F-75019 Paris, France
- Demian Battaglia
- Institute for System Neuroscience U1106, Aix-Marseille Université, F-13005 Marseille, France; University of Strasbourg Institute for Advanced Studies (USIAS), F-67000 Strasbourg, France.
17
Haruna J, Toshio R, Nakano N. Path integral approach to universal dynamics of reservoir computers. Phys Rev E 2023; 107:034306. [PMID: 37073052 DOI: 10.1103/physreve.107.034306]
Abstract
In this work, we characterize the reservoir computer (RC) by its network structure, especially the probability distribution of the random coupling constants. First, based on the path integral method, we clarify the universal behavior of random network dynamics in the thermodynamic limit, which depends only on the asymptotic behavior of the second cumulant generating functions of the network coupling constants. This result enables us to classify random networks into several universality classes according to the distribution function of the coupling constants chosen for the networks. Interestingly, such a classification has a close relationship with the distribution of eigenvalues of the random coupling matrix. We also comment on the relation between our theory and some practical choices of random connectivity in the RC. Subsequently, we investigate the relationship between the RC's computational power and the network parameters for several universality classes. We perform several numerical simulations to evaluate the phase diagrams of the steady reservoir states, common-signal-induced synchronization, and the computational power in chaotic time-series inference tasks. As a result, we clarify the close relationship between these quantities, in particular a remarkable computational performance near phase transitions, which is realized even near a nonchaotic transition boundary. These results may provide a new perspective on design principles for the RC.
Affiliation(s)
- Junichi Haruna
- Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Riki Toshio
- Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Naoto Nakano
- Graduate School of Advanced Mathematical Sciences, Meiji University, Tokyo 164-8525, Japan
18
Mosheiff N, Ermentrout B, Huang C. Chaotic dynamics in spatially distributed neuronal networks generate population-wide shared variability. PLoS Comput Biol 2023; 19:e1010843. [PMID: 36626362 PMCID: PMC9870123 DOI: 10.1371/journal.pcbi.1010843]
Abstract
Neural activity in the cortex is highly variable in response to repeated stimuli. Population recordings across the cortex demonstrate that the variability of neuronal responses is shared among large groups of neurons and concentrates in a low dimensional space. However, the source of the population-wide shared variability is unknown. In this work, we analyzed the dynamical regimes of spatially distributed networks of excitatory and inhibitory neurons. We found chaotic spatiotemporal dynamics in networks with similar excitatory and inhibitory projection widths, an anatomical feature of the cortex. The chaotic solutions contain broadband frequency power in rate variability and have distance-dependent and low-dimensional correlations, in agreement with experimental findings. In addition, rate chaos can be induced by globally correlated noisy inputs. These results suggest that spatiotemporal chaos in cortical networks can explain the shared variability observed in neuronal population responses.
Affiliation(s)
- Noga Mosheiff
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Chengcheng Huang
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
19
Engelken R, Ingrosso A, Khajeh R, Goedeke S, Abbott LF. Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol 2022; 18:e1010590. [PMID: 36469504 PMCID: PMC9754616 DOI: 10.1371/journal.pcbi.1010590]
Abstract
Neural circuits exhibit complex activity patterns, both spontaneously and evoked by external stimuli. Information encoding and learning in neural circuits depend on how well time-varying stimuli can control spontaneous network activity. We show that in firing-rate networks in the balanced state, external control of recurrent dynamics, i.e., the suppression of internally-generated chaotic variability, strongly depends on correlations in the input. A distinctive feature of balanced networks is that, because common external input is dynamically canceled by recurrent feedback, it is far more difficult to suppress chaos with common input into each neuron than through independent input. To study this phenomenon, we develop a non-stationary dynamic mean-field theory for driven networks. The theory explains how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size, for both common and independent input. We further show that uncorrelated inputs facilitate learning in balanced networks.
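The common- versus independent-input contrast can be sketched directly: drive the same chaotic rate network with a sinusoid delivered either identically to every neuron or with an independent phase per neuron, and test whether two runs from different initial conditions converge (chaos suppressed) or stay apart. Amplitudes and frequencies are illustrative, and this plain rate network omits the balanced-state structure central to the paper.

```python
import numpy as np

# Compare how well common vs. independent sinusoidal input entrains a chaotic
# rate network: if the input controls the dynamics, trajectories started from
# different initial conditions converge to the same driven solution.
rng = np.random.default_rng(7)
N, g, T, dt, amp, freq = 300, 1.8, 3000, 0.1, 2.0, 0.05
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
phases_independent = rng.uniform(0.0, 2.0 * np.pi, N)

def run(phases, x0):
    x = x0.copy()
    for t in range(T):
        inp = amp * np.sin(freq * t * dt + phases)
        x += dt * (-x + J @ np.tanh(x) + inp)
    return x

for name, phases in [("common", np.zeros(N)), ("independent", phases_independent)]:
    xa = run(phases, rng.normal(0.0, 1.0, N))
    xb = run(phases, rng.normal(0.0, 1.0, N))
    print(f"{name} input: final-state distance = {np.linalg.norm(xa - xb):.2f}")
```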
Affiliation(s)
- Rainer Engelken
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Alessandro Ingrosso
- The Abdus Salam International Centre for Theoretical Physics, Trieste, Italy
- Ramin Khajeh
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- L. F. Abbott
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
20
Wardak A, Gong P. Extended Anderson Criticality in Heavy-Tailed Neural Networks. Phys Rev Lett 2022; 129:048103. [PMID: 35939004 DOI: 10.1103/physrevlett.129.048103]
Abstract
We investigate the emergence of complex dynamics in networks with heavy-tailed connectivity by developing a non-Hermitian random matrix theory. We uncover the existence of an extended critical regime of spatially multifractal fluctuations between the quiescent and active phases. This multifractal critical phase combines features of localization and delocalization and differs from the edge of chaos in classical networks by the appearance of universal hallmarks of Anderson criticality over an extended region in phase space. We show that the rich nonlinear response properties of the extended critical regime can account for a variety of neural dynamics such as the diversity of timescales, providing a computational advantage for persistent classification in a reservoir setting.
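The spectral contrast behind heavy-tailed networks is easy to visualize numerically: i.i.d. Gaussian couplings give eigenvalues filling a disk (the circular law), whereas heavy-tailed couplings produce a dense core plus far-flung outliers. The Cauchy entries and scalings below are illustrative choices, not the paper's exact ensemble.

```python
import numpy as np

# Eigenvalue statistics of Gaussian vs. heavy-tailed (Cauchy) random coupling
# matrices; the heavy-tailed spectrum shows extreme outliers beyond the disk.
rng = np.random.default_rng(8)
N, g = 500, 1.0

J_gauss = rng.normal(0.0, g / np.sqrt(N), (N, N))   # circular law, radius ~ g
J_heavy = rng.standard_cauchy((N, N)) / N           # heavy-tailed entries

for name, J in [("gaussian", J_gauss), ("heavy-tailed", J_heavy)]:
    ev = np.linalg.eigvals(J)
    print(f"{name}: median |eig| = {np.median(np.abs(ev)):.2f}, "
          f"max |eig| = {np.abs(ev).max():.2f}")
```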
Affiliation(s)
- Asem Wardak
- School of Physics, University of Sydney, New South Wales 2006, Australia
- Pulin Gong
- School of Physics, University of Sydney, New South Wales 2006, Australia
21
Stimulus presentation can enhance spiking irregularity across subcortical and cortical regions. PLoS Comput Biol 2022; 18:e1010256. [PMID: 35789328 PMCID: PMC9286274 DOI: 10.1371/journal.pcbi.1010256]
Abstract
Stimulus presentation is believed to quench neural response variability as measured by the Fano factor (FF). However, the relative contributions of within-trial spike irregularity and trial-to-trial rate variability to FF fluctuations have remained elusive. Here, we introduce a principled approach for accurate estimation of spiking irregularity and rate variability in time for doubly stochastic point processes. Consistent with previous evidence, our analysis showed a stimulus-induced reduction in rate variability across multiple cortical and subcortical areas. However, unlike what was previously thought, spiking irregularity was not constant in time but could be enhanced by factors such as bursting, abating the quench in the post-stimulus FF. Simulations confirmed the plausibility of a time-varying spiking irregularity arising from within- and between-pool correlations of excitatory and inhibitory neural inputs. By accurately parsing neural variability, our approach reveals previously unnoticed changes in neural response variability and constrains candidate mechanisms that give rise to the observed rate variability and spiking irregularity within brain regions.

Mounting evidence suggests that neural response variability is important for understanding and constraining the underlying neural mechanisms in a given brain area. Here, by analyzing responses across multiple brain areas and by using a principled method for parsing variability components into rate variability and spiking irregularity, we show that, unlike what was previously thought, the event-related quench of variability is not a brain-wide phenomenon, and that point-process variability and nonrenewal bursting can enhance post-stimulus spiking irregularity across certain cortical and subcortical regions. We propose possible presynaptic mechanisms that may underlie the observed heterogeneities in spiking variability across the brain.
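The two variability components distinguished above can be separated on synthetic data: the Fano factor captures trial-to-trial count variability, while the coefficient of variation (CV) of inter-spike intervals captures within-trial spiking irregularity. The Poisson spike trains with a trial-varying rate below are an illustrative stand-in, not the paper's estimator for doubly stochastic point processes.

```python
import numpy as np

# Fano factor (count variability across trials) vs. ISI CV (within-trial
# irregularity) for Bernoulli-approximated Poisson spiking with a rate that
# varies from trial to trial. All parameters are illustrative.
rng = np.random.default_rng(9)
n_trials, T, dt = 200, 1.0, 0.001
rates = rng.gamma(shape=4.0, scale=5.0, size=n_trials)   # per-trial rate (Hz)

counts, isis = [], []
for rate in rates:
    spikes = rng.random(int(T / dt)) < rate * dt
    times = np.flatnonzero(spikes) * dt
    counts.append(spikes.sum())
    if len(times) > 1:
        isis.extend(np.diff(times))

counts, isis = np.array(counts), np.array(isis)
print(f"Fano factor = {counts.var() / counts.mean():.2f}")   # >1: rate varies by trial
print(f"ISI CV      = {isis.std() / isis.mean():.2f}")       # ~1: Poisson-like irregularity
```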
22
Peng X, Lin W. Complex Dynamics of Noise-Perturbed Excitatory-Inhibitory Neural Networks With Intra-Correlative and Inter-Independent Connections. Front Physiol 2022; 13:915511. [PMID: 35812336 PMCID: PMC9263264 DOI: 10.3389/fphys.2022.915511]
Abstract
Real neural systems usually contain two types of neurons, i.e., excitatory neurons and inhibitory ones. Analytical and numerical interpretation of the dynamics induced by different types of interactions among neurons of the two types is beneficial to understanding the physiological functions of the brain. Here, we articulate a model of noise-perturbed random neural networks containing both excitatory and inhibitory (E&I) populations. In particular, both intra-correlatively and inter-independently connected neurons in the two populations are taken into account, in contrast to most existing E&I models, which consider only independently connected neurons. By employing the typical mean-field theory, we obtain an equivalent two-dimensional system with an input of a stationary Gaussian process. Investigating the stationary autocorrelation functions of the obtained system, we analytically find the parameter conditions under which synchronized behaviors between the two populations emerge. Taking the maximal Lyapunov exponent as an index, we also find different critical values of the coupling-strength coefficients for chaotic excitatory neurons and for chaotic inhibitory ones. Interestingly, we reveal that noise is able to suppress the chaotic dynamics of random neural networks with neurons in two populations, while an appropriate amount of correlation in the intra-coupling strengths can enhance the occurrence of chaos. Finally, we also detect a previously reported phenomenon in which a parameter region corresponds to neither linearly stable nor chaotic dynamics; however, the size of this region depends crucially on the populations' parameters.
Affiliation(s)
- Xiaoxiao Peng
- Shanghai Center for Mathematical Sciences, School of Mathematical Sciences, and LMNS, Fudan University, Shanghai, China
- Research Institute of Intelligent Complex Systems and Center for Computational Systems Biology, Fudan University, Shanghai, China
- Wei Lin
- Shanghai Center for Mathematical Sciences, School of Mathematical Sciences, and LMNS, Fudan University, Shanghai, China
- Research Institute of Intelligent Complex Systems and Center for Computational Systems Biology, Fudan University, Shanghai, China
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, and Institutes of Brain Science, Fudan University, Shanghai, China
23
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00498-0]
24
Optimizing the Neural Structure and Hyperparameters of Liquid State Machines Based on Evolutionary Membrane Algorithm. Mathematics 2022. [DOI: 10.3390/math10111844]
Abstract
As one of the important artificial intelligence fields, brain-like computing attempts to give machines a higher level of intelligence by studying and simulating the cognitive principles of the human brain. A spiking neural network (SNN) is one of the research directions of brain-like computing, characterized by greater biological plausibility and stronger computing power than traditional neural networks. A liquid state machine (LSM) is a neural computing model with a recurrent network structure based on SNNs. In this paper, a learning algorithm based on an evolutionary membrane algorithm is proposed to optimize the neural structure and hyperparameters of an LSM. First, the objective of the proposed algorithm is designed according to the neural structure and hyperparameters of the LSM. Second, the reaction rules of the proposed algorithm are employed to discover the best neural structure and hyperparameters of the LSM. Third, the membrane structure, in which the skin membrane contains several elementary membranes, speeds up the search of the proposed algorithm. In simulation experiments, effectiveness is verified on the MNIST and KTH datasets. On MNIST, the best test results of the proposed algorithm with 500, 1000 and 2000 spiking neurons are 86.8%, 90.6% and 90.8%, respectively. On KTH, the best test results with 500, 1000 and 2000 spiking neurons are 82.9%, 85.3% and 86.3%, respectively. The simulation results show that the proposed algorithm is more competitive than the other algorithms tested.
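The evolutionary loop described above (score a population of candidates, keep the fittest, mutate to produce offspring) can be sketched in a few lines; the quadratic fitness function below is a placeholder for the expensive LSM train-and-test step, and the membrane-structure bookkeeping of the actual algorithm is omitted.

```python
import numpy as np

# Generic evolutionary search over a hyperparameter vector. The fitness here
# is a made-up quadratic stand-in for LSM test accuracy.
rng = np.random.default_rng(10)
pop_size, n_gen, n_params = 20, 50, 3
target = np.array([0.8, 0.2, 1.5])            # hypothetical optimum

def fitness(theta):
    return -np.sum((theta - target) ** 2)     # placeholder for LSM accuracy

pop = rng.uniform(0.0, 2.0, (pop_size, n_params))
for gen in range(n_gen):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep the top half
    children = parents + rng.normal(0.0, 0.1, parents.shape) # mutate offspring
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(t) for t in pop])]
print(f"best hyperparameters found: {np.round(best, 2)}")
```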
Collapse
|
25
|
Metzner C, Krauss P. Dynamics and Information Import in Recurrent Neural Networks. Front Comput Neurosci 2022; 16:876315. [PMID: 35573264 PMCID: PMC9091337 DOI: 10.3389/fncom.2022.876315] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 04/04/2022] [Indexed: 12/27/2022] Open
Abstract
Recurrent neural networks (RNNs) are complex dynamical systems, capable of ongoing activity without any driving input. The long-term behavior of free-running RNNs, described by periodic, chaotic and fixed point attractors, is controlled by the statistics of the neural connection weights, such as the density d of non-zero connections, or the balance b between excitatory and inhibitory connections. However, for information processing purposes, RNNs need to receive external input signals, and it is not clear which of the dynamical regimes is optimal for this information import. We use both the average correlations C and the mutual information I between the momentary input vector and the next system state vector as quantitative measures of information import and analyze their dependence on the balance and density of the network. Remarkably, both resulting phase diagrams C(b, d) and I(b, d) are highly consistent, pointing to a link between the dynamical systems and the information-processing approach to complex systems. Information import is maximal not at the "edge of chaos," which is optimally suited for computation, but surprisingly in the low-density chaotic regime and at the border between the chaotic and fixed point regime. Moreover, we find a completely new type of resonance phenomenon, which we call "Import Resonance" (IR), where the information import shows a maximum, i.e., a peak-like dependence on the coupling strength between the RNN and its external input. IR complements previously found Recurrence Resonance (RR), where correlation and mutual information of successive system states peak for a certain amplitude of noise added to the system. Both IR and RR can be exploited to optimize information processing in artificial neural networks and might also play a crucial role in biological neural systems.
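A minimal sketch of the correlation measure C described above: a sparse random RNN parameterized by density d and an excitation-inhibition balance b, with the per-step correlation between the momentary input vector and the next state averaged over time. The weight parameterization and the sweep over the input coupling (probing import resonance) are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def import_correlation(density=0.1, balance=0.0, w=2.0, coupling=1.0,
                       N=100, T=5000, seed=0):
    """Average correlation between the momentary input vector and the
    next network state for a sparse random RNN; `balance` in [-1, 1]
    shifts the fraction of excitatory vs. inhibitory connections."""
    rng = np.random.default_rng(seed)
    mask = rng.random((N, N)) < density
    signs = np.where(rng.random((N, N)) < (1 + balance) / 2, 1.0, -1.0)
    W = w * mask * signs * rng.random((N, N))
    x = rng.standard_normal(N)
    cors = []
    for _ in range(T):
        u = rng.standard_normal(N)          # momentary input vector
        x_next = np.tanh(W @ x + coupling * u)
        cors.append(np.corrcoef(u, x_next)[0, 1])
        x = x_next
    return np.mean(cors)

# import resonance: sweep the input coupling strength
for cpl in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(cpl, import_correlation(coupling=cpl))
```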
Collapse
Affiliation(s)
- Claus Metzner
- Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
| | - Patrick Krauss
- Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
| |
Collapse
|
26
|
Darshan R, Rivkind A. Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. [PMID: 35385721 DOI: 10.1016/j.celrep.2022.110612] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 02/08/2022] [Accepted: 03/11/2022] [Indexed: 12/13/2022] Open
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks-which enable a continuum of persistent neuronal states-provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
Collapse
Affiliation(s)
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
| | | |
Collapse
|
27
|
Encoding time in neural dynamic regimes with distinct computational tradeoffs. PLoS Comput Biol 2022; 18:e1009271. [PMID: 35239644 PMCID: PMC8893702 DOI: 10.1371/journal.pcbi.1009271] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 02/08/2022] [Indexed: 11/19/2022] Open
Abstract
Converging evidence suggests the brain encodes time in dynamic patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Most temporal tasks, however, require more than just encoding time, and can have distinct computational requirements, including the need to exhibit temporal scaling, generalize to novel contexts, or remain robust to noise. It is not known how neural circuits can encode time and satisfy distinct computational requirements, nor is it known whether similar patterns of neural activity at the population level can exhibit dramatically different computational or generalization properties. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework, we quantified whether RNNs encoded two intervals using either of three different timing strategies: scaling, absolute, or stimulus-specific dynamics. We found that similar neural dynamic patterns at the level of single intervals could exhibit fundamentally different properties, including generalization, the connectivity structure of the trained networks, and the contribution of excitatory and inhibitory neurons. Critically, depending on the task structure, RNNs were better suited for generalization or robustness to noise. Further analysis revealed different connection patterns underlying the different regimes. Our results predict that apparently similar neural dynamic patterns at the population level (e.g., neural sequences) can exhibit fundamentally different computational properties with regard to their ability to generalize to novel stimuli and their robustness to noise—and that these differences are associated with differences in network connectivity and distinct contributions of excitatory and inhibitory neurons. We also predict that the task structure used in different experimental studies accounts for some of the experimentally observed variability in how networks encode time.

The ability to tell time and anticipate when external events will occur are among the most fundamental computations the brain performs. Converging evidence suggests the brain encodes time through changing patterns of neural activity. Different temporal tasks, however, have distinct computational requirements, such as the need to flexibly scale temporal patterns or generalize to novel inputs. To understand how networks can encode time and satisfy different computational requirements, we trained recurrent neural networks (RNNs) on two timing tasks that have previously been used in behavioral studies. Both tasks required producing identically timed output patterns. Using a novel framework to quantify how networks encode different intervals, we found that similar patterns of neural activity—neural sequences—were associated with fundamentally different underlying mechanisms, including the connectivity patterns of the RNNs. Critically, depending on the task the RNNs were trained on, they were better suited for generalization or robustness to noise. Our results predict that similar patterns of neural activity can be produced by distinct RNN configurations, which in turn have fundamentally different computational tradeoffs. Our results also predict that differences in task structure account for some of the experimentally observed variability in how networks encode time.
Collapse
|
28
|
Toker D, Pappas I, Lendner JD, Frohlich J, Mateos DM, Muthukumaraswamy S, Carhart-Harris R, Paff M, Vespa PM, Monti MM, Sommer FT, Knight RT, D'Esposito M. Consciousness is supported by near-critical slow cortical electrodynamics. Proc Natl Acad Sci U S A 2022; 119:e2024455119. [PMID: 35145021 PMCID: PMC8851554 DOI: 10.1073/pnas.2024455119] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 12/20/2021] [Indexed: 12/21/2022] Open
Abstract
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
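The Gottwald-Melbourne 0-1 test underlying this analysis can be sketched as follows; this is the basic test with the mean-drift correction, not the full noise-robust modified variant used in the paper.

```python
import numpy as np

def zero_one_chaos_test(phi, n_c=100, seed=0):
    """Gottwald-Melbourne 0-1 test for chaos on a scalar time series.
    Returns K in [0, 1]: K near 1 suggests chaos, K near 0 regular
    dynamics. Median is taken over random angles c to avoid resonances."""
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, float)
    N = len(phi)
    j = np.arange(1, N + 1)
    n = np.arange(1, N // 10 + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[k:] - p[:-k]) ** 2 +
                              (q[k:] - q[:-k]) ** 2) for k in n])
        # correction term removes the oscillatory mean drift
        D = M - phi.mean() ** 2 * (1 - np.cos(n * c)) / (1 - np.cos(c))
        Ks.append(np.corrcoef(n, D)[0, 1])
    return np.median(Ks)

# sanity checks: chaotic logistic map vs. a regular sinusoid
x, xs = 0.3, []
for _ in range(3000):
    x = 3.97 * x * (1 - x)
    xs.append(x)
print(zero_one_chaos_test(xs))                              # ~1 (chaotic)
print(zero_one_chaos_test(np.sin(0.1 * np.arange(3000))))   # ~0 (regular)
```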
Collapse
Affiliation(s)
- Daniel Toker
- Department of Psychology, University of California, Los Angeles, CA 90095;
| | - Ioannis Pappas
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704
- Department of Psychology, University of California, Berkeley, CA 94704
- Laboratory of Neuro Imaging, Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
| | - Janna D Lendner
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704
- Department of Anesthesiology and Intensive Care, University Medical Center, 72076 Tübingen, Germany
| | - Joel Frohlich
- Department of Psychology, University of California, Los Angeles, CA 90095
| | - Diego M Mateos
- Consejo Nacional de Investigaciones Científicas y Técnicas de Argentina, C1425 Buenos Aires, Argentina
- Facultad de Ciencia y Tecnología, Universidad Autónoma de Entre Ríos, E3202 Paraná, Entre Ríos, Argentina
- Grupo de Análisis de Neuroimágenes, Instituto de Matemática Aplicada del Litoral, S3000 Santa Fe, Argentina
| | - Suresh Muthukumaraswamy
- School of Pharmacy, Faculty of Medical and Health Sciences, The University of Auckland, 1010 Auckland, New Zealand
| | - Robin Carhart-Harris
- Neuropsychopharmacology Unit, Centre for Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom
- Centre for Psychedelic Research, Department of Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom
| | - Michelle Paff
- Department of Neurological Surgery, University of California, Irvine, CA 92697
| | - Paul M Vespa
- Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
| | - Martin M Monti
- Department of Psychology, University of California, Los Angeles, CA 90095
- Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
| | - Friedrich T Sommer
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94704
| | - Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704
- Department of Psychology, University of California, Berkeley, CA 94704
| | - Mark D'Esposito
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704
- Department of Psychology, University of California, Berkeley, CA 94704
| |
Collapse
|
29
|
Liang J, Zhou C. Criticality enhances the multilevel reliability of stimulus responses in cortical neural networks. PLoS Comput Biol 2022; 18:e1009848. [PMID: 35100254 PMCID: PMC8830719 DOI: 10.1371/journal.pcbi.1009848] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 02/10/2022] [Accepted: 01/18/2022] [Indexed: 11/18/2022] Open
Abstract
Cortical neural networks exhibit high internal variability in spontaneous dynamic activities and they can robustly and reliably respond to external stimuli with multilevel features–from microscopic irregular spiking of neurons to macroscopic oscillatory local field potential. A comprehensive study integrating these multilevel features in spontaneous and stimulus–evoked dynamics with seemingly distinct mechanisms is still lacking. Here, we study the stimulus–response dynamics of biologically plausible excitation–inhibition (E–I) balanced networks. We confirm that networks around critical synchronous transition states can maintain strong internal variability but are sensitive to external stimuli. In this dynamical region, applying a stimulus to the network can reduce the trial-to-trial variability and shift the network oscillatory frequency while preserving the dynamical criticality. These multilevel features widely observed in different experiments cannot simultaneously occur in non-critical dynamical states. Furthermore, the dynamical mechanisms underlying these multilevel features are revealed using a semi-analytical mean-field theory that derives the macroscopic network field equations from the microscopic neuronal networks, enabling the analysis by nonlinear dynamics theory and linear noise approximation. The generic dynamical principle revealed here contributes to a more integrative understanding of neural systems and brain functions and incorporates multimodal and multilevel experimental observations. The E–I balanced neural network in combination with the effective mean-field theory can serve as a mechanistic modeling framework to study the multilevel neural dynamics underlying neural information and cognitive processes.

The complexity and variability of brain dynamical activity range from neuronal spiking and neural avalanches to oscillatory local field potentials of local neural circuits in both spontaneous and stimulus-evoked states. Such multilevel variable brain dynamics are functionally and behaviorally relevant and are principal components of the underlying circuit organization. To more comprehensively clarify their neural mechanisms, we use a bottom-up approach to study the stimulus–response dynamics of neural circuits. Our model assumes the following key biologically plausible components: excitation–inhibition (E–I) neuronal interaction and chemical synaptic coupling. We show that the circuits with E–I balance have a special dynamic sub-region, the critical region. Circuits around this region could account for the emergence of multilevel brain response patterns, both ongoing and stimulus-induced, observed in different experiments, including the reduction of trial-to-trial variability, effective modulation of gamma frequency, and preservation of criticality in the presence of a stimulus. We further analyze the corresponding nonlinear dynamical principles using a novel and highly generalizable semi-analytical mean-field theory. Our computational and theoretical studies explain the cross-level brain dynamical organization of spontaneous and evoked states in a more integrative manner.
Collapse
Affiliation(s)
- Junhao Liang
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Centre for Integrative Neuroscience, Eberhard Karls University of Tübingen, Tübingen, Germany
- Department for Sensory and Sensorimotor Systems, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong SAR, China
- Department of Physics, Zhejiang University, Hangzhou, China
| |
Collapse
|
30
|
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. PHYSICAL REVIEW. X 2022; 12:011011. [PMID: 36545030 PMCID: PMC9762509 DOI: 10.1103/physrevx.12.011011] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices to ML practitioners.
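A minimal sketch of a gated continuous-time RNN in which a multiplicative update gate rescales the velocity field, so driving the gate toward zero lengthens the effective timescale toward an integrator. The notation and parameter choices are illustrative, not the paper's.

```python
import numpy as np

def gated_rnn_step(x, W_r, W_z, b_z, dt=0.1, g=1.5):
    """Euler step of a gated rate RNN: the update gate z in (0, 1)
    multiplies the whole velocity field, so z -> 0 makes units behave
    as near-perfect integrators (long effective timescales)."""
    r = np.tanh(x)
    z = 1.0 / (1.0 + np.exp(-(W_z @ r + b_z)))   # update (timescale) gate
    return x + dt * z * (-x + g * W_r @ r)

rng = np.random.default_rng(1)
N = 100
W_r = rng.standard_normal((N, N)) / np.sqrt(N)
W_z = rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.standard_normal(N)
for _ in range(200):
    x = gated_rnn_step(x, W_r, W_z, b_z=-2.0)    # negative bias: slow mode
```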
Collapse
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
| | - Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
| | - David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
| |
Collapse
|
31
|
Urai AE, Doiron B, Leifer AM, Churchland AK. Large-scale neural recordings call for new insights to link brain and behavior. Nat Neurosci 2022; 25:11-19. [PMID: 34980926 DOI: 10.1038/s41593-021-00980-9] [Citation(s) in RCA: 102] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 11/08/2021] [Indexed: 12/17/2022]
Abstract
Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.
Collapse
Affiliation(s)
- Anne E Urai
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
| | | | | | - Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
- University of California Los Angeles, Los Angeles, CA, USA.
| |
Collapse
|
32
|
Tavakoli SK, Longtin A. Complexity Collapse, Fluctuating Synchrony, and Transient Chaos in Neural Networks With Delay Clusters. Front Syst Neurosci 2021; 15:720744. [PMID: 34867219 PMCID: PMC8639886 DOI: 10.3389/fnsys.2021.720744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 09/20/2021] [Indexed: 11/28/2022] Open
Abstract
Neural circuits operate with delays over a range of time scales, from a few milliseconds in recurrent local circuitry to tens of milliseconds or more for communication between populations. Modeling usually incorporates single fixed delays, meant to represent the mean conduction delay between neurons making up the circuit. We explore conditions under which the inclusion of more delays in a high-dimensional chaotic neural network leads to a reduction in dynamical complexity, a phenomenon recently described as multi-delay complexity collapse (CC) in delay-differential equations with one to three variables. We consider a recurrent local network of 80% excitatory and 20% inhibitory rate model neurons with 10% connection probability. An increase in the width of the distribution of local delays, even to unrealistically large values, does not cause CC, nor does adding more local delays. Interestingly, multiple small local delays can cause CC provided there is a moderate global delayed inhibitory feedback and random initial conditions. CC then occurs through the settling of transient chaos onto a limit cycle. In this regime, there is a form of noise-induced order in which the mean activity variance decreases as the noise increases and disrupts the synchrony. Another novel form of CC is seen where global delayed feedback causes “dropouts,” i.e., epochs of low firing rate network synchrony. Their alternation with epochs of higher firing rate asynchrony closely follows Poisson statistics. Such dropouts are promoted by larger global feedback strength and delay. Finally, periodic driving of the chaotic regime with global feedback can cause CC; the extinction of chaos can outlast the forcing, sometimes permanently. Our results suggest a wealth of phenomena that remain to be discovered in networks with clusters of delays.
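A sketch of the simulation setup described above, assuming Euler integration with a ring buffer for the history: a random rate network with one local synaptic delay plus a global delayed inhibitory feedback term. Delay values and feedback strength are illustrative.

```python
import numpy as np

def delayed_network(N=100, g=2.0, local_delay=5.0, global_delay=40.0,
                    k_fb=1.0, T=4000, dt=0.1, seed=0):
    """Rate network with a local synaptic delay and a global delayed
    inhibitory feedback loop; returns the mean-activity time course."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)
    d_loc = int(local_delay / dt)
    d_glb = int(global_delay / dt)
    buf = max(d_loc, d_glb) + 1
    X = np.zeros((buf, N))                       # history ring buffer
    X[-1] = rng.standard_normal(N)
    out = []
    for _ in range(T):
        x = X[-1]
        x_loc = X[-1 - d_loc]                    # locally delayed state
        m_glb = np.tanh(X[-1 - d_glb]).mean()    # delayed mean activity
        dx = -x + J @ np.tanh(x_loc) - k_fb * m_glb
        X = np.roll(X, -1, axis=0)
        X[-1] = x + dt * dx
        out.append(X[-1].mean())
    return np.array(out)

trace = delayed_network()                        # inspect for dropouts/CC
```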
Collapse
Affiliation(s)
- S Kamyar Tavakoli
- Department of Physics and Centre for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
| | - André Longtin
- Department of Physics and Centre for Neural Dynamics, University of Ottawa, Ottawa, ON, Canada
| |
Collapse
|
33
|
Li Y, Kim R, Sejnowski TJ. Learning the Synaptic and Intrinsic Membrane Dynamics Underlying Working Memory in Spiking Neural Network Models. Neural Comput 2021; 33:3264-3287. [PMID: 34710902 PMCID: PMC8662709 DOI: 10.1162/neco_a_01409] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 03/15/2021] [Indexed: 12/12/2022]
Abstract
Recurrent neural network (RNN) models trained to perform cognitive tasks are a useful computational tool for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals and overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties are important for encoding stimuli, and slow synaptic dynamics are needed for WM maintenance. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
Collapse
Affiliation(s)
- Yinghao Li
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
| | - Robert Kim
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Neurosciences Graduate Program and Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093, U.S.A.
| | - Terrence J Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Institute for Neural Computation and Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093, U.S.A.
| |
Collapse
|
34
|
Curado EMF, Melgar NB, Nobre FD. External Stimuli on Neural Networks: Analytical and Numerical Approaches. ENTROPY 2021; 23:e23081034. [PMID: 34441174 PMCID: PMC8393424 DOI: 10.3390/e23081034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 08/03/2021] [Accepted: 08/05/2021] [Indexed: 11/26/2022]
Abstract
Based on the behavior of living beings, which react mostly to external stimuli, we introduce a neural-network model that uses external patterns as a fundamental tool for the process of recognition. In this proposal, external stimuli appear as an additional field, and basins of attraction, representing memories, arise in accordance with this new field. This is in contrast to the more common attractor neural networks, where memories are attractors inside well-defined basins of attraction. We show that this procedure considerably increases the storage capabilities of the neural network; this property is illustrated by the standard Hopfield model, which reveals that the recognition capacity of our model may typically be enlarged by a factor of 10². The primary challenge here consists in calibrating the influence of the external stimulus, in order to attenuate the noise generated by memories that are not correlated with the external pattern. The system is analyzed primarily through numerical simulations. However, since there is the possibility of performing analytical calculations for the Hopfield model, the agreement between these two approaches can be tested—matching results are indicated in some cases. We also show that the present proposal exhibits a crucial attribute of living beings, which concerns their ability to react promptly to changes in the external environment. Additionally, we illustrate that this new approach may significantly enlarge the recognition capacity of neural networks in various situations; with correlated and non-correlated memories, as well as diluted, symmetric, or asymmetric interactions (synapses). This demonstrates that it can be implemented easily on a wide diversity of models.
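A minimal sketch of the stimulus-as-field idea, assuming standard Hebbian weights and asynchronous sign updates: the external pattern enters as an additive field h_ext * xi rather than only as an initial condition. The calibration of h_ext against crosstalk from uncorrelated memories, the paper's central concern, is left as a free parameter here.

```python
import numpy as np

def hopfield_with_stimulus(patterns, probe, h_ext=0.5, sweeps=20, seed=0):
    """Asynchronous Hopfield dynamics with an external stimulus field:
    s_i <- sign(sum_j W_ij s_j + h_ext * xi_i), where xi is the
    external pattern (here taken to match memory 0)."""
    rng = np.random.default_rng(seed)
    P, N = patterns.shape
    W = patterns.T @ patterns / N          # Hebbian weights
    np.fill_diagonal(W, 0.0)
    xi = patterns[0]                        # stimulus correlated with memory 0
    s = probe.astype(float).copy()
    for _ in range(sweeps * N):
        i = rng.integers(N)
        h = W[i] @ s + h_ext * xi[i]
        s[i] = 1.0 if h >= 0 else -1.0
    return s

rng = np.random.default_rng(2)
pats = np.sign(rng.standard_normal((20, 200)))             # 20 binary memories
probe = pats[0] * np.where(rng.random(200) < 0.15, -1, 1)  # ~15% bit flips
s = hopfield_with_stimulus(pats, probe)
print("overlap with stimulated memory:", (s * pats[0]).mean())
```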
Collapse
|
35
|
Wagner MJ, Savall J, Hernandez O, Mel G, Inan H, Rumyantsev O, Lecoq J, Kim TH, Li JZ, Ramakrishnan C, Deisseroth K, Luo L, Ganguli S, Schnitzer MJ. A neural circuit state change underlying skilled movements. Cell 2021; 184:3731-3747.e21. [PMID: 34214470 PMCID: PMC8844704 DOI: 10.1016/j.cell.2021.06.001] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Revised: 05/09/2021] [Accepted: 06/01/2021] [Indexed: 11/21/2022]
Abstract
In motor neuroscience, state changes are hypothesized to time-lock neural assemblies coordinating complex movements, but evidence for this remains slender. We tested whether a discrete change from more autonomous to coherent spiking underlies skilled movement by imaging cerebellar Purkinje neuron complex spikes in mice making targeted forelimb-reaches. As mice learned the task, millimeter-scale spatiotemporally coherent spiking emerged ipsilateral to the reaching forelimb, and consistent neural synchronization became predictive of kinematic stereotypy. Before reach onset, spiking switched from more disordered to internally time-locked concerted spiking and silence. Optogenetic manipulations of cerebellar feedback to the inferior olive bi-directionally modulated neural synchronization and reaching direction. A simple model explained the reorganization of spiking during reaching as reflecting a discrete bifurcation in olivary network dynamics. These findings argue that to prepare learned movements, olivo-cerebellar circuits enter a self-regulated, synchronized state promoting motor coordination. State changes facilitating behavioral transitions may generalize across neural systems.
Collapse
Affiliation(s)
- Mark J Wagner
- Neurosciences Program, Stanford University, Stanford, CA 94305, USA; Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA.
| | - Joan Savall
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA
| | | | - Gabriel Mel
- Neurosciences Program, Stanford University, Stanford, CA 94305, USA
| | - Hakan Inan
- CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
| | - Oleg Rumyantsev
- CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Jérôme Lecoq
- CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA
| | - Tony Hyun Kim
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
| | - Jin Zhong Li
- CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA
| | - Charu Ramakrishnan
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Karl Deisseroth
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
| | - Liqun Luo
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA
| | - Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Mark J Schnitzer
- Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; CNC Program, Stanford University, Stanford, CA 94305, USA; Department of Biology, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA.
| |
Collapse
|
36
|
Tang AM, Chen KH, Del Campo-Vera RM, Sebastian R, Gogia AS, Nune G, Liu CY, Kellis S, Lee B. Hippocampal and Orbitofrontal Theta Band Coherence Diminishes During Conflict Resolution. World Neurosurg 2021; 152:e32-e44. [PMID: 33872837 DOI: 10.1016/j.wneu.2021.04.023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 04/06/2021] [Accepted: 04/06/2021] [Indexed: 11/28/2022]
Abstract
OBJECTIVE Coherence between the hippocampus and other brain structures has been shown with the theta frequency (3-8 Hz). Cortical decreases in theta coherence are believed to reflect response accuracy efficiency. However, the role of theta coherence during conflict resolution is poorly understood in noncortical areas. In this study, coherence between the hippocampus and orbitofrontal cortex (OFC) was measured during a conflict resolution task. Although both brain areas have been previously implicated in the Stroop task, their interactions are not well understood. METHODS Nine patients were implanted with stereotactic electroencephalography contacts in the hippocampus and OFC. Local field potential data were sampled throughout discrete phases of a Stroop task. Coherence was calculated for hippocampal and OFC contact pairs, and coherence spectrograms were constructed for congruent and incongruent conditions. Coherence changes during cue processing were identified using a nonparametric cluster-permutation t test. Group analysis was conducted to compare overall theta coherence changes among conditions. RESULTS In 6 of 9 patients, decreased theta coherence was observed only during the incongruent condition (P < 0.05). Congruent theta coherence did not change from baseline. Group analysis showed lower theta coherence for the incongruent condition compared with the congruent condition (P < 0.05). CONCLUSIONS Theta coherence between the hippocampus and OFC decreased during conflict. This finding supports existing theories that theta coherence desynchronization contributes to improved response accuracy and processing efficiency during conflict resolution. The underlying theta coherence observed between the hippocampus and OFC during conflict may be distinct from its previously observed role in memory.
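A sketch of the coherence computation, assuming Welch-based magnitude-squared coherence (scipy.signal.coherence) averaged over the 3-8 Hz theta band; window lengths and the synthetic test signals are illustrative.

```python
import numpy as np
from scipy.signal import coherence

def theta_coherence(lfp_a, lfp_b, fs=1000, band=(3.0, 8.0)):
    """Mean magnitude-squared coherence between two LFP channels,
    averaged over the theta band."""
    f, Cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=2 * fs)
    sel = (f >= band[0]) & (f <= band[1])
    return Cxy[sel].mean()

# synthetic example: a shared 5 Hz rhythm plus independent noise
fs = 1000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
shared = np.sin(2 * np.pi * 5 * t)
a = shared + rng.standard_normal(t.size)
b = 0.8 * shared + rng.standard_normal(t.size)
print(theta_coherence(a, b, fs))   # high; drops if the shared rhythm weakens
```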
Collapse
Affiliation(s)
- Austin M Tang
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA.
| | - Kuang-Hsuan Chen
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Roberto Martin Del Campo-Vera
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Rinu Sebastian
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Angad S Gogia
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - George Nune
- Department of Neurology, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; USC Neurorestoration Center, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
| | - Charles Y Liu
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; USC Neurorestoration Center, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; Department of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, USA
| | - Spencer Kellis
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; USC Neurorestoration Center, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; Department of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, USA; Tianqiao and Chrissy Chen Brain-Machine Interface Center, Chen Institute for Neuroscience, California Institute of Technology, Pasadena, California, USA
| | - Brian Lee
- Department of Neurological Surgery, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; USC Neurorestoration Center, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA; Department of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, USA
| |
Collapse
|
37
|
Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. [PMID: 33734085 PMCID: PMC7972481 DOI: 10.7554/elife.63751] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/16/2021] [Indexed: 11/13/2022] Open
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
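A generic sketch of a reward-modulated rule with synaptic eligibility traces, in the spirit of the learning rule described above but not the paper's exact formulation: coincident pre/post activity tags a synapse, and a later reward converts the decaying tag into a weight change.

```python
import numpy as np

def eligibility_update(w, e, pre, post, reward, tau_e=0.5, dt=0.01, lr=0.1):
    """One step of a reward-modulated Hebbian rule with eligibility
    traces. e decays with time constant tau_e and is driven by
    pre/post coincidence; reward gates the actual weight change."""
    e = e + dt * (-e / tau_e + np.outer(post, pre))  # trace of co-activity
    w = w + lr * reward * e                          # reward converts trace
    return w, e

n_pre, n_post = 50, 30
w = np.zeros((n_post, n_pre))
e = np.zeros_like(w)
rng = np.random.default_rng(4)
for t in range(1000):
    pre, post = rng.random(n_pre), rng.random(n_post)
    reward = 1.0 if t % 100 == 99 else 0.0           # delayed reward
    w, e = eligibility_update(w, e, pre, post, reward)
```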
Collapse
Affiliation(s)
- Ian Cone
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
- Applied Physics, Rice University, Houston, TX, United States
| | - Harel Z Shouval
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
| |
Collapse
|
38
|
Hsu A, Marzen SE. Time cells might be optimized for predictive capacity, not redundancy reduction or memory capacity. Phys Rev E 2021; 102:062404. [PMID: 33465990 DOI: 10.1103/physreve.102.062404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 10/08/2020] [Indexed: 11/07/2022]
Abstract
Recently, researchers have found time cells in the hippocampus that appear to contain information about the timing of past events. Some researchers have argued that time cells are taking a Laplace transform of their input in order to reconstruct the past stimulus. We argue that stimulus prediction, not stimulus reconstruction or redundancy reduction, is in better agreement with observed responses of time cells. In the process, we introduce new analyses of nonlinear, continuous-time reservoirs that model these time cells.
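A minimal sketch of the Laplace-transform view: a bank of leaky integrators with graded decay rates s, each holding the running transform F(s, t) of the stimulus history. Recovering time-cell-like tuning would additionally require an approximate inverse transform across s, which is omitted here.

```python
import numpy as np

def laplace_memory(stimulus, decay_rates, dt=0.01):
    """Bank of leaky integrators obeying dF/dt = -s F + f(t), so each
    unit carries F(s, t) = integral of f(t') exp(-s (t - t')) dt', the
    running Laplace transform of the stimulus history."""
    s = np.asarray(decay_rates, float)
    F = np.zeros_like(s)
    history = []
    for f_t in stimulus:
        F = F + dt * (-s * F + f_t)
        history.append(F.copy())
    return np.array(history)            # shape (T, n_units)

# a brief pulse leaves a graded trace: slow units hold it longest
stim = np.zeros(2000)
stim[100:110] = 1.0
trace = laplace_memory(stim, decay_rates=np.geomspace(0.5, 50, 20))
```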
Collapse
Affiliation(s)
- Alexander Hsu
- W. M. Keck Science Department, Claremont, California 91711, USA
| | - Sarah E Marzen
- W. M. Keck Science Department, Claremont, California 91711, USA
| |
Collapse
|
39
|
Sanzeni A, Histed MH, Brunel N. Response nonlinearities in networks of spiking neurons. PLoS Comput Biol 2020; 16:e1008165. [PMID: 32941457 PMCID: PMC7524009 DOI: 10.1371/journal.pcbi.1008165] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Revised: 09/29/2020] [Accepted: 07/19/2020] [Indexed: 01/18/2023] Open
Abstract
Combining information from multiple sources is a fundamental operation performed by networks of neurons in the brain, whose general principles are still largely unknown. Experimental evidence suggests that combination of inputs in cortex relies on nonlinear summation. Such nonlinearities are thought to be fundamental to perform complex computations. However, these non-linearities are inconsistent with the balanced-state model, one of the most popular models of cortical dynamics, which predicts networks have a linear response. This linearity is obtained in the limit of very large recurrent coupling strength. We investigate the stationary response of networks of spiking neurons as a function of coupling strength. We show that, while a linear transfer function emerges at strong coupling, nonlinearities are prominent at finite coupling, both at response onset and close to saturation. We derive a general framework to classify nonlinear responses in these networks and discuss which of them can be captured by rate models. This framework could help to understand the diversity of non-linearities observed in cortical networks.
Collapse
Affiliation(s)
- Alessandro Sanzeni
- National institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
- Department of Neurobiology, Duke University, Durham, NC, USA
| | - Mark H. Histed
- National institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
| | - Nicolas Brunel
- Department of Neurobiology, Duke University, Durham, NC, USA
- Department of Physics, Duke University, Durham, NC, USA
| |
Collapse
|
40
|
Kuśmierz Ł, Ogawa S, Toyoizumi T. Edge of Chaos and Avalanches in Neural Networks with Heavy-Tailed Synaptic Weight Distribution. PHYSICAL REVIEW LETTERS 2020; 125:028101. [PMID: 32701351 DOI: 10.1103/physrevlett.125.028101] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/03/2020] [Accepted: 05/26/2020] [Indexed: 06/11/2023]
Abstract
We propose an analytically tractable neural connectivity model with power-law distributed synaptic strengths. When threshold neurons with a biologically plausible number of incoming connections are considered, our model features a continuous transition to chaos and can reproduce biologically relevant low activity levels and scale-free avalanches, i.e., bursts of activity with power-law distributions of sizes and lifetimes. In contrast, the Gaussian counterpart exhibits a discontinuous transition to chaos and thus cannot be poised near the edge of chaos. We validate our predictions in simulations of networks of binary as well as leaky integrate-and-fire neurons. Our results suggest that heavy-tailed synaptic distribution may form a weakly informative sparse-connectivity prior that can be useful in biological and artificial adaptive systems.
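A simulation sketch contrasting the two weight ensembles, assuming binary threshold units with K incoming connections and the stable-law scaling 1/K for Cauchy weights versus 1/sqrt(K) for Gaussian ones; the fraction of units flipping per step serves as a crude chaos proxy. The analytical treatment is the paper's contribution and is not reproduced here.

```python
import numpy as np

def flip_rate(dist="gauss", g=1.0, N=1000, K=100, T=300, seed=0):
    """Final per-step flip fraction of a binary threshold network with
    K incoming connections per neuron; weights are Gaussian or Cauchy
    with their respective characteristic scalings."""
    rng = np.random.default_rng(seed)
    W = np.zeros((N, N))
    for i in range(N):
        idx = rng.choice(N, K, replace=False)
        if dist == "cauchy":
            W[i, idx] = g * rng.standard_cauchy(K) / K
        else:
            W[i, idx] = g * rng.standard_normal(K) / np.sqrt(K)
    x = rng.integers(0, 2, N).astype(float)
    flips = 0.0
    for _ in range(T):
        x_new = (W @ (2 * x - 1) > 0).astype(float)   # +/-1 drive, 0/1 state
        flips = np.mean(x_new != x)
        x = x_new
    return flips

print("gauss :", flip_rate("gauss"))
print("cauchy:", flip_rate("cauchy"))
```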
Collapse
Affiliation(s)
- Łukasz Kuśmierz
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Shun Ogawa
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
| | - Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
| |
Collapse
|
41
|
Jaffe PI, Brainard MS. Acetylcholine acts on songbird premotor circuitry to invigorate vocal output. eLife 2020; 9:e53288. [PMID: 32425158 PMCID: PMC7237207 DOI: 10.7554/elife.53288] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2019] [Accepted: 04/01/2020] [Indexed: 01/14/2023] Open
Abstract
Acetylcholine is well-understood to enhance cortical sensory responses and perceptual sensitivity in aroused or attentive states. Yet little is known about cholinergic influences on motor cortical regions. Here we use the quantifiable nature of birdsong to investigate how acetylcholine modulates the cortical (pallial) premotor nucleus HVC and shapes vocal output. We found that dialyzing the cholinergic agonist carbachol into HVC increased the pitch, amplitude, tempo and stereotypy of song, similar to the natural invigoration of song that occurs when males direct their songs to females. These carbachol-induced effects were associated with increased neural activity in HVC and occurred independently of basal ganglia circuitry. Moreover, we discovered that the normal invigoration of female-directed song was also accompanied by increased HVC activity and was attenuated by blocking muscarinic acetylcholine receptors. These results indicate that, analogous to its influence on sensory systems, acetylcholine can act directly on cortical premotor circuitry to adaptively shape behavior.
Collapse
Affiliation(s)
- Paul I Jaffe
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, United States
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, United States
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, United States
| | - Michael S Brainard
- Departments of Physiology and Psychiatry, University of California, San Francisco, San Francisco, United States
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, United States
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, United States
- Howard Hughes Medical Institute, University of California, San Francisco, San Francisco, United States
| |
Collapse
|
42
|
Scale free topology as an effective feedback system. PLoS Comput Biol 2020; 16:e1007825. [PMID: 32392249 PMCID: PMC7241857 DOI: 10.1371/journal.pcbi.1007825] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 05/21/2020] [Accepted: 03/26/2020] [Indexed: 12/13/2022] Open
Abstract
Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity on dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights on dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale-free topology which is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale-free degree distributions. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics, which is corroborated by numerical simulations. Furthermore, these results highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.

Nature abounds with complex networks of interacting elements—from the proteins in our cells, through neural networks in our brains, to species interacting in ecosystems. In all of these fields, the relation between network structure and dynamics is an important research question. A recurring feature of natural networks is their heterogeneous structure: individual elements exhibit a huge diversity of connectivity patterns, which complicates the understanding of network dynamics. To address this problem, we devised a simplified approximation for complex structured networks which captures their dynamical properties. Separating out the largest “hubs”—a small number of nodes with disproportionately high connectivity—we represent them by a single node linked to the rest of the network. This enables us to borrow concepts from control theory, where a system’s output is linked back to itself forming a feedback loop. In this analogy, hubs in heterogeneous networks implement a feedback circuit with the rest of the network. The analogy reveals how these hubs can coordinate the network and drive it more easily towards stable states. Our approach enables analyzing dynamical properties of heterogeneous networks, which is difficult to achieve with existing techniques. It is potentially applicable to many fields where heterogeneous networks are important.
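A sketch of the lumping construction described above: the most connected nodes are merged into one effective hub that closes a feedback loop with the remaining network. The summed-weight merge rule is an assumption.

```python
import numpy as np

def lump_hubs(W, n_hubs=5):
    """Merge the n_hubs most connected nodes of weight matrix W into a
    single effective hub. Returns the rest-to-rest submatrix plus the
    effective hub's input and output weight vectors, i.e. the reduced
    system closed by a feedback loop through the hub."""
    deg = np.count_nonzero(W, axis=0) + np.count_nonzero(W, axis=1)
    hubs = np.argsort(deg)[-n_hubs:]
    rest = np.setdiff1d(np.arange(W.shape[0]), hubs)
    W_rr = W[np.ix_(rest, rest)]                 # rest -> rest
    w_in = W[np.ix_(rest, hubs)].sum(axis=1)     # effective hub -> rest
    w_out = W[np.ix_(hubs, rest)].sum(axis=0)    # rest -> effective hub
    return W_rr, w_in, w_out

# toy usage on a sparse random matrix (a scale-free generator would
# simply replace this construction)
rng = np.random.default_rng(6)
W = rng.standard_normal((300, 300)) * (rng.random((300, 300)) < 0.02)
W_rr, w_in, w_out = lump_hubs(W)
```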
Collapse
|
43
|
Ipsen JR, Peterson ADH. Consequences of Dale's law on the stability-complexity relationship of random neural networks. Phys Rev E 2020; 101:052412. [PMID: 32575310 DOI: 10.1103/physreve.101.052412] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 04/08/2020] [Indexed: 06/11/2023]
Abstract
In the study of randomly connected neural network dynamics there is a phase transition from a simple state with few equilibria to a complex state characterized by the number of equilibria growing exponentially with the neuron population. Such phase transitions are often used to describe pathological brain state transitions observed in neurological diseases such as epilepsy. In this paper we investigate how more realistic heterogeneous network structures affect these phase transitions using techniques from random matrix theory. Specifically, we parametrize the network structure according to Dale's law and use the Kac-Rice formalism to compute the change in the number of equilibria when a phase transition occurs. We also examine the condition where the network is not balanced between excitation and inhibition causing outliers to appear in the eigenspectrum. This enables us to compute the effects of different heterogeneous network connectivities on brain state transitions, which can provide insights into pathological brain dynamics.
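A numerical sketch of the eigenspectrum effects described above, assuming a Dale's-law matrix whose columns are all-excitatory or all-inhibitory; scaling inhibition to cancel excitation removes the rank-one mean component that otherwise produces an outlier eigenvalue.

```python
import numpy as np

def dale_spectrum(N=500, frac_exc=0.8, g=1.0, balanced=True, seed=0):
    """Eigenvalues of a random connectivity matrix obeying Dale's law.
    Unbalanced excitation/inhibition leaves a rank-one mean component
    that pushes an outlier out of the eigenvalue bulk."""
    rng = np.random.default_rng(seed)
    N_e = int(frac_exc * N)
    J = np.abs(rng.standard_normal((N, N))) * g / np.sqrt(N)
    J[:, N_e:] *= -1.0                            # inhibitory columns
    if balanced:
        # rescale inhibition so the total matrix sum is ~0 (E-I balance)
        J[:, N_e:] *= J[:, :N_e].sum() / -J[:, N_e:].sum()
    return np.linalg.eigvals(J)

print("balanced outlier  :", dale_spectrum(balanced=True).real.max())
print("unbalanced outlier:", dale_spectrum(balanced=False).real.max())
```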
Collapse
Affiliation(s)
- J R Ipsen
- ARC Centre of Excellence for Mathematical and Statistical Frontiers, School of Mathematics and Statistics, University of Melbourne, 3010 Parkville, Victoria, Australia
| | - A D H Peterson
- Graeme Clark Institute, University of Melbourne, 3053 Carlton, Victoria, Australia and Department of Medicine, St. Vincent's Hospital, University of Melbourne, 3065 Fitzroy, Victoria, Australia
| |
Collapse
|
44
|
Sweeney Y, Clopath C. Population coupling predicts the plasticity of stimulus responses in cortical circuits. eLife 2020; 9:e56053. [PMID: 32314959 PMCID: PMC7224697 DOI: 10.7554/elife.56053] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 04/16/2020] [Indexed: 12/31/2022] Open
Abstract
Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron's population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable 'backbone' of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.
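A sketch of the standard population-coupling estimator assumed here: each neuron's Pearson correlation with the summed activity of all other neurons.

```python
import numpy as np

def population_coupling(rates):
    """Population coupling of each neuron: correlation between its
    activity trace and the summed activity of all *other* neurons.
    `rates` has shape (n_neurons, n_timepoints)."""
    rates = np.asarray(rates, float)
    total = rates.sum(axis=0)
    coupling = np.empty(rates.shape[0])
    for i, r in enumerate(rates):
        rest = total - r                      # exclude the neuron itself
        coupling[i] = np.corrcoef(r, rest)[0, 1]
    return coupling

# synthetic check: neurons with graded loading on a shared signal
rng = np.random.default_rng(5)
shared = rng.standard_normal(1000)
rates = 0.3 * rng.standard_normal((50, 1000)) \
      + np.linspace(0, 1, 50)[:, None] * shared
print(population_coupling(rates)[:3], population_coupling(rates)[-3:])
```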
Collapse
Affiliation(s)
- Yann Sweeney
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
| |
Collapse
|
45
|
Audio-visual experience strengthens multisensory assemblies in adult mouse visual cortex. Nat Commun 2019; 10:5684. [PMID: 31831751 PMCID: PMC6908602 DOI: 10.1038/s41467-019-13607-2] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 11/07/2019] [Indexed: 11/09/2022] Open
Abstract
We experience the world through multiple senses simultaneously. To better understand mechanisms of multisensory processing, we ask whether inputs from two senses (auditory and visual) can interact and drive plasticity in neural circuits of the primary visual cortex (V1). Using genetically encoded voltage and calcium indicators, we find that coincident audio-visual experience modifies both the supra- and subthreshold response properties of neurons in L2/3 of mouse V1. Specifically, we find that after audio-visual pairing, a subset of multimodal neurons develops enhanced auditory responses to the paired auditory stimulus. This cross-modal plasticity persists over days and is reflected in the strengthening of small functional networks of L2/3 neurons. We find V1 processes coincident auditory and visual events by strengthening functional associations between feature-specific assemblies of multimodal neurons during bouts of sensory-driven co-activity, leaving a trace of multisensory experience in the cortical network.
Collapse
|
46
|
Beer C, Barak O. One Step Back, Two Steps Forward: Interference and Learning in Recurrent Neural Networks. Neural Comput 2019; 31:1985-2003. [DOI: 10.1162/neco_a_01222] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Artificial neural networks, trained to perform cognitive tasks, have recently been used as models for neural recordings from animals performing these tasks. While some progress has been made in performing such comparisons, the evolution of network dynamics throughout learning remains unexplored. This is paralleled by an experimental focus on recording from trained animals, with few studies following neural activity throughout training. In this work, we address this gap in the realm of artificial networks by analyzing networks that are trained to perform memory and pattern generation tasks. The functional aspect of these tasks corresponds to dynamical objects in the fully trained network—a line attractor or a set of limit cycles for the two respective tasks. We use these dynamical objects as anchors to study the effect of learning on their emergence. We find that the sequential nature of learning—one trial at a time—has major consequences for the learning trajectory and its final outcome. Specifically, we show that least mean squares (LMS), a simple gradient descent suggested as a biologically plausible version of the FORCE algorithm, is constantly obstructed by forgetting, which is manifested as the destruction of dynamical objects from previous trials. The degree of interference is determined by the correlation between different trials. We show which specific ingredients of FORCE avoid this phenomenon. Overall, this difference results in convergence that is orders of magnitude slower for LMS. Learning implies accumulating information across multiple trials to form the overall concept of the task. Our results show that interference between trials can greatly affect learning in a learning-rule-dependent manner. These insights can help design experimental protocols that minimize such interference, and possibly infer underlying learning rules by observing behavior and neural activity throughout learning.
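A compact sketch of FORCE training with recursive least squares, in the spirit of Sussillo and Abbott's algorithm referenced above; the running inverse-correlation matrix P is the ingredient that plain LMS lacks and that, per the abstract, counteracts trial-to-trial interference. Network size, target signal, and rates are illustrative.

```python
import numpy as np

def force_train(T=2000, N=300, g=1.5, alpha=1.0, dt=0.1, seed=0):
    """FORCE training of a linear readout z = w.r from a chaotic rate
    network with feedback, using recursive least squares (RLS).
    Target: a slow sine wave."""
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)
    w_fb = rng.uniform(-1, 1, N)             # fixed feedback weights
    w = np.zeros(N)                          # trainable readout
    P = np.eye(N) / alpha                    # inverse correlation estimate
    x = 0.5 * rng.standard_normal(N)
    for t in range(T):
        r = np.tanh(x)
        z = w @ r
        x = x + dt * (-x + J @ r + w_fb * z)
        f = np.sin(2 * np.pi * 0.01 * t)     # target signal
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)              # RLS gain
        P = P - np.outer(k, Pr)              # this memory is what LMS lacks
        w = w - (z - f) * k                  # error-driven update
    return w
```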
Collapse
Affiliation(s)
- Chen Beer
- Viterbi Faculty of Electrical Engineering and Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa 320003, Israel
| | - Omri Barak
- Network Biology Research Laboratories and Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa 320003, Israel
| |
Collapse
|
47
|
Verzelli P, Alippi C, Livi L. Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere. Sci Rep 2019; 9:13887. [PMID: 31554855 PMCID: PMC6761167 DOI: 10.1038/s41598-019-50158-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2019] [Accepted: 09/05/2019] [Indexed: 11/09/2022] Open
Abstract
Among the various architectures of recurrent neural networks, echo state networks (ESNs) emerged due to their simple and inexpensive training procedure. These networks are known to be sensitive to the setting of hyper-parameters, which critically affect their behavior. Results show that their performance is usually maximized in a narrow region of hyper-parameter space called the edge of criticality. Finding such a region requires searching hyper-parameter space in a sensible way: configurations marginally outside it might yield networks exhibiting fully developed chaos, hence producing unreliable computations. The performance gain from optimizing hyper-parameters can be studied through the memory-nonlinearity trade-off, i.e., the fact that increasing the nonlinear behavior of the network degrades its ability to remember past inputs, and vice versa. In this paper, we propose a model of ESNs that eliminates the critical dependence on hyper-parameters: the resulting networks provably cannot enter a chaotic regime and, at the same time, exhibit nonlinear behavior in phase space together with a large memory of past inputs, comparable to that of linear networks. Our contribution is supported by experiments corroborating our theoretical findings, showing that the proposed model displays dynamics rich enough to approximate many common nonlinear systems used for benchmarking.
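One way to realize the self-normalizing idea is sketched below; this is a hedged toy version under stated assumptions (weight scales, network size, and the sinusoidal test input are all illustrative), not the paper's reference implementation. Re-projecting the reservoir state onto the unit hyper-sphere at every step means no choice of recurrent weight scale can push the dynamics into chaos.

import numpy as np

rng = np.random.default_rng(0)
N, n_in = 300, 1
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))   # fixed recurrent weights
W_in = rng.normal(0.0, 0.5, size=(N, n_in))          # fixed input weights (scale assumed)
x = rng.normal(size=N)
x /= np.linalg.norm(x)                               # start on the unit sphere

def step(x, u):
    pre = W @ x + W_in @ u
    return pre / np.linalg.norm(pre)   # self-normalization: state never leaves the sphere

states = []
for t in range(500):
    u = np.array([np.sin(0.1 * t)])    # toy input signal (assumption)
    x = step(x, u)
    states.append(x.copy())
# A readout would now be fit by ridge regression on 'states', as is standard for ESNs.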
Collapse
Affiliation(s)
- Pietro Verzelli
- Faculty of Informatics, Università della Svizzera Italiana, Lugano, 69000, Switzerland.
| | - Cesare Alippi
- Faculty of Informatics, Università della Svizzera Italiana, Lugano, 69000, Switzerland
- Department of Electronics, Information and bioengineering, Politecnico di Milano, Milan, 20133, Italy
| | - Lorenzo Livi
- Departments of Computer Science and Mathematics, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Department of Computer Science, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter, EX4 4QF, United Kingdom
| |
Collapse
|
48
|
Ponghiran W, Srinivasan G, Roy K. Reinforcement Learning With Low-Complexity Liquid State Machines. Front Neurosci 2019; 13:883. [PMID: 31507361 PMCID: PMC6718696 DOI: 10.3389/fnins.2019.00883] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Accepted: 08/07/2019] [Indexed: 11/13/2022] Open
Abstract
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters. Such sparse and randomly interconnected recurrent spiking networks exhibit highly non-linear dynamics that transform the inputs into rich high-dimensional representations based on the current and past context. These random input representations can be efficiently interpreted by an output (or readout) layer with trainable parameters. Systematic initialization of the random connections and training of the readout layer using the Q-learning algorithm enable such small random spiking networks to learn optimally and to achieve the same learning efficiency as humans on complex reinforcement learning (RL) tasks like Atari games. In fact, the sparse recurrent connections cause these networks to retain a fading memory of past inputs, thereby enabling them to perform temporal integration across successive RL time-steps and to learn with partial state inputs. The spike-based approach using small random recurrent networks provides a computationally efficient alternative to state-of-the-art deep reinforcement learning networks with several layers of trainable parameters.
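The general recipe can be sketched as follows. This is an assumption-laden toy version (network size, sparsity, the simplified neuron model, and all constants are illustrative), not the paper's implementation: a fixed, sparse random spiking reservoir provides the state representation, and Q-learning trains only the linear readout.

import numpy as np

rng = np.random.default_rng(0)
N, n_actions = 500, 4
mask = rng.random((N, N)) < 0.05                      # sparse random recurrence (assumed density)
W = np.where(mask, rng.normal(0.0, 1.0, (N, N)), 0.0)  # fixed, never trained
W_out = np.zeros((n_actions, N))                      # the only trainable parameters
alpha, gamma = 1e-2, 0.99

def reservoir_step(v, spikes, inp):
    # Leaky integration of recurrent spikes plus external input;
    # crossing threshold emits a spike and resets the membrane.
    v = 0.9 * v + W @ spikes + inp
    new_spikes = (v > 1.0).astype(float)
    v = np.where(new_spikes > 0.0, 0.0, v)
    return v, new_spikes

def q_update(W_out, state, a, reward, next_state, done):
    # Standard Q-learning temporal-difference step, applied to the readout only.
    target = reward if done else reward + gamma * np.max(W_out @ next_state)
    td = target - (W_out @ state)[a]
    W_out[a] += alpha * td * state
    return W_out

# Minimal wiring: drive the reservoir with a stand-in encoded observation,
# then feed the resulting spike pattern to the Q readout.
v, s = np.zeros(N), np.zeros(N)
inp = rng.normal(0.0, 0.5, N)
for _ in range(20):
    v, s = reservoir_step(v, s, inp)
q_values = W_out @ s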
Collapse
Affiliation(s)
| | | | - Kaushik Roy
- Department of ECE, Purdue University, West Lafayette, IN, United States
| |
Collapse
|
49
|
Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The Dynamical Regime of Sensory Cortex: Stable Dynamics around a Single Stimulus-Tuned Attractor Account for Patterns of Noise Variability. Neuron 2019; 98:846-860.e5. [PMID: 29772203 PMCID: PMC5971207 DOI: 10.1016/j.neuron.2018.04.017] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Revised: 02/14/2018] [Accepted: 04/12/2018] [Indexed: 12/16/2022]
Abstract
Correlated variability in cortical activity is ubiquitously quenched following stimulus onset, in a stimulus-dependent manner. These modulations have been attributed to circuit dynamics involving either multiple stable states (“attractors”) or chaotic activity. Here we show that a qualitatively different dynamical regime, involving fluctuations about a single, stimulus-driven attractor in a loosely balanced excitatory-inhibitory network (the stochastic “stabilized supralinear network”), best explains these modulations. Given the supralinear input/output functions of cortical neurons, increased stimulus drive strengthens effective network connectivity. This shifts the balance from interactions that amplify variability to suppressive inhibitory feedback, quenching correlated variability around more strongly driven steady states. Comparing to previously published and original data analyses, we show that this mechanism, unlike previous proposals, uniquely accounts for the spatial patterns and fast temporal dynamics of variability suppression. Specifying the cortical operating regime is key to understanding the computations underlying perception.
Highlights:
- A simple network model explains stimulus-tuning of cortical variability suppression
- Inhibition stabilizes recurrently interacting neurons with supralinear I/O functions
- Stimuli strengthen inhibitory stabilization around a stable state, quenching variability
- Single-trial V1 data are compatible with this model and rule out competing proposals
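In equations, the stabilized supralinear network's rate dynamics take the form tau * dr/dt = -r + k * [W r + h]_+^n plus noise. The sketch below (all parameter values are illustrative assumptions, not those fitted in the paper) shows how the supralinear power-law input/output function makes the effective coupling, and hence inhibitory stabilization, grow with the stimulus drive h.

import numpy as np

rng = np.random.default_rng(0)
N_E, N_I = 40, 10
N = N_E + N_I
k, n, tau, dt, sigma = 0.3, 2.0, 20e-3, 1e-3, 0.2
W = np.abs(rng.normal(0.0, 0.02, (N, N)))                # excitatory columns are positive
W[:, N_E:] = -np.abs(rng.normal(0.0, 0.15, (N, N_I)))    # inhibitory columns, made stronger

def ssn_step(r, h):
    drive = k * np.maximum(W @ r + h, 0.0) ** n   # supralinear power-law I/O
    noise = sigma * np.sqrt(dt / tau) * rng.normal(size=N)
    return np.maximum(r + (dt / tau) * (drive - r) + noise, 0.0)

r = np.zeros(N)
for _ in range(5000):
    r = ssn_step(r, h=2.0)   # larger h -> stronger effective coupling and
                             # inhibitory stabilization -> quenched variability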
Collapse
Affiliation(s)
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK.
| | - Yashar Ahmadian
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Centre de Neurophysique, Physiologie, et Pathologie, CNRS, 75270 Paris Cedex 06, France; Institute of Neuroscience, Department of Biology and Mathematics, University of Oregon, Eugene, OR 97403, USA
| | - Daniel B Rubin
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK; Department of Cognitive Science, Central European University, 1051 Budapest, Hungary
| | - Kenneth D Miller
- Center for Theoretical Neuroscience, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA; Department of Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
| |
Collapse
|
50
|
La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. [PMID: 31326722 DOI: 10.1016/j.conb.2019.06.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 06/22/2019] [Indexed: 12/27/2022]
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations, such that neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention, and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial by trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article, we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role in representing internal states as well as relevant task variables.
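As a caricature of such jump-like dynamics, the sketch below simulates noise-driven hopping between the two wells of a toy double-well potential. This is purely illustrative and an assumption on our part (the models discussed in this literature are clustered spiking networks, and every constant below is made up for the demo), but it reproduces the signature of long quasi-stationary dwells punctuated by abrupt transitions.

import numpy as np

rng = np.random.default_rng(0)
dt, tau, sigma = 1e-3, 20e-3, 0.5
x, trace = -1.0, []
for _ in range(100000):
    drift = x - x ** 3                   # negative gradient of a double-well potential
    x += (dt / tau) * drift + sigma * np.sqrt(dt / tau) * rng.normal()
    trace.append(x)
# 'trace' dwells near -1 or +1 for long stretches and jumps abruptly between
# them: a minimal analogue of discrete, quasi-stationary metastable states.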
Collapse
Affiliation(s)
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States.
| | - Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
| | - Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States
| |
Collapse
|