1
Clark DG, Abbott LF, Litwin-Kumar A. Dimension of Activity in Random Neural Networks. Phys Rev Lett 2023; 131:118401. [PMID: 37774280] [DOI: 10.1103/physrevlett.131.118401]
Abstract
Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks, in particular that they can generate chaotic activity; however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
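As a rough illustration of the quantities discussed above, the following sketch simulates the classic random-network model with i.i.d. couplings and estimates the effective dimension of its activity from the empirical cross-covariance matrix; the participation-ratio definition of dimension and all parameter values are assumptions made here for illustration, not taken from the paper.

```python
import numpy as np

# Sketch: rate network x' = -x + J*tanh(x) with i.i.d. couplings J_ij ~ N(0, g^2/N),
# simulated in the chaotic regime; the effective dimension is estimated from the
# eigenvalues of the equal-time cross-covariance matrix (participation ratio).
rng = np.random.default_rng(0)
N, g, dt, T = 500, 2.0, 0.05, 400.0
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

x = rng.normal(size=N)
steps = int(T / dt)
burn = steps // 2
X = np.empty((steps - burn, N))
for t in range(steps):
    x += dt * (-x + J @ np.tanh(x))
    if t >= burn:
        X[t - burn] = np.tanh(x)

C = np.cov(X.T)                          # N x N cross-covariance of unit activities
lam = np.linalg.eigvalsh(C)
dim = lam.sum() ** 2 / (lam ** 2).sum()  # participation-ratio dimension
print(f"effective dimension: {dim:.1f} out of N = {N}")
```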
Affiliation(s)
- David G Clark
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
- Ashok Litwin-Kumar
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, New York 10027, USA
2
Pazó D, Gallego R. Volcano transition in populations of phase oscillators with random nonreciprocal interactions. Phys Rev E 2023; 108:014202. [PMID: 37583156] [DOI: 10.1103/physreve.108.014202]
Abstract
Populations of heterogeneous phase oscillators with frustrated random interactions exhibit a quasiglassy state in which the distribution of local fields is volcano-shaped. In a recent work [Phys. Rev. Lett. 120, 264102 (2018); DOI: 10.1103/PhysRevLett.120.264102], the volcano transition was replicated in a solvable model using a low-rank, random coupling matrix M. Here we extend that model to include tunable nonreciprocal interactions, i.e., M^{T}≠M. More specifically, we formulate two different solvable models. In both of them the volcano transition persists if the matrix elements M_{jk} and M_{kj} are sufficiently correlated. Our numerical simulations fully confirm the analytical results. To put our work in a wider context, we also investigate numerically the volcano transition in the analogous model with a full-rank random coupling matrix.
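A minimal numerical sketch of the kind of model described above: phase oscillators coupled through a low-rank random matrix, with the distribution of complex local fields as the quantity to inspect for a volcano shape. The coupling construction and all parameters are illustrative assumptions, not the paper's exact models.

```python
import numpy as np

# Sketch: heterogeneous phase oscillators with a low-rank random coupling matrix
# M = (J/K) * U V^T built from random +-1 vectors; choosing U != V makes the
# interactions nonreciprocal (M^T != M). The local field of oscillator j is
# h_j = (1/N) sum_k M_jk exp(i theta_k); a volcano transition shows up as a
# crater-shaped distribution of h in the complex plane.
rng = np.random.default_rng(1)
N, K, J, dt, T = 2000, 4, 3.0, 0.02, 200.0
omega = rng.normal(0.0, 1.0, N)                 # heterogeneous natural frequencies
U = rng.choice([-1.0, 1.0], size=(N, K))
V = rng.choice([-1.0, 1.0], size=(N, K))

theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(int(T / dt)):
    z = V.T @ np.exp(1j * theta) / N            # K overlaps with the random vectors
    h = (J / K) * (U @ z)                       # complex local fields via the low-rank structure
    theta += dt * (omega + np.imag(np.exp(-1j * theta) * h))
print("mean |local field| in the stationary state:", np.abs(h).mean().round(3))
```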
Affiliation(s)
- Diego Pazó
- Instituto de Física de Cantabria (IFCA), Universidad de Cantabria-CSIC, 39005 Santander, Spain
- Rafael Gallego
- Departamento de Matemáticas, Universidad de Oviedo, Campus de Viesques, 33203 Gijón, Spain
3
Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks. Cogn Neurodyn 2023; 17:257-275. [PMID: 35469119] [PMCID: PMC9020562] [DOI: 10.1007/s11571-022-09802-5]
Abstract
Different brain areas, such as the cortex and, more specifically, the prefrontal cortex, show a high degree of recurrence in their connections, even in early sensory areas. Several approaches and methods based on trained networks have been proposed to model and describe these regions. It is essential to understand the dynamics behind these models because they are used to build hypotheses about the functioning of brain areas and to explain experimental results. The main contribution here is a description of the dynamics through the classification and interpretation of a set of numerical simulations. This study sheds light on the multiplicity of solutions obtained for the same tasks and shows the link between the spectra of the linearized trained networks and the dynamics of their nonlinear counterparts. The patterns in the distribution of the eigenvalues of the recurrent weight matrix were studied and related to the dynamics of each task. The online version contains supplementary material available at 10.1007/s11571-022-09802-5.
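Since the analysis above hinges on linearizing trained networks, here is a minimal sketch of that recipe applied to an untrained random network: find a fixed point of the rate dynamics and inspect the eigenvalues of the Jacobian there. The dynamics, gain function, and parameters are assumptions for illustration only.

```python
import numpy as np

# Sketch: linearize x' = -x + W*tanh(x) + b around a fixed point and look at the
# eigenvalue distribution of the Jacobian, the object whose patterns are related
# to task dynamics in the study above (there, W comes from training; here it is
# simply random).
rng = np.random.default_rng(2)
N = 300
W = rng.normal(0, 0.9 / np.sqrt(N), (N, N))
b = rng.normal(0, 0.1, N)

x = np.zeros(N)
for _ in range(5000):                            # damped fixed-point iteration
    x = 0.9 * x + 0.1 * (W @ np.tanh(x) + b)

jac = -np.eye(N) + W * (1.0 / np.cosh(x) ** 2)   # Jacobian: -I + W diag(tanh'(x*))
eigs = np.linalg.eigvals(jac)
print("largest real part of the Jacobian spectrum:", eigs.real.max().round(3))
```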
4
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488] [PMCID: PMC9894562] [DOI: 10.1371/journal.pcbi.1010855]
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments, where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks, where the global connectivity matrix is known and, in particular, its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is, however, currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are interrelated and how they shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics and the statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
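To make the eigenvector-based mapping concrete, here is a sketch that builds a two-population E/I matrix from a mean structure plus a random part and extracts a rank-one approximation from its dominant right and left eigenvectors; population sizes and weight statistics are illustrative assumptions, and the paper's perturbative treatment is replaced by direct numerical diagonalization.

```python
import numpy as np

# Sketch: two-population (E/I) connectivity = rank-one mean structure + random part;
# a rank-one approximation is rebuilt from the dominant right (r) and left (l)
# eigenvectors of the full matrix, W1 = lambda * r l^T / (l . r).
rng = np.random.default_rng(3)
NE, NI = 400, 100
N = NE + NI
row = np.r_[np.full(NE, 2.0 / N), np.full(NI, -4.0 / N)]      # mean E and I weights
W = np.outer(np.ones(N), row) + rng.normal(0, 0.5 / np.sqrt(N), (N, N))

vals, R = np.linalg.eig(W)
lvals, L = np.linalg.eig(W.T)
lam = vals[np.argmax(np.abs(vals))].real       # dominant (outlier) eigenvalue
r = R[:, np.argmax(np.abs(vals))].real         # dominant right eigenvector
l = L[:, np.argmax(np.abs(lvals))].real        # dominant left eigenvector
W1 = lam * np.outer(r, l) / (l @ r)            # rank-one spectral approximation
print("dominant eigenvalue:", lam.round(3))
print("relative residual ||W - W1|| / ||W|| (spectral norm):",
      (np.linalg.norm(W - W1, 2) / np.linalg.norm(W, 2)).round(3))
```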
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
5
Mazzucato L. Neural mechanisms underlying the temporal organization of naturalistic animal behavior. eLife 2022; 11:76577. [PMID: 35792884] [PMCID: PMC9259028] [DOI: 10.7554/elife.76577]
Abstract
Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, with variability arising from at least three sources: hierarchical, contextual, and stochastic. What neural mechanisms and computational principles underlie such intricate temporal features? In this review, we provide a critical assessment of the existing behavioral and neurophysiological evidence for these sources of temporal variability in naturalistic behavior. Recent research converges on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising via coordinated interactions between mesoscopic neural circuits. We highlight the crucial role played by structural heterogeneities as well as noise from mesoscopic feedback loops in regulating flexible behavior. We assess the shortcomings and missing links in the current theoretical and experimental literature and propose new directions of investigation to fill these gaps.
Affiliation(s)
- Luca Mazzucato
- Institute of Neuroscience, Departments of Biology, Mathematics and Physics, University of Oregon
6
Affiliation(s)
- Johannes Alt
- Section of Mathematics, University of Geneva, 24, rue du Général Dufour, Case postale 64, 1211 Genève 4, Switzerland
- Torben Krüger
- Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 København, Denmark
7
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. Phys Rev X 2022; 12:011011. [PMID: 36545030] [PMCID: PMC9762509] [DOI: 10.1103/physrevx.12.011011]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, in which the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, in which inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing ML practitioners with a map for principled parameter initialization choices.
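A toy simulation in the spirit of the gated network analyzed above; the precise form of the gating equations here is a simplified guess rather than the paper's model, and the parameters are arbitrary.

```python
import numpy as np

# Sketch of a gated rate network: an update gate z multiplies the whole right-hand
# side (slowing or speeding the effective timescale) and an output gate r
# multiplies the recurrent input (controlling how many directions are driven).
def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(4)
N, g, dt, steps = 300, 2.0, 0.05, 4000
W  = rng.normal(0, g / np.sqrt(N), (N, N))     # recurrent weights
Wz = rng.normal(0, 1 / np.sqrt(N), (N, N))     # update-gate weights
Wr = rng.normal(0, 1 / np.sqrt(N), (N, N))     # output-gate weights

x = rng.normal(size=N)
for _ in range(steps):
    z = sigmoid(Wz @ x)                        # timescale gate, entries in (0, 1)
    r = sigmoid(Wr @ x)                        # dimensionality gate, entries in (0, 1)
    x += dt * z * (-x + W @ (r * np.tanh(x)))
print("std of unit activations after the transient:", np.tanh(x).std().round(3))
```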
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
- Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
- David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
8
Mambuca AM, Cammarota C, Neri I. Dynamical systems on large networks with predator-prey interactions are stable and exhibit oscillations. Phys Rev E 2022; 105:014305. [PMID: 35193197] [DOI: 10.1103/physreve.105.014305]
Abstract
We analyze the stability of linear dynamical systems defined on sparse, random graphs with predator-prey, competitive, and mutualistic interactions. These systems are aimed at modeling the stability of fixed points in large systems defined on complex networks, such as ecosystems consisting of a large number of species that interact through a food web. We develop an exact theory for the spectral distribution and the leading eigenvalue of the corresponding sparse Jacobian matrices. This theory reveals that the nature of local interactions has a strong influence on a system's stability. We show that, in general, linear dynamical systems defined on random graphs with a prescribed degree distribution of unbounded support are unstable if they are large enough, implying a tradeoff between stability and diversity. Remarkably, in contrast to the generic case, antagonistic systems that contain only interactions of the predator-prey type can be stable in the infinite-size limit. This feature of antagonistic systems is accompanied by a peculiar oscillatory behavior of the dynamical response of the system after a perturbation, when the mean degree of the graph is small enough. Moreover, for antagonistic systems we also find that there exists a dynamical phase transition and a critical mean degree above which the response becomes nonoscillatory.
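The following sketch sets up the kind of sparse Jacobian considered above, with purely predator-prey (opposite-sign) interactions on each link and a stabilizing diagonal, and checks the leading eigenvalue numerically; graph size, mean degree, and weight statistics are illustrative assumptions.

```python
import numpy as np

# Sketch: sparse random Jacobian with predator-prey sign structure
# (J_ij > 0, J_ji < 0 on every undirected edge) plus a diagonal -d.
rng = np.random.default_rng(5)
N, c, d = 1000, 4.0, 1.0                           # size, mean degree, self-damping
mask = np.triu(rng.random((N, N)) < c / N, k=1)    # Erdos-Renyi edges (upper triangle)
w_up = np.abs(rng.normal(0, 1.0, (N, N)))
w_dn = np.abs(rng.normal(0, 1.0, (N, N)))
J = np.where(mask, w_up, 0.0) - np.where(mask.T, w_dn.T, 0.0) - d * np.eye(N)

eigs = np.linalg.eigvals(J)
print("leading real part of the spectrum:", eigs.real.max().round(3))
print("(negative means the fixed point is linearly stable)")
```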
Affiliation(s)
- Chiara Cammarota
- Department of Mathematics, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Dipartimento di Fisica, Sapienza Università di Roma, P. le A. Moro 5, 00185 Rome, Italy
- Izaak Neri
- Department of Mathematics, King's College London, Strand, London, WC2R 2LS, United Kingdom
9
van Meegen A, Kühn T, Helias M. Large-Deviation Approach to Random Recurrent Neuronal Networks: Parameter Inference and Fluctuation-Induced Transitions. Phys Rev Lett 2021; 127:158302. [PMID: 34678014] [DOI: 10.1103/physrevlett.127.158302]
Abstract
Here we unify the field-theoretical approach to neuronal networks with large-deviation theory. For a prototypical random recurrent network model with continuous-valued units, we show that the effective action is identical to the rate function, and we derive the latter using field theory. This rate function takes the form of a Kullback-Leibler divergence, which enables data-driven inference of model parameters and the calculation of fluctuations beyond mean-field theory. Lastly, we expose a regime with fluctuation-induced transitions between mean-field solutions.
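The fluctuation-induced transitions mentioned above can be illustrated in the simplest possible setting, a single stochastic unit with two deterministic mean-field solutions; the equation and parameters below are an illustrative assumption, not the network model of the paper.

```python
import numpy as np

# Sketch: dx = (-x + J*tanh(x)) dt + sqrt(2D) dW with J > 1, so the noiseless
# dynamics has two stable fixed points near +-1.29. Noise induces transitions
# between them, and the long-time average no longer sits at either mean-field
# solution.
rng = np.random.default_rng(8)
J, D, dt, steps = 1.5, 0.5, 0.01, 500_000
x = 1.3                                    # start near the positive fixed point
traj = np.empty(steps)
for t in range(steps):
    x += dt * (-x + J * np.tanh(x)) + np.sqrt(2 * D * dt) * rng.normal()
    traj[t] = x
print("time-averaged x:", traj.mean().round(3), "(stable deterministic fixed points: +-1.29)")
print("fraction of time spent in the negative well:", (traj < 0).mean().round(3))
```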
Affiliation(s)
- Alexander van Meegen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Institute of Zoology, University of Cologne, 50674 Cologne, Germany
- Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52074 Aachen, Germany
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52074 Aachen, Germany
10
Nobukawa S, Nishimura H, Wagatsuma N, Ando S, Yamanishi T. Long-Tailed Characteristic of Spiking Pattern Alternation Induced by Log-Normal Excitatory Synaptic Distribution. IEEE Trans Neural Netw Learn Syst 2021; 32:3525-3537. [PMID: 32822305] [DOI: 10.1109/tnnls.2020.3015208]
Abstract
Studies of structural connectivity at the synaptic level show that, among the synaptic connections of the cerebral cortex, the excitatory postsynaptic potential (EPSP) of most synapses takes sub-mV values, while a small number of synapses exhibit large EPSPs (greater than approximately 1.0 mV). This means that the distribution of EPSPs fits a log-normal distribution. Beyond structural connectivity, skewed and long-tailed distributions have also been widely observed in neural activity, for example in the distribution of spiking rates and in the size of synchronously spiking populations. Many studies have modeled this long-tailed distribution of neural activity; however, its causal factors remain controversial. This study focused on the long-tailed EPSP distributions and interlateral synaptic connections primarily observed in cortical network structures, and constructed a spiking neural network consistent with these features. Specifically, we constructed two coupled modules of spiking neural networks with excitatory and inhibitory neural populations and a log-normal EPSP distribution. We evaluated the spiking activities for different input frequencies and with and without strong synaptic connections. These coupled modules exhibited intermittent intermodule-alternating behavior, given a moderate input frequency and the existence of strong synaptic and intermodule connections. Moreover, power analysis, multiscale entropy analysis, and surrogate data analysis revealed that the long-tailed EPSP distribution and intermodule connections enhanced the complexity of spiking activity at large temporal scales and induced nonlinear dynamics and neural activity that followed a long-tailed distribution.
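For concreteness, this small sketch draws EPSP amplitudes from a log-normal distribution with a sub-mV median and a minority of strong synapses, the connectivity feature the network above is built on; the particular (mu, sigma) values are illustrative assumptions.

```python
import numpy as np

# Sketch: log-normal EPSP amplitudes with a sub-mV median and a small fraction
# of strong (> 1 mV) synapses.
rng = np.random.default_rng(6)
mu, sigma = np.log(0.2), 1.0                     # median EPSP of about 0.2 mV
epsp = rng.lognormal(mu, sigma, size=100_000)
print("median EPSP [mV]:", np.median(epsp).round(3))
print("fraction of synapses above 1 mV:", (epsp > 1.0).mean().round(4))
```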
11
Metz FL, Neri I. Localization and Universality of Eigenvectors in Directed Random Graphs. Phys Rev Lett 2021; 126:040604. [PMID: 33576654] [DOI: 10.1103/physrevlett.126.040604]
Abstract
Although the spectral properties of random graphs have been a long-standing focus of network theory, the properties of right eigenvectors of directed graphs have so far eluded an exact analytic treatment. We present a general theory for the statistics of the right eigenvector components in directed random graphs with a prescribed degree distribution and with randomly weighted links. We obtain exact analytic expressions for the inverse participation ratio and show that right eigenvectors of directed random graphs with a small average degree are localized. Remarkably, if the fourth moment of the degree distribution is finite, then the critical mean degree of the localization transition is independent of the degree fluctuations, in contrast to undirected graphs, where localization is governed by degree fluctuations. We also show that in the high-connectivity limit the distribution of the right eigenvector components is solely determined by the degree distribution. For delocalized eigenvectors, we recover in this limit the universal results from standard random matrix theory that are independent of the degree distribution, while for localized eigenvectors the eigenvector distribution depends on the degree distribution.
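The inverse participation ratio analyzed above is easy to probe numerically; the sketch below builds a weighted directed Erdős–Rényi graph with a small mean degree and computes the IPR of the leading right eigenvector (graph size, mean degree, and weight distribution are illustrative assumptions).

```python
import numpy as np

# Sketch: IPR of the leading right eigenvector of a directed Erdos-Renyi graph
# with Gaussian link weights; IPR ~ 1/N indicates delocalization, IPR = O(1)
# indicates localization on a few nodes.
rng = np.random.default_rng(7)
N, c = 800, 3.0                                   # graph size, mean degree
A = (rng.random((N, N)) < c / N) * rng.normal(0, 1.0, (N, N))
np.fill_diagonal(A, 0.0)

vals, vecs = np.linalg.eig(A)
v = vecs[:, np.argmax(np.abs(vals))]              # right eigenvector of the leading eigenvalue
v = v / np.linalg.norm(v)
ipr = np.sum(np.abs(v) ** 4)
print(f"mean degree c = {c}: IPR = {ipr:.4f}, compare 1/N = {1 / N:.4f}")
```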
Affiliation(s)
- Fernando Lucas Metz
- Physics Institute, Federal University of Rio Grande do Sul, 91501-970 Porto Alegre, Brazil and London Mathematical Laboratory, 18 Margravine Gardens, London W6 8RH, United Kingdom
- Izaak Neri
- Department of Mathematics, King's College London, Strand, London WC2R 2LS, United Kingdom
12
René A, Longtin A, Macke JH. Inference of a Mesoscopic Population Model from Population Spike Trains. Neural Comput 2020; 32:1448-1498. [DOI: 10.1162/neco_a_01292]
Abstract
Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic but low-dimensional and, hence, statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous pools of neurons and modeling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to optimize parameters by gradient ascent on the log likelihood or perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent in a mesoscopic population model affect the accuracy of the inferred single-neuron parameters.
Affiliation(s)
- Alexandre René
- Department of Physics, University of Ottawa, Ottawa K1N 6N5, Canada; Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn 53175, Germany; and Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich 52425, Germany
- André Longtin
- Department of Physics, University of Ottawa, Ottawa K1N 6N5, Canada, and Brain and Mind Research Institute, University of Ottawa, Ottawa K1H 8M5, Canada
- Jakob H. Macke
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn 53175, Germany, and Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich 80333, Germany
13
Stapmanns J, Kühn T, Dahmen D, Luu T, Honerkamp C, Helias M. Self-consistent formulations for stochastic nonlinear neuronal dynamics. Phys Rev E 2020; 101:042124. [PMID: 32422832] [DOI: 10.1103/physreve.101.042124]
Abstract
Neural dynamics is often investigated with tools from bifurcation theory. However, many neuron models are stochastic, mimicking fluctuations in the input from unknown parts of the brain or the spiking nature of signals. Noise changes the dynamics with respect to the deterministic model; in particular classical bifurcation theory cannot be applied. We formulate the stochastic neuron dynamics in the Martin-Siggia-Rose de Dominicis-Janssen (MSRDJ) formalism and present the fluctuation expansion of the effective action and the functional renormalization group (fRG) as two systematic ways to incorporate corrections to the mean dynamics and time-dependent statistics due to fluctuations in the presence of nonlinear neuronal gain. To formulate self-consistency equations, we derive a fundamental link between the effective action in the Onsager-Machlup (OM) formalism, which allows the study of phase transitions, and the MSRDJ effective action, which is computationally advantageous. These results in particular allow the derivation of an OM effective action for systems with non-Gaussian noise. This approach naturally leads to effective deterministic equations for the first moment of the stochastic system; they explain how nonlinearities and noise cooperate to produce memory effects. Moreover, the MSRDJ formulation yields an effective linear system that has identical power spectra and linear response. Starting from the better known loopwise approximation, we then discuss the use of the fRG as a method to obtain self-consistency beyond the mean. We present a new efficient truncation scheme for the hierarchy of flow equations for the vertex functions by adapting the Blaizot, Méndez, and Wschebor approximation from the derivative expansion to the vertex expansion. The methods are presented by means of the simplest possible example of a stochastic differential equation that has generic features of neuronal dynamics.
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Thomas Luu
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, Jülich, Germany
- Carsten Honerkamp
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- JARA-FIT, Jülich Aachen Research Alliance-Fundamentals of Future Information Technology, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
14
Nobukawa S, Yamanishi T, Kasakawa S, Nishimura H, Kikuchi M, Takahashi T. Classification Methods Based on Complexity and Synchronization of Electroencephalography Signals in Alzheimer's Disease. Front Psychiatry 2020; 11:255. [PMID: 32317994] [PMCID: PMC7154080] [DOI: 10.3389/fpsyt.2020.00255]
Abstract
Electroencephalography (EEG) has long been studied as a potential diagnostic method for Alzheimer's disease (AD). The pathological progression of AD leads to cortical disconnection. These disconnections may manifest as alterations in functional connectivity, measured by the degree of synchronization between different brain regions, and as alterations in the complex behaviors produced by the interaction among widespread brain regions. Recently, machine learning methods, such as clustering algorithms and classification methods, have been adopted to detect disease-related changes in functional connectivity and to classify the features of these changes. Although the complexity of EEG signals can also reflect AD-related changes, few machine learning studies have focused on changes in complexity. Therefore, in this study, we compared the ability of EEG signals to detect characteristics of AD using two machine learning approaches: one focused on functional connectivity and the other on signal complexity. We examined functional connectivity, estimated by the phase lag index (PLI) of EEG signals, in healthy older participants [healthy controls (HC)] and patients with AD. We estimated signal complexity using multiscale entropy. Using a support vector machine, we compared the identification accuracy for AD based on functional connectivity in each frequency band and on each complexity component. Additionally, we evaluated the relationship between synchronization and complexity. The identification accuracy based on functional connectivity in the alpha, beta, and gamma bands was significantly high (AUC 1.0), and the identification accuracy based on complexity was sufficiently high (AUC 0.81). Moreover, the relationship between functional connectivity and complexity exhibited temporal-scale- and region-specific dependencies in both HC participants and patients with AD. In conclusion, the combination of functional connectivity and complexity might reflect the complex pathological process of AD. Applying a combination of both machine learning methods to neurophysiological data may provide a novel understanding of the neural network processes in both healthy brains and pathological conditions.
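As a pointer to how the synchronization feature used above is computed, here is a sketch of the phase lag index for two toy signals; the signals, sampling rate, and noise level are illustrative assumptions, not EEG data.

```python
import numpy as np
from scipy.signal import hilbert

# Sketch: phase lag index (PLI) between two signals,
# PLI = |< sign(sin(phase difference)) >|, which discounts zero-lag coupling.
rng = np.random.default_rng(9)
fs, T = 250, 10.0                                  # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t - 0.8) + 0.5 * rng.normal(size=t.size)   # lagged copy

dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
pli = np.abs(np.mean(np.sign(np.sin(dphi))))
print("PLI:", pli.round(3))                        # close to 1 for a consistent nonzero lag
```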
Affiliation(s)
- Sou Nobukawa
- Department of Computer Science, Chiba Institute of Technology, Narashino, Japan
- Teruya Yamanishi
- AI & IoT Center, Department of Management Information Science, Fukui University of Technology, Fukui, Japan
- Shinya Kasakawa
- AI & IoT Center, Department of Management Information Science, Fukui University of Technology, Fukui, Japan
- Haruhiko Nishimura
- Graduate School of Applied Informatics, University of Hyogo, Kobe, Japan
- Mitsuru Kikuchi
- Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Department of Psychiatry & Behavioral Science, Kanazawa University, Ishikawa, Japan
- Tetsuya Takahashi
- Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Department of Neuropsychiatry, University of Fukui, Yoshida, Japan
15
Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLoS Comput Biol 2020; 16:e1007655. [PMID: 32053594] [PMCID: PMC7043794] [DOI: 10.1371/journal.pcbi.1007655]
Abstract
Following a stimulus, the neural response typically varies strongly in time and across neurons before settling to a steady state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are distinguished simply by the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a subclass of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
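The distinction drawn above between weakly and strongly amplifying networks can be checked directly: for linear dynamics x' = -x + Wx, transient amplification of some input requires the largest eigenvalue of the symmetric part (W + W^T)/2 to exceed one. The sketch below uses a toy non-normal (strictly triangular) matrix; its construction and parameters are illustrative assumptions.

```python
import numpy as np

# Sketch: a nilpotent (strictly upper-triangular) W gives stable linear dynamics
# x' = -x + W x (all Jacobian eigenvalues equal -1), yet specific inputs are
# transiently amplified whenever the symmetric part of W has an eigenvalue > 1.
rng = np.random.default_rng(10)
N, dt, steps = 200, 0.01, 2000
W = np.triu(rng.normal(0, 2.5 / np.sqrt(N), (N, N)), k=1)

sym_vals, sym_vecs = np.linalg.eigh((W + W.T) / 2)
print("largest eigenvalue of the symmetric part:", sym_vals[-1].round(2), "(amplifying if > 1)")

x = sym_vecs[:, -1]                          # most strongly amplified initial condition
norms = np.empty(steps)
for t in range(steps):
    x = x + dt * (-x + W @ x)
    norms[t] = np.linalg.norm(x)
print("peak ||x(t)|| / ||x(0)||:", norms.max().round(2))   # > 1 means transient amplification
```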
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
16
Zaszczynska A, Sajkiewicz P, Gradys A. Piezoelectric Scaffolds as Smart Materials for Neural Tissue Engineering. Polymers (Basel) 2020; 12:E161. [PMID: 31936240] [PMCID: PMC7022784] [DOI: 10.3390/polym12010161]
Abstract
Injury to the central or peripheral nervous system leads to the loss of cognitive and/or sensorimotor capabilities and still lacks an effective treatment. Tissue engineering in the post-injury brain represents a promising option for cellular replacement and rescue, providing a cell scaffold for either transplanted or resident cells. Tissue engineering relies on scaffolds for supporting cell differentiation and growth, with recent emphasis on stimuli-responsive scaffolds, sometimes called smart scaffolds. One representative of this material group is piezoelectric scaffolds, which are able to generate electrical charges under mechanical stimulation, creating a real prospect for using such scaffolds in non-invasive therapy of neural tissue. This paper summarizes the recent knowledge on piezoelectric materials used for tissue engineering, especially neural tissue engineering. The materials most used in tissue engineering strategies are reported together with the main achievements, challenges, and future needs for research and actual therapies. This review thus provides a compilation of the most relevant results and strategies and serves as a starting point for novel research pathways on the most relevant and challenging open questions.
Affiliation(s)
- Angelika Zaszczynska
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5b St., 02-106 Warsaw, Poland
- Paweł Sajkiewicz
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5b St., 02-106 Warsaw, Poland
- Arkadiusz Gradys
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5b St., 02-106 Warsaw, Poland
17
Gudowska-Nowak E, Nowak MA, Chialvo DR, Ochab JK, Tarnowski W. From Synaptic Interactions to Collective Dynamics in Random Neuronal Networks Models: Critical Role of Eigenvectors and Transient Behavior. Neural Comput 2019; 32:395-423. [PMID: 31835001] [DOI: 10.1162/neco_a_01253]
Abstract
The study of neuronal interactions is at the center of several big collaborative neuroscience projects (including the Human Connectome Project, the Blue Brain Project, and the Brainome) that attempt to obtain a detailed map of the entire brain. Under certain constraints, mathematical theory can advance predictions of the expected neural dynamics based solely on the statistical properties of the synaptic interaction matrix. This work explores the application of free random variables to the study of large synaptic interaction matrices. Besides recovering in a straightforward way known results on eigenspectra in the types of neural network models proposed by Rajan and Abbott (2006), we extend them to heavy-tailed distributions of interactions. More importantly, we analytically derive the behavior of eigenvector overlaps, which determine the stability of the spectra. We observe that upon imposing the neuronal excitation/inhibition balance, the stability of the spectra dramatically decreases due to the strong nonorthogonality of the associated eigenvectors, even though the eigenvalues themselves remain unchanged. This leads us to the conclusion that understanding the temporal evolution of asymmetric neural networks requires considering the entangled dynamics of both eigenvectors and eigenvalues, which might bear consequences for learning and memory processes in these models. Considering the success of free random variables theory in a wide variety of disciplines, we hope that the results presented here foster additional applications of these ideas in the area of brain sciences.
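A quick numerical illustration of the eigenvector effect described above: imposing balanced excitatory/inhibitory column means on an i.i.d. random matrix leaves the eigenvalues essentially unchanged but strongly increases the (Chalker-Mehlig) diagonal eigenvector overlaps; matrix size and mean strengths are illustrative assumptions.

```python
import numpy as np

# Sketch: diagonal eigenvector overlaps O_nn = (sum_i |l_n,i|^2)(sum_i |r_n,i|^2)
# for an i.i.d. Ginibre-like matrix versus the same matrix with balanced E/I
# column means added (a Rajan-Abbott-type structure). Larger overlaps mean
# stronger nonorthogonality of eigenvectors and reduced spectral stability.
rng = np.random.default_rng(11)
N, f = 400, 0.5                                    # size, excitatory fraction
J = rng.normal(0, 1 / np.sqrt(N), (N, N))
mu = np.r_[np.full(int(f * N), 2.0 / np.sqrt(N)),
           np.full(N - int(f * N), -2.0 / np.sqrt(N))]   # balanced column means
J_ei = J + np.outer(np.ones(N), mu)

def mean_diag_overlap(M):
    vals, R = np.linalg.eig(M)
    Linv = np.linalg.inv(R)                        # rows are the left eigenvectors
    O = (np.abs(Linv) ** 2).sum(axis=1) * (np.abs(R) ** 2).sum(axis=0)
    return O.mean()

print("mean O_nn, i.i.d. matrix: ", round(mean_diag_overlap(J), 1))
print("mean O_nn, balanced E/I:  ", round(mean_diag_overlap(J_ei), 1))
```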
Affiliation(s)
- E Gudowska-Nowak
- Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center, Jagiellonian University, PL 30-348 Kraków, Poland
- M A Nowak
- Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center, Jagiellonian University, PL 30-348 Kraków, Poland
- D R Chialvo
- Center for Complex Systems and Brain Sciences, Escuela de Ciencia y Tecnología, Universidad Nacional de San Martín, San Martín, 1650 Buenos Aires, Argentina and Consejo Nacional de Investigaciones Científicas y Tecnológicas, 1650 Buenos Aires, Argentina
- J K Ochab
- Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center, Jagiellonian University, PL 30-348 Kraków, Poland
- W Tarnowski
- Marian Smoluchowski Institute of Physics and Mark Kac Complex Systems Research Center, Jagiellonian University, PL 30-348 Kraków, Poland
18
Nobukawa S, Nishimura H, Yamanishi T. Temporal-specific complexity of spiking patterns in spontaneous activity induced by a dual complex network structure. Sci Rep 2019; 9:12749. [PMID: 31484990] [PMCID: PMC6726653] [DOI: 10.1038/s41598-019-49286-8]
Abstract
Temporal fluctuation of neural activity in the brain has an important function in optimal information processing. Spontaneous activity is a source of such fluctuation. The distribution of excitatory postsynaptic potentials (EPSPs) between cortical pyramidal neurons can follow a log-normal distribution. Recent studies have shown that networks connected by weak synapses exhibit characteristics of a random network, whereas networks connected by strong synapses have small-world characteristics of small path lengths and large cluster coefficients. To investigate the relationship between the temporal complexity of spontaneous activity and this structural duality of synaptic connections, we carried out a simulation study using a leaky integrate-and-fire spiking neural network with a log-normal synaptic weight distribution for the EPSPs and a duality of synaptic connectivity depending on synaptic weight. We conducted multiscale entropy analysis of the temporal spiking activity. Our simulation demonstrated that, when the strong synaptic connections approach a small-world network, specific spiking patterns arise during irregular spatio-temporal spiking activity, and the complexity at large temporal scales (i.e., slow frequencies) is enhanced. Moreover, we confirmed through a surrogate data analysis that the slow temporal dynamics reflect a deterministic process in the spiking neural networks. This modelling approach may improve the understanding of the spatio-temporal complex neural activity in the brain.
Affiliation(s)
- Sou Nobukawa
- Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Haruhiko Nishimura
- Graduate School of Applied Informatics, University of Hyogo, 7-1-28 Chuo-ku, Kobe, Hyogo, 650-8588, Japan
- Teruya Yamanishi
- AI & IoT Center, Department of Management and Information Sciences, Fukui University of Technology, 3-6-1 Gakuen, Fukui, 910-8505, Japan
19
Mastrogiuseppe F, Ostojic S. A Geometrical Analysis of Global Stability in Trained Feedback Networks. Neural Comput 2019; 31:1139-1182. [DOI: 10.1162/neco_a_01187]
Abstract
Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output to a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks, which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions, which imply different stability properties. Different classes are characterized in terms of the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that such an approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and the global stability properties of the target solution, and thus predicts training performance.
Affiliation(s)
- Francesca Mastrogiuseppe
- Laboratoire de Neurosciences Cognitives et Computationelles, INSERM U960, and Laboratoire de Physique Statistique, CNRS UMR 8550, Ecole Normale Supérieure–PSL Research University, Paris 75005, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationelles, INSERM U960, Ecole Normale Supérieure–PSL Research University, Paris 75005, France
20
Beiran M, Ostojic S. Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks. PLoS Comput Biol 2019; 15:e1006893. [PMID: 30897092] [PMCID: PMC6445477] [DOI: 10.1371/journal.pcbi.1006893]
Abstract
Neural activity in awake behaving animals exhibits a vast range of timescales that can be several fold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has however not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
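The contrast described above can be probed with a small simulation: a random network whose units receive synaptically filtered input, with the activity autocorrelation time measured for several synaptic time constants. The model form and parameters below are illustrative assumptions (adaptation would enter analogously as a slow negative feedback variable).

```python
import numpy as np

# Sketch: rate network with synaptic filtering,
#   x' = -x + J s,   tau_s * s' = -s + tanh(x),
# in the fluctuating regime; the activity autocorrelation time is estimated for
# several synaptic time constants tau_s.
rng = np.random.default_rng(12)
N, g, dt, T = 300, 3.0, 0.02, 400.0
J = rng.normal(0, g / np.sqrt(N), (N, N))

def autocorr_time(tau_s):
    x = rng.normal(size=N)
    s = np.tanh(x)
    steps, burn = int(T / dt), int(T / dt) // 3
    rec = np.empty(steps - burn)
    for t in range(steps):
        x += dt * (-x + J @ s)
        s += dt / tau_s * (-s + np.tanh(x))
        if t >= burn:
            rec[t - burn] = x[0]                   # track one unit
    r = rec - rec.mean()
    ac = np.correlate(r, r, mode="full")[r.size - 1:]
    ac /= ac[0]
    return dt * np.argmax(ac < np.exp(-1))         # lag at which autocorrelation hits 1/e

for tau_s in (0.5, 2.0, 8.0):
    print(f"tau_s = {tau_s}: autocorrelation time ~ {autocorr_time(tau_s):.2f}")
```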
Affiliation(s)
- Manuel Beiran
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Group for Neural Theory, Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
21
Kim CM, Chow CC. Learning recurrent dynamics in spiking networks. eLife 2018; 7:37124. [PMID: 30234488] [PMCID: PMC6195349] [DOI: 10.7554/elife.37124]
Abstract
The spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can be trained to perform various tasks, the possible range of recurrent dynamics that emerges after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize irregular spiking activity in a network of excitatory and inhibitory neurons respecting Dale's law, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
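To make the training rule concrete, here is a standalone sketch of the recursive least squares (RLS) update that the method above applies to the recurrent connectivity; in this simplified version (an assumption for illustration) a single readout vector learns a teacher signal from surrogate activity vectors, rather than a full spiking network.

```python
import numpy as np

# Sketch: recursive least squares (RLS). A readout w is updated online so that
# w . r(k) tracks a teacher signal; P maintains a running inverse correlation
# estimate via the Sherman-Morrison formula.
rng = np.random.default_rng(13)
N, steps = 100, 2000
w_true = rng.normal(size=N) / np.sqrt(N)
R = np.tanh(rng.normal(size=(steps, N)))      # surrogate "activity" vectors
target = R @ w_true                           # teacher signal

P = np.eye(N)                                 # running inverse correlation estimate
w = np.zeros(N)
errs = np.empty(steps)
for k in range(steps):
    r = R[k]
    e = w @ r - target[k]                     # error before the update
    Pr = P @ r
    P -= np.outer(Pr, Pr) / (1.0 + r @ Pr)    # Sherman-Morrison update of P
    w -= e * (P @ r)                          # RLS weight update
    errs[k] = abs(e)
print("mean |error| over the first 100 steps:", errs[:100].mean().round(4))
print("mean |error| over the last 100 steps: ", errs[-100:].mean().round(6))
```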
Affiliation(s)
- Christopher M Kim
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, United States
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, United States