1. Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles. Biosystems 2023; 223:104813. PMID: 36460172. DOI: 10.1016/j.biosystems.2022.104813.
Abstract
Neural systems are networks, and comparisons between multiple networks are a prevalent task in many research scenarios. In this study, we construct a statistical test for the comparison of matrices representing pairwise aspects of neural networks, in particular the correlation between spiking activity and connectivity. The "eigenangle test" quantifies the similarity of two matrices by the angles between their ranked eigenvectors. We calibrate the behavior of the test for use with correlation matrices using stochastic models of correlated spiking activity, and demonstrate how it compares to classical two-sample tests such as the Kolmogorov-Smirnov distance: unlike those, it also evaluates the structural aspects of pairwise measures. Furthermore, the principle of the eigenangle test can be applied to compare the similarity of the adjacency matrices of certain types of networks. The approach can thus be used to quantitatively explore the relationship between connectivity and activity with the same metric. By applying the eigenangle test to the connectivity matrices and correlation matrices of a random balanced network model before and after a specific synaptic rewiring intervention, we gauge the influence of connectivity features on the correlated activity. Potential applications of the eigenangle test include simulation experiments, model validation, and data analysis.
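The core quantity of the eigenangle test can be sketched in a few lines of numpy. This is a minimal illustration of the angles between rank-matched eigenvectors, not the authors' implementation: the published test additionally weights the angles by the eigenvalues and calibrates the statistic against a null distribution, and all names below are ours.

```python
import numpy as np

def eigenangles(A, B):
    """Angles between the rank-matched eigenvectors of two symmetric matrices.

    Eigenvectors are ranked by descending eigenvalue; the angle between the
    i-th pair is arccos(|v_i . w_i|) (absolute value, since eigenvector
    sign is arbitrary). Small angles indicate similar matrix structure.
    """
    eva, vecA = np.linalg.eigh(A)                 # eigh sorts ascending
    evb, vecB = np.linalg.eigh(B)
    vecA = vecA[:, np.argsort(eva)[::-1]]         # re-rank: descending eigenvalue
    vecB = vecB[:, np.argsort(evb)[::-1]]
    cosines = np.abs(np.sum(vecA * vecB, axis=0)) # |v_i . w_i| for each rank i
    return np.arccos(np.clip(cosines, -1.0, 1.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 500))                    # 50 "units", 500 time samples
C1 = np.corrcoef(X)                               # correlation matrix of the data
C2 = np.corrcoef(X + 0.1 * rng.normal(size=X.shape))  # slightly perturbed copy
angles = eigenangles(C1, C2)                      # mostly small angles: similar matrices
```

Identical matrices give an all-zero angle vector, and unrelated matrices give angles scattered toward π/2; the published test turns this spread into a p-value.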
2. Time-convergent random matrices from mean-field pinned interacting eigenvalues. J Appl Probab 2022. DOI: 10.1017/jpr.2022.53.
Abstract
We study a multivariate system over a finite lifespan represented by a Hermitian-valued random matrix process whose eigenvalues (i) interact in a mean-field way and (ii) converge to their weighted ensemble average at their terminal time. We prove that such a system is guaranteed to converge in time to the identity matrix scaled by a Gaussian random variable whose variance is inversely proportional to the dimension of the matrix. As the size of the system grows asymptotically, the eigenvalues tend to mutually independent diffusions that converge to zero at their terminal time, the Brownian bridge being the archetypal example. Unlike commonly studied random matrices with non-colliding eigenvalues, the eigenvalues of the system proposed here may collide. We provide the dynamics of the eigenvalue gap matrix, a random skew-symmetric matrix that converges in time to the $\textbf{0}$ matrix. Our framework can be applied to produce mean-field interacting counterparts of stochastic quantum reduction models for which the convergence points are determined with respect to the average state of the entire composite system.
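The Brownian bridge named above as the archetypal terminally pinned diffusion is easy to simulate directly from a Brownian motion via the representation B_t = W_t − (t/T) W_T. A minimal numerical sketch (ours, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T = 1000, 500, 1.0
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)

# Standard Brownian motions W_t built from independent Gaussian increments.
dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Pin each path to 0 at the terminal time: B_t = W_t - (t/T) * W_T.
# Every path starts and ends at zero; Var(B_t) = t (1 - t/T) in between.
B = W - (t / T) * W[:, -1:]
```

This is the independent-diffusion limit the abstract describes; the finite-dimensional system adds the mean-field interaction between the eigenvalue paths.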
3. Li J, Shew WL. Tuning network dynamics from criticality to an asynchronous state. PLoS Comput Biol 2020; 16:e1008268. PMID: 32986705. PMCID: PMC7544040. DOI: 10.1371/journal.pcbi.1008268.
Abstract
According to many experimental observations, neurons in cerebral cortex tend to operate in an asynchronous regime, firing independently of each other. In contrast, many other experimental observations reveal cortical population firing dynamics that are relatively coordinated and occasionally synchronous. These discrepant observations have naturally led to competing hypotheses. A commonly hypothesized explanation of asynchronous firing is that excitatory and inhibitory synaptic inputs are precisely correlated, nearly canceling each other, sometimes referred to as 'balanced' excitation and inhibition. On the other hand, the 'criticality' hypothesis posits an explanation of the more coordinated state that also requires a certain balance of excitatory and inhibitory interactions. Both hypotheses claim the same qualitative mechanism: properly balanced excitation and inhibition. Thus, a natural question arises: how are asynchronous population dynamics and critical dynamics related, and how do they differ? Here we propose an answer to this question based on investigation of a simple, network-level computational model. We show that the strength of inhibitory synapses relative to excitatory synapses can be tuned from weak to strong to generate a family of models that spans a continuum from critical dynamics to asynchronous dynamics. Our results demonstrate that the coordinated dynamics of criticality and asynchronous dynamics can be generated by the same neural system if excitatory and inhibitory synapses are tuned appropriately.
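A toy illustration of this tuning, under simplifying assumptions of ours rather than the authors' published model: an all-to-all probabilistic binary network in which a single parameter g scales inhibition relative to excitation. With weak inhibition the recurrent gain sits near one, so population-rate fluctuations are large and slowly decaying; strong inhibition cancels the recurrent drive, leaving small, rapidly decorrelating fluctuations.

```python
import numpy as np

def population_rate(g, n=200, frac_exc=0.8, w_exc=0.9, p_ext=0.02,
                    steps=2200, burn_in=200, seed=0):
    """Toy probabilistic binary network (not the paper's exact model).

    Every neuron receives the same all-to-all drive and spikes each step
    with probability clip(drive, 0, 1). g scales inhibitory weights
    relative to excitatory ones.
    """
    rng = np.random.default_rng(seed)
    n_exc = int(frac_exc * n)
    weights = np.full(n, w_exc / n_exc)           # excitatory outgoing weights
    weights[n_exc:] = -g * w_exc / (n - n_exc)    # inhibitory, scaled by g
    spikes = rng.random(n) < p_ext
    rate = np.empty(steps)
    for step in range(steps):
        drive = weights @ spikes + p_ext          # recurrent + external input
        spikes = rng.random(n) < np.clip(drive, 0.0, 1.0)
        rate[step] = spikes.mean()
    return rate[burn_in:]                         # discard the start-up transient

def lag1_autocorr(x):
    """Lag-1 autocorrelation: high for slow, coordinated rate fluctuations."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

weak = population_rate(g=0.0)    # recurrent gain near 1: coordinated regime
strong = population_rate(g=1.0)  # inhibition cancels excitation: asynchronous
```

Sweeping g between these extremes traces out the continuum the abstract describes; the paper quantifies the two ends with avalanche statistics and pairwise correlations rather than this crude autocorrelation measure.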
Affiliation(s)
- Jingwen Li
- Department of Physics, University of Arkansas, Fayetteville, Arkansas, United States of America
- Woodrow L. Shew
- Department of Physics, University of Arkansas, Fayetteville, Arkansas, United States of America
4. Sadeh S, Silver RA, Mrsic-Flogel TD, Muir DR. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population. J Neurosci 2017; 37:12050-12067. PMID: 29074575. PMCID: PMC5719979. DOI: 10.1523/jneurosci.0963-17.2017.
Abstract
Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms.

SIGNIFICANCE STATEMENT: Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an "inhibition-stabilized network" or "ISN" regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question.
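The paradoxical response that defines an ISN is easy to see in the reduced two-population linear rate model (in the spirit of Tsodyks et al., 1997) that the abstract contrasts with large networks. Parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def steady_state(w_ee, w_ei, w_ie, w_ii, h):
    """Fixed point of the linear rate model  dr/dt = -r + W r + h,
    with W = [[w_ee, -w_ei], [w_ie, -w_ii]]:  r* = (I - W)^{-1} h."""
    W = np.array([[w_ee, -w_ei], [w_ie, -w_ii]])
    return np.linalg.solve(np.eye(2) - W, h)

h0 = np.array([1.0, 1.0])
h1 = np.array([1.0, 1.5])   # extra excitatory drive to the I population

# ISN: the excitatory subnetwork alone is unstable (w_ee > 1) but the full
# circuit is stabilized by inhibition. Driving I *lowers* its steady-state rate.
isn_before = steady_state(2.0, 1.0, 2.0, 0.5, h0)
isn_after  = steady_state(2.0, 1.0, 2.0, 0.5, h1)

# Non-ISN: weak recurrent excitation (w_ee < 1). Driving I raises its rate.
non_before = steady_state(0.5, 1.0, 2.0, 0.5, h0)
non_after  = steady_state(0.5, 1.0, 2.0, 0.5, h1)
```

Algebraically, the change in the inhibitory rate per unit of added inhibitory drive is (1 − w_ee)/det(I − W), which is negative exactly when w_ee > 1 and the full circuit is stable; the paper's point is that this clean reduced-model signature becomes unreliable when only a small fraction of a realistic inhibitory population is perturbed.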
Affiliation(s)
- Sadra Sadeh
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom
- R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom
5.
Abstract
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
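The fixed-point responses that the feedforward networks are trained to reproduce can be obtained by relaxing the recurrent dynamics to convergence. A generic sketch with a tanh rate network of our choosing, not the specific architecture used in the paper:

```python
import numpy as np

def recurrent_fixed_point(W, h, tau=1.0, dt=0.1, tol=1e-9, max_iter=100_000):
    """Relax  tau dr/dt = -r + tanh(W r + h)  to a fixed point by Euler
    integration. The iteration count is input-dependent, and the loop may
    never terminate for oscillatory or unstable networks -- the evaluation
    problem the abstract describes."""
    r = np.zeros(W.shape[0])
    for _ in range(max_iter):
        dr = (-r + np.tanh(W @ r + h)) * (dt / tau)
        r = r + dr
        if np.max(np.abs(dr)) < tol:
            return r
    raise RuntimeError("no fixed point reached (oscillation or instability?)")

rng = np.random.default_rng(2)
n = 20
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # weak coupling: contractive
h = rng.normal(size=n)
r_star = recurrent_fixed_point(W, h)
# r_star satisfies r = tanh(W r + h). A two-layer feedforward network would be
# trained on (h, r_star) pairs to map inputs directly to these fixed points,
# replacing the variable-length relaxation with a single deterministic pass.
```

Generating many such (input, fixed point) pairs yields the training set for the feedforward approximation; the trained network is then evaluated in constant, input-independent time.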
Affiliation(s)
- Dylan R Muir
- Biozentrum, University of Basel, Basel 4056, Switzerland
6. Barreiro AK, Kutz JN, Shlizerman E. Symmetries Constrain Dynamics in a Family of Balanced Neural Networks. J Math Neurosci 2017; 7:10. PMID: 29019105. PMCID: PMC5635020. DOI: 10.1186/s13408-017-0052-6.
Abstract
We examine a family of random firing-rate neural networks in which we enforce the neurobiological constraint of Dale's Law: each neuron makes either exclusively excitatory or exclusively inhibitory connections onto its postsynaptic targets. We find that this constrained system may be described as a perturbation from a system with nontrivial symmetries. We analyze the symmetric system using the tools of equivariant bifurcation theory and demonstrate that the symmetry-implied structures remain evident in the perturbed system. In comparison, spectral characteristics of the network coupling matrix are relatively uninformative about the behavior of the constrained system.
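The sign structure imposed by Dale's Law is easy to make explicit numerically. This sketch (our construction, not the paper's specific family) constrains each column, i.e. each neuron's outgoing weights, to a single sign and computes the coupling-matrix spectrum the abstract argues is relatively uninformative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, frac_exc = 400, 0.5
n_exc = int(frac_exc * n)

# Dale's Law: column j holds neuron j's outgoing weights, so every column
# must carry a single sign. Start from positive magnitudes, then flip the
# inhibitory columns.
J = np.abs(rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)))
J[:, n_exc:] *= -1.0                    # the last columns are inhibitory neurons

# With equal E/I fractions and equal weight magnitudes, the mean input to
# each neuron roughly cancels ("balanced"), and the eigenvalues fill a
# bounded region of the complex plane.
eigvals = np.linalg.eigvals(J)
```

One can then contrast such eigenvalue clouds with the actual dynamics of the constrained rate network, which is the comparison the abstract draws via equivariant bifurcation theory.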
Affiliation(s)
- Andrea K. Barreiro
- Department of Mathematics, Southern Methodist University, POB 750156, Dallas, TX 75275 USA
- J. Nathan Kutz
- Department of Applied Mathematics, University of Washington, Box 353925, Seattle, WA 98195-3925 USA
- Eli Shlizerman
- Department of Applied Mathematics, University of Washington, Box 353925, Seattle, WA 98195-3925 USA
7.
Abstract
The techniques of random matrices have played an important role in many machine learning models. In this letter, we present a new method to study tail inequalities for sums of random matrices. Unlike previous work (Ahlswede & Winter, 2002; Tropp, 2012; Hsu, Kakade, & Zhang, 2012), our tail results are based on the largest singular value (LSV) and are independent of the matrix dimension. Since the LSV operation and the expectation are noncommutative, we introduce a diagonalization method to convert the LSV operation into the trace operation of an infinite-dimensional diagonal matrix. In this way, we obtain another version of Laplace-transform bounds and then achieve the LSV-based tail inequalities for sums of random matrices.
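The quantity these inequalities bound, the largest singular value of a sum of independent random matrices, can be computed directly. A numerical sketch with i.i.d. uniform summands of our choosing (the letter's bounds apply to far more general summands):

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 50, 200   # matrix dimension and number of summands

# k independent, centered random matrices with bounded entries.
Xs = rng.uniform(-1.0, 1.0, size=(k, d, d))
S = Xs.sum(axis=0)

# Largest singular value (spectral norm) of the sum -- the random quantity
# whose upper tail the letter's inequalities control.
lsv = np.linalg.norm(S, ord=2)
```

Repeating this over many draws of `S` gives an empirical tail against which dimension-free bounds of this type can be sanity-checked.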
Affiliation(s)
- Chao Zhang
- School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning, 116024, P.R.C.
- Lei Du
- School of Mathematical Sciences, Dalian University of Technology, Dalian, Liaoning, 116024, P.R.C.
- Dacheng Tao
- Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia.
8. Aljadeff J, Renfrew D, Vegué M, Sharpee TO. Low-dimensional dynamics of structured random networks. Phys Rev E 2016; 93:022302. PMID: 26986347. DOI: 10.1103/physreve.93.022302.
Abstract
Using a generalized random recurrent neural network model, and by extending our recently developed mean-field approach [J. Aljadeff, M. Stern, and T. Sharpee, Phys. Rev. Lett. 114, 088101 (2015)], we study the relationship between the network connectivity structure and its low-dimensional dynamics. Each connection in the network is a random number with mean 0 and variance that depends on pre- and postsynaptic neurons through a sufficiently smooth function g of their identities. We find that these networks undergo a phase transition from a silent to a chaotic state at a critical point we derive as a function of g. Above the critical point, although unit activation levels are chaotic, their autocorrelation functions are restricted to a low-dimensional subspace. This provides a direct link between the network's structure and some of its functional characteristics. We discuss example applications of the general results to neuroscience where we derive the support of the spectrum of connectivity matrices with heterogeneous and possibly correlated degree distributions, and to ecology where we study the stability of the cascade model for food web structure.
Affiliation(s)
- Johnatan Aljadeff
- Department of Neurobiology, University of Chicago, Chicago, Illinois, USA; Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA
- David Renfrew
- Department of Mathematics, University of California Los Angeles, Los Angeles, California, USA
- Marina Vegué
- Centre de Recerca Matemàtica, Campus de Bellaterra, Barcelona, Spain; Departament de Matemàtiques, Universitat Politècnica de Catalunya, Barcelona, Spain
- Tatyana O Sharpee
- Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, La Jolla, California, USA