51
Curto C, Morrison K. Relating network connectivity to dynamics: opportunities and challenges for theoretical neuroscience. Curr Opin Neurobiol 2019; 58:11-20. PMID: 31319287. DOI: 10.1016/j.conb.2019.06.003.
Abstract
We review recent work relating network connectivity to the dynamics of neural activity. While concepts stemming from network science provide a valuable starting point, the interpretation of graph-theoretic structures and measures can depend strongly on the dynamics associated with the network. Properties that are quite meaningful for linear dynamics, such as random walk and network flow models, may be of limited relevance in the neuroscience setting. Theoretical and computational neuroscience are playing a vital role in understanding the relationship between network connectivity and the nonlinear dynamics of neural networks.
Affiliation(s)
- Carina Curto
- The Pennsylvania State University, PA 16802, USA
- Katherine Morrison
- School of Mathematical Sciences, University of Northern Colorado, Greeley, CO 80639, USA
52
Radosevic M, Willumsen A, Petersen PC, Lindén H, Vestergaard M, Berg RW. Decoupling of timescales reveals sparse convergent CPG network in the adult spinal cord. Nat Commun 2019; 10:2937. PMID: 31270315. PMCID: PMC6610135. DOI: 10.1038/s41467-019-10822-9.
Abstract
During the generation of rhythmic movements, most spinal neurons receive an oscillatory synaptic drive. The neuronal architecture underlying this drive is unknown, and the corresponding network size and sparseness have not yet been addressed. If the input originates from a small central pattern generator (CPG) with dense divergent connectivity, it will induce correlated input to all receiving neurons, while sparse convergent wiring will induce a weak correlation, if any. Here, we use pairwise recordings of spinal neurons to measure synaptic correlations and thus infer the wiring architecture qualitatively. A strong correlation on a slow timescale implies functional relatedness and a common source, which will also cause correlation on a fast timescale due to shared synaptic connections. However, we consistently find marginal coupling between slow and fast correlations regardless of neuronal identity. This suggests either sparse convergent connectivity or a CPG network with recurrent inhibition that actively decorrelates common input.
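The abstract's qualitative argument, that a shared slow drive correlates spike counts on slow timescales while only shared synapses correlate them on fast timescales, can be illustrated with a toy two-neuron simulation (all rates, timescales, and bin widths below are invented for illustration and are not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: two neurons share a slow 1 Hz oscillatory drive, as from
# a common CPG, but their fast synaptic fluctuations are independent,
# mimicking sparse convergent wiring.
dt = 0.001                               # 1 ms time step
t = np.arange(0.0, 200.0, dt)            # 200 s of simulated time
drive = 0.5 * (1.0 + np.sin(2.0 * np.pi * 1.0 * t))   # shared slow drive

def spike_train():
    rate = 20.0 * drive                  # Hz, rate modulated by the drive
    return rng.random(t.size) < rate * dt  # Bernoulli spikes per time step

s1, s2 = spike_train(), spike_train()

def binned_corr(a, b, bin_s):
    """Pearson correlation of spike counts in bins of width bin_s seconds."""
    n = int(bin_s / dt)
    ca = a[: a.size // n * n].reshape(-1, n).sum(axis=1)
    cb = b[: b.size // n * n].reshape(-1, n).sum(axis=1)
    return np.corrcoef(ca, cb)[0, 1]

r_slow = binned_corr(s1, s2, 0.5)        # slow timescale: shared drive visible
r_fast = binned_corr(s1, s2, 0.005)      # fast timescale: little correlation
print(r_slow, r_fast)                    # r_slow large, r_fast near zero
```

With sparse convergent wiring, as mimicked here, the fast-timescale correlation stays near zero even though the slow-timescale correlation is large.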
Affiliation(s)
- Marija Radosevic
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
- Alex Willumsen
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
- Peter C Petersen
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
- Neuroscience Institute, New York University, New York, NY, 10016, USA
- Henrik Lindén
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
- Mikkel Vestergaard
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
- Department of Neuroscience, Max Delbrück Center for Molecular Medicine (MDC), 13125, Berlin-Buch, Germany
- Rune W Berg
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3, DK-2200, Copenhagen N, Denmark
53
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. PMID: 31299044. PMCID: PMC6655892. DOI: 10.1371/journal.pcbi.1006446.
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
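The notion of dimensionality used here is commonly quantified by the participation ratio of the covariance eigenvalues, PR = (Σλ_i)² / Σλ_i². A minimal sketch on synthetic covariance matrices (the matrices are illustrative assumptions, not the paper's network model):

```python
import numpy as np

def participation_ratio(cov):
    """Effective dimensionality (sum λ_i)^2 / sum λ_i^2 of a covariance matrix."""
    lam = np.linalg.eigvalsh(cov)
    return lam.sum() ** 2 / (lam ** 2).sum()

N = 100
# Independent neurons: activity explores all N modes equally.
d_high = participation_ratio(np.eye(N))

# Strong uniform pairwise correlation c: one shared mode dominates.
c = 0.9
cov_low = c * np.ones((N, N)) + (1 - c) * np.eye(N)
d_low = participation_ratio(cov_low)

print(d_high, d_low)   # 100.0 vs ≈ 1.23
```

The same N neurons can thus have dimensionality anywhere from 1 to N depending on how coordinated their activity is, which is why PR is a more informative summary than the average pairwise correlation alone.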
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
54
Abstract
Parallel recordings of motor cortex show weak pairwise correlations on average but a wide dispersion across cells. This observation runs counter to the prevailing notion that optimal information processing requires networks to operate at a critical point, entailing strong correlations. We here reconcile this apparent contradiction by showing that the observed structure of correlations is consistent with network models that operate close to a critical point of a different nature than previously considered: dynamics that is dominated by inhibition yet nearly unstable due to heterogeneous connectivity. Our findings provide a different perspective on criticality in neural systems: network topology and heterogeneity endow the brain with two complementary substrates for critical dynamics of largely different complexities.

Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which, in models, has been found to be suboptimal for computational performance. However, here we show the opposite: the large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
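The mechanism described, inhibition-dominated dynamics pushed toward instability by heterogeneous connectivity, can be sketched with a random-matrix toy model: for linearized rate dynamics dx/dt = -x + Wx, the distance to the critical point is set by the leading eigenvalue of W, whose magnitude grows with the heterogeneity of the weights (all parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300

def leading_eigenvalue(sigma, mean=-0.2):
    """Max real part of the eigenvalues of a random coupling matrix with an
    inhibition-dominated (negative) mean and weight heterogeneity sigma.
    For rate dynamics dx/dt = -x + W x, instability sets in when this
    value exceeds 1. The mean and sigma values used are illustrative."""
    W = rng.normal(mean / N, sigma / np.sqrt(N), size=(N, N))
    return np.linalg.eigvals(W).real.max()

weak, strong = leading_eigenvalue(0.4), leading_eigenvalue(1.5)
print(weak, strong)   # heterogeneity alone pushes the network toward instability
```

By the circular law, the spectral radius scales with sigma, so increasing the dispersion of weights, with the mean held inhibition-dominated, moves the leading eigenvalue toward and past the critical value 1.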
55
Zanoci C, Dehghani N, Tegmark M. Ensemble inhibition and excitation in the human cortex: An Ising-model analysis with uncertainties. Phys Rev E 2019; 99:032408. PMID: 30999501. DOI: 10.1103/physreve.99.032408.
Abstract
The pairwise maximum entropy model, also known as the Ising model, has been widely used to analyze the collective activity of neurons. However, controversy persists in the literature about seemingly inconsistent findings, whose significance is unclear due to lack of reliable error estimates. We therefore develop a method for accurately estimating parameter uncertainty based on random walks in parameter space using adaptive Markov-chain Monte Carlo after the convergence of the main optimization algorithm. We apply our method to the activity patterns of excitatory and inhibitory neurons recorded with multielectrode arrays in the human temporal cortex during the wake-sleep cycle. Our analysis shows that the Ising model captures neuronal collective behavior much better than the independent model during wakefulness, light sleep, and deep sleep when both excitatory (E) and inhibitory (I) neurons are modeled; ignoring the inhibitory effects of I neurons dramatically overestimates synchrony among E neurons. Furthermore, information-theoretic measures reveal that the Ising model explains about 80-95% of the correlations, depending on sleep state and neuron type. Thermodynamic measures show signatures of criticality, although we take this with a grain of salt as it may be merely a reflection of long-range neural correlations.
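For readers unfamiliar with the model class: a pairwise maximum-entropy (Ising) fit assigns P(s) ∝ exp(Σ h_i s_i + Σ J_ij s_i s_j) to binary activity patterns, and Markov-chain Monte Carlo is used when exact enumeration is infeasible. A minimal sketch of Metropolis sampling on a toy model (random h and J, not the authors' fitted parameters or their adaptive uncertainty-estimation procedure), validated against brute-force enumeration:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Tiny pairwise maximum-entropy (Ising) model over n binary neurons,
# P(s) ∝ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), with s_i in {-1, +1}.
# The fields h and couplings J are random placeholders, not fitted values.
n = 4
h = rng.normal(0.0, 0.3, n)
J = rng.normal(0.0, 0.3, (n, n))
J = np.triu(J, 1)
J = J + J.T                              # symmetric couplings, zero diagonal

def energy(s):
    return -(h @ s + 0.5 * s @ J @ s)

# Exact mean activity by brute-force enumeration (feasible only for tiny n).
states = np.array(list(product([-1.0, 1.0], repeat=n)))
p = np.exp([-energy(s) for s in states])
p /= p.sum()
exact_mean = p @ states

# Metropolis MCMC: the kind of random walk in state space used when
# enumeration is impossible.
s = np.ones(n)
acc = np.zeros(n)
n_steps = 50_000
for _ in range(n_steps):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (h[i] + J[i] @ s)  # energy change if s_i flips
    if rng.random() < np.exp(-dE):
        s[i] = -s[i]
    acc += s
mc_mean = acc / n_steps
print(np.abs(mc_mean - exact_mean).max())  # small, e.g. < 0.1
```

The paper's contribution is a layer on top of such sampling: running an adaptive random walk in parameter space, rather than state space, to put error bars on the fitted h and J.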
Affiliation(s)
- Cristian Zanoci
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Nima Dehghani
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Max Tegmark
- Department of Physics and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
56
Baker C, Ebsch C, Lampl I, Rosenbaum R. Correlated states in balanced neuronal networks. Phys Rev E 2019; 99:052414. PMID: 31212573. DOI: 10.1103/physreve.99.052414.
Abstract
Understanding the magnitude and structure of interneuronal correlations and their relationship to synaptic connectivity structure is an important and difficult problem in computational neuroscience. Early studies show that neuronal network models with excitatory-inhibitory balance naturally create very weak spike train correlations, defining the "asynchronous state." Later work showed that, under some connectivity structures, balanced networks can produce larger correlations between some neuron pairs, even when the average correlation is very small. All of these previous studies assume that the local network receives feedforward synaptic input from a population of uncorrelated spike trains. We show that when spike trains providing feedforward input are correlated, the downstream recurrent network produces much larger correlations. We provide an in-depth analysis of the resulting "correlated state" in balanced networks and show that, unlike the asynchronous state, it produces a tight excitatory-inhibitory balance consistent with in vivo cortical recordings.
Affiliation(s)
- Cody Baker
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Christopher Ebsch
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Ilan Lampl
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, 7610001, Israel
- Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana 46556, USA
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana 46556, USA
57
Abstract
Difficult tasks are commonly equated with complex tasks across many behaviors. Motor task difficulty is traditionally defined via Fitts' law, using evaluation criteria based on spatial movement constraints. Complexity of data is typically evaluated using non-linear computational approaches. In this project, we investigate the potential to evaluate task difficulty via behavioral (motor performance) complexity in a Fitts-type task. Use of non-linear approaches allows for inclusion of many features of motor actions that are not currently included in the Fitts-type paradigm. Our results indicate that tasks defined as more difficult (using Fitts movement indices of difficulty, IDs) are not associated with complex motor behaviors; rather, an inverse relationship exists between these two concepts. Use of non-linear techniques allowed for the detection of behavioral differences in motor performance over the entire action trajectory, in the presence of action errors, and among neutrally co-constrained effectors not detected using traditional Fitts-type analyses utilizing movement time measures. Our findings indicate that task difficulty may potentially be inferred using non-linear measures, particularly in ecological situations that do not obey the Fitts-type testing paradigm. While we are optimistic regarding these initial findings, further work is needed to assess the full potential of the approach.
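For reference, the Fitts-type quantities discussed here are the index of difficulty ID = log2(2D/W) for a movement of distance D to a target of width W, and the movement-time prediction MT = a + b·ID. A small sketch (the intercept and slope values are illustrative, not fitted):

```python
import math

def fitts_id(distance, width):
    """Fitts' index of difficulty (bits), ID = log2(2D / W)."""
    return math.log2(2 * distance / width)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) from Fitts' law, MT = a + b * ID.
    The intercept a and slope b here are placeholder values; in practice
    they are fitted to a subject's data by linear regression."""
    return a + b * fitts_id(distance, width)

print(fitts_id(0.2, 0.05))        # 3.0 bits
print(movement_time(0.2, 0.05))   # 0.55 s
```

The paper's point is that this single-number difficulty index ignores the full movement trajectory, which is what the non-linear complexity measures are meant to capture.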
58
Abstract
Brain connectivity and structure-function relationships are analyzed from a physical perspective, in place of common graph-theoretic and statistical approaches that overwhelmingly ignore the brain's physical structure and geometry. Field theory is used to define connectivity tensors in terms of bare and dressed propagators, and discretized representations are implemented that respect the physical nature and dimensionality of the quantities involved, retain the correct continuum limit, and enable diagrammatic analysis. Eigenfunction analysis is used to simultaneously characterize and probe patterns of brain connectivity and activity, in place of statistical or phenomenological patterns. Physically based measures that characterize the connectivity are then developed in coordinate and spectral domains; some of these generalize or rectify graph-theoretic measures to implement correct dimensionality and continuum limits, and some replace graph-theoretic quantities. Traditional graph-based measures are shown to be highly prone to artifacts introduced by discretization and thresholding, often because essential physical constraints have not been imposed, dimensionality has not been included, and/or distinctions between scalar, vector, and tensor quantities have not been considered. The present results can replace them in ways that converge correctly and measure properties of brain structure, rather than of its discretization, and thus potentially enable physical interpretation of the many phenomenological results in the literature. Geometric effects are shown to dominate in determining many brain properties, and care must be taken not to interpret geometric differences as differences in intrinsic neural connectivity. The results demonstrate the need to use systematic physical methods to analyze the brain and the potential of such methods to obtain new insights from data, make new predictions for experimental test, and go beyond phenomenological classification to dynamics and mechanisms.
Affiliation(s)
- P A Robinson
- School of Physics, University of Sydney, Sydney, New South Wales 2006, Australia and Center for Integrative Brain Function, University of Sydney, Sydney, New South Wales 2006, Australia
59
van Meegen A, Lindner B. Self-Consistent Correlations of Randomly Coupled Rotators in the Asynchronous State. Phys Rev Lett 2018; 121:258302. PMID: 30608814. DOI: 10.1103/physrevlett.121.258302.
Abstract
We study a network of unidirectionally coupled rotators with independent identically distributed (i.i.d.) frequencies and i.i.d. coupling coefficients. Similar to biological networks, this system can attain an asynchronous state with pronounced temporal autocorrelations of the rotators. We derive differential equations for the self-consistent autocorrelation function that can be solved analytically in limit cases. For more involved scenarios, its numerical solution is confirmed by simulations of networks with Gaussian or sparsely distributed coupling coefficients. The theory is finally generalized for pulse-coupled units and tested on a standard model of computational neuroscience, a recurrent network of sparsely coupled exponential integrate-and-fire neurons.
Affiliation(s)
- Alexander van Meegen
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
- Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstraße 13, Haus 2, 10115 Berlin, Germany and Physics Department of Humboldt University Berlin, Newtonstraße 15, 12489 Berlin, Germany
60
Brinkman BAW, Rieke F, Shea-Brown E, Buice MA. Predicting how and when hidden neurons skew measured synaptic interactions. PLoS Comput Biol 2018; 14:e1006490. PMID: 30346943. PMCID: PMC6219819. DOI: 10.1371/journal.pcbi.1006490.
Abstract
A major obstacle to understanding neural coding and computation is the fact that experimental recordings typically sample only a small fraction of the neurons in a circuit. Measured neural properties are skewed by interactions between recorded neurons and the "hidden" portion of the network. To properly interpret neural data and determine how biological structure gives rise to neural circuit function, we thus need a better understanding of the relationships between measured effective neural properties and the true underlying physiological properties. Here, we focus on how the effective spatiotemporal dynamics of the synaptic interactions between neurons are reshaped by coupling to unobserved neurons. We find that the effective interactions from a pre-synaptic neuron r′ to a post-synaptic neuron r can be decomposed into a sum of the true interaction from r′ to r plus corrections from every directed path from r′ to r through unobserved neurons. Importantly, the resulting formula reveals when the hidden units have, or do not have, major effects on reshaping the interactions among observed neurons. As a particular example of interest, we derive a formula for the impact of hidden units in random networks with "strong" coupling, connection weights that scale with 1/√N, where N is the network size, precisely the scaling observed in recent experiments. With this quantitative relationship between measured and true interactions, we can study how network properties shape effective interactions, which properties are relevant for neural computations, and how to manipulate effective interactions.

No experiment in neuroscience can record from more than a tiny fraction of the total number of neurons present in a circuit. This severely complicates measurement of a network's true properties, as unobserved neurons skew measurements away from what would be measured if all neurons were observed. For example, the measured post-synaptic response of a neuron to a spike from a particular pre-synaptic neuron incorporates direct connections between the two neurons as well as the effect of any number of indirect connections, including those through unobserved neurons. To understand how measured quantities are distorted by unobserved neurons, we calculate a general relationship between measured "effective" synaptic interactions and the ground-truth interactions in the network. This allows us to identify conditions under which hidden neurons substantially alter measured interactions. Moreover, it provides a foundation for future work on manipulating effective interactions between neurons to better understand, and potentially alter, circuit function or dysfunction.
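In a linear-network caricature, the decomposition described above has a closed form: the effective coupling among observed units equals the direct block plus a Schur-complement term that sums every directed path through hidden units. The sketch below is our linear analogue of the idea, not the paper's spiking-network derivation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_obs = 30, 10

# Random stable linear network; rates obey x = J x + u at steady state.
J = rng.normal(0.0, 0.6 / np.sqrt(N), (N, N))

obs = np.arange(n_obs)               # recorded ("observed") neurons
hid = np.arange(n_obs, N)            # hidden neurons
Joo, Joh = J[np.ix_(obs, obs)], J[np.ix_(obs, hid)]
Jho, Jhh = J[np.ix_(hid, obs)], J[np.ix_(hid, hid)]

# Effective interactions among observed units: the direct block plus the sum
# over all directed paths through hidden units, (I - Jhh)^(-1) = I + Jhh + ...
J_eff = Joo + Joh @ np.linalg.solve(np.eye(N - n_obs) - Jhh, Jho)

# Sanity check: the reduced model reproduces the observed steady state
# when input is delivered only to observed neurons.
u = np.zeros(N)
u[obs] = rng.normal(size=n_obs)
x_full = np.linalg.solve(np.eye(N) - J, u)
x_red = np.linalg.solve(np.eye(n_obs) - J_eff, u[obs])
print(np.allclose(x_full[obs], x_red))  # True
```

The geometric series in the comment makes the path interpretation explicit: the k-th power of Jhh contributes paths that traverse k hidden neurons before returning to the observed population.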
Affiliation(s)
- Braden A W Brinkman
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Fred Rieke
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, University of Washington, Seattle, Washington, United States of America
- Graduate Program in Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A Buice
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
61
Surampudi SG, Misra J, Deco G, Bapi RS, Sharma A, Roy D. Resting state dynamics meets anatomical structure: Temporal multiple kernel learning (tMKL) model. Neuroimage 2018; 184:609-620. PMID: 30267857. DOI: 10.1016/j.neuroimage.2018.09.054.
Abstract
Over the last decade there has been growing interest in understanding brain activity, in the absence of any task or stimulus, captured by resting-state functional magnetic resonance imaging (rsfMRI). The resting-state patterns have been observed to exhibit complex spatio-temporal dynamics, and substantial effort has been made to characterize the dynamic functional connectivity (dFC) configurations. However, the dynamics governing the state transitions that the brain undergoes, and their relationship to stationary functional connectivity, still remain an open problem. One class of approaches attempts to characterize the dynamics in terms of a finite number of latent brain states; however, such attempts are yet to amalgamate the underlying anatomical structural connectivity (SC) with the dynamics. Another class of methods links individual dynamic FCs with the underlying SC but does not characterize the temporal evolution of FC. Further, the latent states discovered by previous approaches could not be directly linked to the SC, motivating us to discover the underlying lower-dimensional manifold that represents the temporal structure. In the proposed approach, the discovered manifold is further parameterized as a set of local density distributions, or latent transient states. We propose an innovative method that learns parameters specific to the latent states using a graph-theoretic model (temporal Multiple Kernel Learning, tMKL) that inherently links dynamics to the structure and finally predicts the grand average FC of the test subjects by leveraging a state-transition Markov model. The proposed solution does not make strong assumptions about the underlying data and is generally applicable to resting or task data for learning subject-specific state transitions and for successfully characterizing the SC-dFC-FC relationship through a unifying framework. Training and testing were done using the rsfMRI data of 46 healthy participants. The tMKL model performs significantly better than existing models for predicting resting-state functional connectivity, namely the whole-brain dynamic mean-field (DMF) model, the single diffusion kernel (SDK) model and the multiple kernel learning (MKL) model. Further, the learned model was tested on an independent cohort of 100 young, healthy participants from the Human Connectome Project (HCP), and the results establish the generalizability of the proposed solution. More importantly, the model retains sensitivity toward subject-specific anatomy, a unique contribution towards a holistic approach for SC-FC characterization.
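As context for the baselines mentioned, diffusion-kernel-style models map structural connectivity to predicted functional connectivity through a heat kernel of the structural graph, FC ≈ exp(-βL). The sketch below is a hypothetical illustration of that mapping with a synthetic SC matrix; it is not the tMKL model itself, and the SDK baseline may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20

# Synthetic symmetric structural connectivity (placeholder for a real SC matrix).
SC = rng.random((n, n))
SC = (SC + SC.T) / 2
np.fill_diagonal(SC, 0)

L = np.diag(SC.sum(axis=1)) - SC   # graph Laplacian of structural connectivity
beta = 0.2                         # diffusion scale; fitted to data in practice

# expm(-beta * L) via the eigendecomposition of the symmetric Laplacian.
lam, V = np.linalg.eigh(L)
FC_pred = (V * np.exp(-beta * lam)) @ V.T

print(np.allclose(FC_pred, FC_pred.T), np.allclose(FC_pred.sum(axis=1), 1.0))  # True True
```

Because L annihilates the constant vector, every row of the heat kernel sums to 1; the kernel is symmetric and positive definite, which is what makes it usable as a predicted covariance-like FC matrix.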
Affiliation(s)
- Sriniwas Govinda Surampudi
- Center for Visual Information Technology, Kohli Center on Intelligent Systems, International Institute of Information Technology Hyderabad, Hyderabad, 500032, India
- Joyneel Misra
- Center for Visual Information Technology, Kohli Center on Intelligent Systems, International Institute of Information Technology Hyderabad, Hyderabad, 500032, India
- Gustavo Deco
- Center for Brain and Cognition, Dept. of Technology and Information, Universitat Pompeu Fabra, Carrer Tanger, 122-140, 08018, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats, Universitat Barcelona, Passeig Lluís Companys 23, 08010, Barcelona, Spain
- Raju Surampudi Bapi
- School of Computer and Information Sciences, University of Hyderabad, Hyderabad, 500046, India
- Avinash Sharma
- Center for Visual Information Technology, Kohli Center on Intelligent Systems, International Institute of Information Technology Hyderabad, Hyderabad, 500032, India
- Dipanjan Roy
- Cognitive Brain Dynamics Lab, National Brain Research Centre, Manesar, Gurgaon, Haryana, 122051, India
62
Kossio FYK, Goedeke S, van den Akker B, Ibarz B, Memmesheimer RM. Growing Critical: Self-Organized Criticality in a Developing Neural System. Phys Rev Lett 2018; 121:058301. PMID: 30118252. DOI: 10.1103/physrevlett.121.058301.
Abstract
Experiments in various neural systems found avalanches: bursts of activity with characteristics typical for critical dynamics. A possible explanation for their occurrence is an underlying network that self-organizes into a critical state. We propose a simple spiking model for developing neural networks, showing how these may "grow into" criticality. Avalanches generated by our model correspond to clusters of widely applied Hawkes processes. We analytically derive the cluster size and duration distributions and find that they agree with those of experimentally observed neuronal avalanches.
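Avalanches generated by Hawkes-type processes correspond to clusters of a branching process, so their statistics can be illustrated with a toy Galton-Watson simulation (the branching ratio and sample count are arbitrary choices for illustration, not the paper's growth model):

```python
import numpy as np

rng = np.random.default_rng(5)

def avalanche_size(m, cap=100_000):
    """Total number of events in one avalanche (cluster) of a branching
    process with Poisson offspring and branching ratio m; m -> 1 approaches
    criticality, where sizes become power-law distributed (~ s^(-3/2))."""
    size, active = 1, 1
    while active and size < cap:
        active = rng.poisson(m * active)  # each active event spawns Poisson(m) offspring
        size += active
    return size

# Subcritical sanity check: mean avalanche size is 1 / (1 - m), diverging as m -> 1.
m = 0.5
sizes = [avalanche_size(m) for _ in range(20_000)]
print(np.mean(sizes))   # ≈ 1 / (1 - 0.5) = 2
```

The divergence of the mean cluster size as the branching ratio approaches 1 is the branching-process signature of the critical state the network grows into.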
Affiliation(s)
- Sven Goedeke
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Borja Ibarz
- Nonlinear Dynamics and Chaos Group, Departamento de Fisica, Universidad Rey Juan Carlos, Madrid, Spain
- Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Department of Neuroinformatics, Radboud University Nijmegen, Nijmegen, Netherlands
63
Barreiro AK, Ly C. Investigating the Correlation-Firing Rate Relationship in Heterogeneous Recurrent Networks. J Math Neurosci 2018; 8:8. PMID: 29872932. PMCID: PMC5989010. DOI: 10.1186/s13408-018-0063-y.
Abstract
The structure of spiking activity in cortical networks has important implications for how the brain ultimately codes sensory signals. However, our understanding of how network and intrinsic cellular mechanisms affect spiking is still incomplete. In particular, whether cell pairs in a neural network show a positive (or no) relationship between pairwise spike count correlation and average firing rate is generally unknown. This relationship is important because it has been observed experimentally in some sensory systems, and it can enhance information in a common population code. Here we extend our prior work in developing mathematical tools to succinctly characterize the correlation and firing rate relationship in heterogeneous coupled networks. We find that very modest changes in how heterogeneous networks occupy parameter space can dramatically alter the correlation-firing rate relationship.
Affiliation(s)
- Cheng Ly
- Department of Statistical Science and Operations Research, Virginia Commonwealth University, Richmond, USA
64
What We Know About the Brain Structure-Function Relationship. Behav Sci (Basel) 2018; 8:bs8040039. PMID: 29670045. PMCID: PMC5946098. DOI: 10.3390/bs8040039.
Abstract
How the human brain works is still an open question, as is its relationship to brain architecture: the non-trivial structure–function relationship. The main hypothesis is that the anatomical architecture conditions, but does not determine, the dynamics of the neural network. Functional connectivity cannot be explained by the anatomical substrate alone. These are complex and controversial aspects of neuroscience, and the methods and methodologies used to obtain structural and functional connectivity are not always rigorously applied. The goal of the present article is to discuss the progress made in elucidating the structure–function relationship of the Central Nervous System, particularly at the brain level, based on results from human and animal studies. Current novel systems and neuroimaging techniques with high resolutive physio-structural capacity have brought about the development of an integral framework of different structural and morphometric tools, such as image processing, computational modeling and graph theory. Different laboratories have contributed in vivo, in vitro and computational/mathematical models to study intrinsic neural activity patterns based on anatomical connections. We conclude that multi-modal neuroimaging techniques are required, as are improved methodologies for obtaining structural and functional connectivity. Even though simulations of intrinsic neural activity based on anatomical connectivity can reproduce much of the observed patterns of empirical functional connectivity, future models should be multifactorial in order to elucidate multi-scale relationships and to infer disorder mechanisms.
Collapse
|
65
|
From correlation to causation: Estimating effective connectivity from zero-lag covariances of brain signals. PLoS Comput Biol 2018; 14:e1006056. [PMID: 29579045 PMCID: PMC5886625 DOI: 10.1371/journal.pcbi.1006056] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2017] [Revised: 04/05/2018] [Accepted: 02/26/2018] [Indexed: 11/28/2022] Open
Abstract
Knowing brain connectivity is of great importance both in basic research and for clinical applications. We propose a method to infer directed connectivity from zero-lag covariances of neuronal activity recorded at multiple sites. This allows us to identify causal relations that are reflected in neuronal population activity. To derive our strategy, we assume a generic linear model of interacting continuous variables, the components of which represent the activity of local neuronal populations. The suggested method for inferring connectivity from recorded signals exploits the fact that the covariance matrix derived from the observed activity contains information about the existence, the direction and the sign of connections. Assuming a sparsely coupled network, we disambiguate the underlying causal structure via L1-minimization, which is known to prefer sparse solutions. In general, this method is suited to infer effective connectivity from resting state data of various types. We show that our method is applicable over a broad range of structural parameters regarding network size and connection probability of the network. We also explore parameters affecting its activity dynamics, like the eigenvalue spectrum. Based on simulations of suitable Ornstein-Uhlenbeck processes modeling BOLD dynamics, we also show that our method can estimate directed connectivity from zero-lag covariances derived from such signals. In this study, we consider measurement noise and unobserved nodes as additional confounding factors. Furthermore, we investigate the amount of data required for a reliable estimate. Additionally, we apply the proposed method to full-brain resting-state fast fMRI datasets. The resulting network exhibits a tendency for close-by areas to be connected as well as inter-hemispheric connections between corresponding areas.
In addition, we found that a surprisingly large fraction, more than one third, of all identified connections were inhibitory. Changes in brain connectivity are considered an important biomarker for certain brain diseases. This directly raises the question of how accessible connectivity is from measured brain signals. Here we show how directed effective connectivity can be inferred from continuous brain signals, like fMRI. The main idea is to extract the connectivity from the inverse zero-lag covariance matrix of the measured signals. This is done using L1-minimization via a gradient-descent algorithm on the manifold of unitary matrices. This ensures that the resulting network always fits the same covariance structure as the measured data, assuming a canonical linear model. Applying the estimation method on noise-free covariance matrices shows that the method works well on sparsely coupled networks with more than 40 nodes, provided network interaction is strong enough. Applying the estimation on simulated Ornstein-Uhlenbeck processes intended to model BOLD signals demonstrates robustness against observation noise and unobserved nodes. In general, the proposed method can be applied to time-resolved covariance matrices in the frequency domain (cross-spectral densities), leading to frequency-resolved networks. We demonstrate that our method leads to reliable results if the sampled signals are long enough.
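The core of this identifiability argument can be sketched in a few lines. The sketch below (a hypothetical toy network, symmetric coupling only) shows the special case in which the precision matrix of an Ornstein-Uhlenbeck process already reveals the connections; the paper's L1-minimization on the manifold of unitary matrices, which handles the directed case, is not reproduced here.

```python
import numpy as np

n = 8
# Hypothetical sparse symmetric coupling between neural populations
W = np.zeros((n, n))
for i, j in [(0, 1), (2, 3), (4, 5), (6, 7)]:
    W[i, j] = W[j, i] = 0.4
A = W - np.eye(n)               # stable linear dynamics dx/dt = A x + noise

# The stationary covariance of this Ornstein-Uhlenbeck process solves the
# Lyapunov equation A C + C A^T + Q = 0; for symmetric A and Q = I this
# reduces to C = -0.5 * inv(A).
C = -0.5 * np.linalg.inv(A)

# The precision matrix is then proportional to I - W, so its off-diagonal
# support recovers the connections directly.
P = np.linalg.inv(C)
recovered = np.abs(P - np.diag(np.diag(P))) > 1e-6
print((recovered == (W != 0)).all())   # True: structure recovered
```

For directed (non-symmetric) coupling the Lyapunov equation no longer pins down the connectivity uniquely, which is exactly the ambiguity the paper's sparsity-based minimization resolves.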
Collapse
|
66
|
Reconstructing the functional connectivity of multiple spike trains using Hawkes models. J Neurosci Methods 2018; 297:9-21. [PMID: 29294310 DOI: 10.1016/j.jneumeth.2017.12.026] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2017] [Revised: 12/05/2017] [Accepted: 12/29/2017] [Indexed: 11/23/2022]
Abstract
BACKGROUND Statistical models that predict neuron spike occurrence from the earlier spiking activity of the whole recorded network are promising tools to reconstruct functional connectivity graphs. Some of the previously used methods fall within the general statistical framework of multivariate Hawkes processes. However, they usually require a huge amount of data, some prior knowledge about the recorded network, and/or may produce an increasing number of spikes over time during simulation. NEW METHOD Here, we present a method, based on least-squares estimators and LASSO penalty criteria, for a particular class of Hawkes processes that can be used for simulation. RESULTS Testing our method on small networks modeled with Leaky Integrate-and-Fire neurons demonstrated that it efficiently detects both excitatory and inhibitory connections. The few errors that occasionally occur with complex networks, including common inputs and weak or chained connections, can be discarded based on objective criteria. COMPARISON WITH EXISTING METHODS With respect to other existing methods, the present one allows reconstruction of the functional connectivity of small networks without prior knowledge of their properties or architecture, using an experimentally realistic amount of data. CONCLUSIONS The present method is robust, stable, and can be used on a personal computer as a routine procedure to infer connectivity graphs and generate simulation models from simultaneous spike train recordings.
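The flavour of such penalized estimation can be illustrated with a discrete-time caricature (not the authors' continuous-time Hawkes estimator): binned spike counts are regressed on the previous bin with an L1 penalty, solved here by plain ISTA. All weights and parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 4000, 3
# Hypothetical ground-truth influence matrix with one time-bin memory:
# neuron 0 excites neuron 1, neuron 2 inhibits neuron 1.
W = np.array([[0.0, 0.0, 0.0],
              [0.6, 0.0, -0.5],
              [0.0, 0.0, 0.0]])
base = 0.3
X = np.zeros((T, n))
for t in range(1, T):
    rate = np.clip(base + W @ X[t - 1], 0.0, None)
    X[t] = rng.poisson(rate)

# LASSO via ISTA: minimise ||y - Z w||^2 / (2(T-1)) + lam * ||w[:n]||_1,
# predicting neuron 1's counts from the previous bin of all neurons
# (the intercept, w[n], is left unpenalised).
Z = np.hstack([X[:-1], np.ones((T - 1, 1))])
y = X[1:, 1]
lam = 0.01
step = (T - 1) / np.linalg.norm(Z, 2) ** 2   # 1 / Lipschitz constant
w = np.zeros(n + 1)
for _ in range(500):
    w -= step * (Z.T @ (Z @ w - y)) / (T - 1)
    w[:n] = np.sign(w[:n]) * np.maximum(np.abs(w[:n]) - step * lam, 0.0)
print(np.round(w[:n], 2))   # signs match the ground-truth couplings
```

The recovered weights are shrunk towards zero by the penalty and by the rate clipping, but the excitatory and inhibitory connections keep their correct signs while the absent self-connection stays near zero.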
Collapse
|
67
|
Pernice V, da Silveira RA. Interpretation of correlated neural variability from models of feed-forward and recurrent circuits. PLoS Comput Biol 2018; 14:e1005979. [PMID: 29408930 PMCID: PMC5833435 DOI: 10.1371/journal.pcbi.1005979] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Revised: 03/01/2018] [Accepted: 01/10/2018] [Indexed: 11/18/2022] Open
Abstract
Neural populations respond to the repeated presentations of a sensory stimulus with correlated variability. These correlations have been studied in detail, with respect to their mechanistic origin, as well as their influence on stimulus discrimination and on the performance of population codes. A number of theoretical studies have endeavored to link network architecture to the nature of the correlations in neural activity. Here, we contribute to this effort: in models of circuits of stochastic neurons, we elucidate the implications of various network architectures—recurrent connections, shared feed-forward projections, and shared gain fluctuations—on the stimulus dependence in correlations. Specifically, we derive mathematical relations that specify the dependence of population-averaged covariances on firing rates, for different network architectures. In turn, these relations can be used to analyze data on population activity. We examine recordings from neural populations in mouse auditory cortex. We find that a recurrent network model with random effective connections captures the observed statistics. Furthermore, using our circuit model, we investigate the relation between network parameters, correlations, and how well different stimuli can be discriminated from one another based on the population activity. As such, our approach allows us to relate properties of the neural circuit to information processing. The response of neurons to a stimulus is variable across trials. A natural solution for reliable coding in the face of noise is the averaging across a neural population. The nature of this averaging depends on the structure of noise correlations in the neural population. In turn, the correlation structure depends on the way noise and correlations are generated in neural circuits. It is in general difficult to identify the origin of correlations from the observed population activity alone. 
In this article, we explore different theoretical scenarios of the way in which correlations can be generated, and we relate these to the architecture of feed-forward and recurrent neural circuits. Analyzing population recordings of the activity in mouse auditory cortex in response to sound stimuli, we find that population statistics are consistent with those generated in a recurrent network model. Using this model, we can then quantify the effects of network properties on average population responses, noise correlations, and the representation of sensory information.
Collapse
Affiliation(s)
- Volker Pernice
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
| | - Rava Azeredo da Silveira
- Department of Physics, Ecole Normale Supérieure, Paris, France
- Laboratoire de Physique Statistique, Ecole Normale Supérieure, PSL Research University; Université Paris Diderot Sorbonne Paris-Cité, Sorbonne Universités UPMC Univ Paris 06; CNRS, Paris, France
- Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, United States of America
| |
Collapse
|
68
|
Becker CO, Pequito S, Pappas GJ, Miller MB, Grafton ST, Bassett DS, Preciado VM. Spectral mapping of brain functional connectivity from diffusion imaging. Sci Rep 2018; 8:1411. [PMID: 29362436 PMCID: PMC5780460 DOI: 10.1038/s41598-017-18769-x] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Accepted: 12/15/2017] [Indexed: 01/22/2023] Open
Abstract
Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to map the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically account for the role of structural walks in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigenmodes of the mapped functional connectivity are associated with activity patterns associated with different cognitive systems.
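The role of structural walks can be illustrated with a toy calculation (a hand-picked ring graph and illustrative coefficients, not the authors' fitted spectral mapping): summing powers of the normalized structural matrix creates functional coupling between nodes that are not anatomically connected.

```python
import numpy as np

n = 20
# Hypothetical structural graph: a symmetric ring lattice
S = np.zeros((n, n))
for i in range(n):
    S[i, (i + 1) % n] = S[(i + 1) % n, i] = 1.0

# Normalise so that the walk series converges (ring spectral norm is 2)
A = S / 2.2

# Surrogate "functional" matrix as a weighted sum of walks of increasing
# length; the coefficients are illustrative, not fitted to data.
coeffs = [0.0, 1.0, 0.5, 0.25, 0.125]
F = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(coeffs))

# Walks of length >= 2 induce functional coupling between nodes that are
# not anatomically connected but share a neighbour:
print(S[0, 2], F[0, 2] > 0)   # structurally absent, yet functionally coupled
```

In the paper the weights of the different walk lengths are learned per subject; here they are fixed by hand purely to show the mechanism.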
Collapse
Affiliation(s)
- Cassiano O Becker
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA
| | - Sérgio Pequito
- Department of Industrial and Systems Engineering, Rensselaer Polytechnic Institute, Troy, USA
| | - George J Pappas
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA
| | - Michael B Miller
- Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, USA
| | - Scott T Grafton
- Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, USA
| | - Danielle S Bassett
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA.,Department of Bioengineering, University of Pennsylvania, Philadelphia, USA
| | - Victor M Preciado
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA.
| |
Collapse
|
69
|
Wright NC, Hoseini MS, Yasar TB, Wessel R. Coupling of synaptic inputs to local cortical activity differs among neurons and adapts after stimulus onset. J Neurophysiol 2017; 118:3345-3359. [PMID: 28931610 DOI: 10.1152/jn.00398.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Cortical activity contributes significantly to the high variability of sensory responses of interconnected pyramidal neurons, which has crucial implications for sensory coding. Yet, largely because of technical limitations of in vivo intracellular recordings, the coupling of a pyramidal neuron's synaptic inputs to the local cortical activity has evaded full understanding. Here we obtained excitatory synaptic conductance (g) measurements from putative pyramidal neurons and local field potential (LFP) recordings from adjacent cortical circuits during visual processing in the turtle whole-brain ex vivo preparation. We found a range of g-LFP coupling across neurons. Importantly, for a given neuron, g-LFP coupling increased at stimulus onset and then relaxed toward intermediate values during continued visual stimulation. A model network with clustered connectivity and synaptic depression reproduced both the diversity and the dynamics of g-LFP coupling. In conclusion, these results establish a rich dependence of single-neuron responses on anatomical, synaptic, and emergent network properties. NEW & NOTEWORTHY Cortical neurons are strongly influenced by the networks in which they are embedded. To understand sensory processing, we must identify the nature of this influence and its underlying mechanisms. Here we investigate synaptic inputs to cortical neurons, and the nearby local field potential, during visual processing. We find a range of neuron-to-network coupling across cortical neurons. This coupling is dynamically modulated during visual processing via biophysical and emergent network properties.
Collapse
Affiliation(s)
- Nathaniel C Wright
- Department of Physics, Washington University in St. Louis , St. Louis, Missouri
| | - Mahmood S Hoseini
- Department of Physics, Washington University in St. Louis , St. Louis, Missouri
| | - Tansel Baran Yasar
- Department of Physics, Washington University in St. Louis , St. Louis, Missouri
| | - Ralf Wessel
- Department of Physics, Washington University in St. Louis , St. Louis, Missouri
| |
Collapse
|
70
|
Abstract
We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
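As a minimal worked example of the linear Hawkes formalism underlying this analysis (illustrative numbers, not taken from the paper), the stationary first moment follows from the integrated interaction kernels:

```python
import numpy as np

# Stationary rates of a linear Hawkes network satisfy
#   lambda = mu + G lambda  =>  lambda = (I - G)^(-1) mu,
# where G_ij is the integral of the interaction kernel from unit j to i
# (illustrative values; the spectral radius of G must stay below 1).
G = np.array([[0.0, 0.3],
              [0.5, 0.0]])
mu = np.array([1.0, 0.5])
rates = np.linalg.solve(np.eye(2) - G, mu)
print(rates)   # both rates exceed the baseline inputs
```

The paper's contribution is the time-dependent generalization of this picture, where mu becomes an arbitrary input signal and the correlation functions inherit its statistics through the network.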
Collapse
Affiliation(s)
- Neta Ravid Tannenbaum
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
| | - Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel and Racah Institute of Physics, The Hebrew University of Jerusalem, Jerusalem 9190401, Israel
| |
Collapse
|
71
|
Fasoli D, Cattani A, Panzeri S. Transitions between asynchronous and synchronous states: a theory of correlations in small neural circuits. J Comput Neurosci 2017; 44:25-43. [PMID: 29124505 PMCID: PMC5770155 DOI: 10.1007/s10827-017-0667-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Revised: 09/04/2017] [Accepted: 10/10/2017] [Indexed: 12/11/2022]
Abstract
The study of correlations in neural circuits of different size, from the small size of cortical microcolumns to the large-scale organization of distributed networks studied with functional imaging, is a topic of central importance to systems neuroscience. However, a theory that explains how the parameters of mesoscopic networks composed of a few tens of neurons affect the underlying correlation structure is still missing. Here we consider a theory that can be applied to networks of arbitrary size with multiple populations of homogeneous fully-connected neurons, and we focus our analysis on the case of two small populations. We combine the analysis of local bifurcations of the dynamics of these networks with the analytical calculation of their cross-correlations. We study the correlation structure in different regimes, showing that a variation of the external stimuli causes the network to switch from asynchronous states, characterized by weak correlation and low variability, to synchronous states characterized by strong correlations and wide temporal fluctuations. We show that asynchronous states are generated by strong stimuli, while synchronous states occur through critical slowing down when the stimulus moves the network close to a local bifurcation. In particular, strongly positive correlations occur at the saddle-node and Andronov-Hopf bifurcations of the network, while strongly negative correlations occur when the network undergoes a spontaneous symmetry-breaking at the branching-point bifurcations. These results show how the correlation structure of firing-rate network models is strongly modulated by the external stimuli, even keeping the anatomical connections fixed. These results also suggest an effective mechanism through which biological networks may dynamically modulate the encoding and integration of sensory information.
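The critical-slowing-down mechanism invoked here can be illustrated in the simplest possible setting, a one-dimensional linearized fluctuation (a sketch of the generic effect, not the paper's two-population theory):

```python
# For a linearised rate fluctuation  dx/dt = -a*x + xi(t)  with unit-variance
# white noise, the stationary variance is 1/(2a) and the correlation time is
# 1/a.  Both diverge as the stability margin a shrinks to zero, i.e. as the
# stimulus moves the network towards a local bifurcation.
for a in (1.0, 0.1, 0.01):
    variance, corr_time = 1.0 / (2.0 * a), 1.0 / a
    print(f"a = {a:5.2f}   variance = {variance:6.1f}   corr. time = {corr_time:6.1f}")
```

In the full theory the scalar a is replaced by the eigenvalues of the linearized two-population dynamics, so the same divergence appears in the cross-correlations as a bifurcation is approached.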
Collapse
Affiliation(s)
- Diego Fasoli
- Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, 38068, Rovereto, Italy.
- Center for Brain and Cognition, Computational Neuroscience Group, Universitat Pompeu Fabra, 08002, Barcelona, Spain.
| | - Anna Cattani
- Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, 38068, Rovereto, Italy
- Department of Biomedical and Clinical Sciences "L. Sacco", University of Milan, Milan, Italy
| | - Stefano Panzeri
- Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, 38068, Rovereto, Italy
| |
Collapse
|
72
|
Rostami V, Porta Mana P, Grün S, Helias M. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models. PLoS Comput Biol 2017; 13:e1005762. [PMID: 28968396 PMCID: PMC5645158 DOI: 10.1371/journal.pcbi.1005762] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2016] [Revised: 10/17/2017] [Accepted: 09/05/2017] [Indexed: 11/30/2022] Open
Abstract
Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, which experimentally would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings with population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem, a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
Networks of interacting units are ubiquitous in various fields of biology; e.g. gene regulatory networks, neuronal networks, social structures. If a limited set of observables is accessible, maximum-entropy models provide a way to construct a statistical model for such networks, under particular assumptions. The pairwise maximum-entropy model only uses the first two moments among those observables, and can be interpreted as a network with only pairwise interactions. We show here that if correlations are on average positive, the maximum-entropy distribution tends to become bimodal. In the application to neuronal activity this is a problem, because the bimodality is an artefact of the statistical model and not observed in real data. This problem could also affect other fields in biology. Here we explain under which conditions bimodality arises and present a solution to the problem by introducing a collective negative feedback, corresponding to a modified maximum-entropy model. This result may point to the existence of a homeostatic mechanism active in the system that is not part of our set of observable units.
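The bimodality can be made explicit for a homogeneous population, where the distribution of the population count is available in closed form (parameters below are illustrative, not fitted to the macaque data):

```python
import numpy as np
from math import lgamma

def pop_count_dist(n, h, J):
    """Exact distribution of the population count K = sum_i s_i under the
    homogeneous pairwise maximum-entropy model
        P(s) proportional to exp(h*K + (J/n) * K*(K-1)/2).
    """
    K = np.arange(n + 1)
    log_binom = np.array([lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                          for k in K])
    logp = log_binom + h * K + (J / n) * K * (K - 1) / 2.0
    p = np.exp(logp - logp.max())
    return p / p.sum()

# With uniformly positive mean coupling the count distribution turns
# bimodal: a realistic low-activity mode plus a spurious high-activity one.
n = 150
p = pop_count_dist(n, h=-3.0, J=6.0)
modes = [k for k in range(1, n) if p[k] > p[k - 1] and p[k] > p[k + 1]]
print("local maxima at K =", modes)
```

Because the distribution is computed exactly, the second mode cannot be blamed on sampling; it is a property of the model itself, which is the paper's point.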
Collapse
Affiliation(s)
- Vahid Rostami
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - PierGianLuca Porta Mana
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
73
|
Mäki-Marttunen T. An Algorithm for Motif-Based Network Design. IEEE/ACM Trans Comput Biol Bioinform 2017; 14:1181-1186. [PMID: 27295682 DOI: 10.1109/tcbb.2016.2576442] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
A determinant property of the structure of a biological network is the distribution of local connectivity patterns, i.e., network motifs. In this work, a method for creating directed, unweighted networks while promoting a certain combination of motifs is presented. This motif-based network algorithm starts with an empty graph and randomly connects the nodes by advancing or discouraging the formation of chosen motifs. The in- or out-degree distribution of the generated networks can be explicitly chosen. The algorithm is shown to perform well in producing networks with high occurrences of the targeted motifs, both ones consisting of three nodes as well as ones consisting of four nodes. Moreover, the algorithm can also be tuned to bring about global network characteristics found in many natural networks, such as small-worldness and modularity.
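A stripped-down version of motif-promoting growth might look as follows (a toy sketch with a hypothetical acceptance rule, not the algorithm of the paper, which also controls the degree distribution):

```python
import numpy as np

def motif_biased_network(n, n_edges, boost=10.0, seed=0):
    """Toy motif-promoting growth: starting from an empty directed graph,
    propose random edges and accept one immediately when it closes a
    feed-forward triangle (some k with i->k and k->j already present when
    i->j is proposed); otherwise accept only with probability 1/boost."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=bool)
    while A.sum() < n_edges:
        i, j = rng.integers(0, n, size=2)
        if i == j or A[i, j]:
            continue
        closes_ffl = bool((A[i] & A[:, j]).any())   # exists k: i->k->j
        if closes_ffl or rng.random() < 1.0 / boost:
            A[i, j] = True
    return A

A = motif_biased_network(n=30, n_edges=120)
ffl_edges = sum(int((A[i] & A[:, j]).any())
                for i in range(30) for j in range(30) if A[i, j])
print(A.sum(), ffl_edges)   # 120 edges; many of them close feed-forward triangles
```

Raising `boost` strengthens the bias toward the target motif; discouraging a motif would correspond to the opposite rule (rejecting edges that complete it).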
Collapse
|
74
|
Ocker GK, Hu Y, Buice MA, Doiron B, Josić K, Rosenbaum R, Shea-Brown E. From the statistics of connectivity to the statistics of spike times in neuronal networks. Curr Opin Neurobiol 2017; 46:109-119. [PMID: 28863386 DOI: 10.1016/j.conb.2017.07.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2017] [Revised: 07/21/2017] [Accepted: 07/27/2017] [Indexed: 10/19/2022]
Abstract
An essential step toward understanding neural circuits is linking their structure and their dynamics. In general, this relationship can be almost arbitrarily complex. Recent theoretical work has, however, begun to identify some broad principles underlying collective spiking activity in neural circuits. The first is that local features of network connectivity can be surprisingly effective in predicting global statistics of activity across a network. The second is that, for the important case of large networks with excitatory-inhibitory balance, correlated spiking persists or vanishes depending on the spatial scales of recurrent and feedforward connectivity. We close by showing how these ideas, together with plasticity rules, can help to close the loop between network structure and activity statistics.
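The first principle, that local connectivity features predict global activity statistics, is often derived from the linear-response covariance formula, whose expansion in powers of the connectivity matrix sums contributions of motifs of increasing size (a standard sketch with illustrative parameters, not code from the review):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
W = 0.1 * (rng.random((n, n)) < 0.3)       # weak sparse random connectivity
np.fill_diagonal(W, 0.0)
D = np.eye(n)                               # uncorrelated intrinsic noise

# Linear-response covariance of a linearised network:
#     C = (I - W)^(-1) D (I - W)^(-T)
B = np.linalg.inv(np.eye(n) - W)
C = B @ D @ B.T

# Expanding each inverse as a geometric series decomposes C into motif
# contributions W^a D (W^T)^b: pairs of paths of lengths a and b that end
# on the two neurons whose covariance is being computed.
C_series = sum(np.linalg.matrix_power(W, a) @ D @ np.linalg.matrix_power(W.T, b)
               for a in range(40) for b in range(40))
print(np.allclose(C, C_series, atol=1e-8))   # True: the expansion converges
```

Averaging the low-order terms of this series over the network is what links motif statistics (chains, diverging and converging pairs) to the mean correlation, which is the calculation the review surveys.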
Collapse
Affiliation(s)
| | - Yu Hu
- Center for Brain Science, Harvard University, United States
| | - Michael A Buice
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
| | - Krešimir Josić
- Department of Mathematics, University of Houston, United States; Department of Biology and Biochemistry, University of Houston, United States; Department of BioSciences, Rice University, United States
| | - Robert Rosenbaum
- Department of Mathematics, University of Notre Dame, United States
| | - Eric Shea-Brown
- Allen Institute for Brain Science, United States; Department of Applied Mathematics, University of Washington, United States; Department of Physiology and Biophysics, and University of Washington Institute for Neuroengineering, United States.
| |
Collapse
|
75
|
Abstract
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. Neuronal networks, like many biological systems, exhibit variable activity. This activity is shaped by both the underlying biology of the component neurons and the structure of their interactions. How can we combine knowledge of these two things—that is, models of individual neurons and of their interactions—to predict the statistics of single- and multi-neuron activity? Current approaches rely on linearizing neural activity around a stationary state. In the face of neural nonlinearities, however, these linear methods can fail to predict spiking statistics and even fail to correctly predict whether activity is stable or pathological. 
Here, we show how to calculate any spike train cumulant in a broad class of models, while systematically accounting for nonlinear effects. We then study a fundamental effect of nonlinear input-rate transfer, namely coupling between different orders of spiking statistics, and how this depends on single-neuron and network properties.
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Krešimir Josić
- Department of Mathematics and Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
- Department of BioSciences, Rice University, Houston, Texas, United States of America
| | - Eric Shea-Brown
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Department of Physiology and Biophysics, and UW Institute of Neuroengineering, University of Washington, Seattle, Washington, United States of America
| | - Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
76
|
Kühn T, Helias M. Locking of correlated neural activity to ongoing oscillations. PLoS Comput Biol 2017; 13:e1005534. [PMID: 28604771 PMCID: PMC5484611 DOI: 10.1371/journal.pcbi.1005534] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Revised: 06/26/2017] [Accepted: 04/26/2017] [Indexed: 02/01/2023] Open
Abstract
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to be a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. Here we demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically-driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to a quantitative analysis.
Affiliation(s)
- Tobias Kühn
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
77
Kanashiro T, Ocker GK, Cohen MR, Doiron B. Attentional modulation of neuronal variability in circuit models of cortex. eLife 2017; 6. [PMID: 28590902 PMCID: PMC5476447 DOI: 10.7554/elife.23978] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2016] [Accepted: 05/20/2017] [Indexed: 01/12/2023] Open
Abstract
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity and decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks in which top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus-specific bottom-up inputs. Accounting for trial variability in models of state-dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001

The world around us is complex and our brains need to navigate this complexity. We must focus on relevant inputs from our senses, such as the bus we need to catch, while ignoring distractions, such as the eye-catching displays in the shop windows we pass on the same street. Selective attention is a tool that enables us to filter complex sensory scenes and focus on whatever is most important at the time. But how does selective attention work? Our sense of vision results from the activity of cells in a region of the brain called visual cortex. Paying attention to an object affects the activity of visual cortex in two ways. First, it causes the average activity of the brain cells in the visual cortex that respond to that object to increase. Second, it reduces spontaneous moment-to-moment fluctuations in the activity of those brain cells, known as noise. Both of these effects make it easier for the brain to process the object in question. Kanashiro et al. set out to build a mathematical model of visual cortex that captures these two components of selective attention. The cortex contains two types of brain cells: excitatory neurons, which activate other cells, and inhibitory neurons, which suppress other cells. Experiments suggest that excitatory neurons contribute to the flow of activity within the cortex, whereas inhibitory neurons help cancel out noise. The new mathematical model predicts that paying attention affects inhibitory neurons far more than excitatory ones. According to the model, selective attention works mainly by reducing the noise that would otherwise distort the activity of visual cortex. The next step is to test this prediction directly. This will require measuring the activity of the inhibitory neurons in an animal performing a selective attention task. Such experiments, which should be achievable using existing technologies, will allow scientists to confirm or disprove the current model, and to dissect the mechanisms that underlie visual attention. DOI: http://dx.doi.org/10.7554/eLife.23978.002
Affiliation(s)
- Tatjana Kanashiro
- Program for Neural Computation, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, United States; Department of Mathematics, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
- Gabriel Koch Ocker
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States; Allen Institute for Brain Science, Seattle, United States
- Marlene R Cohen
- Center for the Neural Basis of Cognition, Pittsburgh, United States; Department of Neuroscience, University of Pittsburgh, Pittsburgh, United States
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, United States
78
Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M, Diesmann M. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator. Front Neuroinform 2017; 11:34. [PMID: 28596730 PMCID: PMC5442232 DOI: 10.3389/fninf.2017.00034] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2016] [Accepted: 05/01/2017] [Indexed: 01/21/2023] Open
Abstract
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
Affiliation(s)
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Andreas Frommer
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
79
Sprekeler H. Functional consequences of inhibitory plasticity: homeostasis, the excitation-inhibition balance and beyond. Curr Opin Neurobiol 2017; 43:198-203. [PMID: 28500933 DOI: 10.1016/j.conb.2017.03.014] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2016] [Revised: 03/12/2017] [Accepted: 03/22/2017] [Indexed: 11/18/2022]
Abstract
Computational neuroscience has a long-standing tradition of investigating the consequences of excitatory synaptic plasticity. In contrast, the functions of inhibitory plasticity are still largely nebulous, particularly given the bewildering diversity of interneurons in the brain. Here, we review recent computational advances that provide first suggestions for the functional roles of inhibitory plasticity, such as a maintenance of the excitation-inhibition balance, a stabilization of recurrent network dynamics and a decorrelation of sensory responses. The field is still in its infancy, but given the existing body of theory for excitatory plasticity, it is likely to mature quickly and deliver important insights into the self-organization of inhibitory circuits in the brain.
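The homeostatic role of inhibitory plasticity reviewed above can be illustrated with a minimal rate-based sketch in the spirit of the rule of Vogels et al. (2011): the inhibitory weight grows whenever the postsynaptic rate exceeds a target rate, restoring the excitation-inhibition balance. All parameter values here are illustrative assumptions.

```python
# Minimal rate-based sketch of homeostatic inhibitory plasticity
# (illustrative parameters, not taken from the review).
g_exc, r_exc = 2.0, 10.0      # fixed excitatory gain and presynaptic rate
r_inh = 10.0                  # presynaptic inhibitory rate
rho0, eta = 5.0, 0.01         # target postsynaptic rate and learning rate
w_inh = 0.0                   # inhibitory weight, starts unbalanced

for _ in range(2000):
    # rectified-linear postsynaptic neuron
    r_post = max(g_exc * r_exc - w_inh * r_inh, 0.0)
    # homeostatic rule: potentiate inhibition when the rate is above target
    w_inh += eta * r_inh * (r_post - rho0)

r_post = max(g_exc * r_exc - w_inh * r_inh, 0.0)   # converges to rho0
```

The fixed point of the update is exactly the target rate rho0, which is the sense in which such a rule maintains the excitation-inhibition balance.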
Affiliation(s)
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Berlin Institute of Technology, and Bernstein Center for Computational Neuroscience, Marchstr. 23, 10587 Berlin, Germany.
80
When do correlations increase with firing rates in recurrent networks? PLoS Comput Biol 2017; 13:e1005506. [PMID: 28448499 PMCID: PMC5426798 DOI: 10.1371/journal.pcbi.1005506] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Revised: 05/11/2017] [Accepted: 04/07/2017] [Indexed: 02/04/2023] Open
Abstract
A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. Because neural spiking is noisy, spiking patterns are often quantified via pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Theoretical studies have determined that stimulus-dependent correlations that increase with firing rate can have beneficial effects on information coding; however, we still have an incomplete understanding of what circuit mechanisms do, or do not, produce this correlation-firing rate relationship. Here, we studied the relationship between pairwise correlations and firing rates in recurrently coupled excitatory-inhibitory spiking networks with conductance-based synapses. We found that with stronger excitatory coupling, a positive relationship emerged between pairwise correlations and firing rates. To explain these findings, we used linear response theory to predict the full correlation matrix and to decompose correlations in terms of graph motifs. We then used this decomposition to explain why covariation of correlations with firing rate, a relationship previously explained in feedforward networks driven by correlated input, emerges in some recurrent networks but not in others. Furthermore, when correlations covary with firing rate, this relationship is reflected in low-rank structure in the correlation matrix.

A central question in neuroscience is to understand how noisy firing patterns are used to transmit information. We quantify spiking patterns by using pairwise correlations, or the probability that two cells will spike coincidentally, above and beyond their baseline firing rate. One observation frequently made in experiments is that correlations can increase systematically with firing rate. Recent studies of a type of output cell in mouse retina found this relationship; furthermore, they determined that the increase of correlation with firing rate helped the cells encode information, provided the correlations were stimulus-dependent. Several theoretical studies have explored this basic structure, and found that it is generally beneficial to modulate correlations in this way. However, aside from the mouse retinal cells referenced here, we do not yet have many examples of real neural circuits that show this correlation-firing rate pattern, so we do not know what common features (or mechanisms) might occur between them. In this study, we address this question via a computational model. We set up a computational model with features representative of a generic cortical network, to see whether correlations would increase with firing rate. To produce different firing patterns, we varied excitatory coupling. We found that with stronger excitatory coupling, there was a positive relationship between pairwise correlations and firing rates. We used a network linear response theory to show why correlations could increase with firing rates in some networks, but not in others; this could be explained by how cells responded to fluctuations in inhibitory conductances.
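The feedforward version of the correlation-firing rate relationship mentioned above can be reproduced in a few lines. The sketch below is not the paper's recurrent conductance-based network; it is the classic common-input picture, in which two cells threshold a partially shared Gaussian input, and, for a fixed input correlation, the output spike correlation grows as the threshold is lowered, i.e. as the firing rate rises. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def output_corr(threshold, c=0.3, n=200_000):
    """Rate and output correlation of two cells thresholding correlated
    Gaussian inputs with input correlation c (dichotomized Gaussian)."""
    shared = rng.standard_normal(n)
    x1 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.standard_normal(n)
    x2 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.standard_normal(n)
    s1 = (x1 > threshold).astype(float)   # binary "spike" per time bin
    s2 = (x2 > threshold).astype(float)
    return s1.mean(), np.corrcoef(s1, s2)[0, 1]

rate_lo, corr_lo = output_corr(threshold=2.0)   # sparse firing
rate_hi, corr_hi = output_corr(threshold=0.5)   # dense firing
```

With the same input correlation, the dense-firing pair comes out more correlated than the sparse-firing pair, which is the feedforward baseline the paper contrasts with its recurrent results.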
81
Mehta-Pandejee G, Robinson PA, Henderson JA, Aquino KM, Sarkar S. Inference of direct and multistep effective connectivities from functional connectivity of the brain and of relationships to cortical geometry. J Neurosci Methods 2017; 283:42-54. [PMID: 28342831 DOI: 10.1016/j.jneumeth.2017.03.014] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Revised: 03/15/2017] [Accepted: 03/18/2017] [Indexed: 01/26/2023]
Abstract
BACKGROUND: The problem of inferring effective brain connectivity from functional connectivity is under active investigation, and connectivity via multistep paths is poorly understood.
NEW METHOD: A method is presented to calculate the direct effective connection matrix (deCM), which embodies direct connection strengths between brain regions, from functional CMs (fCMs) by minimizing the difference between an experimental fCM and one calculated via neural field theory (NFT) from an ansatz deCM based on an experimental anatomical CM.
RESULTS: The best match between fCMs occurs close to a critical point, consistent with independent published stability estimates. Residual mismatch between fCMs is identified to be largely due to interhemispheric connections that are poorly estimated in an initial ansatz deCM due to experimental limitations; improved ansatzes substantially reduce the mismatch and enable interhemispheric connections to be estimated. Various levels of significant multistep connections are then imaged via the NFT result that these correspond to powers of the deCM; these are shown to be predictable from geometric distances between regions.
COMPARISON WITH EXISTING METHODS: This method gives insight into direct and multistep effective connectivity from fCMs and relates them to physiology and brain geometry. This contrasts with other methods, which progressively adjust connections without an overarching physiologically based framework to deal with multistep or poorly estimated connections.
CONCLUSIONS: deCMs can be usefully estimated using this method, and the results enable multistep connections to be investigated systematically.
Affiliation(s)
- Grishma Mehta-Pandejee
- School of Physics, The University of Sydney, Sydney, New South Wales 2006, Australia; Center of Excellence for Integrative Brain Function, The University of Sydney, New South Wales 2006, Australia.
- P A Robinson
- School of Physics, The University of Sydney, Sydney, New South Wales 2006, Australia; Center of Excellence for Integrative Brain Function, The University of Sydney, New South Wales 2006, Australia
- James A Henderson
- School of Physics, The University of Sydney, Sydney, New South Wales 2006, Australia; Center of Excellence for Integrative Brain Function, The University of Sydney, New South Wales 2006, Australia; School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, Queensland 4072, Australia
- K M Aquino
- School of Physics, The University of Sydney, Sydney, New South Wales 2006, Australia; Center of Excellence for Integrative Brain Function, The University of Sydney, New South Wales 2006, Australia; Sir Peter Mansfield Imaging Center, University of Nottingham, Nottingham NG7 2RD, United Kingdom
- Somwrita Sarkar
- Center of Excellence for Integrative Brain Function, The University of Sydney, New South Wales 2006, Australia; Design Lab, University of Sydney, Sydney, New South Wales 2006, Australia
82
Schuecker J, Schmidt M, van Albada SJ, Diesmann M, Helias M. Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome. PLoS Comput Biol 2017; 13:e1005179. [PMID: 28146554 PMCID: PMC5287462 DOI: 10.1371/journal.pcbi.1005179] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 10/03/2016] [Indexed: 01/11/2023] Open
Abstract
The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
Affiliation(s)
- Jannis Schuecker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Sacha J. van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
83
Liang H, Wang H. Structure-Function Network Mapping and Its Assessment via Persistent Homology. PLoS Comput Biol 2017; 13:e1005325. [PMID: 28046127 PMCID: PMC5242543 DOI: 10.1371/journal.pcbi.1005325] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2016] [Revised: 01/18/2017] [Accepted: 12/20/2016] [Indexed: 11/18/2022] Open
Abstract
Understanding the relationship between brain structure and function is a fundamental problem in network neuroscience. This work deals with the general method of structure-function mapping at the whole-brain level. We formulate the problem as a topological mapping of structure-function connectivity via matrix function, and find a stable solution by exploiting a regularization procedure to cope with large matrices. We introduce a novel measure of network similarity based on persistent homology for assessing the quality of the network mapping, which enables a detailed comparison of network topological changes across all possible thresholds, rather than just at a single, arbitrary threshold that may not be optimal. We demonstrate that our approach can uncover the direct and indirect structural paths for predicting functional connectivity, and our network similarity measure outperforms other currently available methods. We systematically validate our approach with (1) a comparison of regularized vs. non-regularized procedures, (2) a null model of the degree-preserving random rewired structural matrix, (3) different network types (binary vs. weighted matrices), and (4) different brain parcellation schemes (low vs. high resolutions). Finally, we evaluate the scalability of our method with relatively large matrices (2514x2514) of structural and functional connectivity obtained from 12 healthy human subjects measured non-invasively while at rest. Our results reveal a nonlinear structure-function relationship, suggesting that the resting-state functional connectivity depends on direct structural connections, as well as relatively parsimonious indirect connections via polysynaptic pathways.
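The "matrix function" mapping described above can be sketched in miniature: predicted functional connectivity is a weighted sum over walks of all lengths through the structural matrix, here a matrix exponential computed from its truncated power series. The network, the scaling parameter, and the series truncation are all illustrative assumptions, not the paper's regularized procedure or its persistent-homology assessment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative symmetric binary "structural" matrix S (hypothetical network).
n = 30
S = (rng.random((n, n)) < 0.2).astype(float)
S = np.triu(S, 1)
S = S + S.T
S = S / np.linalg.norm(S, 2)        # scale spectral norm to 1 for convergence

def matrix_exp(A, terms=30):
    """exp(A) via its truncated power series sum_k A^k / k!."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k         # A^k / k!
        out = out + term
    return out

F_pred = matrix_exp(0.5 * S)        # direct plus indirect (polysynaptic) walks
direct_only = np.eye(n) + 0.5 * S   # walks of length <= 1 only
```

The gap between `F_pred` and `direct_only` is exactly the contribution of indirect paths, the part of the structure-function relationship the paper argues is needed to explain resting-state functional connectivity.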
Affiliation(s)
- Hualou Liang
- School of Biomedical Engineering, Science & Health Systems, Drexel University, Philadelphia, PA, United States of America
- Hongbin Wang
- Center for Biomedical Informatics, Texas A&M University Health Science Center, Houston, TX, United States of America
84
Deniz T, Rotter S. Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons. Phys Rev E 2017; 95:012412. [PMID: 28208505 DOI: 10.1103/physreve.95.012412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2016] [Indexed: 06/06/2023]
Abstract
Pairs of neurons in brain networks often share much of the input they receive from other neurons. Due to essential nonlinearities of the neuronal dynamics, the consequences for the correlation of the output spike trains are generally not well understood. Here we analyze the case of two leaky integrate-and-fire neurons using an approach which is nonperturbative with respect to the degree of input correlation. Our treatment covers both weakly and strongly correlated dynamics, generalizing previous results based on linear response theory.
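The setup analyzed above can be probed by brute force. The sketch below is a Monte-Carlo stand-in, not the paper's Fokker-Planck solution: two leaky integrate-and-fire neurons share a fraction c of their Gaussian input, and the resulting spike-count correlation is estimated from binned counts. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters: suprathreshold drift, moderate noise, 50% shared input.
dt, steps = 1e-4, 200_000            # 20 s at 0.1 ms resolution
tau, v_th, v_reset = 0.02, 1.0, 0.0
mu, sigma, c = 1.1, 0.5, 0.5         # drift, noise amplitude, input correlation

v = np.zeros(2)
counts = np.zeros((2, steps // 1000))            # spike counts per 100 ms bin
for t in range(steps):
    shared = rng.standard_normal()
    private = rng.standard_normal(2)
    xi = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private
    # Euler-Maruyama step of the leaky integrate-and-fire dynamics
    v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * xi
    fired = v >= v_th
    counts[fired, t // 1000] += 1
    v[fired] = v_reset                           # reset after a spike

rho_out = np.corrcoef(counts)[0, 1]              # output spike-count correlation
```

The nonlinear spike-and-reset dynamics make the output correlation a nontrivial function of the input correlation c, which is the quantity the paper computes nonperturbatively.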
Affiliation(s)
- Taşkın Deniz
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Hansastraße 9a, 79104 Freiburg, Germany
85
The Impact of Structural Heterogeneity on Excitation-Inhibition Balance in Cortical Networks. Neuron 2016; 92:1106-1121. [PMID: 27866797 PMCID: PMC5158120 DOI: 10.1016/j.neuron.2016.10.027] [Citation(s) in RCA: 68] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Revised: 08/26/2016] [Accepted: 09/29/2016] [Indexed: 11/21/2022]
Abstract
Models of cortical dynamics often assume a homogeneous connectivity structure. However, we show that heterogeneous input connectivity can prevent the dynamic balance between excitation and inhibition, a hallmark of cortical dynamics, and yield unrealistically sparse and temporally regular firing. Anatomically based estimates of the connectivity of layer 4 (L4) rat barrel cortex and numerical simulations of this circuit indicate that the local network possesses substantial heterogeneity in input connectivity, sufficient to disrupt excitation-inhibition balance. We show that homeostatic plasticity in inhibitory synapses can align the functional connectivity to compensate for structural heterogeneity. Alternatively, spike-frequency adaptation can give rise to a novel state in which local firing rates adjust dynamically so that adaptation currents and synaptic inputs are balanced. This theory is supported by simulations of L4 barrel cortex during spontaneous and stimulus-evoked conditions. Our study shows how synaptic and cellular mechanisms yield fluctuation-driven dynamics despite structural heterogeneity in cortical circuits.
- Structural heterogeneity threatens the dynamic balance of excitation and inhibition
- Reconstruction of cortical networks reveals significant structural heterogeneity
- Spike-frequency adaptation can act locally to facilitate global balance
- Inhibitory homeostatic plasticity can compensate for structural imbalance
86
Rosero EJ, Barbosa WAS, Martinez Avila JF, Khoury AZ, Rios Leite JR. Correlations in electrically coupled chaotic lasers. Phys Rev E 2016; 94:032210. [PMID: 27739756 DOI: 10.1103/physreve.94.032210] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Indexed: 11/07/2022]
Abstract
We show how two electrically coupled semiconductor lasers subject to optical feedback can simultaneously present antiphase-correlated fast power fluctuations and strong in-phase synchronized spikes of chaotic power drops. This quite counterintuitive phenomenon is demonstrated experimentally and confirmed by numerical solutions of a deterministic dynamical system of rate equations. The occurrence of negative and positive cross-correlation between parts of a complex system according to time scale, as demonstrated in our simple arrangement, is relevant for the understanding and characterization of collective properties in complex networks.
Affiliation(s)
- E J Rosero
- Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Cidade Universitária, Recife, PE, Brazil
- W A S Barbosa
- Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Cidade Universitária, Recife, PE, Brazil
- J F Martinez Avila
- Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Cidade Universitária, Recife, PE, Brazil; Departamento de Física, Universidade Federal de Sergipe, Av. Marechal Rondon, S/N Jardim Rosa Elze, 49100-000 São Cristóvão, SE, Brazil
- A Z Khoury
- Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Cidade Universitária, Recife, PE, Brazil; Instituto de Física, Universidade Federal Fluminense, Av. Gal. Milton Tavares de Souza S/N, 24210-346 Niteroi, RJ, Brazil
- J R Rios Leite
- Departamento de Física, Universidade Federal de Pernambuco, 50670-901 Cidade Universitária, Recife, PE, Brazil
87
Jaisson T, Rosenbaum M. Rough fractional diffusions as scaling limits of nearly unstable heavy tailed Hawkes processes. ANN APPL PROBAB 2016. [DOI: 10.1214/15-aap1164] [Citation(s) in RCA: 60] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
88
Onaga T, Shinomoto S. Emergence of event cascades in inhomogeneous networks. Sci Rep 2016; 6:33321. [PMID: 27625183 PMCID: PMC5022041 DOI: 10.1038/srep33321] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Accepted: 08/24/2016] [Indexed: 11/09/2022] Open
Abstract
There is a commonality among contagious diseases, tweets, and neuronal firings: past events facilitate the future occurrence of events. The spread of events has been studied extensively, and such systems exhibit catastrophic chain reactions if the interaction, represented by the reproduction ratio, exceeds unity; however, their subthreshold states are not fully understood. Here, we report that these systems exhibit nonstationary cascades of event occurrences already in the subthreshold regime. Event cascades can be harmful in some contexts, as when peak demand causes vaccine shortages or heavy traffic on communication lines, but may be beneficial in others, as when spontaneous activity in neural networks is used to generate motion or store memory. It is therefore important to comprehend the mechanism by which such cascades appear, and to consider controlling a system to tame or facilitate fluctuations in event occurrences. The critical interaction for the emergence of cascades depends greatly on the network structure in which individuals are connected. We demonstrate that we can predict whether cascades may emerge, given information about the interactions between individuals. Furthermore, we develop a method of reallocating connections among individuals so that event cascades may be either impeded or impelled in a network.
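The subthreshold regime described above is conveniently simulated with the cluster (branching) representation of a self-exciting Hawkes process: immigrant events arrive as a Poisson process, and each event spawns a Poisson number of offspring with exponentially distributed delays. A branching ratio below unity keeps cascades finite; all numbers here are illustrative, and no network structure is included.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative parameters: immigrant rate mu, branching ratio n_branch < 1
# (subthreshold), exponential delay scale tau, observation window T.
mu, n_branch, tau, T = 1.0, 0.8, 1.0, 2000.0

events = list(rng.uniform(0, T, rng.poisson(mu * T)))   # immigrant events
queue = list(events)
while queue:
    parent = queue.pop()
    for _ in range(rng.poisson(n_branch)):              # offspring of this event
        child = parent + rng.exponential(tau)
        if child < T:
            events.append(child)
            queue.append(child)

rate_emp = len(events) / T
rate_theory = mu / (1 - n_branch)    # stationary rate of the subcritical process
```

Even though the process is subcritical, the empirical rate is amplified from mu to roughly mu/(1 - n_branch), and the events arrive in bursty cascades rather than as a homogeneous Poisson stream, which is the subthreshold nonstationarity the paper analyzes.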
Affiliation(s)
- Tomokatsu Onaga
- Department of Physics, Kyoto University, Kyoto 606-8502, Japan
89
Lusch B, Maia PD, Kutz JN. Inferring connectivity in networked dynamical systems: Challenges using Granger causality. Phys Rev E 2016; 94:032220. [PMID: 27739857 DOI: 10.1103/physreve.94.032220] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2016] [Indexed: 06/06/2023]
Abstract
Determining the interactions and causal relationships between nodes in an unknown networked dynamical system from measurement data alone is a challenging, contemporary task across the physical, biological, and engineering sciences. Statistical methods, such as the increasingly popular Granger causality, are being broadly applied for data-driven discovery of connectivity in fields from economics to neuroscience. A common version of the algorithm is called pairwise-conditional Granger causality, which we systematically test on data generated from a nonlinear model with known causal network structure. Specifically, we simulate networked systems of Kuramoto oscillators and use the Multivariate Granger Causality Toolbox to discover the underlying coupling structure of the system. We compare the inferred results to the original connectivity for a wide range of parameters such as initial conditions, connection strengths, community structures, and natural frequencies. Our results show a significant systematic disparity between the original and inferred network, unless the true structure is extremely sparse or dense. Specifically, the inferred networks have significant discrepancies in the number of edges and the eigenvalues of the connectivity matrix, demonstrating that they typically generate dynamics which are inconsistent with the ground truth. We provide a detailed account of the dynamics for the Erdős-Rényi network model due to its importance in random graph theory and network science. We conclude that Granger causal methods for inferring network structure are highly suspect and should always be checked against a ground truth model. The results also advocate the need to perform such comparisons with any network inference method since the inferred connectivity results appear to have very little to do with the ground truth system.
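The pipeline described above can be sketched with a minimal stand-in for the MATLAB MVGC toolbox: simulate noisy Kuramoto oscillators with a known directed coupling graph, then compute a pairwise Granger statistic as the log variance ratio of a restricted (own past only) versus a full one-lag autoregression. The network, noise level, and one-lag regression are illustrative assumptions, far cruder than the toolbox's model-order selection.

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy Kuramoto oscillators on a known directed chain 0 -> 1 -> ... -> 4.
N, steps, dt, K = 5, 4000, 0.01, 1.0
A = np.zeros((N, N))
A[np.arange(1, N), np.arange(N - 1)] = 1.0
omega = rng.uniform(0.8, 1.2, N)        # heterogeneous natural frequencies

theta = rng.uniform(0, 2 * np.pi, N)
X = np.empty((steps, N))
for t in range(steps):
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling) \
        + 0.05 * np.sqrt(dt) * rng.standard_normal(N)
    X[t] = np.sin(theta)                # observed signal per oscillator

def granger(x_from, x_to):
    """One-lag pairwise Granger statistic: log var(restricted)/var(full)."""
    past_to, past_from, now = x_to[:-1], x_from[:-1], x_to[1:]
    r1 = now - np.polyval(np.polyfit(past_to, now, 1), past_to)
    design = np.column_stack([past_to, past_from, np.ones(len(now))])
    beta, *_ = np.linalg.lstsq(design, now, rcond=None)
    r2 = now - design @ beta
    return np.log(r1.var() / r2.var())  # nonnegative by model nesting

G = np.array([[granger(X[:, j], X[:, i]) if i != j else 0.0
               for j in range(N)] for i in range(N)])
```

Comparing `G` against the ground-truth adjacency `A` for different couplings and noise levels reproduces, in miniature, the kind of check against a known model that the paper argues should always accompany Granger-based network inference.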
Collapse
Affiliation(s)
- Bethany Lusch
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195-3925, USA
| | - Pedro D Maia
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195-3925, USA
| | - J Nathan Kutz
- Department of Applied Mathematics, University of Washington, Seattle, Washington 98195-3925, USA
| |
Collapse
|
90
|
Ravid Tannenbaum N, Burak Y. Shaping Neural Circuits by High Order Synaptic Interactions. PLoS Comput Biol 2016; 12:e1005056. [PMID: 27517461 PMCID: PMC4982676 DOI: 10.1371/journal.pcbi.1005056] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2015] [Accepted: 06/30/2016] [Indexed: 11/19/2022] Open
Abstract
Spike timing dependent plasticity (STDP) is believed to play an important role in shaping the structure of neural circuits. Here we show that STDP generates effective interactions between synapses of different neurons, which were neglected in previous theoretical treatments, and can be described as a sum over contributions from structural motifs. These interactions can have a pivotal influence on the connectivity patterns that emerge under the influence of STDP. In particular, we consider two highly ordered forms of structure: wide synfire chains, in which groups of neurons project to each other sequentially, and self-connected assemblies. We show that high-order synaptic interactions can enable the formation of both structures, depending on the form of the STDP function and the time course of synaptic currents. Furthermore, within a certain regime of biophysical parameters, emergence of the ordered connectivity occurs robustly and autonomously in a stochastic network of spiking neurons, without the need to expose the neural network to structured inputs during learning. Plasticity between neural connections plays a key role in our ability to process and store information. One of the fundamental questions about plasticity is the extent to which local processes, affecting individual synapses, are responsible for large-scale structures of neural connectivity. Here we focus on two types of structures: synfire chains and self-connected assemblies. These structures are often proposed as forms of neural connectivity that can support brain functions such as memory and generation of motor activity. We show that an important plasticity mechanism, spike timing dependent plasticity, can lead to autonomous emergence of these large-scale structures in the brain: in contrast to previous theoretical proposals, we show that the emergence can occur autonomously even if no instructive signals are fed into the neural network while its structure is being shaped by synaptic plasticity.
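The elementary ingredient of such analyses is the pair-based STDP window: pre-before-post spike pairs potentiate a synapse and post-before-pre pairs depress it, each weighted exponentially by the time lag. A toy sketch of that building block (the additive rule and parameter values are illustrative assumptions; the paper's high-order interactions go beyond this):

```python
import math

def stdp_update(pre_spikes, post_spikes, A_plus=0.01, A_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Summed pair-based STDP weight change for two spike trains (times in ms).
    Pre-before-post pairs potentiate; post-before-pre pairs depress."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            lag = t_post - t_pre
            if lag > 0:                               # causal pair: potentiate
                dw += A_plus * math.exp(-lag / tau_plus)
            elif lag < 0:                             # anti-causal: depress
                dw -= A_minus * math.exp(lag / tau_minus)
    return dw
```

With chain-like timing (pre leading post) the net change is positive; this is the local bias that, amplified by the motif interactions analyzed in the paper, can seed synfire-chain structure.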
Collapse
Affiliation(s)
- Neta Ravid Tannenbaum
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| | - Yoram Burak
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel
| |
Collapse
|
91
|
Doiron B, Litwin-Kumar A, Rosenbaum R, Ocker GK, Josić K. The mechanics of state-dependent neural correlations. Nat Neurosci 2016; 19:383-93. [PMID: 26906505 DOI: 10.1038/nn.4242] [Citation(s) in RCA: 173] [Impact Index Per Article: 21.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Accepted: 01/12/2016] [Indexed: 12/12/2022]
Abstract
Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of population activity is the trial-to-trial correlated fluctuation of spike train outputs from recorded neuron pairs. Similar to the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the physiological mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high-dimensional neural data.
Collapse
Affiliation(s)
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA
| | - Ashok Litwin-Kumar
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA.,Center for Theoretical Neuroscience, Columbia University, New York, New York, USA
| | - Robert Rosenbaum
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA.,Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, USA.,Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, USA
| | - Gabriel K Ocker
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA.,Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, USA.,Allen Institute for Brain Science, Seattle, Washington, USA
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, USA.,Department of Biology and Biochemistry, University of Houston, Houston, Texas, USA
| |
Collapse
|
92
|
Jovanović S, Rotter S. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks. PLoS Comput Biol 2016; 12:e1004963. [PMID: 27271768 PMCID: PMC4894630 DOI: 10.1371/journal.pcbi.1004963] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 05/02/2016] [Indexed: 01/06/2023] Open
Abstract
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology, random networks of Erdős-Rényi type and networks with highly interconnected hubs, we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations. Many biological phenomena can be viewed as dynamical processes on a graph. Understanding coordinated activity of nodes in such a network is of some importance, as it helps to characterize the behavior of the complex system. Of course, the topology of a network plays a pivotal role in determining the level of coordination among its different vertices.
In particular, correlations between triplets of events (here: action potentials generated by neurons) have recently garnered some interest in the theoretical neuroscience community. In this paper, we present a decomposition of an average measure of third-order coordinated activity of neurons in a spiking neuronal network in terms of the relevant topological motifs present in the underlying graph. We study different network topologies and show, in particular, that the presence of certain tree motifs in the synaptic connectivity graph greatly affects the strength of third-order correlations between spike trains of different neurons.
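The decomposition sketched in the abstract rests on the linear-response operator of a linear (Hawkes-type) point process, (I − W)⁻¹ = Σₖ Wᵏ, whose powers enumerate path motifs of increasing length. The following second-order sketch shows the principle (the paper itself treats third-order correlations; matrices and baselines are illustrative):

```python
import numpy as np

def hawkes_rates_and_covariance(W, baseline):
    """Stationary rates and integrated pairwise covariance of a linear
    (Hawkes-type) point-process network with connectivity W (spectral
    radius < 1) and baseline intensities `baseline`."""
    n = len(baseline)
    B = np.linalg.inv(np.eye(n) - W)      # sum over all paths: I + W + W^2 + ...
    rates = B @ baseline
    D = np.diag(rates)                    # Poisson-like autocovariance term
    C = B @ D @ B.T                       # paths converging on both neurons
    return rates, C

def motif_truncation(W, baseline, order):
    """Covariance approximation keeping only motifs with up to `order`
    synapses, i.e. truncating the series B = sum_k W^k."""
    n = len(baseline)
    B = sum(np.linalg.matrix_power(W, k) for k in range(order + 1))
    rates = np.linalg.inv(np.eye(n) - W) @ baseline
    return B @ np.diag(rates) @ B.T
```

Truncating the series makes explicit which motif lengths dominate the correlations, which is the second-order analogue of the tree-motif decomposition derived in the paper.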
Collapse
Affiliation(s)
- Stojan Jovanović
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- CB, CSC, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
| |
Collapse
|
93
|
Puelma Touzel M, Wolf F. Complete Firing-Rate Response of Neurons with Complex Intrinsic Dynamics. PLoS Comput Biol 2015; 11:e1004636. [PMID: 26720924 PMCID: PMC4697854 DOI: 10.1371/journal.pcbi.1004636] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2015] [Accepted: 10/29/2015] [Indexed: 11/23/2022] Open
Abstract
The response of a neuronal population over a space of inputs depends on the intrinsic properties of its constituent neurons. Two main modes of single neuron dynamics, integration and resonance, have been distinguished. While resonator cell types exist in a variety of brain areas, few models incorporate this feature and fewer have investigated its effects. To understand better how a resonator's frequency preference emerges from its intrinsic dynamics and contributes to its local area's population firing rate dynamics, we analyze the dynamic gain of an analytically solvable two-degree-of-freedom neuron model. In the Fokker-Planck approach, the dynamic gain is intractable. The alternative Gauss-Rice approach lifts the resetting of the voltage after a spike. This allows us to derive a complete expression for the dynamic gain of a resonator neuron model in terms of a cascade of filters on the input. We find six distinct response types and use them to fully characterize the routes to resonance across all values of the relevant timescales. We find that resonance arises primarily due to slow adaptation with an intrinsic frequency acting to sharpen and adjust the location of the resonant peak. We determine the parameter regions for the existence of an intrinsic frequency and for subthreshold and spiking resonance, finding all possible intersections of the three. The expressions and analysis presented here provide an account of how intrinsic neuron dynamics shape dynamic population response properties and can facilitate the construction of an exact theory of correlations and stability of population activity in networks containing populations of resonator neurons. Dynamic gain, the amount by which features at specific frequencies in the input to a neuron are amplified or attenuated in its output spiking, is fundamental for the encoding of information by neural populations.
Most studies of dynamic gain have focused on neurons without intrinsic degrees of freedom exhibiting integrator-type subthreshold dynamics. Many neuron types in the brain, however, exhibit complex subthreshold dynamics such as resonance, found for instance in cortical interneurons, stellate cells, and mitral cells. A resonator neuron has at least two degrees of freedom for which the classical Fokker-Planck approach to calculating the dynamic gain is largely intractable. Here, we lift the voltage-reset rule after a spike, allowing us to derive a complete expression of the dynamic gain of a resonator neuron model. We find the gain can exhibit only six shapes. The resonant ones have peaks that become large due to intrinsic adaptation and become sharp due to an intrinsic frequency. A resonance can nevertheless result from either property. The analysis presented here helps explain how intrinsic neuron dynamics shape population-level response properties and provides a powerful tool for developing theories of inter-neuron correlations and dynamic responses of neural populations.
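The "cascade of filters" form of the dynamic gain can be illustrated with a toy linear resonator: a fast membrane low-pass stage combined with a slow negative-feedback (adaptation) stage. This is only a qualitative sketch with made-up time constants, not the Gauss-Rice derivation of the paper:

```python
import numpy as np

def dynamic_gain(freqs, tau_m=0.01, tau_a=0.1, g_a=2.0):
    """|Gain| of a toy resonator: membrane low-pass divided by a slow
    adaptation feedback term; all parameters are illustrative."""
    w = 2 * np.pi * freqs
    membrane = 1.0 / (1.0 + 1j * w * tau_m)        # fast low-pass stage
    feedback = 1.0 + g_a / (1.0 + 1j * w * tau_a)  # slow adaptation loop
    return np.abs(membrane / feedback)

freqs = np.logspace(-1, 3, 400)                    # 0.1 Hz .. 1 kHz
gain = dynamic_gain(freqs)
f_peak = freqs[np.argmax(gain)]                    # resonance between timescales
```

The gain is suppressed at low frequencies by adaptation and at high frequencies by the membrane, leaving a resonant peak in between; this is, in miniature, the adaptation-driven route to resonance the paper dissects exactly.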
Collapse
Affiliation(s)
- Maximilian Puelma Touzel
- Department for Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
- Institute for Nonlinear Dynamics, Georg-August University School of Science, Goettingen, Germany
| | - Fred Wolf
- Department for Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
- Institute for Nonlinear Dynamics, Georg-August University School of Science, Goettingen, Germany
- Kavli Institute for Theoretical Physics, University of California Santa Barbara, Santa Barbara, California, United States of America
| |
Collapse
|
94
|
Effect of edge pruning on structural controllability and observability of complex networks. Sci Rep 2015; 5:18145. [PMID: 26674854 PMCID: PMC4682190 DOI: 10.1038/srep18145] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Accepted: 11/11/2015] [Indexed: 02/07/2023] Open
Abstract
Controllability and observability of complex systems are vital concepts in many fields of science. The network structure of the system plays a crucial role in determining its controllability and observability. Because most naturally occurring complex systems show dynamic changes in their network connectivity, it is important to understand how perturbations in the connectivity affect the controllability of the system. To this end, we studied the control structure of different types of artificial, social and biological neuronal networks (BNN) as their connections were progressively pruned using four different pruning strategies. We show that the BNNs are more similar to scale-free networks than to small-world networks, when comparing the robustness of their control structure to structural perturbations. We introduce a new graph descriptor, ‘the cardinality curve’, to quantify the robustness of the control structure of a network to progressive edge pruning. Knowing the susceptibility of control structures to different pruning methods could help design strategies to destroy the control structures of dangerous networks such as epidemic networks. On the other hand, it could help make useful networks more resistant to edge attacks.
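For linear dynamics, structural controllability reduces to a maximum-matching computation (Liu, Slotine and Barabási): nodes left unmatched must receive direct control inputs, so pruning edges can only shrink the matching and enlarge the driver set. A minimal pure-Python sketch of that computation (the paper's own "cardinality curve" descriptor is not reproduced here):

```python
def min_driver_nodes(n, edges):
    """Minimum number of driver nodes for structural controllability:
    n minus a maximum matching of the bipartite graph whose left/right
    copies of the nodes are joined by the directed edges."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match = {}                          # right node -> matched left node
    def augment(u, seen):
        # Kuhn's augmenting-path search for an unmatched right node.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = u
                return True
        return False
    matched = sum(augment(u, set()) for u in range(n))
    return max(n - matched, 1)          # a fully matched network still needs 1 driver
```

Re-running this while removing edges one at a time yields the kind of robustness profile that the pruning strategies in the paper compare.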
Collapse
|
95
|
Chanes L, Barrett LF. Redefining the Role of Limbic Areas in Cortical Processing. Trends Cogn Sci 2015; 20:96-106. [PMID: 26704857 DOI: 10.1016/j.tics.2015.11.005] [Citation(s) in RCA: 146] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2015] [Revised: 11/10/2015] [Accepted: 11/16/2015] [Indexed: 12/13/2022]
Abstract
There is increasing evidence that the brain actively constructs action and perception using past experience. In this paper, we propose that the direction of information flow along gradients of laminar differentiation provides important insight on the role of limbic cortices in cortical processing. Cortical limbic areas, with a simple laminar structure (e.g., no or rudimentary layer IV), send 'feedback' projections to lower level better laminated areas. We propose that this 'feedback' functions as predictions that drive processing throughout the cerebral cortex. This hypothesis has the potential to provide a unifying framework for an increasing number of proposals that use predictive coding to explain a myriad of neural processes and disorders, and has important implications for hypotheses about consciousness.
Collapse
Affiliation(s)
- Lorena Chanes
- Northeastern University, Department of Psychology, Boston, MA, USA; Massachusetts General Hospital, Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Lisa Feldman Barrett
- Northeastern University, Department of Psychology, Boston, MA, USA; Massachusetts General Hospital, Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA.
| |
Collapse
|
96
|
Albert M, Bouret Y, Fromont M, Reynaud-Bouret P. Bootstrap and permutation tests of independence for point processes. Ann Stat 2015. [DOI: 10.1214/15-aos1351] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
97
|
Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLoS Comput Biol 2015; 11:e1004490. [PMID: 26325661 PMCID: PMC4556689 DOI: 10.1371/journal.pcbi.1004490] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Accepted: 08/05/2015] [Indexed: 11/19/2022] Open
Abstract
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
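The limitation can be made concrete in the diffusion approximation: preserving the mean input fixes how the synaptic weight must scale with synapse number, and preserving the input variance then forces a compensation through the external input variance, which can be pushed below zero, i.e. become infeasible. A sketch under these simplifying assumptions (symbols and values are illustrative, not the paper's full derivation):

```python
def rescale_synapses(K, J, K_new, rate, ext_var):
    """Rescale synaptic weight and external input variance when the number
    of synapses per neuron changes from K to K_new, so that the mean and
    variance of the total input are preserved (diffusion approximation).
    A negative ext_var_new signals that the requested downscaling is
    infeasible, the kind of limit derived in the paper."""
    J_new = J * K / K_new                                   # preserve mean K*J*rate
    ext_var_new = ext_var + rate * (K * J**2 - K_new * J_new**2)
    return J_new, ext_var_new
```

Downscaling inflates the internal variance term K·J²·rate faster than the mean constraint allows, so the external variance must absorb the difference; once it would have to be negative, no consistent downscaled network exists.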
Collapse
|
98
|
Ocker GK, Litwin-Kumar A, Doiron B. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses. PLoS Comput Biol 2015; 11:e1004458. [PMID: 26291697 PMCID: PMC4546203 DOI: 10.1371/journal.pcbi.1004458] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2014] [Accepted: 07/19/2015] [Indexed: 11/18/2022] Open
Abstract
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
| | - Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
| | - Brent Doiron
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
| |
Collapse
|
99
|
Payeur A, Maler L, Longtin A. Oscillatorylike behavior in feedforward neuronal networks. Phys Rev E 2015; 92:012703. [PMID: 26274199 DOI: 10.1103/physreve.92.012703] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2015] [Indexed: 06/04/2023]
Abstract
We demonstrate how rhythmic activity can arise in neural networks from feedforward rather than recurrent circuitry and, in so doing, we provide a mechanism capable of explaining the temporal decorrelation of γ-band oscillations. We compare the spiking activity of a delayed recurrent network of inhibitory neurons with that of a feedforward network with the same neural properties and axonal delays. Paradoxically, these very different connectivities can yield very similar spike-train statistics in response to correlated input. This happens when neurons are noisy and axonal delays are short. A Taylor expansion of the feedback network's susceptibility, or frequency-dependent gain function, can then be truncated at first order to a good approximation, thus matching the feedforward net's susceptibility. The feedback network is known to display oscillations; these oscillations imply that the spiking activity of the population is felt by all neurons within the network, leading to direct spike correlations in a given neuron. On the other hand, in the output layer of the feedforward net, the interaction between the external drive and the delayed feedforward projection of this drive by the input layer causes indirect spike correlations: spikes fired by a given output layer neuron are correlated only through the activity of the input layer neurons. High noise and short delays partially bridge the gap between these two types of correlation, yielding similar spike-train statistics for both networks. This similarity is even stronger when the delay is distributed, as confirmed by linear response theory.
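The first-order Taylor argument can be checked numerically. For a delayed-feedback susceptibility of the schematic form 1/(1 + g·e^{-iωd}), the feedforward-like truncation 1 − g·e^{-iωd} deviates by |x|²/|1 + x| with x = g·e^{-iωd}, which is small exactly when the effective gain g is small. A sketch with an illustrative delay and gain values:

```python
import numpy as np

def taylor_error(g, w, d=0.002):
    """Max deviation over frequencies w (rad/s) between the schematic
    delayed-feedback susceptibility 1/(1+x) and its first-order
    (feedforward-like) truncation 1-x, with x = g*exp(-i*w*d)."""
    x = g * np.exp(-1j * w * d)            # delayed loop gain
    return float(np.max(np.abs(1.0 / (1.0 + x) - (1.0 - x))))

w = np.linspace(0.0, 200.0, 1000)          # frequency range, illustrative
weak, strong = taylor_error(0.1, w), taylor_error(0.9, w)
```

For weak effective gain the two susceptibilities are nearly indistinguishable, mirroring the regime (noisy neurons, short delays) in which the feedforward and recurrent networks produce similar spike-train statistics.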
Collapse
Affiliation(s)
- Alexandre Payeur
- Department of Physics, University of Ottawa, 150 Louis-Pasteur, Ottawa, Canada K1N 6N5
| | - Leonard Maler
- Department of Cell and Molecular Medicine, University of Ottawa, 451 Smyth Road, Ottawa, Canada K1H 8M5
| | - André Longtin
- Department of Physics, University of Ottawa, 150 Louis-Pasteur, Ottawa, Canada K1N 6N5 and Department of Cell and Molecular Medicine, University of Ottawa, 451 Smyth Road, Ottawa, Canada K1H 8M5
| |
Collapse
|
100
|
Shi L, Niu X, Wan H, Shang Z, Wang Z. A small-world-based population encoding model of the primary visual cortex. Biol Cybern 2015; 109:377-388. [PMID: 25753903 DOI: 10.1007/s00422-015-0649-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2014] [Accepted: 02/16/2015] [Indexed: 06/04/2023]
Abstract
A wide range of evidence has shown that information encoding performed by the visual cortex involves complex activities of neuronal populations. However, the effects of the neuronal connectivity structure on the population's encoding performance remain poorly understood. In this paper, a small-world-based population encoding model of the primary visual cortex (V1) is established on the basis of the generalized linear model (GLM) to describe the computation of the neuronal population. The model mainly consists of three sets of filters, including a spatiotemporal stimulus filter, a post-spike history filter, and a set of coupling filters, with the coupled neurons organized as a small-world network. The parameters of the model were fitted with neuronal data of the rat V1 recorded with a micro-electrode array. Compared to a traditional GLM that does not consider the small-world structure of the neuronal population, the proposed model was shown to produce more accurate spiking responses to grating stimuli and to enhance the capability of the neuronal population to carry information. The comparison results confirmed the validity of the proposed model and further suggest a role for small-world structure in the encoding performance of local populations in V1, which provides new insights for understanding encoding mechanisms of small-scale populations in the visual system.
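A coupled Poisson GLM of the general kind described, whose log-rate sums a bias, a stimulus filter, a spike-history filter, and coupling from the recent spikes of other neurons, can be sketched generically. The filter shapes and the coupling matrix (for example, a small-world adjacency) are left to the caller; everything below is an illustrative simplification, not the fitted model of the paper:

```python
import numpy as np

def simulate_glm(stim, k, h, W, c, dt=0.001, seed=0):
    """Coupled Poisson-GLM population: log-rate = bias + stimulus filter
    + own spike-history filter + coupling from recent population spikes.
    A generic sketch; supply a small-world W to mimic the paper's setup."""
    rng = np.random.default_rng(seed)
    T, n, L = len(stim), len(c), len(h)
    spikes = np.zeros((T, n))
    drive = np.convolve(stim, k)[:T]              # causal stimulus filtering
    for t in range(T):
        past = spikes[max(0, t - L):t]            # recent spikes, all neurons
        hist = (h[:len(past)][::-1, None] * past).sum(axis=0)  # own history
        coup = past.sum(axis=0) @ W               # recent counts through W
        lam = np.exp(c + drive[t] + hist + coup) * dt
        spikes[t] = rng.poisson(np.clip(lam, 0.0, 1.0))
    return spikes
```

Passing a Watts-Strogatz-style coupling matrix for W would mimic the small-world variant the authors compare against a GLM without population structure.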
Collapse
Affiliation(s)
- Li Shi
- The School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
| | | | | | | | | |
Collapse
|