151
Smith RX, Jann K, Ances B, Wang DJ. Wavelet-based regularity analysis reveals recurrent spatiotemporal behavior in resting-state fMRI. Hum Brain Mapp 2015; 36:3603-20. [PMID: 26096080] [PMCID: PMC4635674] [DOI: 10.1002/hbm.22865]
Abstract
One of the major findings from multimodal neuroimaging studies in the past decade is that the human brain is anatomically and functionally organized into large-scale networks. In resting state fMRI (rs-fMRI), spatial patterns emerge when temporal correlations between various brain regions are tallied, evidencing networks of ongoing intercortical cooperation. However, the dynamic structure governing the brain's spontaneous activity is far less understood due to the short and noisy nature of the rs-fMRI signal. Here, we develop a wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform to measure recurrent temporal pattern stability within the rs-fMRI signal across multiple temporal scales. The method consists of performing a stationary wavelet transform to preserve signal structure, followed by construction of "lagged" subsequences to adjust for correlated features, and finally the calculation of sample entropy across wavelet scales based on an "objective" estimate of noise level at each scale. We found that the brain's default mode network (DMN) areas manifest a higher level of irregularity in rs-fMRI time series than the rest of the brain. In 25 aged subjects with mild cognitive impairment and 25 matched healthy controls, wavelet-based regularity analysis showed improved sensitivity in detecting changes in the regularity of rs-fMRI signals between the two groups within the DMN and executive control networks, compared with standard multiscale entropy analysis. Wavelet-based regularity analysis based on noise estimation capabilities of the wavelet transform is a promising technique to characterize the dynamic structure of rs-fMRI as well as other biological signals.
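The pipeline this abstract describes (undecimated wavelet transform, per-scale noise estimate, sample entropy) can be sketched in a few lines. The Haar filters, the MAD-based noise estimate, and all parameters below are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log(A/B), where B counts matching templates of length m
    and A of length m+1, matching within tolerance r (Chebyshev distance)."""
    n = len(x)
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < r)
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def haar_swt_details(x, levels=3):
    """Stationary (undecimated) Haar wavelet detail coefficients per scale."""
    details, approx = [], x.astype(float)
    for level in range(levels):
        step = 2 ** level
        pad = np.concatenate([approx, approx[:step]])   # circular border handling
        details.append((pad[:-step] - pad[step:]) / np.sqrt(2))
        approx = (pad[:-step] + pad[step:]) / np.sqrt(2)
    return details

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.5 * rng.standard_normal(512)

entropies = []
for d in haar_swt_details(signal, levels=3):
    sigma = np.median(np.abs(d)) / 0.6745               # wavelet noise estimate (MAD)
    entropies.append(sample_entropy(d, m=2, r=0.2 * max(sigma, 1e-12)))
```

Tying the entropy tolerance to the per-scale noise estimate is what makes the values comparable across scales, which is the point of the method.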
Affiliation(s)
- Robert X. Smith
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson‐Lovelace Brain Mapping Center, University of California, Los Angeles, California
- Kay Jann
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson‐Lovelace Brain Mapping Center, University of California, Los Angeles, California
- Beau Ances
- Department of Neurology, School of Medicine, Washington University in Saint Louis, Saint Louis, Missouri
- Danny J.J. Wang
- Laboratory of FMRI Technology (LOFT), Department of Neurology, Ahmanson‐Lovelace Brain Mapping Center, University of California, Los Angeles, California
152
Logiaco L, Quilodran R, Procyk E, Arleo A. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex. PLoS Biol 2015; 13:e1002222. [PMID: 26266537] [PMCID: PMC4534466] [DOI: 10.1371/journal.pbio.1002222]
Abstract
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators.
Affiliation(s)
- Laureline Logiaco
- INSERM, U968, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, UMR_S 968, Institut de la Vision, Paris, France
- CNRS, UMR_7210, Paris, France
- René Quilodran
- Escuela de Medicina, Departamento de Pre-clínicas, Universidad de Valparaíso, Hontaneda, Valparaíso, Chile
- Emmanuel Procyk
- Stem Cell and Brain Research Institute, Institut National de la Santé et de la Recherche Médicale U846, 69500 Bron, France
- Université de Lyon, Université Lyon 1, Lyon, France
- Angelo Arleo
- INSERM, U968, Paris, France
- Sorbonne Universités, UPMC Univ Paris 06, UMR_S 968, Institut de la Vision, Paris, France
- CNRS, UMR_7210, Paris, France
153
Bellay T, Klaus A, Seshadri S, Plenz D. Irregular spiking of pyramidal neurons organizes as scale-invariant neuronal avalanches in the awake state. eLife 2015; 4:e07224. [PMID: 26151674] [PMCID: PMC4492006] [DOI: 10.7554/elife.07224]
Abstract
Spontaneous fluctuations in neuronal activity emerge at many spatial and temporal scales in cortex. Population measures found these fluctuations to organize as scale-invariant neuronal avalanches, suggesting cortical dynamics to be critical. Macroscopic dynamics, though, depend on physiological states and are ambiguous as to their cellular composition, spatiotemporal origin, and contributions from synaptic input or action potential (AP) output. Here, we study spontaneous firing in pyramidal neurons (PNs) from rat superficial cortical layers in vivo and in vitro using 2-photon imaging. As the animal transitions from the anesthetized to awake state, spontaneous single neuron firing increases in irregularity and assembles into scale-invariant avalanches at the group level. In vitro spike avalanches emerged naturally yet required balanced excitation and inhibition. This demonstrates that neuronal avalanches are linked to the global physiological state of wakefulness and that cortical resting activity organizes as avalanches from firing of local PN groups to global population activity.

Even when we are not engaged in any specific task, the brain shows coordinated patterns of spontaneous activity that can be monitored using electrodes placed on the scalp. This resting activity shapes the way that the brain responds to subsequent stimuli. Changes in resting activity patterns are seen in various neurological and psychiatric disorders, as well as in healthy individuals following sleep deprivation. The brain's outer layer is known as the cortex. On a large scale, when monitoring many thousands of neurons, resting activity in the cortex propagates in an organized manner. Specifically, resting activity was found to organize as so-called neuronal avalanches, in which large bursts of neuronal activity are grouped with medium-sized and smaller bursts in a very characteristic order. In fact, the sizes of these bursts—that is, the number of neurons that fire—are scale-invariant: the ratio of large bursts to medium-sized bursts is the same as that of medium-sized to small bursts. Such scale-invariance suggests that neuronal bursts are not independent of one another. However, it is largely unclear how neuronal avalanches arise from individual neurons, which fire simply in a noisy, irregular manner. Bellay, Klaus et al. have now provided insights into this process by examining patterns of firing of a particular type of neuron—known as a pyramidal cell—in the cortex of rats as they recover from anesthesia. As the animals awaken, the firing of individual pyramidal cells in the cortex becomes even more irregular than under anesthesia. However, by considering the activity of a group of these neurons, Bellay, Klaus et al. realized that it is this more irregular firing that gives rise to neuronal avalanches, and that this occurs only when the animals are awake. Further experiments on individual pyramidal cells grown in the laboratory confirmed that neuronal avalanches emerge spontaneously from the irregular firing of individual neurons. These avalanches depend on there being a balance between two types of activity among the cells: ‘excitatory’ activity that causes other neurons to fire, and ‘inhibitory’ activity that prevents neuronal firing. Given that resting activity influences the brain's responses to the outside world, the origins of neuronal avalanches are likely to provide clues about the way the brain processes information. Future experiments should also examine the possibility that the emergence of neuronal avalanches marks the transition from unconsciousness to wakefulness within the brain.
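A common operational definition of avalanches in this line of work bins spikes in time and cuts an avalanche at every empty bin. The sketch below applies that definition to a synthetic raster; the raster statistics are invented for illustration and are not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy spike raster: 50 "neurons" firing as Poisson-like events, with brief
# shared rate bursts standing in for population events.
t_bins, n_cells = 5000, 50
rate = 0.02 + 0.3 * (rng.random(t_bins) < 0.05)       # occasional population bursts
raster = rng.random((n_cells, t_bins)) < rate          # broadcast rate over cells

activity = raster.sum(axis=0)                          # population count per time bin

def avalanche_sizes(activity):
    """An avalanche = a run of consecutive non-empty time bins, bracketed by
    empty bins; its size = total number of spikes inside the run."""
    sizes, current = [], 0
    for count in activity:
        if count:
            current += count
        elif current:
            sizes.append(current)
            current = 0
    if current:
        sizes.append(current)
    return np.array(sizes)

sizes = avalanche_sizes(activity)
```

Every spike belongs to exactly one avalanche, so the sizes partition the total spike count; scale-invariance claims are then tested on the distribution of `sizes`.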
Affiliation(s)
- Timothy Bellay
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Andreas Klaus
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Saurav Seshadri
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
- Dietmar Plenz
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, United States
154
Becoming a mother: circuit plasticity underlying maternal behavior. Curr Opin Neurobiol 2015; 35:49-56. [PMID: 26143475] [DOI: 10.1016/j.conb.2015.06.007]
Abstract
The transition to motherhood is a dramatic event during the lifetime of many animals. In mammals, motherhood is accompanied by hormonal changes in the brain that start during pregnancy, followed by experience-dependent plasticity after parturition. Together, these changes prime the nervous system of the mother for efficient nurturing of her offspring. Recent work has described how neural circuits are modified during the transition to motherhood. Here we discuss changes in the auditory cortex during motherhood as a model for maternal plasticity in sensory systems. We compare classical plasticity paradigms with changes that arise naturally in mothers, highlighting current efforts to establish a mechanistic understanding of plasticity and its different components in the context of maternal behavior.
155
Harish O, Hansel D. Asynchronous Rate Chaos in Spiking Neuronal Circuits. PLoS Comput Biol 2015; 11:e1004266. [PMID: 26230679] [PMCID: PMC4521798] [DOI: 10.1371/journal.pcbi.1004266]
Abstract
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balanced excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results.
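For context, the classical rate-model setting in which chaotic fluctuations emerge for strong coupling can be demonstrated directly. This sketch uses the standard x' = -x + J tanh(x) random network, a simpler relative of the spiking E-I circuits analyzed here; the gain, size, and step are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
n, g, dt = 500, 1.8, 0.1          # network size, gain (chaos expected for g > 1), step
J = g * rng.standard_normal((n, n)) / np.sqrt(n)

def simulate(x0, steps=2000):
    """Euler integration of the rate network dx/dt = -x + J tanh(x)."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))
    return x

x0 = rng.standard_normal(n)
x_a = simulate(x0)
x_b = simulate(x0 + 1e-6 * rng.standard_normal(n))   # tiny perturbation of the start

# In the chaotic regime the perturbed trajectory separates from the original.
divergence = np.linalg.norm(x_a - x_b)
```

For g > 1 the quiescent state x = 0 is unstable, so the network settles into self-generated fluctuating activity; comparing the two runs probes sensitivity to initial conditions.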
Affiliation(s)
- Omri Harish
- Center for Neurophysics, Physiology and Pathologies, CNRS UMR8119 and Institute of Neuroscience and Cognition, Université Paris Descartes, Paris, France
- David Hansel
- Center for Neurophysics, Physiology and Pathologies, CNRS UMR8119 and Institute of Neuroscience and Cognition, Université Paris Descartes, Paris, France
- The Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
156
Sadeh S, Clopath C, Rotter S. Processing of Feature Selectivity in Cortical Networks with Specific Connectivity. PLoS One 2015; 10:e0127547. [PMID: 26083363] [PMCID: PMC4471232] [DOI: 10.1371/journal.pone.0127547]
Abstract
Although non-specific at the onset of eye opening, networks in rodent visual cortex attain a non-random structure after eye opening, with a specific bias for connections between neurons of similar preferred orientations. As orientation selectivity is already present at eye opening, it remains unclear how this specificity in network wiring contributes to feature selectivity. Using large-scale inhibition-dominated spiking networks as a model, we show that feature-specific connectivity leads to a linear amplification of feedforward tuning, consistent with recent electrophysiological single-neuron recordings in rodent neocortex. Our results show that optimal amplification is achieved at an intermediate regime of specific connectivity. In this configuration a moderate increase of pairwise correlations is observed, consistent with recent experimental findings. Furthermore, we observed that feature-specific connectivity leads to the emergence of orientation-selective reverberating activity, and entails pattern completion in network responses. Our theoretical analysis provides a mechanistic understanding of subnetworks’ responses to visual stimuli, and casts light on the regime of operation of sensory cortices in the presence of specific connectivity.
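The linear-amplification result summarized above can be illustrated with a ring-style rate model in which recurrent weights depend on orientation difference. The coupling constants j0 and j2 below are illustrative, not taken from the paper:

```python
import numpy as np

n = 200
theta = np.linspace(0, np.pi, n, endpoint=False)      # preferred orientations
stim = np.pi / 3
b = 1.0 + 0.2 * np.cos(2 * (theta - stim))            # weakly tuned feedforward drive

# Feature-specific recurrent weights: uniform inhibition plus excitation that
# is stronger between neurons with similar preferred orientations.
j0, j2 = -0.5, 0.8
diff = theta[:, None] - theta[None, :]
W = (j0 + j2 * np.cos(2 * diff)) / n

r = np.linalg.solve(np.eye(n) - W, b)                 # linear steady state r = b + W r

def modulation(x):
    """Amplitude of the cos(2*theta) component relative to the mean."""
    c = 2 * np.mean(x * np.cos(2 * (theta - stim)))
    return c / np.mean(x)

gain = modulation(r) / modulation(b)                  # relative tuning amplification
```

With these constants the tuned component is amplified by 1/(1 - j2/2) while the mean is divisively suppressed by 1/(1 - j0), so the relative tuning gain works out to 2.5: linear amplification of the feedforward tuning, as in the abstract's account.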
Affiliation(s)
- Sadra Sadeh
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- Bioengineering Department, Imperial College London, London, UK
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
157
Muir DR, Mrsic-Flogel T. Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks. Phys Rev E 2015; 91:042808. [PMID: 25974548] [DOI: 10.1103/physreve.91.042808]
Abstract
The eigenvalue spectrum of the matrix of directed weights defining a neural network model is informative of several stability and dynamical properties of network activity. Existing results for eigenspectra of sparse asymmetric random matrices neglect spatial or other constraints in determining entries in these matrices, and so are only partially applicable to cortical-like architectures. Here we examine a parameterized class of networks that are defined by sparse connectivity, with connection weighting modulated by physical proximity (i.e., asymmetric Euclidean random matrices), modular network partitioning, and functional specificity within the excitatory population. We present a set of analytical constraints that apply to the eigenvalue spectra of associated weight matrices, highlighting the relationship between connectivity rules and classes of network dynamics.
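As a baseline for such eigenspectrum bounds, the spectral radius of an unstructured sparse asymmetric random matrix follows the circular-law prediction sqrt(N x Var), which a quick numerical check confirms; the size, density, and weight scale here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, w = 400, 0.1, 1.0                      # size, connection probability, weight scale

# Sparse asymmetric random matrix: entry w with probability p, else 0,
# then centered by subtracting the expected value (removes the rank-one mean).
A = w * (rng.random((n, n)) < p)
A = A - p * w

radius = np.max(np.abs(np.linalg.eigvals(A)))
predicted = np.sqrt(n * p * (1 - p)) * w     # circular-law radius sqrt(n * Var)
```

Spatial modulation, modular partitioning, and Dale-type sign constraints, the paper's subject, deform this baseline disc, which is what the analytical bounds characterize.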
Affiliation(s)
- Dylan R Muir
- Biozentrum, University of Basel, 4056 Basel, Switzerland
158
Shastri BJ, Nahmias MA, Tait AN, Wu B, Prucnal PR. SIMPEL: circuit model for photonic spike processing laser neurons. Opt Express 2015; 23:8029-8044. [PMID: 25837141] [DOI: 10.1364/oe.23.008029]
Abstract
We propose an equivalent circuit model for photonic spike processing laser neurons with an embedded saturable absorber—a simulation model for photonic excitable lasers (SIMPEL). We show that by mapping the laser neuron rate equations into a circuit model, SPICE analysis can be used as an efficient and accurate engine for numerical calculations, capable of generalization to a variety of different types of laser neurons with saturable absorbers found in the literature. The development of this model parallels the Hodgkin-Huxley model of neuron biophysics, a circuit framework which brought efficiency, modularity, and generalizability to the study of neural dynamics. We employ the model to study various signal-processing effects such as excitability with excitatory and inhibitory pulses, binary all-or-nothing response, and bistable dynamics.
159
Toyoizumi T, Huang H. Structure of attractors in randomly connected networks. Phys Rev E 2015; 91:032802. [PMID: 25871152] [DOI: 10.1103/physreve.91.032802]
Abstract
The deterministic dynamics of randomly connected neural networks are studied, where a state of binary neurons evolves according to a discrete-time synchronous update rule. We give theoretical support that the overlap of systems' states between the current and a previous time develops in time according to a Markovian stochastic process in large networks. This Markovian process predicts how often a network revisits one of the previously visited states, depending on the system size. The state concentration probability, i.e., the probability that two distinct states coevolve to the same state, is utilized to analytically derive various characteristics that quantify attractors' structure. The analytical predictions about the total number of attractors, the typical cycle length, and the number of states belonging to all attractive cycles match well with numerical simulations for relatively large system sizes.
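The quantities analyzed here (number of attractors, cycle membership) can be measured exactly in a small instance by enumerating all states of a binary network under the synchronous update rule. A brute-force sketch with illustrative random couplings:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10                                        # small so all 2^n states can be enumerated
J = rng.standard_normal((n, n)) / np.sqrt(n)  # random couplings
np.fill_diagonal(J, 0)

def step(state):
    """Deterministic synchronous update of binary (+/-1) neurons."""
    return np.where(J @ state >= 0, 1, -1)

def find_cycle(state):
    """Iterate until a previously visited state recurs; return the cycle,
    canonicalized as a sorted tuple of states."""
    seen, t = {}, 0
    while tuple(state) not in seen:
        seen[tuple(state)] = t
        state = step(state)
        t += 1
    start = seen[tuple(state)]
    return tuple(sorted(k for k, v in seen.items() if v >= start))

# Map every initial state to its attractor and count distinct attractors.
attractors = set()
for idx in range(2 ** n):
    bits = 2 * np.array([(idx >> b) & 1 for b in range(n)]) - 1
    attractors.add(find_cycle(bits))
num_attractors = len(attractors)
```

The theory in the paper predicts how such counts and cycle lengths scale with system size, which brute force can only check at small n.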
Affiliation(s)
- Taro Toyoizumi
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan and Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan
- Haiping Huang
- RIKEN Brain Science Institute, Wako-shi, Saitama 351-0198, Japan
160
Meisel C, Klaus A, Kuehn C, Plenz D. Critical slowing down governs the transition to neuron spiking. PLoS Comput Biol 2015; 11:e1004097. [PMID: 25706912] [PMCID: PMC4338190] [DOI: 10.1371/journal.pcbi.1004097]
Abstract
Many complex systems have been found to exhibit critical transitions, or so-called tipping points, which are sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system, ranging from controlled state switching to a catastrophic break-down; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on utilizing qualitative changes in markers related to a system’s tendency to recover more slowly from a perturbation the closer it gets to the transition—a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understand critical transitions: to identify the transition mechanism and improve transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical system approach has identified two robust mechanisms for the transition from subthreshold activity to spiking, saddle-node and Hopf bifurcation. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that 1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, 2) the scaling laws suggest a saddle-node bifurcation governing slowing down, and 3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge, this is the first report of scaling laws of critical slowing down in an experiment. They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down as a strategy applicable to other complex systems.

Neurons efficiently convey information by being able to switch rapidly between two different states: quiescence and spiking. Such sudden shifts to a qualitatively different state are observed in many complex systems; the often dramatic consequences of these tipping points for diverse fields such as economics, ecology, and the brain have spurred interest to better understand their transition mechanisms and predict their sudden occurrences. By studying the transition from neuronal quiescence to spiking, we show that the quantitative scaling laws for critical slowing down, i.e., a system’s tendency to recover more slowly from perturbations upon approaching its transition point, inform about the underlying bifurcation mechanism and can be used to improve the prediction of a system’s tipping point.
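The saddle-node scaling the authors exploit can be reproduced in its normal form: near the bifurcation at a = 0, recovery from a perturbation slows roughly as a^(-1/2). A minimal numerical check, with step size and kick amplitude chosen arbitrarily:

```python
import numpy as np

def recovery_time(a, dt=1e-3, kick=0.1):
    """Time for x' = a - x^2 (saddle-node normal form) to relax back to within
    kick/e of the stable fixed point x* = sqrt(a) after a small perturbation."""
    x_star = np.sqrt(a)
    x = x_star - kick
    t = 0.0
    while x_star - x > kick / np.e:
        x += dt * (a - x * x)     # Euler step
        t += dt
    return t

# Approaching the bifurcation at a = 0, recovery time grows; fit the exponent.
a_values = np.array([0.4, 0.1, 0.025])
taus = np.array([recovery_time(a) for a in a_values])
exponent = np.polyfit(np.log(a_values), np.log(taus), 1)[0]
```

The fitted exponent comes out near -1/2 (the finite kick biases it slightly), which is exactly the slowing-down signature used to identify the bifurcation type from data.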
Affiliation(s)
- Christian Meisel
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, Maryland, United States of America
- Department of Neurology, University Clinic Carl Gustav Carus, Dresden, Germany
- Andreas Klaus
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, Maryland, United States of America
- Christian Kuehn
- Institute for Analysis and Scientific Computing, Vienna University of Technology, Vienna, Austria
- Dietmar Plenz
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, Maryland, United States of America
161
Zhang JW, Rangan AV. A reduction for spiking integrate-and-fire network dynamics ranging from homogeneity to synchrony. J Comput Neurosci 2015; 38:355-404. [PMID: 25601481] [DOI: 10.1007/s10827-014-0543-3]
Abstract
In this paper we provide a general methodology for systematically reducing the dynamics of a class of integrate-and-fire networks down to an augmented 4-dimensional system of ordinary differential equations. The class of integrate-and-fire networks we focus on are homogeneously structured, strongly coupled, and fluctuation-driven. Our reduction succeeds where most current firing-rate and population-dynamics models fail because we account for the emergence of 'multiple-firing-events' involving the semi-synchronous firing of many neurons. These multiple-firing-events are largely responsible for the fluctuations generated by the network and, as a result, our reduction faithfully describes many dynamic regimes ranging from homogeneous to synchronous. Our reduction is based on first principles, and provides an analyzable link between the integrate-and-fire network parameters and the relatively low-dimensional dynamics underlying the augmented 4-dimensional ODE.
Affiliation(s)
- J W Zhang
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
162
Synaptic plasticity enables adaptive self-tuning critical networks. PLoS Comput Biol 2015; 11:e1004043. [PMID: 25590427] [PMCID: PMC4295840] [DOI: 10.1371/journal.pcbi.1004043]
Abstract
During rest, the mammalian cortex displays spontaneous neural activity. Spiking of single neurons during rest has been described as irregular and asynchronous. In contrast, recent in vivo and in vitro population measures of spontaneous activity, using the LFP, EEG, MEG or fMRI suggest that the default state of the cortex is critical, manifested by spontaneous, scale-invariant, cascades of activity known as neuronal avalanches. Criticality keeps a network poised for optimal information processing, but this view seems to be difficult to reconcile with apparently irregular single neuron spiking. Here, we simulate a 10,000 neuron, deterministic, plastic network of spiking neurons. We show that a combination of short- and long-term synaptic plasticity enables these networks to exhibit criticality in the face of intrinsic, i.e. self-sustained, asynchronous spiking. Brief external perturbations lead to adaptive, long-term modification of intrinsic network connectivity through long-term excitatory plasticity, whereas long-term inhibitory plasticity enables rapid self-tuning of the network back to a critical state. The critical state is characterized by a branching parameter oscillating around unity, a critical exponent close to -3/2 and a long tail distribution of a self-similarity parameter between 0.5 and 1.
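The branching parameter and the -3/2 size exponent mentioned in this abstract can be seen in the simplest critical model: a branching process with mean offspring number sigma = 1. This is the standard toy model for avalanche statistics, not the plastic network itself; counts and bin edges below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_size(sigma, rng, cap=10_000):
    """Total activations in a branching process where each active unit
    activates Poisson(sigma) units in the next step (size capped)."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(sigma * active)
        total += active
    return total

sizes = np.array([avalanche_size(1.0, rng) for _ in range(20_000)])

# At the critical branching parameter sigma = 1, P(size) ~ size^(-3/2).
# Estimate the exponent from logarithmically binned counts.
bins = np.array([1, 2, 4, 8, 16, 32, 64, 128])
counts, _ = np.histogram(sizes, bins=bins)
density = counts / np.diff(bins)
mask = density > 0
slope = np.polyfit(np.log(bins[:-1][mask]), np.log(density[mask]), 1)[0]
```

A branching parameter oscillating around unity, as reported in the abstract, is precisely the condition under which this -3/2 power law appears.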
163
Sadeh S, Rotter S. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies. PLoS One 2014; 9:e114237. [PMID: 25469704] [PMCID: PMC4254981] [DOI: 10.1371/journal.pone.0114237]
Abstract
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
Affiliation(s)
- Sadra Sadeh
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg, Germany
164
Lagzi F, Rotter S. A Markov model for the temporal dynamics of balanced random networks of finite size. Front Comput Neurosci 2014; 8:142. [PMID: 25520644] [PMCID: PMC4253948] [DOI: 10.3389/fncom.2014.00142]
Abstract
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property.
We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks.
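The two-state Markov picture described above can be sketched in a few lines. Note that the rate functions below are illustrative placeholders, not the nonlinear dependencies the paper identifies from leaky integrate-and-fire simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each neuron is either refractory (0) or active (1); a "spike" is the
# active -> refractory transition. The transition-rate functions are toy
# assumptions; in the paper they are fitted to LIF network simulations.
N = 1000            # population size
dt = 1e-4           # sub-millisecond time resolution (s)
steps = 5000

state = rng.integers(0, 2, size=N)
activity = np.empty(steps)

for t in range(steps):
    a = state.mean()                      # fraction of active neurons
    r_up = 50.0 * (1.0 + 2.0 * a)         # refractory -> active rate (1/s)
    r_down = 100.0                        # active -> refractory rate (1/s)
    u = rng.random(N)
    go_up = (state == 0) & (u < 1.0 - np.exp(-r_up * dt))
    go_down = (state == 1) & (u < 1.0 - np.exp(-r_down * dt))
    state[go_up], state[go_down] = 1, 0
    activity[t] = a

# population activity fluctuates about the self-consistent fixed point,
# with finite-size noise that depends on the current state
print(round(activity[-1000:].mean(), 2))
```

Because the rates depend on the population activity `a`, the self-generated fluctuations are state-dependent, which is the point the abstract makes against additive-noise models.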
Affiliation(s)
- Fereshteh Lagzi
- Bernstein Center Freiburg and Faculty of Biology, University of Freiburg, Freiburg, Germany
|
165
|
Kriener B, Enger H, Tetzlaff T, Plesser HE, Gewaltig MO, Einevoll GT. Dynamics of self-sustained asynchronous-irregular activity in random networks of spiking neurons with strong synapses. Front Comput Neurosci 2014; 8:136. [PMID: 25400575 PMCID: PMC4214205 DOI: 10.3389/fncom.2014.00136] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Accepted: 10/10/2014] [Indexed: 11/13/2022] Open
Abstract
Random networks of integrate-and-fire neurons with strong current-based synapses can, contrary to previous belief, assume stable states of sustained asynchronous and irregular firing, even without external random background or pacemaker neurons. We analyze the mechanisms underlying the emergence, lifetime and irregularity of such self-sustained activity states. We first demonstrate how the competition between the mean and the variance of the synaptic input leads to a non-monotonic firing-rate transfer in the network. Thus, by increasing the synaptic coupling strength, the system can become bistable: In addition to the quiescent state, a second stable fixed point at moderate firing rates can emerge via a saddle-node bifurcation. Inherently generated fluctuations of the population firing rate around this non-trivial fixed point can trigger transitions into the quiescent state. Hence, the trade-off between the magnitude of the population-rate fluctuations and the size of the basin of attraction of the non-trivial rate fixed point determines the onset and the lifetime of self-sustained activity states. During self-sustained activity, individual neuronal activity is moreover highly irregular, switching between long periods of low firing rate and short burst-like states. We show that this is an effect of the strong synaptic weights and the finite time constant of synaptic and neuronal integration, and can actually serve to stabilize the self-sustained state.
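The bistability argument can be illustrated numerically: fixed points of the population rate solve f(r) = r, and for a non-monotonic transfer f a stable active state can coexist with the quiescent one. The transfer function below is a hypothetical illustration, not the one derived from the mean and variance of synaptic input in the paper:

```python
import numpy as np

# Hypothetical non-monotonic rate-transfer function: rises in the
# fluctuation-driven regime, then decays. Purely illustrative; the paper
# derives f from the statistics of the synaptic input.
def f(r, gain=40.0):
    return gain * r**2 / (25.0 + r**2) * np.exp(-r / 60.0)

r = np.linspace(0.0, 100.0, 100001)
g = f(r) - r
# sign changes of f(r) - r locate the fixed points of the population rate
idx = np.flatnonzero(np.sign(g[:-1]) != np.sign(g[1:]))
print(r[idx])   # stable quiescent state, unstable separatrix, stable active state
```

Reducing `gain` merges the two non-trivial roots and makes them disappear, which is the saddle-node bifurcation the abstract refers to; the unstable middle root bounds the basin of attraction of the active state.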
Affiliation(s)
- Birgit Kriener
- Neural Coding and Dynamics, Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA; Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Håkon Enger
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway; Simula Research Laboratory, Kalkulo AS, Fornebu, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, and Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience, Jülich Research Centre and JARA, Jülich, Germany
- Hans E Plesser
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Marc-Oliver Gewaltig
- Blue Brain Project, In-Silico Neuroscience - Cognitive Architectures, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Gaute T Einevoll
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
|
166
|
Duarte RCF, Morrison A. Dynamic stability of sequential stimulus representations in adapting neuronal networks. Front Comput Neurosci 2014; 8:124. [PMID: 25374534 PMCID: PMC4205815 DOI: 10.3389/fncom.2014.00124] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Accepted: 09/16/2014] [Indexed: 12/16/2022] Open
Abstract
The ability to acquire and maintain appropriate representations of time-varying, sequential stimulus events is a fundamental feature of neocortical circuits and a necessary first step toward more specialized information processing. The dynamical properties of such representations depend on the current state of the circuit, which is determined primarily by the ongoing, internally generated activity, setting the ground state from which input-specific transformations emerge. Here, we begin by demonstrating that timing-dependent synaptic plasticity mechanisms play an important role in the active maintenance of ongoing dynamics characterized by asynchronous and irregular firing, closely resembling cortical activity in vivo. Incoming stimuli, acting as perturbations of the local balance of excitation and inhibition, require fast adaptive responses to prevent the development of unstable activity regimes, such as those characterized by a high degree of population-wide synchrony. We establish a link between such pathological network activity, which is circumvented by the action of plasticity, and a reduced computational capacity. Additionally, we demonstrate that the action of plasticity shapes and stabilizes the transient network states exhibited in the presence of sequentially presented stimulus events, allowing the development of adequate and discernible stimulus representations. The main feature responsible for the increased discriminability of stimulus-driven population responses in plastic networks is shown to be the decorrelating action of inhibitory plasticity and the consequent maintenance of the asynchronous irregular dynamic regime both for ongoing activity and stimulus-driven responses, whereas excitatory plasticity is shown to play only a marginal role.
Affiliation(s)
- Renato C F Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; School of Informatics, Institute of Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
|
167
|
Dummer B, Wieland S, Lindner B. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity. Front Comput Neurosci 2014; 8:104. [PMID: 25278869 PMCID: PMC4166962 DOI: 10.3389/fncom.2014.00104] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2014] [Accepted: 08/13/2014] [Indexed: 11/13/2022] Open
Abstract
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study the self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input with statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
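The Gaussian variant (ii) of the iterative scheme can be sketched as follows. All parameter values (bias, membrane time constant, noise intensity) are illustrative assumptions, not the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 1e-4, 5.0          # time step and duration of one generation (s)
n = int(T / dt)

def generation(input_psd, mu=1.2, tau=0.02, sigma=0.3):
    """One generation: drive a single LIF neuron with Gaussian noise whose
    power spectrum is proportional to input_psd, and return the power
    spectrum and firing rate of the output spike train."""
    # shape white Gaussian noise in the frequency domain to the target spectrum
    amp = np.sqrt(np.maximum(input_psd, 0.0))
    spec = amp * (rng.standard_normal(amp.size) + 1j * rng.standard_normal(amp.size))
    xi = np.fft.irfft(spec, n)
    xi *= sigma / max(xi.std(), 1e-12)        # fix the overall noise intensity
    v, spikes = 0.0, np.zeros(n)
    for t in range(n):                        # LIF: threshold 1, reset 0
        v += dt / tau * (mu - v) + np.sqrt(dt / tau) * xi[t]
        if v >= 1.0:
            spikes[t], v = 1.0, 0.0
    psd = np.abs(np.fft.rfft(spikes - spikes.mean())) ** 2 / n
    return psd, spikes.sum() / T

psd = np.ones(n // 2 + 1)                     # generation 0: flat (white) input
for gen in range(3):
    psd, rate = generation(psd)
    print(f"generation {gen}: rate = {rate:.1f} spikes/s")
```

Each generation feeds the measured output spectrum back in as the input spectrum; at a fixed point of this map, input and output spike-train statistics are self-consistent, which is what the full recurrent network enforces.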
Affiliation(s)
- Benjamin Dummer
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Stefan Wieland
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
- Benjamin Lindner
- Theory of Complex Systems and Neurophysics, Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Physics, Humboldt Universität zu Berlin, Berlin, Germany
|
168
|
Abstract
The mechanisms underlying the dynamics of movement-related neural activity are not known. In this issue of Neuron, Hennequin et al. (2014) show that a recurrent network whose spontaneous activity is stabilized by learning reproduces many aspects of preparatory and movement-related activity.
Affiliation(s)
- Alfonso Renart
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal.
|
169
|
Hennequin G, Vogels TP, Gerstner W. Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 2014; 82:1394-406. [PMID: 24945778 DOI: 10.1016/j.neuron.2014.04.045] [Citation(s) in RCA: 166] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/14/2014] [Indexed: 01/27/2023]
Abstract
Populations of neurons in motor cortex engage in complex transient dynamics of large amplitude during the execution of limb movements. Traditional network models with stochastically assigned synapses cannot reproduce this behavior. Here we introduce a class of cortical architectures with strong and random excitatory recurrence that is stabilized by intricate, fine-tuned inhibition, optimized from a control theory perspective. Such networks transiently amplify specific activity states and can be used to reliably execute multidimensional movement patterns. Similar to the experimental observations, these transients must be preceded by a steady-state initialization phase from which the network relaxes back into the background state by way of complex internal dynamics. In our networks, excitation and inhibition are as tightly balanced as recently reported in experiments across several brain areas, suggesting inhibitory control of complex excitatory recurrence as a generic organizational principle in cortex.
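The core mechanism here (strong recurrence that is linearly stable yet transiently amplifies particular activity states) can be illustrated with a two-unit linear rate sketch; the numbers are illustrative and not taken from the paper's optimized networks:

```python
import numpy as np

# dx/dt = (W - I) x with a purely feedforward, hence non-normal, W.
# All eigenvalues of A have negative real part, so the dynamics are
# linearly stable, yet the activity norm transiently grows before
# relaxing back: the signature of non-normal transient amplification.
W = np.array([[0.0, 8.0],
              [0.0, 0.0]])
A = W - np.eye(2)
assert np.all(np.linalg.eigvals(A).real < 0)   # stable background state

dt, steps = 1e-3, 8000
x = np.array([0.0, 1.0])           # initialize the amplified direction
norms = np.empty(steps)
for t in range(steps):
    x = x + dt * (A @ x)           # forward Euler integration
    norms[t] = np.linalg.norm(x)

print(round(norms.max(), 2))       # peak well above the initial norm of 1
```

In the paper the picture is richer: the excitatory recurrence is random and strong, and inhibition is optimized to stabilize it, but the resulting transients-then-relaxation behavior is of this non-normal kind.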
Affiliation(s)
- Guillaume Hennequin
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK.
- Tim P Vogels
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland; Centre for Neural Circuits and Behaviour, University of Oxford, Oxford OX1 3SR, UK
- Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland
|
170
|
Lintas A. Discharge properties of neurons recorded in the parvalbumin-positive (PV1) nucleus of the rat lateral hypothalamus. Neurosci Lett 2014; 571:29-33. [PMID: 24780564 DOI: 10.1016/j.neulet.2014.04.023] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2014] [Revised: 04/13/2014] [Accepted: 04/17/2014] [Indexed: 10/25/2022]
Abstract
This study reports for the first time the extracellular activity recorded, in anesthetized rats, from cells located in an identified cluster of parvalbumin (PV)-positive neurons of the lateral hypothalamus forming the PV1-nucleus. Random-like firing characterized the majority (21/30) of the cells, termed regular cells, with a median firing rate of 1.7 spikes/s and a Fano factor equal to 1; these cells were evenly distributed along the rostro-caudal axis. Four cells exhibiting an oscillatory activity in the range 1.6-2.1 Hz were observed only in the posterior part of the PV1-nucleus. The asynchronous activity of PV1 neurons is likely to produce a "network-driven" effect on their main target within the periaqueductal gray matter. The hypothesis is raised that the background random-like firing of the PV1-nucleus is associated with functional network activity that likely contributes dynamic information about transitions between states of awareness and non-conscious perception.
Affiliation(s)
- Alessandra Lintas
- Department of Medicine/Unit of Anatomy, University of Fribourg, Switzerland; Neuroheuristic Research Group, HEC Lausanne, University of Lausanne, Switzerland.
|
171
|
Doiron B, Litwin-Kumar A. Balanced neural architecture and the idling brain. Front Comput Neurosci 2014; 8:56. [PMID: 24904394 PMCID: PMC4034496 DOI: 10.3389/fncom.2014.00056] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2013] [Accepted: 05/07/2014] [Indexed: 12/05/2022] Open
Abstract
A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains that lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In sum, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.
Affiliation(s)
- Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
- Ashok Litwin-Kumar
- Center for the Neural Basis of Cognition, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA; Program for Neural Computation, University of Pittsburgh and Carnegie Mellon University, Pittsburgh, PA, USA
|
172
|
|
173
|
Wolf F, Engelken R, Puelma-Touzel M, Weidinger JDF, Neef A. Dynamical models of cortical circuits. Curr Opin Neurobiol 2014; 25:228-36. [PMID: 24658059 DOI: 10.1016/j.conb.2014.01.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2013] [Revised: 01/21/2014] [Accepted: 01/22/2014] [Indexed: 11/27/2022]
Abstract
Cortical neurons operate within recurrent neuronal circuits. Dissecting their operation is key to understanding information processing in the cortex and requires transparent and adequate dynamical models of circuit function. Convergent evidence from experimental and theoretical studies indicates that strong feedback inhibition shapes the operating regime of cortical circuits. For circuits operating in inhibition-dominated regimes, mathematical and computational studies over the past several years achieved substantial advances in understanding response modulation and heterogeneity, emergent stimulus selectivity, inter-neuron correlations, and microstate dynamics. The latter indicate a surprisingly strong dependence of the collective circuit dynamics on the features of single neuron action potential generation. New approaches are needed to definitively characterize the cortical operating regime.
Affiliation(s)
- Fred Wolf
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany.
- Rainer Engelken
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
- Maximilian Puelma-Touzel
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
- Juan Daniel Flórez Weidinger
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
- Andreas Neef
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Bernstein Center for Computational Neuroscience, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany; Faculty of Physics, Göttingen University, Göttingen, Germany
|