1. Northoff G, Zilio F, Zhang J. Beyond task response: Pre-stimulus activity modulates contents of consciousness. Phys Life Rev 2024;49:19-37. PMID: 38492473. DOI: 10.1016/j.plrev.2024.03.002.
Abstract
The current discussion on the neural correlates of the contents of consciousness (NCCc) focuses mainly on the post-stimulus period of task-related activity. This neglects the substantial impact of the brain's spontaneous or ongoing activity, as manifest in pre-stimulus activity. Does the interaction of pre- and post-stimulus activity shape the contents of consciousness? Addressing this gap in our knowledge, we review and converge two recent lines of findings, namely pre-stimulus alpha power and pre- and post-stimulus alpha trial-to-trial variability (TTV). The data show that pre-stimulus alpha power modulates post-stimulus activity, including subjective features of conscious contents such as confidence and vividness. At the same time, pre-stimulus alpha variability shapes post-stimulus TTV reduction, including the associated contents of consciousness. We propose that non-additive, rather than merely additive, interaction of internal pre-stimulus activity with the external stimulus in the alpha band is key for contents to become conscious. This is mediated by mechanisms on several levels: neurophysiological, neurocomputational, neurodynamic, neuropsychological, and neurophenomenal. Overall, considering the interplay of pre-stimulus intrinsic and post-stimulus extrinsic activity across wider timescales, not just evoked responses in the post-stimulus period, is critical for identifying the neural correlates of consciousness. This is well in line with processing-based theories and especially the Temporo-spatial theory of consciousness (TTC).
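As a concrete illustration of the TTV measure discussed above, the sketch below computes trial-to-trial variability and its post-stimulus reduction on synthetic single-channel data. All signals and parameters here are invented for illustration; this is not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-channel data: 200 trials x 1000 time points (1 kHz),
# stimulus at sample 500. Illustrative only -- not real EEG/MEG data.
n_trials, n_time, stim = 200, 1000, 500
signal = rng.normal(0.0, 1.0, (n_trials, n_time))
# Model stimulus-induced variability quenching: damp across-trial
# fluctuations after stimulus onset and add a common evoked response.
signal[:, stim:] *= 0.6
signal[:, stim:] += np.sin(np.arange(n_time - stim) / 40.0)

# TTV = variance across trials at each time point.
ttv = signal.var(axis=0)
pre = ttv[:stim].mean()
post = ttv[stim:].mean()
ttv_reduction = (pre - post) / pre  # fractional post-stimulus quenching

print(f"pre-stimulus TTV {pre:.2f}, post-stimulus TTV {post:.2f}, "
      f"reduction {ttv_reduction:.0%}")
```

Note that the evoked response, being identical across trials, leaves the across-trial variance unchanged; only the damping of trial-specific fluctuations produces the TTV reduction.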
Affiliation(s)
- Georg Northoff
- University of Ottawa, Institute of Mental Health Research at the Royal Ottawa Hospital, Ottawa, Canada.
- Federico Zilio
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua, Padua, Italy.
- Jianfeng Zhang
- Center for Brain Disorders and Cognitive Sciences, School of Psychology, Shenzhen University, Shenzhen, China.
2. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. bioRxiv [Preprint] 2024:2023.12.07.570692. PMID: 38106233. PMCID: PMC10723399. DOI: 10.1101/2023.12.07.570692.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that agrees with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse only has access to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable against ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations and encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule that co-exists with ongoing neural dynamics.
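The notion of a fully local rule can be illustrated with a toy rate network in which each weight update depends only on the rates of its own pre- and post-synaptic neurons. This is a minimal sketch with an invented homeostatic Hebbian rule, not the specific spiking-network rule proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy recurrent rate network trained with a fully local rule: each
# weight update uses only the pre- and post-synaptic rates it connects.
# Invented homeostatic Hebbian rule for illustration.
N = 50
W = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))
np.fill_diagonal(W, 0.0)
r = rng.random(N)

eta, r_target, dt = 1e-3, 0.5, 0.1
for _ in range(2000):
    r = r + dt * (-r + np.tanh(W @ r + 0.5))           # rate dynamics
    # Local update: Hebbian product gated by a homeostatic error that
    # drives post-synaptic rates toward r_target (keeps weights bounded).
    W += eta * np.outer((r_target - r) * r, r)
    np.fill_diagonal(W, 0.0)                           # no autapses

print("mean rate:", float(r.mean()))
```

The homeostatic factor `(r_target - r)` plays the role of the self-tuning mechanism described in the abstract: it stops runaway Hebbian growth using only information available at the synapse.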
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
- Giancarlo La Camera
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
3. Kogan JF, Fontanini A. Learning enhances representations of taste-guided decisions in the mouse gustatory insular cortex. Curr Biol 2024;34:1880-1892.e5. PMID: 38631343. PMCID: PMC11188718. DOI: 10.1016/j.cub.2024.03.034.
Abstract
Learning to discriminate overlapping gustatory stimuli that predict distinct outcomes-a feat known as discrimination learning-can mean the difference between ingesting a poison or a nutritive meal. Despite the obvious importance of this process, very little is known about the neural basis of taste discrimination learning. In other sensory modalities, this form of learning can be mediated by either the sharpening of sensory representations or the enhanced ability of "decision-making" circuits to interpret sensory information. Given the dual role of the gustatory insular cortex (GC) in encoding both sensory and decision-related variables, this region represents an ideal site for investigating how neural activity changes as animals learn a novel taste discrimination. Here, we present results from experiments relying on two-photon calcium imaging of GC neural activity in mice performing a taste-guided mixture discrimination task. The task allows for the recording of neural activity before and after learning induced by training mice to discriminate increasingly similar pairs of taste mixtures. Single-neuron and population analyses show a time-varying pattern of activity, with early sensory responses emerging after taste delivery and binary, choice-encoding responses emerging later in the delay before a decision is made. Our results demonstrate that, while both sensory and decision-related information is encoded by GC in the context of a taste mixture discrimination task, learning and improved performance are associated with a specific enhancement of decision-related responses.
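The contrast between weak early and strong late choice coding can be mimicked with a simple population decoder. The sketch below uses synthetic "neurons" and a leave-one-out nearest-centroid classifier; the data, signal gains, and decoder are all illustrative assumptions, not the study's imaging data or analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for trial-averaged GC population activity:
# 100 neurons, 120 trials, two choices. Choice information is made
# stronger in the "late" epoch than in the "early" epoch, mimicking
# the ramp-up of decision coding reported in the abstract.
n_neurons, n_trials = 100, 120
labels = rng.integers(0, 2, n_trials)
tuning = rng.normal(0, 1, n_neurons)

def make_epoch(signal_gain):
    X = rng.normal(0, 1, (n_trials, n_neurons))
    X += signal_gain * np.outer(2 * labels - 1, tuning)
    return X

def decode_accuracy(X, y):
    # Leave-one-out nearest-centroid decoder (no external libraries).
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

acc_early = decode_accuracy(make_epoch(0.05), labels)
acc_late = decode_accuracy(make_epoch(0.30), labels)
print(f"early-epoch accuracy {acc_early:.2f}, late-epoch {acc_late:.2f}")
```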
Affiliation(s)
- Joshua F Kogan
- Graduate Program in Neuroscience, Stony Brook University, Stony Brook, NY 11794, USA; Medical Scientist Training Program, Stony Brook University, Stony Brook, NY 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794, USA.
- Alfredo Fontanini
- Graduate Program in Neuroscience, Stony Brook University, Stony Brook, NY 11794, USA; Medical Scientist Training Program, Stony Brook University, Stony Brook, NY 11794, USA; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY 11794, USA.
4. Papadopoulos L, Jo S, Zumwalt K, Wehr M, McCormick DA, Mazzucato L. Modulation of metastable ensemble dynamics explains optimal coding at moderate arousal in auditory cortex. bioRxiv [Preprint] 2024:2024.04.04.588209. PMID: 38617286. PMCID: PMC11014582. DOI: 10.1101/2024.04.04.588209.
Abstract
Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and the extent of stimulus-induced variability quenching, which we observed in the empirical data. Altogether, this study elucidates computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggests a role for phase transitions in explaining nonlinear modulations of cortical computations.
Affiliation(s)
- Suhyun Jo
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Michael Wehr
- Institute of Neuroscience and Department of Psychology, University of Oregon, Eugene, Oregon
- David A McCormick
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Department of Biology, University of Oregon, Eugene, Oregon
- Department of Mathematics and Department of Physics, University of Oregon, Eugene, Oregon
5. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024;109:024302. PMID: 38491696. DOI: 10.1103/PhysRevE.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
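The core quantity behind such sensitivity analyses is the Fisher information matrix, whose eigenvalue spectrum separates "stiff" from "sloppy" parameter directions. The sketch below computes it for a generic two-timescale Poisson rate model, an invented stand-in chosen only to echo the membrane-vs-synaptic timescale competition; it is not the paper's E-I network model.

```python
import numpy as np

def rates(theta, t):
    # Two-exponential response: loosely, two competing timescales
    # shaping an output rate above a 5 Hz baseline. Hypothetical model.
    a, tau1, tau2 = theta
    return a * (np.exp(-t / tau1) - np.exp(-t / tau2)) + 5.0

t = np.linspace(0.1, 50.0, 200)
theta0 = np.array([8.0, 20.0, 2.0])

# FIM for Poisson observations: F_ij = sum_t (dr/dtheta_i)(dr/dtheta_j) / r,
# with derivatives taken by forward finite differences.
eps = 1e-5
J = np.stack([
    (rates(theta0 + eps * np.eye(3)[i], t) - rates(theta0, t)) / eps
    for i in range(3)
], axis=1)                       # (time, params) Jacobian
F = J.T @ (J / rates(theta0, t)[:, None])

eigvals = np.sort(np.linalg.eigvalsh(F))[::-1]
print("FIM eigenvalue spectrum:", np.round(eigvals, 3))
```

Widely separated eigenvalues are the hallmark of sloppiness: outputs respond strongly along the leading eigenvector and barely at all along the trailing one.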
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
6. Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023;12:e86552. PMID: 38084779. PMCID: PMC10810607. DOI: 10.7554/eLife.86552.
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity are yet unknown. Here, we show that neural circuits, endowed with heterogeneous neural assemblies of different sizes, naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type specific connectivity, to explain how neural timescales depend on assembly size and show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies by a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
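Intrinsic timescales of the kind surveyed here are typically estimated from the decay of the activity autocorrelation. A minimal sketch that recovers a known timescale from a simulated Ornstein-Uhlenbeck process; the process and all parameters are illustrative, not the paper's assembly model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an OU process with a known intrinsic timescale tau_true,
# then fit an exponential decay to its autocorrelation function.
dt, tau_true, n_steps = 1.0, 25.0, 200_000
noise = rng.normal(0.0, np.sqrt(dt), n_steps)
x = np.zeros(n_steps)
for i in range(1, n_steps):
    x[i] = x[i - 1] * (1 - dt / tau_true) + noise[i]

max_lag = 60
x0 = x - x.mean()
acf = np.array([np.mean(x0[:-lag] * x0[lag:]) for lag in range(1, max_lag)])
acf /= x0.var()

# Exponential decay => log(ACF) is linear in lag with slope -1/tau.
lags = np.arange(1, max_lag) * dt
slope, _ = np.polyfit(lags, np.log(np.clip(acf, 1e-6, None)), 1)
tau_est = -1.0 / slope
print(f"true tau {tau_true:.0f}, estimated {tau_est:.1f}")
```

Applied per neuron, this kind of fit is what produces the long-tailed timescale distributions the abstract refers to.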
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Department of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, Mathematics and Biology, University of Oregon, Eugene, United States
7. Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. Biol Cybern 2023;117:485-506. PMID: 38133664. DOI: 10.1007/s00422-023-00981-w.
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems using a firing rate model framework, in which clusters of similarly responsive neurons are represented as single units, which interact with each other through independent random connections. We explore the range of conditions in which multistability arises via recurrent input from other units while individual units, typically with some degree of self-excitation, lack sufficient self-excitation to become bistable on their own. We find many cases of multistability-defined as the system possessing more than one stable fixed point-in which stable states arise via a network effect, allowing subsets of units to maintain each other's activity because their net input to each other when active is sufficiently positive. In terms of the strength of within-unit self-excitation and standard deviation of random cross-connections, the region of multistability depends on the response function of units. Indeed, multistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the response function rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which produces Zipf's Law when enumerating the proportion of trials within which random initial conditions lead to a particular stable state of the system.
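The fixed-point enumeration described above can be sketched directly: iterate the rate dynamics from many random initial conditions and collect the distinct converged states. The response function, parameters, and tolerances below are illustrative choices, not the paper's; how many states appear depends on the particular random draw.

```python
import numpy as np

rng = np.random.default_rng(5)

# Firing-rate network: units with sub-bistable self-excitation s plus
# zero-mean random cross-couplings of strength g, relaxed from many
# random initial conditions. Converged states are candidate attractors.
N, s, g = 30, 1.5, 1.5
W = s * np.eye(N) + rng.normal(0.0, g / np.sqrt(N), (N, N)) * (1 - np.eye(N))
baseline = 0.1

def phi(x):
    # Supralinear near zero, saturating at high input.
    xr = np.clip(x, 0.0, None)
    return xr ** 2 / (1.0 + xr ** 2)

def settle(r, n_iter=4000, dt=0.1):
    for _ in range(n_iter):
        r = r + dt * (-r + phi(W @ r + baseline))
    return r

fixed_points = []
for _ in range(50):
    r_final = settle(rng.random(N))
    residual = np.abs(-r_final + phi(W @ r_final + baseline)).max()
    if residual < 1e-4:  # keep only genuine (converged) fixed points
        if not any(np.allclose(r_final, fp, atol=1e-3) for fp in fixed_points):
            fixed_points.append(r_final)

print("distinct stable states found:", len(fixed_points))
```

Tallying how often each state is reached over many random initializations is also how the basin-size distribution (and the Zipf-like ranking) in the abstract would be sampled.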
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454, USA
- Department of Biology, Brandeis University, 415 South St, Waltham, MA 02454, USA
8. Kogan JF, Fontanini A. Learning enhances representations of taste-guided decisions in the mouse gustatory insular cortex. bioRxiv [Preprint] 2023:2023.10.16.562605. PMID: 37905010. PMCID: PMC10614904. DOI: 10.1101/2023.10.16.562605.
Abstract
Learning to discriminate overlapping gustatory stimuli that predict distinct outcomes - a feat known as discrimination learning - can mean the difference between ingesting a poison or a nutritive meal. Despite the obvious importance of this process, very little is known about the neural basis of taste discrimination learning. In other sensory modalities, this form of learning can be mediated by either the sharpening of sensory representations or the enhanced ability of "decision-making" circuits to interpret sensory information. Given the dual role of the gustatory insular cortex (GC) in encoding both sensory and decision-related variables, this region represents an ideal site for investigating how neural activity changes as animals learn a novel taste discrimination. Here, we present results from experiments relying on two-photon calcium imaging of GC neural activity in mice performing a taste-guided mixture discrimination task. The task allows for the recording of neural activity before and after learning induced by training mice to discriminate increasingly similar pairs of taste mixtures. Single-neuron and population analyses show a time-varying pattern of activity, with early sensory responses emerging after taste delivery and binary, choice-encoding responses emerging later in the delay before a decision is made. Our results demonstrate that, while both sensory and decision-related information is encoded by GC in the context of a taste mixture discrimination task, learning and improved performance are associated with a specific enhancement of decision-related responses.
9. Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. bioRxiv [Preprint] 2023:2023.06.05.543727. PMID: 37333310. PMCID: PMC10274702. DOI: 10.1101/2023.06.05.543727.
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems, using a firing-rate model framework, in which clusters of neurons with net self-excitation are represented as units, which interact with each other through random connections. We focus on conditions in which individual units lack sufficient self-excitation to become bistable on their own. Rather, multistability can arise via recurrent input from other units as a network effect for subsets of units, whose net input to each other when active is sufficiently positive to maintain such activity. In terms of the strength of within-unit self-excitation and the standard deviation of random cross-connections, the region of multistability depends on the firing-rate curve of units. Indeed, bistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the firing-rate curve rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which can appear as Zipf's Law when sampled as the proportion of trials within which random initial conditions lead to a particular stable state of the system.
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454
- Current address: Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA 02454
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA 02454
- Department of Biology, Brandeis University, 415 South St, Waltham, MA 02454
10. Dwarakanath A, Kapoor V, Werner J, Safavi S, Fedorov LA, Logothetis NK, Panagiotaropoulos TI. Bistability of prefrontal states gates access to consciousness. Neuron 2023;111:1666-1683.e4. PMID: 36921603. DOI: 10.1016/j.neuron.2023.02.027.
Abstract
Access of sensory information to consciousness has been linked to the ignition of content-specific representations in association cortices. How does ignition interact with intrinsic cortical state fluctuations to give rise to conscious perception? We addressed this question in the prefrontal cortex (PFC) by combining multi-electrode recordings with a binocular rivalry (BR) paradigm inducing spontaneously driven changes in the content of consciousness, inferred from the reflexive optokinetic nystagmus (OKN) pattern. We find that fluctuations between low-frequency (LF, 1-9 Hz) and beta (∼20-40 Hz) local field potentials (LFPs) reflect competition between spontaneous updates and stability of conscious contents, respectively. Both LF and beta events were locally modulated. The phase of the former locked differentially to the competing populations just before a spontaneous transition, while the latter synchronized the neuronal ensemble coding the consciously perceived content. These results suggest that prefrontal state fluctuations gate conscious perception by mediating internal states that facilitate perceptual update and stability.
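Phase coupling of the kind reported for the low-frequency band is conventionally quantified with a phase-locking value (PLV). A minimal sketch on synthetic signals sharing a 5 Hz component; the filter band, noise levels, and signals are illustrative assumptions, not the study's recordings or pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)

# Two LFP-like traces sharing a low-frequency rhythm, plus independent
# broadband noise. PLV = magnitude of the mean phase-difference vector.
fs = 500
t = np.arange(0.0, 20.0, 1.0 / fs)
shared = np.sin(2 * np.pi * 5.0 * t)
lfp_a = shared + rng.normal(0.0, 1.0, t.size)
lfp_b = 0.8 * shared + rng.normal(0.0, 1.0, t.size)

# Band-pass 3-9 Hz, then extract instantaneous phase via Hilbert.
b, a = butter(4, [3.0, 9.0], btype="bandpass", fs=fs)
phase_a = np.angle(hilbert(filtfilt(b, a, lfp_a)))
phase_b = np.angle(hilbert(filtfilt(b, a, lfp_b)))
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"low-frequency PLV: {plv:.2f}")
```

A PLV near 1 indicates a consistent phase relationship in the band; values near 0 indicate phases that drift independently.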
Affiliation(s)
- Abhilash Dwarakanath
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Commissariat à l'Energie Atomique et aux énergies alternatives, Université Paris-Saclay, NeuroSpin Center, 91191 Gif-sur-Yvette, France.
- Vishal Kapoor
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; International Center for Primate Brain Research, Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences, Shanghai, China
- Joachim Werner
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Shervin Safavi
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; International Max Planck Research School, Tübingen 72076, Germany
- Leonid A Fedorov
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
- Nikos K Logothetis
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester M13 9PT, UK; International Center for Primate Brain Research, Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences, Shanghai, China
- Theofanis I Panagiotaropoulos
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Commissariat à l'Energie Atomique et aux énergies alternatives, Université Paris-Saclay, NeuroSpin Center, 91191 Gif-sur-Yvette, France.
11. Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023;17:941696. PMID: 36844916. PMCID: PMC9950635. DOI: 10.3389/fninf.2023.941696.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to biologically realistic modeling of nervous system function. Systematic calibration of multiple free model parameters is necessary to achieve robust network function, and it demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters, with homogeneous or distributed synaptic time constants, and compare it with a random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 · 10^6 neurons (>3 · 10^12 synapses) on a high-end GPU, and up to 250,000 neurons (25 · 10^9 synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be achieved efficiently using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
Affiliation(s)
- Felix Johannes Schmitt
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
12. Temporal progression along discrete coding states during decision-making in the mouse gustatory cortex. PLoS Comput Biol 2023;19:e1010865. PMID: 36749734. PMCID: PMC9904478. DOI: 10.1371/journal.pcbi.1010865.
Abstract
The mouse gustatory cortex (GC) is involved in taste-guided decision-making in addition to sensory processing. Rodent GC exhibits metastable neural dynamics during ongoing and stimulus-evoked activity, but how these dynamics evolve in the context of a taste-based decision-making task remains unclear. Here we employ analytical and modeling approaches to i) extract metastable dynamics in ensemble spiking activity recorded from the GC of mice performing a perceptual decision-making task; ii) investigate the computational mechanisms underlying GC metastability in this task; and iii) establish a relationship between GC dynamics and behavioral performance. Our results show that activity in GC during perceptual decision-making is metastable and that this metastability may serve as a substrate for sequentially encoding sensory, abstract cue, and decision information over time. Perturbations of the model's metastable dynamics indicate that boosting inhibition in different coding epochs differentially impacts network performance, explaining a counterintuitive effect of GC optogenetic silencing on mouse behavior.
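Metastable states of this kind are commonly extracted with hidden Markov models over binned population spike counts. The sketch below runs only the Viterbi decoding step of a Poisson HMM with fixed, invented parameters; the actual analysis fits the model to data, so this is a minimal stand-in for the state-extraction idea.

```python
import numpy as np

rng = np.random.default_rng(7)

# Decode a sequence of discrete firing states from population spike
# counts, given known per-state Poisson rates and a sticky transition
# matrix (long dwell times = metastability). All parameters invented.
n_states, n_neurons, n_bins = 3, 8, 120
rates = rng.uniform(2, 20, (n_states, n_neurons))    # Hz per state

true_states = np.repeat([0, 1, 2], n_bins // 3)
counts = rng.poisson(rates[true_states] * 0.05)      # 50 ms bins

# Sticky transitions: stay with prob. 0.98, switch with prob. 0.01 each.
logA = np.log(np.full((n_states, n_states), 0.01) + 0.97 * np.eye(n_states))

def log_lik(c):
    lam = rates * 0.05
    return (c * np.log(lam) - lam).sum(axis=1)       # up to a constant

# Viterbi pass: best log-probability path through the states.
V = np.full((n_bins, n_states), -np.inf)
ptr = np.zeros((n_bins, n_states), dtype=int)
V[0] = log_lik(counts[0]) - np.log(n_states)
for tbin in range(1, n_bins):
    scores = V[tbin - 1][:, None] + logA
    ptr[tbin] = scores.argmax(axis=0)
    V[tbin] = scores.max(axis=0) + log_lik(counts[tbin])

path = np.zeros(n_bins, dtype=int)
path[-1] = V[-1].argmax()
for tbin in range(n_bins - 2, -1, -1):
    path[tbin] = ptr[tbin + 1, path[tbin + 1]]

accuracy = (path == true_states).mean()
print(f"state-decoding accuracy: {accuracy:.2f}")
```

Even though single 50 ms bins carry weak evidence, the sticky transition prior lets the decoder segment the sequence into long, coherent states, which is exactly what makes metastable epochs recoverable from noisy spiking.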
13. Coupled Dynamics of Stimulus-Evoked Gustatory Cortical and Basolateral Amygdalar Activity. J Neurosci 2023;43:386-404. PMID: 36443002. PMCID: PMC9864615. DOI: 10.1523/JNEUROSCI.1412-22.2022.
Abstract
Gustatory cortical (GC) single-neuron taste responses reflect taste quality and palatability in successive epochs. Ensemble analyses reveal epoch-to-epoch firing-rate changes in these responses to be sudden, coherent transitions. Such nonlinear dynamics suggest that GC is part of a recurrent network, producing these dynamics in concert with other structures. Basolateral amygdala (BLA), which is reciprocally connected to GC and central to hedonic processing, is a strong candidate partner for GC, in that BLA taste responses evolve on the same general clock as GC and because inhibition of activity in the BLA→GC pathway degrades the sharpness of GC transitions. These facts motivate, but do not test, our overarching hypothesis that BLA and GC act as a single, comodulated network during taste processing. Here, we provide just this test of simultaneous (BLA and GC) extracellular taste responses in female rats, probing the multiregional dynamics of activity to directly test whether BLA and GC responses contain coupled dynamics. We show that BLA and GC response magnitudes covary across trials and within single responses, and that changes in BLA-GC local field potential phase coherence are epoch specific. Such classic coherence analyses, however, obscure the most salient facet of BLA-GC coupling: sudden transitions in and out of the epoch known to be involved in driving gaping behavior happen near simultaneously in the two regions, despite huge trial-to-trial variability in transition latencies. This novel form of inter-regional coupling, which we show is easily replicated in model networks, suggests collective processing in a distributed neural network.SIGNIFICANCE STATEMENT There has been little investigation into real-time communication between brain regions during taste processing, a fact reflecting the dominant belief that taste circuitry is largely feedforward. 
Here, we perform an in-depth analysis of real-time interactions between GC and BLA in response to passive taste deliveries, using both conventional coherence metrics and a novel methodology that explicitly considers trial-to-trial variability and fast single-trial dynamics in evoked responses. Our results demonstrate that BLA-GC coherence changes as the taste response unfolds, and that BLA and GC specifically couple for the sudden transition into (and out of) the behaviorally relevant neural response epoch, suggesting (although not proving) that (1) recurrent interactions subserve the function of the dyad, which (2) acts as a putative attractor network.
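The key observation above, that per-trial transition latencies vary hugely yet stay coupled across the two regions, can be illustrated with a toy simulation. Everything below (the Poisson rates, trial counts, jitter, and the two-segment split detector) is an illustrative assumption, not the authors' changepoint method or data:

```python
import numpy as np

def detect_step(x):
    """Estimate the latency of a single upward rate step in x by scanning
    all two-segment splits and scoring the between-segment mean contrast."""
    n = len(x)
    csum = np.cumsum(x)
    best, best_t = -np.inf, 0
    for t in range(5, n - 5):
        m1 = csum[t - 1] / t                     # mean of first t samples
        m2 = (csum[-1] - csum[t - 1]) / (n - t)  # mean of the remainder
        score = (m2 - m1) * np.sqrt(t * (n - t) / n)
        if score > best:
            best, best_t = score, t
    return best_t

rng = np.random.default_rng(0)
lat_a, lat_b = [], []
for _ in range(50):                       # 50 simulated trials
    t0 = int(rng.integers(40, 160))       # shared latency, highly trial-variable
    jitter = int(rng.integers(-3, 4))     # small region-to-region offset
    a = rng.poisson(np.where(np.arange(200) < t0, 2.0, 6.0))
    b = rng.poisson(np.where(np.arange(200) < t0 + jitter, 2.0, 6.0))
    lat_a.append(detect_step(a))
    lat_b.append(detect_step(b))
coupling = np.corrcoef(lat_a, lat_b)[0, 1]
```

Because the step latency is recovered independently per region and per trial, a high correlation between `lat_a` and `lat_b` reflects coupling that a trial-averaged coherence analysis would miss.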
|
14
|
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780 PMCID: PMC9700774 DOI: 10.1038/s41386-022-01374-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/09/2022]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity-defined broadly as the adaptive control of plasticity-forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
Affiliation(s)
- Daniel N Scott
- Cognitive Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
| | - Michael J Frank
- Cognitive Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
| |
|
15
|
Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10081-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies, properly conceived as reentrant dynamical flows and not merely as identified groups of neurons, may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
|
16
|
Pietras B, Schmutz V, Schwalger T. Mesoscopic description of hippocampal replay and metastability in spiking neural networks with short-term plasticity. PLoS Comput Biol 2022; 18:e1010809. [PMID: 36548392 PMCID: PMC9822116 DOI: 10.1371/journal.pcbi.1010809] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 01/06/2023] [Accepted: 12/11/2022] [Indexed: 12/24/2022] Open
Abstract
Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of functional activity patterns are propagating bursts of place-cell activities called hippocampal replay, which is critical for memory consolidation. The sudden and repeated occurrences of these burst states during ongoing neural activity suggest metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model which accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to stochastic spiking and network size, and fatigue is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie to obtain a "chemical Langevin equation", which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down-states dynamics) by means of phase-plane analysis. An extension of the Langevin equation for small network sizes is also presented. The stochastic neural mass model constitutes the basic component of our mesoscopic model for replay. We show that the mesoscopic model faithfully captures the statistical structure of individual replayed trajectories in microscopic simulations and in previously reported experimental data. 
Moreover, compared to the deterministic Romani-Tsodyks model of place-cell dynamics, it exhibits a higher level of variability regarding order, direction and timing of replayed trajectories, which seems biologically more plausible and could be functionally desirable. This variability is the product of a new dynamical regime where metastability emerges from a complex interplay between finite-size fluctuations and local fatigue.
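The chemical-Langevin-style description above (drift from the rate dynamics, noise scaling with network size) can be sketched with a toy Euler-Maruyama integration. The gain function, coupling, and noise scaling below are illustrative assumptions, not the paper's actual derivation:

```python
import numpy as np

def simulate_neural_mass(N=200, T=2000, dt=1e-3, tau=0.02, J=2.2,
                         theta=1.0, seed=0):
    """Euler-Maruyama integration of a toy stochastic neural mass model:
    dA/dt = (-A + f(J*A - theta)) / tau  plus a noise term whose amplitude
    scales as 1/sqrt(N), mimicking finite-size fluctuations in a
    chemical-Langevin-style description."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-4.0 * x))   # sigmoidal gain
    A = np.empty(T)
    A[0] = 0.1
    for t in range(1, T):
        drift = (-A[t - 1] + f(J * A[t - 1] - theta)) / tau
        noise = np.sqrt(f(J * A[t - 1] - theta) / (N * tau))
        A[t] = A[t - 1] + dt * drift + np.sqrt(dt) * noise * rng.standard_normal()
    return A

activity = simulate_neural_mass()
```

The point of the explicit 1/sqrt(N) scaling is that fluctuations shrink as the population grows: with the same seed, a small network shows visibly larger stationary variability than a large one, which is the finite-size link the mesoscopic model makes explicit.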
Affiliation(s)
- Bastian Pietras
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Valentin Schmutz
- Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Tilo Schwalger
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
| |
|
17
|
Morrison M, Young LS. Chaotic heteroclinic networks as models of switching behavior in biological systems. CHAOS (WOODBURY, N.Y.) 2022; 32:123102. [PMID: 36587320 DOI: 10.1063/5.0122184] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 11/04/2022] [Indexed: 06/17/2023]
Abstract
Key features of biological activity can often be captured by transitions between a finite number of semi-stable states that correspond to behaviors or decisions. We present here a broad class of dynamical systems that are ideal for modeling such activity. The models we propose are chaotic heteroclinic networks with nontrivial intersections of stable and unstable manifolds. Due to the sensitive dependence on initial conditions, transitions between states are seemingly random. Dwell times, exit distributions, and other transition statistics can be built into the model through geometric design and can be controlled by tunable parameters. To test our model's ability to simulate realistic biological phenomena, we turned to one of the most studied organisms, C. elegans, well known for its limited behavioral states. We reconstructed experimental data from two laboratories, demonstrating the model's ability to quantitatively reproduce dwell times and transition statistics under a variety of conditions. Stochastic switching between dominant states in complex dynamical systems has been extensively studied and is often modeled as Markov chains. As an alternative, we propose here a new paradigm, namely, chaotic heteroclinic networks generated by deterministic rules (without the necessity for noise). Chaotic heteroclinic networks can be used to model systems with arbitrary architecture and size without a commensurate increase in phase dimension. They are highly flexible and able to capture a wide range of transition characteristics that can be adjusted through control parameters.
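The Markov-chain paradigm the authors contrast with can be sketched in a few lines; the three states and the transition matrix below are arbitrary stand-ins for behavioral states. In such a chain, dwell times are geometrically distributed with mean 1/(1 - P[s, s]), one of the restrictions the heteroclinic-network alternative is designed to escape:

```python
import numpy as np

def simulate_markov_states(P, n_steps=50000, seed=0):
    """Simulate a discrete-time Markov chain and collect the dwell times
    (consecutive steps spent in a state before switching) for each state."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    state, dwell = 0, 1
    dwells = {s: [] for s in range(n)}
    for _ in range(n_steps):
        nxt = rng.choice(n, p=P[state])
        if nxt == state:
            dwell += 1
        else:
            dwells[state].append(dwell)
            state, dwell = nxt, 1
    return dwells

# Arbitrary 3-state behavioral transition matrix (rows sum to 1).
P = np.array([[0.95, 0.03, 0.02],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
dwells = simulate_markov_states(P)
```

With P[0, 0] = 0.95 the mean dwell in state 0 is about 20 steps, and the dwell distribution is memoryless; matching richer, history-dependent dwell statistics is where the deterministic chaotic models come in.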
Affiliation(s)
- Megan Morrison
- Courant Institute, New York University, New York, New York 10012, USA
| | - Lai-Sang Young
- Courant Institute, New York University, New York, New York 10012, USA
| |
|
18
|
van der Heijden ME, Brown AM, Sillitoe RV. Influence of data sampling methods on the representation of neural spiking activity in vivo. iScience 2022; 25:105429. [PMID: 36388953 PMCID: PMC9641233 DOI: 10.1016/j.isci.2022.105429] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 08/06/2022] [Accepted: 10/19/2022] [Indexed: 11/06/2022] Open
Abstract
In vivo single-unit recordings distinguish the basal spiking properties of neurons in different experimental settings and disease states. Here, we examined over 300 spike trains recorded from Purkinje cells and cerebellar nuclei neurons to test whether data sampling approaches influence the extraction of rich descriptors of firing properties. Our analyses included neurons recorded in awake and anesthetized control mice, and disease models of ataxia, dystonia, and tremor. We find that recording duration circumscribes overall representations of firing rate and pattern. Notably, shorter recording durations skew estimates for global firing rate variability toward lower values. We also find that only some populations of neurons in the same mouse are more similar to each other than to neurons recorded in different mice. These data reveal that recording duration and approach are primary considerations when interpreting task-independent single neuron firing properties. If not accounted for, group differences may be concealed or exaggerated.
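The central claim, that short recordings skew global firing-rate variability toward lower values, is easy to reproduce in simulation whenever rate fluctuates on a timescale longer than the short recording. The Poisson generator with a slow sinusoidal rate drift below is an illustrative stand-in for real recordings, not the authors' pipeline:

```python
import numpy as np

def spike_counts_with_drift(duration_s, dt=0.001, base_rate=40.0,
                            drift_hz=0.02, drift_amp=20.0, seed=0):
    """Poisson spike counts (1 ms bins) whose underlying rate drifts on a
    slow (50 s period) timescale."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration_s, dt)
    rate = base_rate + drift_amp * np.sin(2 * np.pi * drift_hz * t)
    return rng.poisson(rate * dt), dt

def rate_variability(counts, dt, bin_s=1.0):
    """Std of firing rate across 1 s bins: a 'global variability' estimate."""
    per_bin = int(bin_s / dt)
    n_bins = len(counts) // per_bin
    rates = counts[:n_bins * per_bin].reshape(n_bins, per_bin).sum(1) / bin_s
    return rates.std()

long_counts, dt = spike_counts_with_drift(duration_s=200.0)
short_est = rate_variability(long_counts[:10_000], dt)   # first 10 s only
long_est = rate_variability(long_counts, dt)             # full 200 s
```

The 10 s estimate sees only a fraction of the slow drift and systematically underestimates the variability that the 200 s estimate captures, the same direction of bias the study reports.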
Affiliation(s)
- Meike E. van der Heijden
- Department of Pathology and Immunology, Baylor College of Medicine, Houston, TX, USA
- Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Houston, TX, USA
| | - Amanda M. Brown
- Department of Pathology and Immunology, Baylor College of Medicine, Houston, TX, USA
- Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Houston, TX, USA
| | - Roy V. Sillitoe
- Department of Pathology and Immunology, Baylor College of Medicine, Houston, TX, USA
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Development, Disease Models and Therapeutics Graduate Program, Baylor College of Medicine, Houston, TX, USA
- Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Houston, TX, USA
| |
|
19
|
Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. [PMID: 36068723 DOI: 10.1113/jp282750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 08/22/2022] [Indexed: 11/08/2022] Open
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has vastly contributed to the understanding of various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assembly Formation. Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms and with vast computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability. This article is protected by copyright. All rights reserved.
Affiliation(s)
- Christoph Miehl
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Sebastian Onasch
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Dylan Festa
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| | - Julijana Gjorgjieva
- Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438, Frankfurt, Germany.,School of Life Sciences, Technical University of Munich, 85354, Freising, Germany
| |
|
20
|
Mazzucato L. Neural mechanisms underlying the temporal organization of naturalistic animal behavior. eLife 2022; 11:76577. [PMID: 35792884 PMCID: PMC9259028 DOI: 10.7554/elife.76577] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 06/07/2022] [Indexed: 12/17/2022] Open
Abstract
Naturalistic animal behavior exhibits a strikingly complex organization in the temporal domain, with variability arising from at least three sources: hierarchical, contextual, and stochastic. What neural mechanisms and computational principles underlie such intricate temporal features? In this review, we provide a critical assessment of the existing behavioral and neurophysiological evidence for these sources of temporal variability in naturalistic behavior. Recent research converges on an emergent mechanistic theory of temporal variability based on attractor neural networks and metastable dynamics, arising via coordinated interactions between mesoscopic neural circuits. We highlight the crucial role played by structural heterogeneities as well as noise from mesoscopic feedback loops in regulating flexible behavior. We assess the shortcomings and missing links in the current theoretical and experimental literature and propose new directions of investigation to fill these gaps.
Affiliation(s)
- Luca Mazzucato
- Institute of Neuroscience, Departments of Biology, Mathematics and Physics, University of Oregon
| |
|
21
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
22
|
Fontanier V, Sarazin M, Stoll FM, Delord B, Procyk E. Inhibitory control of frontal metastability sets the temporal signature of cognition. eLife 2022; 11:63795. [PMID: 35635439 PMCID: PMC9200403 DOI: 10.7554/elife.63795] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 05/27/2022] [Indexed: 11/13/2022] Open
Abstract
Cortical dynamics are organized over multiple anatomical and temporal scales. The mechanistic origin of the temporal organization and its contribution to cognition remain unknown. Here we demonstrate the cause of this organization by studying a specific temporal signature (time constant and latency) of neural activity. In monkey frontal areas, recorded during flexible decisions, temporal signatures display specific area-dependent ranges, as well as anatomical and cell-type distributions. Moreover, temporal signatures are functionally adapted to behaviorally relevant timescales. Fine-grained biophysical network models, constrained to account for experimentally observed temporal signatures, reveal that after-hyperpolarization potassium and inhibitory GABA-B conductances critically determine areas' specificity. They mechanistically account for temporal signatures by organizing activity into metastable states, with inhibition controlling state stability and transitions. As predicted by models, state durations non-linearly scale with temporal signatures in monkey, matching behavioral timescales. Thus, local inhibitory-controlled metastability constitutes the dynamical core specifying the temporal organization of cognitive functions in frontal areas.
Affiliation(s)
| | - Matthieu Sarazin
- Institute of Intelligent Systems and Robotics (ISIR) - UMR 7222, Sorbonne Université, CNRS, Paris, France
| | - Frederic M Stoll
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
| | - Bruno Delord
- Institute of Intelligent Systems and Robotics (ISIR) - UMR 7222, Sorbonne Université, CNRS, Paris, France
| | - Emmanuel Procyk
- Stem Cell and Brain Research Institute U1208, Inserm, Lyon, France
| |
|
23
|
Adaptive erasure of spurious sequences in sensory cortical circuits. Neuron 2022; 110:1857-1868.e5. [PMID: 35358415 PMCID: PMC9616807 DOI: 10.1016/j.neuron.2022.03.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 11/12/2021] [Accepted: 03/07/2022] [Indexed: 12/02/2022]
Abstract
Sequential activity reflecting previously experienced temporal sequences is considered a hallmark of learning across cortical areas. However, it is unknown how cortical circuits avoid the converse problem: producing spurious sequences that do not reflect sequences in their inputs. We develop methods to quantify and study sequentiality in neural responses. We show that recurrent circuit responses generally include spurious sequences, which are specifically prevented in circuits that obey two widely known features of cortical microcircuit organization: Dale's law and Hebbian connectivity. In particular, spike-timing-dependent plasticity in excitation-inhibition networks leads to an adaptive erasure of spurious sequences. We tested our theory in multielectrode recordings from the visual cortex of awake ferrets. Although responses to natural stimuli were largely non-sequential, responses to artificial stimuli initially included spurious sequences, which diminished over extended exposure. These results reveal an unexpected role for Hebbian experience-dependent plasticity and Dale's law in sensory cortical circuits.
Highlights:
- Recurrent circuits generate spurious sequences without sequential inputs
- A principled measure of total sequentiality in population responses is developed
- Theory predicts that Hebbian plasticity should abolish spurious sequences
- Spurious sequences in the visual cortex diminish with experience
|
24
|
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
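Approach (ii), extracting sequences of discrete hidden states from neural signals, is most commonly cast as hidden Markov model decoding. As an illustration only (not any specific method from this review), a hand-rolled two-state Viterbi decoder applied to a synthetic metastable signal looks like:

```python
import numpy as np

def viterbi_two_state(x, means, var, p_stay=0.98):
    """Most likely hidden-state path for a 2-state HMM with Gaussian emissions.
    States differ only in emission mean; the sticky transition prior (p_stay)
    is what makes the decoded states 'metastable'."""
    logA = np.log(np.array([[p_stay, 1 - p_stay],
                            [1 - p_stay, p_stay]]))
    logB = -0.5 * (x[:, None] - np.asarray(means)[None, :]) ** 2 / var
    T = len(x)
    delta = np.zeros((T, 2))            # best log-score ending in each state
    psi = np.zeros((T, 2), dtype=int)   # backpointers
    delta[0] = np.log(0.5) + logB[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: from i to j
        psi[t] = scores.argmax(0)
        delta[t] = scores.max(0) + logB[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1][path[t + 1]]
    return path

# Synthetic metastable signal: 300 samples in state 0, then 300 in state 1.
rng = np.random.default_rng(0)
true_states = np.repeat([0, 1], 300)
signal = rng.normal(np.where(true_states == 0, 0.0, 2.0), 1.0)
decoded = viterbi_two_state(signal, means=[0.0, 2.0], var=1.0)
```

Even though single samples overlap heavily between the two states, the sticky transition prior recovers the block structure, which is the essence of HMM-based metastable-state extraction.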
Affiliation(s)
| | - H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
| | | | | | | | - J. Wang
- Author to whom correspondence should be addressed.
| | - G. La Camera
- Author to whom correspondence should be addressed.
| |
|
25
|
Teşileanu T, Golkar S, Nasiri S, Sengupta AM, Chklovskii DB. Neural Circuits for Dynamics-Based Segmentation of Time Series. Neural Comput 2022; 34:891-938. [PMID: 35026035 DOI: 10.1162/neco_a_01476] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 10/15/2021] [Indexed: 11/04/2022]
Abstract
The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting and all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on data sets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
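The core idea behind the model-free variant, that the running autocorrelation structure alone can segment a signal whose waveform statistics are otherwise similar, can be shown with a bare-bones sketch. This is not the authors' streaming algorithm (their implementation is at the linked repository); the AR(1) coefficients, window size, and threshold below are illustrative assumptions:

```python
import numpy as np

def ar1(phi, n, rng):
    """Sample an AR(1) process x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def running_lag1_autocorr(x, window=100):
    """Sliding-window lag-1 autocorrelation, one value per window position."""
    out = np.empty(len(x) - window)
    for i in range(len(out)):
        w = x[i:i + window]
        w = w - w.mean()
        out[i] = (w[:-1] * w[1:]).sum() / (w ** 2).sum()
    return out

rng = np.random.default_rng(0)
# Two dynamical regimes glued together: smooth (phi=0.9) then jagged (phi=-0.5).
signal = np.concatenate([ar1(0.9, 1000, rng), ar1(-0.5, 1000, rng)])
rho = running_lag1_autocorr(signal)
segments = (rho < 0.2).astype(int)   # crude threshold between the two regimes
```

The segmentation here comes entirely from the dynamics (the sign and size of the lag-1 correlation), not from the waveform amplitude, which is the paper's point about latent variables being encoded in the generating dynamics.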
Affiliation(s)
- Tiberiu Teşileanu
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Siavash Golkar
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, U.S.A.
| | - Samaneh Nasiri
- Department of Neurology, Harvard Medical School, Boston, MA 02115, U.S.A.
| | - Anirvan M Sengupta
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, U.S.A.
| | - Dmitri B Chklovskii
- Center for Computational Neuroscience, Flatiron Institute, New York, NY 10010, and Neuroscience Institute, NYU Langone Medical Center, New York, NY, U.S.A.
| |
|
26
|
Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. [PMID: 34717794 PMCID: PMC9194601 DOI: 10.1016/j.neuron.2021.10.011] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 08/30/2021] [Accepted: 10/05/2021] [Indexed: 01/07/2023]
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.
|
27
|
The Mean Field Approach for Populations of Spiking Neurons. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1359:125-157. [DOI: 10.1007/978-3-030-89439-9_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Mean field theory is a device to analyze the collective behavior of a dynamical system comprising many interacting particles. The theory allows one to reduce the behavior of the system to the properties of a handful of parameters. In neural circuits, these parameters are typically the firing rates of distinct, homogeneous subgroups of neurons. Knowledge of the firing rates under conditions of interest can reveal essential information on both the dynamics of neural circuits and the way they can subserve brain function. The goal of this chapter is to provide an elementary introduction to the mean field approach for populations of spiking neurons. We introduce the general idea in networks of binary neurons, starting from the most basic results and then generalizing to more relevant situations. This makes it possible to derive the mean field equations in a simplified setting. We then derive the mean field equations for populations of integrate-and-fire neurons. An effort is made to derive the main equations of the theory using only elementary methods from calculus and probability theory. The chapter ends with a discussion of the assumptions of the theory and some of the consequences of violating those assumptions. This discussion includes an introduction to balanced and metastable networks and a brief catalogue of successful applications of the mean field approach to the study of neural circuits.
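The binary-neuron starting point described above boils down to a self-consistency equation: the population rate must equal the single-neuron activation evaluated at the input that the population itself generates. A minimal sketch, with arbitrary parameters chosen to sit in a bistable regime (the chapter's own treatment is more general):

```python
import numpy as np

def mean_field_rate(J=1.5, I=-0.75, beta=6.0, nu0=0.1, n_iter=500):
    """Fixed-point iteration for the self-consistent population activity
    nu = f(J*nu + I) of a homogeneous network of binary neurons, where f is
    a sigmoidal single-neuron activation with steepness beta."""
    f = lambda x: 1.0 / (1.0 + np.exp(-beta * x))
    nu = nu0
    for _ in range(n_iter):
        nu = f(J * nu + I)   # feed the network's own output back as input
    return nu

low = mean_field_rate(nu0=0.1)    # converges to the quiescent solution
high = mean_field_rate(nu0=0.9)   # converges to the active solution
```

With these parameters the equation has two stable solutions separated by an unstable one at nu = 0.5, the simplest mean-field picture of bistable (e.g. Up/Down-like) network activity; which solution the iteration finds depends only on the initial rate.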
|
28
|
Kurikawa T, Kaneko K. Multiple-Timescale Neural Networks: Generation of History-Dependent Sequences and Inference Through Autonomous Bifurcations. Front Comput Neurosci 2021; 15:743537. [PMID: 34955798 PMCID: PMC8702558 DOI: 10.3389/fncom.2021.743537] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2021] [Accepted: 11/09/2021] [Indexed: 11/17/2022] Open
Abstract
Sequential transitions between metastable states are ubiquitously observed in the neural system and underlie various cognitive functions such as perception and decision making. Although a number of studies with asymmetric Hebbian connectivity have investigated how such sequences are generated, they have focused on simple Markov sequences. On the other hand, recurrent neural networks trained with supervised machine learning methods can generate complex non-Markov sequences, but these sequences are vulnerable to perturbations and such learning methods are biologically implausible. How stable and complex sequences are generated in the neural system remains unclear. We have developed a neural network with fast and slow dynamics, inspired by the hierarchy of timescales of neural activity in the cortex. The slow dynamics store the history of inputs and outputs and affect the fast dynamics depending on the stored history. We show that a learning rule requiring only local information can form a network that generates complex and robust sequences in the fast dynamics. The slow dynamics work as bifurcation parameters for the fast dynamics, stabilizing the next pattern of the sequence before the current pattern is destabilized, in a manner that depends on the preceding patterns. This co-existence period leads to a stable transition between the current and the next pattern in the non-Markov sequence. We further find that timescale balance is critical to the co-existence period. Our study provides a novel mechanism for generating robust complex sequences with multiple timescales. Given that multiple timescales are widely observed in the brain, this mechanism advances our understanding of temporal processing in the neural system.
Affiliation(s)
- Tomoki Kurikawa
- Department of Physics, Kansai Medical University, Hirakata, Japan
- Kunihiko Kaneko
- Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan; Center for Complex Systems Biology, Universal Biology Institute, University of Tokyo, Tokyo, Japan
29
Le Goc G, Lafaye J, Karpenko S, Bormuth V, Candelier R, Debrégeas G. Thermal modulation of zebrafish exploratory statistics reveals constraints on individual behavioral variability. BMC Biol 2021; 19:208. [PMID: 34548084 PMCID: PMC8456632 DOI: 10.1186/s12915-021-01126-w]
Abstract
BACKGROUND Variability is a hallmark of animal behavior. It contributes to survival by endowing individuals and populations with the capacity to adapt to ever-changing environmental conditions. Intra-individual variability is thought to reflect both endogenous and exogenous modulations of the neural dynamics of the central nervous system. However, how variability is internally regulated and modulated by external cues remains elusive. Here, we address this question by analyzing the statistics of spontaneous exploration of freely swimming zebrafish larvae and by probing how these locomotor patterns are impacted by changes in water temperature within an ethologically relevant range. RESULTS We show that, for this simple animal model, five short-term kinematic parameters - interbout interval, turn amplitude, travelled distance, turn probability, and orientational flipping rate - together control the long-term exploratory dynamics. We establish that the bath temperature consistently impacts the means of these parameters, but leaves their pairwise covariance unchanged. These results indicate that the temperature merely controls the sampling statistics within a well-defined kinematic space delineated by this robust statistical structure. At a given temperature, individual animals explore the behavioral space over a timescale of tens of minutes, suggestive of a slow internal state modulation that could be externally biased through the bath temperature. By combining these various observations into a minimal stochastic model of navigation, we show that this thermal modulation of locomotor kinematics results in a thermophobic behavior, complementing direct gradient-sensing mechanisms. CONCLUSIONS This study establishes the existence of a well-defined locomotor space accessible to zebrafish larvae during spontaneous exploration, and quantifies self-generated modulation of locomotor patterns.
Intra-individual variability reflects a slow diffusive-like probing of this space by the animal. The bath temperature in turn restricts the sampling statistics to sub-regions, endowing the animal with basic thermophobicity. This study suggests that in zebrafish, as well as in other ectothermic animals, ambient temperature could be used to efficiently manipulate internal states in a simple and ethological way.
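A bout-based navigation model of this kind can be caricatured in a few lines (the structure loosely follows the paper's kinematic parameters, but the distributions, values, and the `explore` helper are illustrative assumptions, not the fitted model):

```python
import numpy as np

# Toy bout-based exploration: each bout advances the fish and turns it,
# with the turning direction flipping stochastically. Temperature would
# enter by shifting the mean kinematic values passed in as parameters.
def explore(n_bouts, mean_turn, flip_prob, mean_dist, rng):
    pos = np.zeros(2)
    heading, turn_sign = 0.0, 1
    traj = [pos.copy()]
    for _ in range(n_bouts):
        if rng.random() < flip_prob:      # orientational flipping
            turn_sign *= -1
        heading += turn_sign * rng.exponential(mean_turn)
        step = rng.exponential(mean_dist) # travelled distance per bout
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        traj.append(pos.copy())
    return np.array(traj)

rng = np.random.default_rng(0)
traj = explore(n_bouts=500, mean_turn=0.3, flip_prob=0.2, mean_dist=1.0, rng=rng)
```

Sweeping `mean_turn`, `flip_prob`, or `mean_dist` while holding the others fixed mimics the paper's finding that temperature shifts parameter means within a fixed kinematic space.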
Affiliation(s)
- Guillaume Le Goc
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Julie Lafaye
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Sophia Karpenko
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France; Université Paris Sciences et Lettres, Paris, France; present address: Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Volker Bormuth
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Raphaël Candelier
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
- Georges Debrégeas
- Sorbonne Université, CNRS, Institut de Biologie Paris-Seine (IBPS), Laboratoire Jean Perrin (LJP), Paris, France
30
Cao R, Pastukhov A, Aleshin S, Mattia M, Braun J. Binocular rivalry reveals an out-of-equilibrium neural dynamics suited for decision-making. eLife 2021; 10:e61581. [PMID: 34369875 PMCID: PMC8352598 DOI: 10.7554/elife.61581]
Abstract
In ambiguous or conflicting sensory situations, perception is often ‘multistable’ in that it perpetually changes at irregular intervals, shifting abruptly between distinct alternatives. The interval statistics of these alternations exhibit quasi-universal characteristics, suggesting a general mechanism. Using binocular rivalry, we show that many aspects of these perceptual dynamics are reproduced by a hierarchical model operating out of equilibrium. The constitutive elements of this model idealize the metastability of cortical networks. Independent elements accumulate visual evidence at one level, while groups of coupled elements compete for dominance at another level. As soon as one group dominates perception, feedback inhibition suppresses supporting evidence. Previously unreported features in the serial dependencies of perceptual alternations compellingly corroborate this mechanism. Moreover, the proposed out-of-equilibrium dynamics satisfies normative constraints of continuous decision-making. Thus, multistable perception may reflect decision-making in a volatile world: integrating evidence over space and time, choosing categorically between hypotheses, while concurrently evaluating alternatives.
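As background intuition for such alternations, dominance switching driven by slow adaptation can be sketched in a deterministic toy model (a standard textbook caricature, far simpler than the paper's hierarchical out-of-equilibrium model; all names and values are illustrative):

```python
import numpy as np

# Two percepts compete; the dominant one accumulates adaptation while
# the suppressed one recovers. Dominance flips once the winner is more
# adapted than the loser by a fixed margin.
def rivalry(n_steps=5000, dt=0.01, tau=1.0, threshold=0.4):
    a1, a2 = 0.0, 0.0          # adaptation levels of the two percepts
    winner = 1
    dominance = []
    for _ in range(n_steps):
        drive1 = 1.0 if winner == 1 else 0.0
        a1 += dt * (drive1 - a1) / tau          # winner adapts
        a2 += dt * ((1.0 - drive1) - a2) / tau  # loser recovers
        if (a1 - a2 if winner == 1 else a2 - a1) > threshold:
            winner = 3 - winner                 # dominance switch
        dominance.append(winner)
    return np.array(dominance)

dom = rivalry()
n_switches = int(np.count_nonzero(np.diff(dom)))
```

This toy model alternates periodically; reproducing the irregular, quasi-universal interval statistics and serial dependencies reported in the paper requires the noisy, metastable pool dynamics of the full model.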
Affiliation(s)
- Robin Cao
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany; Gatsby Computational Neuroscience Unit, London, United Kingdom; Istituto Superiore di Sanità, Rome, Italy
- Stepan Aleshin
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany
- Jochen Braun
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany
31
Wolff A, Chen L, Tumati S, Golesorkhi M, Gomez-Pilar J, Hu J, Jiang S, Mao Y, Longtin A, Northoff G. Prestimulus dynamics blend with the stimulus in neural variability quenching. Neuroimage 2021; 238:118160. [PMID: 34058331 DOI: 10.1016/j.neuroimage.2021.118160]
Abstract
Neural responses to the same stimulus show significant variability over trials, and this variability is typically reduced (quenched) after a stimulus is presented. Although this trial-to-trial variability (TTV) has been much studied, how neural variability quenching is influenced by the ongoing dynamics of the prestimulus period remains unknown. Utilizing a human intracranial stereo-electroencephalography (sEEG) data set, we investigate how prestimulus dynamics, operationalized as the standard deviation (SD), shapes poststimulus activity through TTV. We first observed greater poststimulus variability quenching in trials exhibiting high prestimulus variability, an effect present in all frequency bands. Next, we found that the relative effect of the stimulus was higher in the later (300-600 ms) than in the earlier (0-300 ms) poststimulus period. Lastly, we replicate our findings in a separate EEG dataset and extend them by finding that trials with high prestimulus variability in the theta and alpha bands had faster reaction times. Together, our results demonstrate that stimulus-related activity, including its variability, is a blend of two factors: 1) the effects of the external stimulus itself, and 2) the effects of the ongoing dynamics spilling over from the prestimulus period - the state at stimulus onset - with the second dwarfing the influence of the first.
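The core TTV measures can be illustrated on synthetic data (the arrays, values, and median split below are illustrative assumptions, not the paper's sEEG pipeline):

```python
import numpy as np

# TTV is the across-trial SD at each timepoint; quenching is its drop
# after stimulus onset. Synthetic data: prestimulus noise, then a common
# evoked response with reduced trial-to-trial noise.
rng = np.random.default_rng(1)
n_trials, n_pre, n_post = 200, 100, 100

pre = rng.normal(0.0, 1.0, (n_trials, n_pre))
evoked = np.sin(np.linspace(0, np.pi, n_post))        # shared response
post = evoked + rng.normal(0.0, 0.5, (n_trials, n_post))
data = np.concatenate([pre, post], axis=1)

ttv = data.std(axis=0)                                # SD per timepoint
quench = ttv[:n_pre].mean() - ttv[n_pre:].mean()      # positive => quenching

# split trials by their own prestimulus variability, as in the paper's logic
pre_sd = pre.std(axis=1)
high = pre_sd > np.median(pre_sd)
```

Comparing `ttv` computed separately over `data[high]` and `data[~high]` is the kind of contrast the study uses to show that trials with high prestimulus variability quench more.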
Affiliation(s)
- Annemarie Wolff
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada
- Liang Chen
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
- Shankar Tumati
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada
- Mehrshad Golesorkhi
- School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
- Javier Gomez-Pilar
- Biomedical Engineering Group, Higher Technical School of Telecommunications Engineering, University of Valladolid, Valladolid, Spain; Centro de Investigación Biomédica en Red-Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Jie Hu
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
- Shize Jiang
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
- Ying Mao
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
- André Longtin
- Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada; Physics Department, University of Ottawa, Ottawa, Canada
- Georg Northoff
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada; Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada
32
Lin JY, Mukherjee N, Bernstein MJ, Katz DB. Perturbation of amygdala-cortical projections reduces ensemble coherence of palatability coding in gustatory cortex. eLife 2021; 10:e65766. [PMID: 34018924 PMCID: PMC8139825 DOI: 10.7554/elife.65766]
Abstract
Taste palatability is centrally involved in consumption decisions-we ingest foods that taste good and reject those that don't. Gustatory cortex (GC) and basolateral amygdala (BLA) almost certainly work together to mediate palatability-driven behavior, but the precise nature of their interplay during taste decision-making is still unknown. To probe this issue, we discretely perturbed (with optogenetics) activity in rats' BLA→GC axons during taste deliveries. This perturbation strongly altered GC taste responses, but while the perturbation itself was tonic (2.5 s), the alterations were not-changes preferentially aligned with the onset times of previously-described taste response epochs, and reduced evidence of palatability-related activity in the 'late-epoch' of the responses without reducing the amount of taste identity information available in the 'middle epoch.' Finally, BLA→GC perturbations changed behavior-linked taste response dynamics themselves, distinctively diminishing the abruptness of ensemble transitions into the late epoch. These results suggest that BLA 'organizes' behavior-related GC taste dynamics.
Affiliation(s)
- Jian-You Lin
- Department of Psychology, Brandeis University, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Narendra Mukherjee
- The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Max J Bernstein
- Department of Psychology, Brandeis University, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
- Donald B Katz
- Department of Psychology, Brandeis University, Waltham, United States; The Volen National Center for Complex Systems, Brandeis University, Waltham, United States
33
Wyrick D, Mazzucato L. State-Dependent Regulation of Cortical Processing Speed via Gain Modulation. J Neurosci 2021; 41:3988-4005. [PMID: 33858943 PMCID: PMC8176754 DOI: 10.1523/jneurosci.1895-20.2021]
Abstract
To thrive in dynamic environments, animals must be capable of rapidly and flexibly adapting behavioral responses to a changing context and internal state. Examples of behavioral flexibility include faster stimulus responses when attentive and slower responses when distracted. Contextual or state-dependent modulations may occur early in the cortical hierarchy and may be implemented via top-down projections from corticocortical or neuromodulatory pathways. However, the computational mechanisms mediating the effects of such projections are not known. Here, we introduce a theoretical framework to classify the effects of cell type-specific top-down perturbations on the information processing speed of cortical circuits. Our theory demonstrates that perturbation effects on stimulus processing can be predicted by intrinsic gain modulation, which controls the timescale of the circuit dynamics. Our theory leads to counterintuitive effects, such as improved performance with increased input variance. We tested the model predictions using large-scale electrophysiological recordings from the visual hierarchy in freely running mice, where we found that a decrease in single-cell intrinsic gain during locomotion led to an acceleration of visual processing. Our results establish a novel theory of cell type-specific perturbations, applicable to top-down modulation as well as optogenetic and pharmacological manipulations. Our theory links connectivity, dynamics, and information processing via gain modulation.
SIGNIFICANCE STATEMENT: To thrive in dynamic environments, animals adapt their behavior to changing circumstances and different internal states. Examples of behavioral flexibility include faster responses to sensory stimuli when attentive and slower responses when distracted. Previous work suggested that contextual modulations may be implemented via top-down inputs to sensory cortex coming from higher brain areas or neuromodulatory pathways. Here, we introduce a theory explaining how the speed at which sensory cortex processes incoming information is adjusted by changes in these top-down projections, which control the timescale of neural activity. We tested our model predictions in freely running mice, revealing that locomotion accelerates visual processing. Our theory is applicable to internal modulation as well as optogenetic and pharmacological manipulations and links circuit connectivity, dynamics, and information processing.
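The link between gain and processing timescale can be sketched with a one-population rate model (a minimal caricature under stated assumptions, not the paper's circuit model; near baseline the effective time constant is roughly tau / (1 - g*w), so lowering the gain g shortens it):

```python
import numpy as np

# One recurrent population: dr/dt = (-r + g*tanh(w*r + stim)) / tau.
# We measure the time to reach (1 - 1/e) of the steady-state response
# to a small stimulus step, for two values of the intrinsic gain g.
def timescale(g, w=0.8, tau=20.0, dt=0.1, stim=0.05, n=100000):
    rate = lambda r: (-r + g * np.tanh(w * r + stim)) / tau
    rf = 0.0
    for _ in range(n):               # steady-state response to the step
        rf += dt * rate(rf)
    r, t = 0.0, 0
    while r < (1.0 - np.exp(-1.0)) * rf:
        r += dt * rate(r)
        t += 1
    return t * dt

fast = timescale(g=0.6)  # weak recurrent gain, short effective timescale
slow = timescale(g=1.2)  # strong gain, long effective timescale
```

The direction of the effect matches the paper's observation: reducing intrinsic gain (as during locomotion) accelerates the circuit's response to a stimulus.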
Affiliation(s)
- David Wyrick
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403
- Luca Mazzucato
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403; Departments of Mathematics and Physics, University of Oregon, Eugene, Oregon 97403
34
Benozzo D, La Camera G, Genovesio A. Slower prefrontal metastable dynamics during deliberation predicts error trials in a distance discrimination task. Cell Rep 2021; 35:108934. [PMID: 33826896 PMCID: PMC8083966 DOI: 10.1016/j.celrep.2021.108934]
Abstract
Cortical activity related to erroneous behavior in discrimination or decision-making tasks is rarely analyzed, yet it can help clarify which computations are essential during a specific task. Here, we use a hidden Markov model (HMM) to perform a trial-by-trial analysis of the ensemble activity of dorsolateral prefrontal cortex (PFdl) neurons of rhesus monkeys performing a distance discrimination task. By segmenting the neural activity into sequences of metastable states, HMM allows us to uncover modulations of the neural dynamics related to internal computations. We find that metastable dynamics slow down during error trials, while state transitions at a pivotal point during the trial take longer in difficult correct trials. Both these phenomena occur during the decision interval, with errors occurring in both easy and difficult trials. Our results provide further support for the emerging role of metastable cortical dynamics in mediating complex cognitive functions and behavior.
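The state-segmentation step can be illustrated with a tiny Viterbi decoder on synthetic data (a from-scratch sketch with illustrative Gaussian emissions, not the paper's HMM fitted to spike trains):

```python
import numpy as np

# Viterbi decoding of a 2-state HMM with Gaussian emissions: segments a
# synthetic firing-rate trace into quasi-stationary states and recovers
# the hidden switch point.
def viterbi(obs, log_trans, log_init, means, var):
    ll = -0.5 * (obs[:, None] - means[None, :]) ** 2 / var  # log-likelihoods
    n_states = len(means)
    delta = log_init + ll[0]
    back = np.zeros((len(obs), n_states), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + log_trans   # best previous state per state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + ll[t]
    path = np.zeros(len(obs), dtype=int)
    path[-1] = delta.argmax()
    for t in range(len(obs) - 1, 0, -1):      # backtrack the best path
        path[t - 1] = back[t, path[t]]
    return path

rng = np.random.default_rng(2)
rates = np.concatenate([rng.normal(2.0, 0.5, 50),   # state-0 epoch
                        rng.normal(8.0, 0.5, 50)])  # state-1 epoch
log_trans = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
path = viterbi(rates, log_trans, np.log([0.5, 0.5]), np.array([2.0, 8.0]), 0.25)
```

Trial-by-trial quantities such as state durations and transition times, of the kind the paper relates to errors and difficulty, can then be read directly off `path`.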
Affiliation(s)
- Danilo Benozzo
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
- Giancarlo La Camera
- Department of Neurobiology and Behavior, Center for Neural Circuit Dynamics and Institute for Advanced Computational Science, State University of New York at Stony Brook, Stony Brook, NY, USA
- Aldo Genovesio
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
35
Schmutz V, Gerstner W, Schwalger T. Mesoscopic population equations for spiking neural networks with synaptic short-term plasticity. J Math Neurosci 2020; 10:5. [PMID: 32253526 PMCID: PMC7136387 DOI: 10.1186/s13408-020-00082-z]
Abstract
Coarse-graining microscopic models of biological neural networks to obtain mesoscopic models of neural activities is an essential step towards multi-scale models of the brain. Here, we extend a recent theory for mesoscopic population dynamics with static synapses to the case of dynamic synapses exhibiting short-term plasticity (STP). The extended theory offers an approximate mean-field dynamics for the synaptic input currents arising from populations of spiking neurons and synapses undergoing Tsodyks-Markram STP. The approximate mean-field dynamics accounts for both the finite number of synapses and the correlation between the two synaptic variables of the model (utilization and available resources), and its numerical implementation is simple. Comparisons with Monte Carlo simulations of the microscopic model show that in both feedforward and recurrent networks, the mesoscopic mean-field model accurately reproduces the first- and second-order statistics of the total synaptic input into a postsynaptic neuron and accounts for stochastic switches between Up and Down states and for population spikes. The extended mesoscopic population theory of spiking neural networks with STP may be useful for a systematic reduction of detailed biophysical models of cortical microcircuits to numerically efficient and mathematically tractable mean-field models.
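The single-synapse Tsodyks-Markram update that underlies such mean-field theories is compact enough to sketch (standard textbook event-based equations; the parameter values are illustrative):

```python
import numpy as np

# Tsodyks-Markram synapse driven by a regular spike train.
# u: utilization (facilitation variable), relaxes to U between spikes;
# x: available resources (depression variable), recovers to 1.
# The fraction released at each spike is u * x.
def tm_synapse(spike_times, U=0.2, tau_f=0.6, tau_d=0.3):
    u, x = U, 1.0
    t_prev = 0.0
    released = []
    for t in spike_times:
        dt = t - t_prev
        u = U + (u - U) * np.exp(-dt / tau_f)       # decay of facilitation
        x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)   # recovery of resources
        u = u + U * (1.0 - u)                       # facilitation jump
        rel = u * x
        x = x - rel                                 # resource depletion
        released.append(rel)
        t_prev = t
    return np.array(released)

rel = tm_synapse(np.arange(0.0, 1.0, 0.05))  # 20 Hz regular train
# with these values depression dominates: release decays toward a
# steady state below the first-spike response
```

The mesoscopic theory in the paper essentially tracks the mean and covariance of `u` and `x` across a finite population of such synapses instead of simulating each one.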
Affiliation(s)
- Valentin Schmutz
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Wulfram Gerstner
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tilo Schwalger
- Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Bernstein Center for Computational Neuroscience, Institut für Mathematik, Technische Universität Berlin, Berlin, Germany
36
Schwalger T, Chizhov AV. Mind the last spike - firing rate models for mesoscopic populations of spiking neurons. Curr Opin Neurobiol 2019; 58:155-166. [PMID: 31590003 DOI: 10.1016/j.conb.2019.08.003]
Abstract
Heuristic firing rate models are the dominant modeling framework for understanding cortical computations. Despite their success, these models fall short of capturing spike synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, yields a powerful theoretical framework for building rate-based models of mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved by the RDM in obtaining efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons - a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. These new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
Affiliation(s)
- Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany
- Anton V Chizhov
- Ioffe Institute, 194021 Saint-Petersburg, Russia; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, 194223 Saint-Petersburg, Russia
37
Okujeni S, Egert U. Self-organization of modular network architecture by activity-dependent neuronal migration and outgrowth. eLife 2019; 8:e47996. [PMID: 31526478 PMCID: PMC6783273 DOI: 10.7554/elife.47996]
Abstract
The spatial distribution of neurons and activity-dependent neurite outgrowth shape long-range interaction, recurrent local connectivity and the modularity in neuronal networks. We investigated how this mesoscale architecture develops through the interaction of neurite outgrowth, cell migration and activity in cultured networks of rat cortical neurons, and show that simple rules can explain variations of network modularity. In contrast to theoretical studies on activity-dependent outgrowth but consistent with predictions for modular networks, spontaneous activity and the rate of synchronized bursts increased with clustering, whereas peak firing rates in bursts increased in highly interconnected homogeneous networks. As Ca2+ influx increased exponentially with increasing network recruitment during bursts, its modulation was highly correlated with peak firing rates. During network maturation, long-term estimates of Ca2+ influx showed convergence, even for highly different mesoscale architectures, neurite extent, connectivity, modularity and average activity levels, indicating homeostatic regulation towards a common set-point of Ca2+ influx.
Affiliation(s)
- Samora Okujeni
- Laboratory for Biomicrotechnology, Department of Microsystems Engineering-IMTEK, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Ulrich Egert
- Laboratory for Biomicrotechnology, Department of Microsystems Engineering-IMTEK, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
38
Marcos E, Londei F, Genovesio A. Hidden Markov Models Predict the Future Choice Better Than a PSTH-Based Method. Neural Comput 2019; 31:1874-1890. [PMID: 31335289 DOI: 10.1162/neco_a_01216]
Abstract
Beyond average firing rate, other measurable signals of neuronal activity are fundamental to an understanding of behavior. Recently, hidden Markov models (HMMs) have been applied to neural recordings and have described how neuronal ensembles process information by going through sequences of different states. Such collective dynamics are impossible to capture by just looking at the average firing rate. To estimate how well HMMs can decode information contained in single trials, we compared HMMs with a recently developed classification method based on the peristimulus time histogram (PSTH). The accuracy of the two methods was tested by using the activity of prefrontal neurons recorded while two monkeys were engaged in a strategy task. In this task, the monkeys had to select one of three spatial targets based on an instruction cue and on their previous choice. We show that by using the single trial's neural activity in a period preceding action execution, both models were able to classify the monkeys' choice with an accuracy higher than chance. Moreover, the HMM was significantly more accurate than the PSTH-based method, even in cases in which the HMM performance was low, although always above chance. Furthermore, the accuracy of both methods was related to the number of neurons exhibiting spatial selectivity within an experimental session. Overall, our study shows that neural activity is better described when more than the mean activity of individual neurons is considered, and that studying signals beyond the average firing rate is therefore fundamental to an understanding of the dynamics of neuronal ensembles.
Affiliation(s)
- Encarni Marcos
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy; Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, Sant Joan d'Alacant, Alicante 03550, Spain
- Fabrizio Londei
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
- Aldo Genovesio
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome 00185, Italy
39
La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. [PMID: 31326722 DOI: 10.1016/j.conb.2019.06.007]
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations, so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial by trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
Affiliation(s)
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States
40
Mukherjee N, Wachutka J, Katz DB. Impact of precisely-timed inhibition of gustatory cortex on taste behavior depends on single-trial ensemble dynamics. eLife 2019; 8:e45968. [PMID: 31232693 PMCID: PMC6625792 DOI: 10.7554/elife.45968]
Abstract
Sensation and action are necessarily coupled during stimulus perception - while tasting, for instance, perception happens while an animal decides to expel or swallow the substance in the mouth (the former via a behavior known as 'gaping'). Taste responses in the rodent gustatory cortex (GC) span this sensorimotor divide, progressing through firing-rate epochs that culminate in the emergence of action-related firing. Population analyses reveal this emergence to be a sudden, coherent and variably-timed ensemble transition that reliably precedes gaping onset by 0.2-0.3s. Here, we tested whether this transition drives gaping, by delivering 0.5s GC perturbations in tasting trials. Perturbations significantly delayed gaping, but only when they preceded the action-related transition - thus, the same perturbation impacted behavior or not, depending on the transition latency in that particular trial. Our results suggest a distributed attractor network model of taste processing, and a dynamical role for cortex in driving motor behavior.
Affiliation(s)
- Narendra Mukherjee
- Program in Neuroscience, Brandeis University, Waltham, United States; Volen National Center for Complex Systems, Brandeis University, Waltham, United States; Department of Psychology, Brandeis University, Waltham, United States
- Joseph Wachutka
- Program in Neuroscience, Brandeis University, Waltham, United States; Volen National Center for Complex Systems, Brandeis University, Waltham, United States; Department of Psychology, Brandeis University, Waltham, United States
- Donald B Katz
- Program in Neuroscience, Brandeis University, Waltham, United States; Volen National Center for Complex Systems, Brandeis University, Waltham, United States; Department of Psychology, Brandeis University, Waltham, United States
41
Spatiotemporal discrimination in attractor networks with short-term synaptic plasticity. J Comput Neurosci 2019; 46:279-297. [PMID: 31134433 PMCID: PMC6571095 DOI: 10.1007/s10827-019-00717-5]
Abstract
We demonstrate that a randomly connected attractor network with dynamic synapses can discriminate between similar sequences containing multiple stimuli, suggesting that such networks provide a general basis for neural computations in the brain. The network contains units representing assemblies of pools of neurons, with preferentially strong recurrent excitatory connections rendering each unit bi-stable. Weak interactions between units lead to a multiplicity of attractor states, within which information can persist beyond stimulus offset. When a new stimulus arrives, the prior state of the network impacts the encoding of the incoming information, with short-term synaptic depression ensuring an itinerancy between sets of active units. We assess the ability of such a network to encode the identity of sequences of stimuli, so as to provide a template for sequence recall, or for decisions based on accumulation of evidence. Across a range of parameters, such networks reproduce the primacy (better final encoding of the earliest stimuli) and recency (better final encoding of the latest stimuli) observed in human recall data, and can retain the information needed to make a binary choice based on the total number of presentations of a specific stimulus. Similarities and differences in the final states of the network produced by different sequences lead to predictions of specific errors that could arise when an animal or human subject generalizes from training data, when the training data comprise a subset of the entire stimulus repertoire. We suggest that such networks can provide the general-purpose computational engines needed to solve many cognitive tasks.
Collapse
|
42
|
Neural variability quenching during decision-making: Neural individuality and its prestimulus complexity. Neuroimage 2019; 192:1-14. [DOI: 10.1016/j.neuroimage.2019.02.070] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 01/31/2019] [Accepted: 02/27/2019] [Indexed: 11/20/2022] Open
|
43
|
Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli. Nat Neurosci 2019; 22:787-796. [PMID: 30936557 PMCID: PMC6516078 DOI: 10.1038/s41593-019-0364-9] [Citation(s) in RCA: 41] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Accepted: 02/15/2019] [Indexed: 11/22/2022]
Abstract
Sensory stimuli can be recognized more rapidly when they are expected. This phenomenon depends on expectation affecting the cortical processing of sensory information. However, the mechanisms responsible for the effects of expectation on sensory circuits remain elusive. Here, we report a novel computational mechanism underlying the expectation-dependent acceleration of coding observed in the gustatory cortex of alert rats. We use a recurrent spiking network model with a clustered architecture capturing essential features of cortical activity, such as its intrinsically generated metastable dynamics. Relying on network theory and computer simulations, we propose that expectation exerts its function by modulating the intrinsically generated dynamics preceding taste delivery. Our model’s predictions were confirmed in the experimental data, demonstrating how the modulation of ongoing activity can shape sensory coding. Altogether, these results provide a biologically plausible theory of expectation and ascribe a new functional role to intrinsically generated, metastable activity.
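The core idea, that an input reshapes transition rates between intrinsically generated metastable states rather than simply adding activity, can be caricatured with a noise-driven double-well system. This is a deliberately minimal stand-in for the clustered spiking network, and the parameters are illustrative:

```python
import numpy as np

def mean_dwell_time(noise=0.4, bias=0.0, steps=200_000, dt=0.01, seed=1):
    """Overdamped double well dx = (x - x^3 + bias) dt + noise dW.
    The two wells stand in for two metastable network states; the mean
    dwell time is how long the system lingers in a state before switching."""
    rng = np.random.default_rng(seed)
    x, sign, t_last, dwells = 1.0, 1, 0, []
    sq = noise * np.sqrt(dt)
    for k in range(steps):
        x += (x - x ** 3 + bias) * dt + sq * rng.standard_normal()
        s = 1 if x > 0 else -1
        if s != sign:                     # crossed between wells
            dwells.append((k - t_last) * dt)
            sign, t_last = s, k
    return float(np.mean(dwells)) if dwells else float("inf")

# Stronger drive (here modeled as larger noise) lowers the effective
# barrier, so state-to-state transitions happen faster.
fast, slow = mean_dwell_time(noise=0.6), mean_dwell_time(noise=0.4)
```

In this caricature, "expectation" corresponds to anything that lowers the effective barrier between states, speeding up the itinerancy and hence the coding of the incoming stimulus.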
Collapse
|
44
|
Abstract
The gustatory system contributes to the flavor of foods and beverages and communicates information about nutrients and poisons. This system has evolved to detect and ultimately respond to hydrophilic molecules dissolved in saliva. Taste receptor cells, located in taste buds and distributed throughout the oral cavity, activate nerve afferents that project to the brainstem. From here, information propagates to thalamic, subcortical, and cortical areas, where it is integrated with information from other sensory systems and with homeostatic, visceral, and affective processes. There is considerable divergence, as well as convergence, of information between multiple regions of the central nervous system that interact with the taste pathways, with reciprocal connections occurring between the involved regions. These widespread interactions among multiple systems are crucial for the perception of food. For example, memory, hunger, satiety, and visceral changes can directly affect and can be affected by the experience of tasting. In this chapter, we review the literature on the central processing of taste with a specific focus on the anatomic and physiologic responses of single neurons. Emphasis is placed on how information is distributed along multiple systems with the goal of better understanding how the rich and complex sensations associated with flavor emerge from large-scale, systems-wide, interactions.
Collapse
|
45
|
Tingley D, Alexander AS, Quinn LK, Chiba AA, Nitz D. Multiplexed oscillations and phase rate coding in the basal forebrain. Sci Adv 2018; 4:eaar3230. [PMID: 30083600 PMCID: PMC6070333 DOI: 10.1126/sciadv.aar3230] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Accepted: 06/19/2018] [Indexed: 05/30/2023]
Abstract
Complex behaviors demand temporal coordination among functionally distinct brain regions. The basal forebrain's afferent and efferent structure suggests a capacity for mediating this coordination at a large scale. During performance of a spatial orientation task, synaptic activity in this region was dominated by four amplitude-independent oscillations temporally organized by the phase of the slowest, a theta-frequency rhythm. Oscillation amplitudes were also organized by task epoch and positively correlated to the task-related modulation of individual neuron firing rates. For many neurons, spiking was temporally organized through phase precession against theta band field potential oscillations. Theta phase precession advanced in parallel to task progression, rather than absolute spatial location or time. Together, the findings reveal a process by which associative brain regions can integrate independent oscillatory inputs and transform them into sequence-specific, rate-coded outputs that are adaptive to the pace with which organisms interact with their environment.
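The spike phase against a band-limited field potential, the quantity underlying the phase-precession analysis, is conventionally obtained from the analytic signal. A minimal numpy-only sketch, assuming the LFP has already been band-pass filtered to the theta range:

```python
import numpy as np

def analytic_signal(x):
    """Hilbert analytic signal via the FFT: zero out negative
    frequencies and double positive ones (numpy-only construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

def spike_phases(lfp_theta, spike_idx):
    """Instantaneous phase of a pre-filtered theta-band LFP at each
    spike sample; phase 0 corresponds to the oscillation peak."""
    phase = np.angle(analytic_signal(lfp_theta))
    return phase[np.asarray(spike_idx)]

# Synthetic check: an 8 Hz theta sampled at 1 kHz (period = 125 samples);
# spikes placed exactly one cycle apart at the peak come out at phase ~0.
fs, f = 1000, 8.0
t = np.arange(0, 2.0, 1.0 / fs)
theta = np.cos(2 * np.pi * f * t)
phases = spike_phases(theta, [125, 250, 375])
```

Phase precession then shows up as a systematic drift of these per-spike phases across successive theta cycles, here tied to task progression rather than to location or time.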
Collapse
Affiliation(s)
- David Tingley
- New York University (NYU) Neuroscience Institute, School of Medicine, NYU, New York, NY 10016, USA
- Department of Cognitive Science, University of California, San Diego, San Diego, CA 92093–0515, USA
| | - Andrew S. Alexander
- Department of Cognitive Science, University of California, San Diego, San Diego, CA 92093–0515, USA
- Department of Psychological and Brain Science, Boston University, Boston, MA 02215, USA
| | - Laleh K. Quinn
- Department of Cognitive Science, University of California, San Diego, San Diego, CA 92093–0515, USA
| | - Andrea A. Chiba
- Department of Cognitive Science, University of California, San Diego, San Diego, CA 92093–0515, USA
| | - Douglas Nitz
- Department of Cognitive Science, University of California, San Diego, San Diego, CA 92093–0515, USA
| |
Collapse
|
46
|
Riehle A, Brochier T, Nawrot M, Grün S. Behavioral Context Determines Network State and Variability Dynamics in Monkey Motor Cortex. Front Neural Circuits 2018; 12:52. [PMID: 30050415 PMCID: PMC6052126 DOI: 10.3389/fncir.2018.00052] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2018] [Accepted: 06/15/2018] [Indexed: 11/13/2022] Open
Abstract
Variability of spiking activity is ubiquitous throughout the brain, but little is known about its contextual dependence. Trial-to-trial spike count variability, estimated by the Fano factor (FF), and within-trial spike time irregularity, quantified by the coefficient of variation (CV), reflect variability on long and short time scales, respectively. We co-analyzed FF and the local coefficient of variation (CV2) in monkey motor cortex, comparing two behavioral contexts: movement preparation (wait) and execution (movement). We find that the FF significantly decreases from wait to movement, while the CV2 increases. The more regular firing (expressed by a low CV2) during wait is related to an increased power of local field potential (LFP) beta oscillations and phase locking of spikes to these oscillations. In renewal processes, a widely used model for spiking activity under stationary input conditions, both measures are related as FF ≈ CV2². This expectation was met during movement, but not during wait, where FF ≫ CV2². Our interpretation is that during movement preparation, ongoing brain processes result in changing network states and thus in high trial-to-trial variability (expressed by a high FF). During movement execution, the network is recruited for performing the stereotyped motor task, resulting in reliable single-neuron output. This interpretation is consistent with recent computational models that generate non-stationary network conditions.
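The two variability measures compared here are simple to compute from spike counts and inter-spike intervals (ISIs); a sketch of both, with the CV2 following the standard local-variation definition averaged over adjacent ISIs:

```python
import numpy as np

def fano_factor(counts):
    """Trial-to-trial Fano factor: variance / mean of per-trial spike
    counts (long-time-scale variability)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

def cv2(isis):
    """Local coefficient of variation: mean of
    2|I_{k+1} - I_k| / (I_{k+1} + I_k) over adjacent ISIs
    (short-time-scale irregularity, robust to rate changes)."""
    isis = np.asarray(isis, dtype=float)
    return float(np.mean(2.0 * np.abs(np.diff(isis)) /
                         (isis[1:] + isis[:-1])))

# Perfectly regular firing gives CV2 = 0; identical counts give FF = 0.
regular_cv2 = cv2([0.05] * 10)        # clock-like train
spread_ff = fano_factor([4, 6])       # counts varying across trials
```

For a renewal process FF ≈ CV2², so the wait-period finding FF ≫ CV2² is the signature of trial-to-trial non-stationarity of the network state.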
Collapse
Affiliation(s)
- Alexa Riehle
- UMR7289 Institut de Neurosciences de la Timone (INT), Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université (AMU), Marseille, France; Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I, Forschungszentrum Jülich, Jülich, Germany
| | - Thomas Brochier
- UMR7289 Institut de Neurosciences de la Timone (INT), Centre National de la Recherche Scientifique (CNRS)-Aix-Marseille Université (AMU), Marseille, France
| | - Martin Nawrot
- Computational Systems Neuroscience, Institute for Zoology, University of Cologne, Cologne, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Brain Institute I, Forschungszentrum Jülich, Jülich, Germany; RIKEN Brain Science Institute (BSI), Wako, Japan; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
47
|
Setareh H, Deger M, Gerstner W. Excitable neuronal assemblies with adaptation as a building block of brain circuits for velocity-controlled signal propagation. PLoS Comput Biol 2018; 14:e1006216. [PMID: 29979674 PMCID: PMC6051644 DOI: 10.1371/journal.pcbi.1006216] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Revised: 07/18/2018] [Accepted: 05/21/2018] [Indexed: 01/07/2023] Open
Abstract
The time scale of neuronal network dynamics is determined by synaptic interactions and neuronal signal integration, both of which occur on the time scale of milliseconds. Yet many behaviors, like the generation of movements or the vocalization of sounds, occur on the much slower time scale of seconds. Here we ask how neuronal networks of the brain can support reliable behavior on this time scale. We argue that excitable neuronal assemblies with spike-frequency adaptation may serve as building blocks that can flexibly adjust the speed of execution of neural circuit function. We show in simulations that a chain of neuronal assemblies can propagate signals reliably, similar to the well-known synfire chain, but with the crucial difference that the propagation speed is slower and tunable to the behaviorally relevant range. Moreover, we study a grid of excitable neuronal assemblies as a simplified model of the somatosensory barrel cortex of the mouse and demonstrate that various patterns of experimentally observed spatial activity propagation can be explained.

Models of activity propagation in cortical networks have often been based on feedforward structures. Here we propose a model of activity propagation, called the excitation chain, which does not need such a feedforward structure. The model is composed of excitable neural assemblies with spike-frequency adaptation, connected bidirectionally in a row or a grid. This prototypical neural circuit can propagate activity forwards, backwards, or in both directions. Furthermore, the propagation speed is slow enough to trigger the generation of behaviors on the time scale of hundreds of milliseconds. A two-dimensional variant of the model is able to generate different activity propagation patterns, similar to spontaneous activity and stimulus-evoked responses in anesthetized mouse barrel cortex.
We propose the excitation chain model as a basic component that can be employed in various ways to create spiking neural circuit models that generate signals on behavioral time scales. In contrast to abstract models of excitable media, our model makes an explicit link to the time scale of neuronal spikes.
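The qualitative mechanism, with assembly ramp-up setting the propagation speed while adaptation enforces forward motion, can be illustrated with a deliberately reduced discrete-time toy. This is not the paper's spiking model, and all constants are illustrative:

```python
def excitation_chain(n_units=6, ramp_steps=10, active_steps=15,
                     refractory_steps=50, t_max=400):
    """Toy 'excitation chain': an idle unit must integrate input from an
    active neighbour for ramp_steps consecutive steps before igniting
    (assembly ramp-up); it then stays active for active_steps and is
    refractory afterwards (spike-frequency adaptation). Propagation thus
    takes ~ramp_steps per unit -- much slower than the one-step 'synapse'."""
    state = [0] * n_units       # remaining active steps per unit
    refr = [0] * n_units        # remaining refractory steps per unit
    charge = [0] * n_units      # consecutive steps of neighbour drive
    onset = [None] * n_units    # ignition time of each unit
    state[0], onset[0] = active_steps, 0    # stimulate the first unit
    for t in range(1, t_max):
        active = [s > 0 for s in state]     # snapshot before updating
        for i in range(n_units):
            if state[i] > 0:
                state[i] -= 1
                if state[i] == 0:
                    refr[i] = refractory_steps   # adaptation kicks in
            elif refr[i] > 0:
                refr[i] -= 1
            else:
                nbr = (i > 0 and active[i - 1]) or \
                      (i < n_units - 1 and active[i + 1])
                charge[i] = charge[i] + 1 if nbr else 0
                if charge[i] >= ramp_steps:      # ignition
                    state[i], charge[i], onset[i] = active_steps, 0, t
    return onset

# Ignition times advance by ramp_steps per unit along the chain.
onsets = excitation_chain()   # -> [0, 10, 20, 30, 40, 50]
```

The refractory tail behind the wave prevents back-propagation once activity has passed, even though the coupling itself is bidirectional.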
Collapse
Affiliation(s)
- Hesam Setareh
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
| | - Moritz Deger
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Köln, Germany
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Switzerland
| |
Collapse
|
48
|
Huang Z, Zhang J, Wu J, Liu X, Xu J, Zhang J, Qin P, Dai R, Yang Z, Mao Y, Hudetz AG, Northoff G. Disrupted neural variability during propofol-induced sedation and unconsciousness. Hum Brain Mapp 2018; 39:4533-4544. [PMID: 29974570 DOI: 10.1002/hbm.24304] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 06/04/2018] [Accepted: 06/24/2018] [Indexed: 12/16/2022] Open
Abstract
Variability quenching is a widespread neural phenomenon in which trial-to-trial variability (TTV) of neural activity is reduced by repeated presentations of a sensory stimulus. However, its neural mechanism and functional significance remain poorly understood. Recurrent network dynamics have been suggested as a candidate mechanism of TTV, and they play a key role in consciousness. We thus asked whether the variability-quenching phenomenon is related to the level of consciousness. We hypothesized that TTV reduction would be compromised at reduced levels of consciousness induced by the anesthetic propofol. We recorded functional magnetic resonance imaging signals of resting-state and stimulus-induced activity in three conditions: wakefulness, sedation, and unconsciousness (i.e., deep anesthesia). We measured the average (trial-to-trial mean, TTM) and variability (TTV) of auditory stimulus-induced activity under the three conditions. We also examined another form of neural variability (temporal variability, TV), which quantifies the overall dynamic range of ongoing neural activity across time, during both the resting state and the task. We found that (a) TTM decreased gradually from wakefulness through sedation to anesthesia, (b) the stimulus-induced TTV reduction normally seen during wakefulness was abolished during both sedation and anesthesia, and (c) TV increased in the task state compared with the resting state during both wakefulness and sedation, but not during anesthesia. Together, our results reveal distinct effects of propofol on the two forms of neural variability (TTV and TV). They imply that the anesthetic disrupts recurrent network dynamics and thus prevents the stabilization of cortical activity states. These findings shed new light on the temporal dynamics of neuronal variability and its alteration during anesthetic-induced unconsciousness.
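The TTM/TTV pair used here reduces to a mean and an across-trial variance at each timepoint; a sketch with synthetic data, where the onset sample, trial count, and quenching factor are invented for illustration:

```python
import numpy as np

def trial_stats(trials):
    """trials: array of shape (n_trials, n_timepoints). Returns the
    trial-to-trial mean (TTM) and trial-to-trial variability (TTV,
    across-trial variance) at every timepoint."""
    trials = np.asarray(trials, dtype=float)
    return trials.mean(axis=0), trials.var(axis=0, ddof=1)

# Synthetic data: from sample 50 the stimulus adds a fixed evoked
# response while damping ongoing fluctuations -- so TTM rises and TTV
# drops after stimulus onset (variability quenching).
rng = np.random.default_rng(0)
n_trials, n_t, onset = 40, 100, 50
trials = rng.normal(0.0, 1.0, (n_trials, n_t))
trials[:, onset:] *= 0.4          # quench the ongoing variability
trials[:, onset:] += 2.0          # add the evoked response
ttm, ttv = trial_stats(trials)
```

The study's finding corresponds to the post-onset TTV drop being present during wakefulness but absent under sedation and anesthesia.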
Collapse
Affiliation(s)
- Zirui Huang
- Department of Anesthesiology and Center for Consciousness Science, University of Michigan, Ann Arbor, Michigan
| | - Jun Zhang
- Department of Anesthesiology, Huashan Hospital, Fudan University, Shanghai, People's Republic of China
| | - Jinsong Wu
- Neurological Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
| | - Xiaoge Liu
- Department of Anesthesiology, Huashan Hospital, Fudan University, Shanghai, People's Republic of China
| | - Jianghui Xu
- Department of Anesthesiology, Huashan Hospital, Fudan University, Shanghai, People's Republic of China
| | - Jianfeng Zhang
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
| | - Pengmin Qin
- School of Psychology, South China Normal University, Guangzhou, People's Republic of China
| | - Rui Dai
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, People's Republic of China
| | - Zhong Yang
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, People's Republic of China
| | - Ying Mao
- Neurological Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
| | - Anthony G Hudetz
- Department of Anesthesiology and Center for Consciousness Science, University of Michigan, Ann Arbor, Michigan
| | - Georg Northoff
- Institute of Mental Health Research, University of Ottawa, Ottawa, Ontario, Canada; Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, People's Republic of China; Mental Health Centre, Zhejiang University School of Medicine, Hangzhou, People's Republic of China
| |
Collapse
|
49
|
Saberi-Moghadam S, Simi A, Setareh H, Mikhail C, Tafti M. In vitro Cortical Network Firing is Homeostatically Regulated: A Model for Sleep Regulation. Sci Rep 2018; 8:6297. [PMID: 29674729 PMCID: PMC5908861 DOI: 10.1038/s41598-018-24339-6] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 03/27/2018] [Indexed: 12/14/2022] Open
Abstract
Prolonged wakefulness leads to a homeostatic response manifested in increased amplitude and number of electroencephalogram (EEG) slow waves during recovery sleep. Cortical networks show a slow oscillation when the excitatory inputs are reduced (during slow wave sleep, anesthesia), or absent (in vitro preparations). It was recently shown that a homeostatic response to electrical stimulation can be induced in cortical cultures. Here we used cortical cultures grown on microelectrode arrays and stimulated them with a cocktail of waking neuromodulators. We found that recovery from stimulation resulted in a dose-dependent homeostatic response. Specifically, the inter-burst intervals decreased, the burst duration increased, and the network showed higher cross-correlation and strong phasic synchronized burst activity. Spectral power below 1.75 Hz significantly increased, and the increase was related to steeper slopes of bursts. Computer simulation suggested that a small number of clustered neurons could potently drive the behavior of the network both at baseline and during recovery. Thus, this in vitro model appears valuable for dissecting network mechanisms of sleep homeostasis.
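Burst statistics of the kind reported (inter-burst intervals, burst durations) are typically derived by grouping spikes whose inter-spike intervals fall below a threshold; a generic sketch, where the threshold and minimum burst size are arbitrary choices rather than the study's parameters:

```python
import numpy as np

def detect_bursts(spike_times, max_isi=0.1, min_spikes=3):
    """Group spikes into bursts: consecutive spikes closer than max_isi
    (seconds) belong to one burst, and a burst needs at least min_spikes
    spikes. Returns (start, end) times and the inter-burst intervals."""
    spike_times = np.asarray(spike_times, dtype=float)
    # split the train wherever the gap exceeds the ISI threshold
    splits = np.where(np.diff(spike_times) > max_isi)[0] + 1
    groups = np.split(spike_times, splits)
    bursts = [(g[0], g[-1]) for g in groups if len(g) >= min_spikes]
    ibis = [b[0] - a[1] for a, b in zip(bursts, bursts[1:])]
    return bursts, ibis

# Two bursts about 1 s apart, plus an isolated spike that is ignored.
spikes = [0.00, 0.05, 0.10, 0.15, 1.20, 1.25, 1.30, 2.80]
bursts, ibis = detect_bursts(spikes)
```

Shorter inter-burst intervals and longer burst durations, as reported after neuromodulator stimulation, would show up directly in `ibis` and in the per-burst `end - start` spans.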
Collapse
Affiliation(s)
- Sohrab Saberi-Moghadam
- Center for Integrative Genomics, Faculty of Biology and Medicine, University of Lausanne, Génopode, 1015, Lausanne, Switzerland
| | - Alessandro Simi
- Center for Integrative Genomics, Faculty of Biology and Medicine, University of Lausanne, Génopode, 1015, Lausanne, Switzerland
| | - Hesam Setareh
- Laboratory of Computational Neuroscience, School of Computer and Communication Sciences, EPFL, 1015, Lausanne, Switzerland
| | - Cyril Mikhail
- Center for Integrative Genomics, Faculty of Biology and Medicine, University of Lausanne, Génopode, 1015, Lausanne, Switzerland
| | - Mehdi Tafti
- Center for Integrative Genomics, Faculty of Biology and Medicine, University of Lausanne, Génopode, 1015, Lausanne, Switzerland; Department of Physiology, Faculty of Biology and Medicine, University of Lausanne, Bugnon 7, 1005, Lausanne, Switzerland
| |
Collapse
|
50
|
Rost T, Deger M, Nawrot MP. Winnerless competition in clustered balanced networks: inhibitory assemblies do the trick. Biol Cybern 2018; 112:81-98. [PMID: 29075845 PMCID: PMC5908874 DOI: 10.1007/s00422-017-0737-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2017] [Accepted: 10/11/2017] [Indexed: 06/07/2023]
Abstract
Balanced networks are a frequently employed basic model for neuronal networks in the mammalian neocortex. Large numbers of excitatory and inhibitory neurons are recurrently connected so that the numerous positive and negative inputs that each neuron receives cancel out on average. Neuronal firing is therefore driven by fluctuations in the input and resembles the irregular and asynchronous activity observed in cortical in vivo data. Recently, the balanced network model has been extended to accommodate clusters of strongly interconnected excitatory neurons in order to explain persistent activity in working memory-related tasks. This clustered topology introduces multistability and winnerless competition between attractors and can capture the high trial-to-trial variability and its reduction during stimulation that has been found experimentally. In this prospect article, we review the mean field description of balanced networks of binary neurons and apply the theory to clustered networks. We show that the stable fixed points of networks with clustered excitatory connectivity tend quickly towards firing rate saturation, which is generally inconsistent with experimental data. To remedy this shortcoming, we then present a novel perspective on networks with locally balanced clusters of both excitatory and inhibitory neuron populations. This approach allows for true multistability and moderate firing rates in activated clusters over a wide range of parameters. Our findings are supported by mean field theory and numerical network simulations. Finally, we discuss possible applications of the concept of joint excitatory and inhibitory clustering in future cortical network modelling studies.
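The mean-field argument, that excitatory-only clusters run into rate saturation while joint E-I clusters admit moderate activated rates, can be illustrated with a damped fixed-point iteration on population rates. The gain function and weights below are toy choices, not the paper's binary-network theory:

```python
import numpy as np

def fixed_point_rates(W, theta=0.5, beta=4.0, n_iter=2000, damp=0.1):
    """Damped fixed-point iteration m <- f(W m) for population rates in
    [0, 1], with a sigmoidal gain f of threshold theta and slope beta.
    The damping factor keeps the iteration from oscillating."""
    f = lambda x: 1.0 / (1.0 + np.exp(-beta * (x - theta)))
    m = np.full(W.shape[0], 0.5)          # start from mid-range rates
    for _ in range(n_iter):
        m = (1.0 - damp) * m + damp * f(W @ m)
    return m

# A purely excitatory cluster with strong self-excitation saturates ...
m_exc = fixed_point_rates(np.array([[2.0]]))
# ... whereas pairing it with an inhibitory assembly (negative columns
# are inhibitory input) keeps the fixed-point rate moderate.
m_ei = fixed_point_rates(np.array([[2.0, -1.5],
                                   [2.0, -0.5]]))
```

In this toy, the inhibitory population tracks the excitatory one and pulls its input back below threshold, which is the qualitative role the paper assigns to clustered inhibitory assemblies.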
Collapse
Affiliation(s)
- Thomas Rost
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Moritz Deger
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Martin P Nawrot
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany.
| |
Collapse
|