1. Morales-Gregorio A, Kurth AC, Ito J, Kleinjohann A, Barthélemy FV, Brochier T, Grün S, van Albada SJ. Neural manifolds in V1 change with top-down signals from V4 targeting the foveal region. Cell Rep 2024; 43:114371. PMID: 38923458. DOI: 10.1016/j.celrep.2024.114371.
Abstract
High-dimensional brain activity is often organized into lower-dimensional neural manifolds. However, the neural manifolds of the visual cortex remain understudied. Here, we study large-scale multi-electrode electrophysiological recordings of macaque (Macaca mulatta) areas V1, V4, and DP with a high spatiotemporal resolution. We find that the population activity of V1 contains two separate neural manifolds, which correlate strongly with eye closure (eyes open/closed) and have distinct dimensionalities. Moreover, we find strong top-down signals from V4 to V1, particularly to the foveal region of V1, which are significantly stronger during the eyes-open periods. Finally, in silico simulations of a balanced spiking neuron network qualitatively reproduce the experimental findings. Taken together, our analyses and simulations suggest that top-down signals modulate the population activity of V1. We postulate that the top-down modulation during the eyes-open periods prepares V1 for fast and efficient visual responses, resulting in a type of visual stand-by state.
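The notion of two manifolds with distinct dimensionalities can be made concrete with a small sketch (toy data, not the authors' pipeline): population activity driven by a few latent factors lies near a low-dimensional manifold, and the number of principal components needed to capture most of the variance separates a richer regime from a simpler one. All sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 2000

def simulate_state(n_latent, offset):
    # Population activity driven by a few latent factors lies near a manifold
    # whose dimensionality is set by the number of factors.
    latents = rng.normal(size=(n_samples, n_latent))
    mixing = rng.normal(size=(n_latent, n_neurons))
    return offset + latents @ mixing + 0.1 * rng.normal(size=(n_samples, n_neurons))

def n_components_90(X):
    # Number of principal components needed to explain 90% of the variance.
    X = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, 0.90) + 1)

state_a = simulate_state(n_latent=12, offset=2.0)   # e.g. "eyes open": richer
state_b = simulate_state(n_latent=3, offset=-2.0)   # e.g. "eyes closed": simpler

# The simpler state needs far fewer components than the richer one.
print(n_components_90(state_a), n_components_90(state_b))
```

The offsets mimic the finding that the two regimes occupy separate regions of state space; the latent counts mimic their distinct dimensionalities.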
Affiliation(s)
- Aitor Morales-Gregorio
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany.
- Anno C Kurth
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Junji Ito
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- Alexander Kleinjohann
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Frédéric V Barthélemy
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Thomas Brochier
- Institut de Neurosciences de la Timone (INT), CNRS and Aix-Marseille Université, Marseille, France
- Sonja Grün
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany; JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Sacha J van Albada
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; Institute of Zoology, University of Cologne, Cologne, Germany
2. Northoff G, Zilio F, Zhang J. Beyond task response - Pre-stimulus activity modulates contents of consciousness. Phys Life Rev 2024; 49:19-37. PMID: 38492473. DOI: 10.1016/j.plrev.2024.03.002.
Abstract
The current discussion on the neural correlates of the contents of consciousness (NCCc) focuses mainly on the post-stimulus period of task-related activity. This neglects the substantial impact of the spontaneous or ongoing activity of the brain as manifest in pre-stimulus activity. Does the interaction of pre- and post-stimulus activity shape the contents of consciousness? Addressing this gap in our knowledge, we review and converge two recent lines of findings, that is, pre-stimulus alpha power and pre- and post-stimulus alpha trial-to-trial variability (TTV). The data show that pre-stimulus alpha power modulates post-stimulus activity including specifically the subjective features of conscious contents like confidence and vividness. At the same time, alpha pre-stimulus variability shapes post-stimulus TTV reduction including the associated contents of consciousness. We propose that non-additive rather than merely additive interaction of the internal pre-stimulus activity with the external stimulus in the alpha band is key for contents to become conscious. This is mediated by mechanisms on different levels including neurophysiological, neurocomputational, neurodynamic, neuropsychological and neurophenomenal levels. Overall, considering the interplay of pre-stimulus intrinsic and post-stimulus extrinsic activity across wider timescales, not just evoked responses in the post-stimulus period, is critical for identifying neural correlates of consciousness. This is well in line with processing-based accounts and especially the Temporo-spatial theory of consciousness (TTC).
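As a concrete reference point, trial-to-trial variability (TTV) is simply the across-trial variance of the signal at each time point, and "TTV reduction" means that this variance drops after stimulus onset. A toy sketch (simulated data with assumed parameters, not the reviewed studies' analyses):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_pre, n_post = 200, 100, 100
fs = 250.0                                   # sampling rate in Hz (assumed)

# Pre-stimulus: ongoing 10 Hz (alpha) fluctuations with a random phase per trial.
t_pre = np.arange(n_pre) / fs
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))
pre = np.sin(2 * np.pi * 10 * t_pre + phase) + 0.3 * rng.normal(size=(n_trials, n_pre))

# Post-stimulus: a stereotyped evoked response plus attenuated ongoing activity.
t_post = np.arange(n_post) / fs
evoked = np.exp(-t_post / 0.15) * np.sin(2 * np.pi * 5 * t_post)
post = evoked + 0.3 * np.sin(2 * np.pi * 10 * t_post + phase) \
       + 0.3 * rng.normal(size=(n_trials, n_post))

ttv_pre = pre.var(axis=0).mean()    # across-trial variance, averaged over time
ttv_post = post.var(axis=0).mean()
print(ttv_pre > ttv_post)           # → True: variability is quenched post-stimulus
```

The quenching arises because the deterministic evoked response contributes no across-trial variance, while the phase-variable ongoing component is attenuated.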
Affiliation(s)
- Georg Northoff
- University of Ottawa, Institute of Mental Health Research at the Royal Ottawa Hospital, Ottawa, Canada.
- Federico Zilio
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua, Padua, Italy
- Jianfeng Zhang
- Center for Brain Disorders and Cognitive Sciences, School of Psychology, Shenzhen University, Shenzhen, China.
3. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024; 20:e1012220. PMID: 38950068. PMCID: PMC11244818. DOI: 10.1371/journal.pcbi.1012220.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse has only access to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule which co-exists with ongoing neural dynamics.
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
- Giancarlo La Camera
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
4. Lindsay AJ, Gallello I, Caracheo BF, Seamans JK. Reconfiguration of Behavioral Signals in the Anterior Cingulate Cortex Based on Emotional State. J Neurosci 2024; 44:e1670232024. PMID: 38637155. PMCID: PMC11154859. DOI: 10.1523/jneurosci.1670-23.2024.
Abstract
Behaviors and their execution depend on the context and emotional state in which they are performed. The contextual modulation of behavior likely relies on regions such as the anterior cingulate cortex (ACC) that multiplex information about emotional/autonomic states and behaviors. The objective of the present study was to understand how the representations of behaviors by ACC neurons become modified when performed in different emotional states. A pipeline of machine learning techniques was developed to categorize and classify complex, spontaneous behaviors in male rats from video. This pipeline, termed Hierarchical Unsupervised Behavioural Discovery Tool (HUB-DT), discovered a range of statistically separable behaviors during a task in which motivationally significant outcomes were delivered in blocks of trials that created three unique "emotional contexts." HUB-DT was capable of detecting behaviors specific to each emotional context and was able to identify and segregate the portions of a neural signal related to a behavior and to emotional context. Overall, ∼10× as many neurons responded to behaviors in a contextually dependent versus a fixed manner, highlighting the extreme impact of emotional state on representations of behaviors that were precisely defined based on detailed analyses of limb kinematics. This type of modulation may be a key mechanism that allows the ACC to modify the behavioral output based on emotional states and contextual demands.
Affiliation(s)
- Adrian J Lindsay
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
- Isabella Gallello
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
- Barak F Caracheo
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
- Jeremy K Seamans
- Department of Psychiatry, Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia V6T2B5, Canada
5. Papadopoulos L, Jo S, Zumwalt K, Wehr M, McCormick DA, Mazzucato L. Modulation of metastable ensemble dynamics explains optimal coding at moderate arousal in auditory cortex. bioRxiv [Preprint] 2024:2024.04.04.588209. PMID: 38617286. PMCID: PMC11014582. DOI: 10.1101/2024.04.04.588209.
Abstract
Performance during perceptual decision-making exhibits an inverted-U relationship with arousal, but the underlying network mechanisms remain unclear. Here, we recorded from auditory cortex (A1) of behaving mice during passive tone presentation, while tracking arousal via pupillometry. We found that tone discriminability in A1 ensembles was optimal at intermediate arousal, revealing a population-level neural correlate of the inverted-U relationship. We explained this arousal-dependent coding using a spiking network model with a clustered architecture. Specifically, we show that optimal stimulus discriminability is achieved near a transition between a multi-attractor phase with metastable cluster dynamics (low arousal) and a single-attractor phase (high arousal). Additional signatures of this transition include arousal-induced reductions of overall neural variability and the extent of stimulus-induced variability quenching, which we observed in the empirical data. Altogether, this study elucidates computational principles underlying interactions between pupil-linked arousal, sensory processing, and neural variability, and suggests a role for phase transitions in explaining nonlinear modulations of cortical computations.
Affiliation(s)
- Suhyun Jo
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Kevin Zumwalt
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Michael Wehr
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Psychology, University of Oregon, Eugene, Oregon
- David A McCormick
- Institute of Neuroscience, University of Oregon, Eugene, Oregon and Department of Biology, University of Oregon, Eugene, Oregon
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, Oregon
- Department of Biology, University of Oregon, Eugene, Oregon
- Department of Mathematics, University of Oregon, Eugene, Oregon and Department of Physics, University of Oregon, Eugene, Oregon
6. Crosser JT, Brinkman BAW. Applications of information geometry to spiking neural network activity. Phys Rev E 2024; 109:024302. PMID: 38491696. DOI: 10.1103/physreve.109.024302.
Abstract
The space of possible behaviors that complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, although the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model outputs change as a function of their parameters, giving a quantitative notion of "distances" between outputs. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
Affiliation(s)
- Jacob T Crosser
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
- Braden A W Brinkman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, New York 11794, USA and Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York 11794, USA
7. Trepka E, Spitmaan M, Qi XL, Constantinidis C, Soltani A. Training-Dependent Gradients of Timescales of Neural Dynamics in the Primate Prefrontal Cortex and Their Contributions to Working Memory. J Neurosci 2024; 44:e2442212023. PMID: 37973375. PMCID: PMC10866190. DOI: 10.1523/jneurosci.2442-21.2023.
Abstract
Cortical neurons exhibit multiple timescales related to dynamics of spontaneous fluctuations (intrinsic timescales) and response to task events (seasonal timescales) in addition to selectivity to task-relevant signals. These timescales increase systematically across the cortical hierarchy, for example, from parietal to prefrontal and cingulate cortex, pointing to their role in cortical computations. It is currently unknown whether these timescales are inherent properties of neurons and/or depend on training in a specific task and if the latter, how their modulations contribute to task performance. To address these questions, we analyzed single-cell recordings within five subregions of the prefrontal cortex (PFC) of male macaques before and after training on a working-memory task. We found fine-grained but opposite gradients of intrinsic and seasonal timescales that mainly appeared after training. Intrinsic timescales decreased whereas seasonal timescales increased from posterior to anterior subregions within both dorsal and ventral PFC. Moreover, training was accompanied by increases in proportions of neurons that exhibited intrinsic and seasonal timescales. These effects were comparable to the emergence of response selectivity due to training. Finally, task selectivity accompanied opposite neural dynamics such that neurons with task-relevant selectivity exhibited longer intrinsic and shorter seasonal timescales. Notably, neurons with longer intrinsic and shorter seasonal timescales exhibited superior population-level coding, but these advantages extended to the delay period mainly after training. Together, our results provide evidence for plastic, fine-grained gradients of timescales within PFC that can influence both single-cell and population coding, pointing to the importance of these timescales in understanding cognition.
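As a concrete reference point, intrinsic timescales are commonly estimated by fitting an exponential decay to the autocorrelation of spontaneous spike counts. The sketch below uses a toy AR(1) surrogate and a log-linear fit; the paper's estimator may differ in detail, and all parameters (bin size, timescale) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
tau_true, dt, n = 200.0, 50.0, 200_000     # ms; 50 ms count bins (assumed)

# AR(1) surrogate whose autocorrelation decays as exp(-lag / tau_true).
rho = np.exp(-dt / tau_true)
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho ** 2) * noise[i]

# Autocorrelation at lags 1..10 bins, then a log-linear fit of the decay.
lags = np.arange(1, 11)
x0 = x - x.mean()
acf = np.array([np.dot(x0[:-k], x0[k:]) / np.dot(x0, x0) for k in lags])
slope, _ = np.polyfit(lags * dt, np.log(acf), 1)
tau_hat = -1.0 / slope
print(tau_hat)   # close to the true 200 ms
```

A gradient of intrinsic timescales across areas or subregions corresponds to a gradient of the fitted tau.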
Affiliation(s)
- Ethan Trepka
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover 03755, New Hampshire
- Neurosciences Program, Stanford University, Stanford 94305, California
- Mehran Spitmaan
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover 03755, New Hampshire
- Xue-Lian Qi
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Winston-Salem 27157, North Carolina
- Alireza Soltani
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover 03755, New Hampshire
8. Stern M, Istrate N, Mazzucato L. A reservoir of timescales emerges in recurrent circuits with heterogeneous neural assemblies. eLife 2023; 12:e86552. PMID: 38084779. PMCID: PMC10810607. DOI: 10.7554/elife.86552.
Abstract
The temporal activity of many physical and biological systems, from complex networks to neural circuits, exhibits fluctuations simultaneously varying over a large range of timescales. Long-tailed distributions of intrinsic timescales have been observed across neurons simultaneously recorded within the same cortical circuit. The mechanisms leading to this striking temporal heterogeneity are yet unknown. Here, we show that neural circuits, endowed with heterogeneous neural assemblies of different sizes, naturally generate multiple timescales of activity spanning several orders of magnitude. We develop an analytical theory using rate networks, supported by simulations of spiking networks with cell-type specific connectivity, to explain how neural timescales depend on assembly size and show that our model can naturally explain the long-tailed timescale distribution observed in the awake primate cortex. When driving recurrent networks of heterogeneous neural assemblies by a time-dependent broadband input, we found that large and small assemblies preferentially entrain slow and fast spectral components of the input, respectively. Our results suggest that heterogeneous assemblies can provide a biologically plausible mechanism for neural circuits to demix complex temporal input signals by transforming temporal into spatial neural codes via frequency-selective neural assemblies.
Affiliation(s)
- Merav Stern
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolae Istrate
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Department of Physics, University of Oregon, Eugene, United States
- Luca Mazzucato
- Institute of Neuroscience, University of Oregon, Eugene, United States
- Departments of Physics, Mathematics, and Biology, University of Oregon, Eugene, United States
9. Hoang H, Tsutsumi S, Matsuzaki M, Kano M, Kawato M, Kitamura K, Toyama K. Dynamic organization of cerebellar climbing fiber response and synchrony in multiple functional components reduces dimensions for reinforcement learning. eLife 2023; 12:e86340. PMID: 37712651. PMCID: PMC10531405. DOI: 10.7554/elife.86340.
Abstract
Cerebellar climbing fibers convey diverse signals, but how they are organized in the compartmental structure of the cerebellar cortex during learning remains largely unclear. We analyzed a large amount of coordinate-localized two-photon imaging data from cerebellar Crus II in mice undergoing 'Go/No-go' reinforcement learning. Tensor component analysis revealed that a majority of climbing fiber inputs to Purkinje cells were reduced to only four functional components, corresponding to accurate timing control of motor initiation related to a Go cue, cognitive error-based learning, reward processing, and inhibition of erroneous behaviors after a No-go cue. Changes in neural activities during learning of the first two components were correlated with corresponding changes in timing control and error learning across animals, indirectly suggesting causal relationships. Spatial distribution of these components coincided well with boundaries of Aldolase-C/zebrin II expression in Purkinje cells, whereas several components are mixed in single neurons. Synchronization within individual components was bidirectionally regulated according to specific task contexts and learning stages. These findings suggest that, in close collaborations with other brain regions including the inferior olive nucleus, the cerebellum, based on anatomical compartments, reduces dimensions of the learning space by dynamically organizing multiple functional components, a feature that may inspire new-generation AI designs.
Affiliation(s)
- Huu Hoang
- ATR Neural Information Analysis Laboratories, Kyoto, Japan
- Masanobu Kano
- Department of Neurophysiology, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Mitsuo Kawato
- ATR Brain Information Communication Research Laboratory Group, Kyoto, Japan
- Kazuo Kitamura
- Department of Neurophysiology, University of Yamanashi, Kofu, Japan
10. Naik S, Dehaene-Lambertz G, Battaglia D. Repairing Artifacts in Neural Activity Recordings Using Low-Rank Matrix Estimation. Sensors (Basel) 2023; 23:4847. PMID: 37430760. DOI: 10.3390/s23104847.
Abstract
Electrophysiology recordings are frequently affected by artifacts (e.g., subject motion or eye movements), which reduces the number of available trials and affects the statistical power. When artifacts are unavoidable and data are scarce, signal reconstruction algorithms that allow for the retention of sufficient trials become crucial. Here, we present one such algorithm that makes use of large spatiotemporal correlations in neural signals and solves the low-rank matrix completion problem, to fix artifactual entries. The method uses a gradient descent algorithm in lower dimensions to learn the missing entries and provide faithful reconstruction of signals. We carried out numerical simulations to benchmark the method and estimate optimal hyperparameters for actual EEG data. The fidelity of reconstruction was assessed by detecting event-related potentials (ERP) from a highly artifacted EEG time series from human infants. The proposed method significantly improved the standardized error of the mean in ERP group analysis and a between-trial variability analysis compared to a state-of-the-art interpolation technique. This improvement increased the statistical power and revealed significant effects that would have been deemed insignificant without reconstruction. The method can be applied to any time-continuous neural signal where artifacts are sparse and spread out across epochs and channels, increasing data retention and statistical power.
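The core idea can be sketched in a few lines (a toy illustration under assumed sizes and step size, not the authors' implementation): treat artifacted entries as missing, factor the recording as a low-rank product, and descend the gradient of the reconstruction error on the observed entries only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch, n_t, rank = 30, 400, 4               # channels, time points, assumed rank

# Ground-truth low-rank "recording" plus a mask of artifacted (missing) entries.
truth = rng.normal(size=(n_ch, rank)) @ rng.normal(size=(rank, n_t))
mask = rng.random(size=truth.shape) > 0.2  # True = observed (~80% of entries)
data = np.where(mask, truth, 0.0)

# Warm-start the factors from a truncated SVD of the zero-filled matrix, then
# refine by gradient descent on || mask * (U V - data) ||^2.
Us, s, Vt = np.linalg.svd(data, full_matrices=False)
U = Us[:, :rank] * np.sqrt(s[:rank])
V = np.sqrt(s[:rank])[:, None] * Vt[:rank]

lr = 0.005                                 # step size hand-tuned for this toy
for _ in range(1000):
    resid = mask * (U @ V - data)
    gU = resid @ V.T
    gV = U.T @ resid
    U -= lr * gU
    V -= lr * gV

# Relative error on the missing (artifacted) entries only.
rel_err = np.linalg.norm((U @ V - truth)[~mask]) / np.linalg.norm(truth[~mask])
print(rel_err < 0.1)                       # missing entries are well recovered
```

Real EEG is only approximately low-rank, so in practice the rank and learning rate become hyperparameters to tune, as the abstract describes.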
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Demian Battaglia
- Institut de Neurosciences des Systèmes, U1106, Centre National de la Recherche Scientifique (CNRS), Aix-Marseille Université, F-13005 Marseille, France
- Institute for Advanced Studies, University of Strasbourg (USIAS), F-67000 Strasbourg, France
11. Boucher-Routhier M, Thivierge JP. A deep generative adversarial network capturing complex spiral waves in disinhibited circuits of the cerebral cortex. BMC Neurosci 2023; 24:22. PMID: 36964493. PMCID: PMC10039524. DOI: 10.1186/s12868-023-00792-6.
Abstract
BACKGROUND In the cerebral cortex, disinhibited activity is characterized by propagating waves that spread across neural tissue. In this pathological state, a widely reported form of activity is the spiral wave, which travels in a circular pattern around a fixed spatial locus termed the center of mass. Spiral waves exhibit stereotypical activity and involve broad patterns of co-fluctuations, suggesting that they may be of lower complexity than healthy activity. RESULTS To evaluate this hypothesis, we performed dense multi-electrode recordings of cortical networks where disinhibition was induced by perfusing a pro-epileptiform solution containing 4-Aminopyridine as well as increased potassium and decreased magnesium. Spiral waves were identified based on a spatially delimited center of mass and a broad distribution of instantaneous phases across electrodes. Individual waves were decomposed into "snapshots" that captured instantaneous neural activation across the entire network. The complexity of these snapshots was examined using a measure termed the participation ratio. Contrary to our expectations, an eigenspectrum analysis of these snapshots revealed a broad distribution of eigenvalues and an increase in complexity compared to baseline networks. A deep generative adversarial network was trained to generate novel exemplars of snapshots that closely captured cortical spiral waves. These synthetic waves replicated key features of experimental data including a tight center of mass, a broad eigenvalue distribution, spatially-dependent correlations, and a high complexity. By adjusting the input to the model, new samples were generated that deviated in systematic ways from the experimental data, thus allowing the exploration of a broad range of states from healthy to pathologically disinhibited neural networks. CONCLUSIONS Together, results show that the complexity of population activity serves as a marker along a continuum from healthy to disinhibited brain states. The proposed generative adversarial network opens avenues for replicating the dynamics of cortical seizures and accelerating the design of optimal neurostimulation aimed at suppressing pathological brain activity.
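The participation ratio used in this abstract has a simple closed form: with eigenvalues λ_i of the population covariance matrix, PR = (Σλ_i)² / Σλ_i². It equals N when all N eigenvalues are equal and approaches 1 when a single component dominates. A quick toy illustration (simulated data, not the paper's recordings):

```python
import numpy as np

def participation_ratio(X):
    """X: samples x channels. PR of the channel covariance eigenspectrum."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / np.sum(lam ** 2)

rng = np.random.default_rng(4)
# Broad co-fluctuations: one global mode shared by all channels -> PR near 1.
shared = rng.normal(size=(5000, 1)) @ np.ones((1, 20)) \
         + 0.05 * rng.normal(size=(5000, 20))
# Independent channels: roughly equal eigenvalues -> PR near the channel count.
independent = rng.normal(size=(5000, 20))

print(participation_ratio(shared) < 2)        # → True
print(participation_ratio(independent) > 15)  # → True
```

On this measure, the surprise reported above is that spiral-wave snapshots had a broad eigenspectrum, i.e. a higher PR than a single dominant mode would give.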
Affiliation(s)
- Megan Boucher-Routhier
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, ON, K1N 6N5, Canada
- Jean-Philippe Thivierge
- School of Psychology, University of Ottawa, 156 Jean-Jacques Lussier, Ottawa, ON, K1N 6N5, Canada.
- University of Ottawa Brain and Mind Research Institute, 451 Smyth Rd., Ottawa, ON, K1H 8M5, Canada.
12. DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. PMID: 36630961. PMCID: PMC10118067. DOI: 10.1016/j.neuron.2022.12.007.
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
13
|
Temporal progression along discrete coding states during decision-making in the mouse gustatory cortex. PLoS Comput Biol 2023; 19:e1010865. [PMID: 36749734 PMCID: PMC9904478 DOI: 10.1371/journal.pcbi.1010865] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 01/10/2023] [Indexed: 02/08/2023] Open
Abstract
The mouse gustatory cortex (GC) is involved in taste-guided decision-making in addition to sensory processing. Rodent GC exhibits metastable neural dynamics during ongoing and stimulus-evoked activity, but how these dynamics evolve in the context of a taste-based decision-making task remains unclear. Here we employ analytical and modeling approaches to i) extract metastable dynamics in ensemble spiking activity recorded from the GC of mice performing a perceptual decision-making task; ii) investigate the computational mechanisms underlying GC metastability in this task; and iii) establish a relationship between GC dynamics and behavioral performance. Our results show that activity in GC during perceptual decision-making is metastable and that this metastability may serve as a substrate for sequentially encoding sensory, abstract cue, and decision information over time. Perturbations of the model's metastable dynamics indicate that boosting inhibition in different coding epochs differentially impacts network performance, explaining a counterintuitive effect of GC optogenetic silencing on mouse behavior.
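The HMM-based extraction of metastable states described above can be illustrated with a minimal Viterbi decoder on a surrogate symbol sequence. The transition and emission probabilities below are illustrative assumptions, not values fitted to GC data; real analyses would fit these by expectation-maximization on binned ensemble spike counts.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely hidden-state path for a discrete-emission HMM.
    obs: observed symbol indices; log_A: log transition matrix (S x S);
    log_B: log emission matrix (S states x symbols); log_pi: log initial probs."""
    T, S = len(obs), log_A.shape[0]
    delta = np.zeros((T, S))          # best log-probability ending in each state
    psi = np.zeros((T, S), dtype=int) # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: from i to j
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):    # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Two "sticky" metastable states with distinct emission profiles (toy values)
A = np.array([[0.98, 0.02], [0.02, 0.98]])
B = np.array([[0.7, 0.2, 0.1],    # state 0 favors symbol 0
              [0.1, 0.2, 0.7]])   # state 1 favors symbol 2
pi = np.array([0.5, 0.5])

obs = np.array([0] * 50 + [2] * 50)  # an abrupt change in the emitted pattern
states = viterbi(obs, np.log(A), np.log(B), np.log(pi))
```

The sticky diagonal of the transition matrix is what makes the decoded states metastable: brief emission fluctuations do not trigger a state change, but a sustained change in the observation statistics does.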
Collapse
|
14
|
Suryadi, Cheng RK, Birkett E, Jesuthasan S, Chew LY. Dynamics and potential significance of spontaneous activity in the habenula. eNeuro 2022; 9:ENEURO.0287-21.2022. [PMID: 35981869 PMCID: PMC9450562 DOI: 10.1523/eneuro.0287-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 05/31/2022] [Accepted: 06/27/2022] [Indexed: 11/21/2022] Open
Abstract
The habenula is an evolutionarily conserved structure of the vertebrate brain that is essential for behavioural flexibility and mood control. It is spontaneously active and is able to access diverse states when the animal is exposed to sensory stimuli. Here we investigate the dynamics of habenula spontaneous activity, to gain insight into how sensitivity is optimized. Two-photon calcium imaging was performed in resting zebrafish larvae at single cell resolution. An analysis of avalanches of inferred spikes suggests that the habenula is subcritical. Activity had low covariance and a small mean, arguing against dynamic criticality. A multiple regression estimator of autocorrelation time suggests that the habenula is neither fully asynchronous nor perfectly critical, but is reverberating. This pattern of dynamics may enable integration of information and high flexibility in the tuning of network properties, thus providing a potential mechanism for the optimal responses to a changing environment.

Significance Statement: Spontaneous activity in neurons shapes the response to stimuli. One structure with a high level of spontaneous neuronal activity is the habenula, a regulator of broadly acting neuromodulators involved in mood and learning. How does this activity influence habenula function? We show here that the habenula of a resting animal is near criticality, in a state termed reverberation. This pattern of dynamics is consistent with high sensitivity and flexibility, and may enable the habenula to respond optimally to a wide range of stimuli.
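The avalanche analysis mentioned above can be sketched on surrogate data: bin the population activity, define avalanches as contiguous runs of nonzero bins, and estimate the branching ratio (descendants per ancestor), which falls below 1 for subcritical dynamics. The drive rate and true branching ratio below are arbitrary choices, and this naive regression estimator is known to be biased under spatial subsampling, so it is only a sketch of the idea.

```python
import numpy as np

def branching_ratio(counts):
    """Naive branching-ratio estimate: slope (through the origin) of the
    regression of A(t+1) on A(t) over bins with A(t) > 0.
    Values < 1 indicate subcritical dynamics."""
    a, b = counts[:-1], counts[1:]
    mask = a > 0
    return float(np.sum(a[mask] * b[mask]) / np.sum(a[mask] ** 2))

def avalanche_sizes(counts):
    """Total activity in each contiguous run of nonzero time bins."""
    sizes, cur = [], 0
    for c in counts:
        if c > 0:
            cur += c
        elif cur > 0:
            sizes.append(cur)
            cur = 0
    if cur > 0:
        sizes.append(cur)
    return sizes

# Illustrative surrogate: a subcritical branching process with sparse drive
rng = np.random.default_rng(1)
m_true, T = 0.8, 20000
counts = np.zeros(T, dtype=int)
for t in range(1, T):
    drive = rng.poisson(0.05)                        # sparse external input
    counts[t] = rng.poisson(m_true * counts[t - 1]) + drive

m_hat = branching_ratio(counts)   # should land somewhat below 1
```

Because the external drive also contributes to A(t+1), the naive slope slightly overestimates the true branching ratio; estimators such as multiple-regression/multistep approaches (as used in the paper) correct for this.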
Collapse
Affiliation(s)
- Suryadi
- School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore 637371
| | - Ruey-Kuang Cheng
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 636921
| | - Elliot Birkett
- Institute of Molecular and Cell Biology, Singapore 138673
- School of Biosciences, University of Sheffield, Sheffield, United Kingdom
| | - Suresh Jesuthasan
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 636921
- Institute of Molecular and Cell Biology, Singapore 138673
| | - Lock Yue Chew
- School of Physical & Mathematical Sciences, Nanyang Technological University, Singapore 637371
- Complexity Institute, Nanyang Technological University, Singapore 637335
| |
Collapse
|
15
|
Recanatesi S, Bradde S, Balasubramanian V, Steinmetz NA, Shea-Brown E. A scale-dependent measure of system dimensionality. PATTERNS 2022; 3:100555. [PMID: 36033586 PMCID: PMC9403367 DOI: 10.1016/j.patter.2022.100555] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 04/12/2022] [Accepted: 06/24/2022] [Indexed: 11/28/2022]
Abstract
A fundamental problem in science is uncovering the effective number of degrees of freedom in a complex system: its dimensionality. A system’s dimensionality depends on its spatiotemporal scale. Here, we introduce a scale-dependent generalization of a classic enumeration of latent variables, the participation ratio. We demonstrate how the scale-dependent participation ratio identifies the appropriate dimension at local, intermediate, and global scales in several systems such as the Lorenz attractor, hidden Markov models, and switching linear dynamical systems. We show analytically how, at different limiting scales, the scale-dependent participation ratio relates to well-established measures of dimensionality. This measure applied in neural population recordings across multiple brain areas and brain states shows fundamental trends in the dimensionality of neural activity—for example, in behaviorally engaged versus spontaneous states. Our novel method unifies widely used measures of dimensionality and applies broadly to multivariate data across several fields of science.

Highlights: The scale-dependent dimensionality unifies widely used measures of dimensionality. Dynamical systems show distinct dimensionality properties at different scales. The scale-dependent dimensionality allows us to identify critical scales of the system. Fundamental trends in the dimensionality of neural activity depend on the brain state.
Data mining is based on the discovery of structure within data. However, such a structure is often complex. The fact that the properties of data distributions vary depending on the scale at which they are examined is a fundamental component of this complexity. For example, a manifold may appear smooth at small scales but jagged or even fractal at larger scales. This scale dependence is critical, yet it is commonly overlooked. We introduce a fundamental approach for analyzing the properties of data distributions at all scales. This single scale-dependent description enables simultaneous examination of how characteristics vary across all scales, offering insight into the structure of the data distribution. This will help us gain a better grasp of data structures and pave the way for future theoretical advances in data science.
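The classic (global) participation ratio that this work generalizes can be computed directly from the eigenvalues of the data covariance. A minimal sketch on synthetic data, with invented sample sizes, illustrates its two limiting behaviors:

```python
import numpy as np

def participation_ratio(X):
    """PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are the
    eigenvalues of the covariance of X (samples x features).
    PR is near n for isotropic data and near 1 when one mode dominates."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative round-off
    return float(lam.sum() ** 2 / np.sum(lam ** 2))

rng = np.random.default_rng(0)
n = 20
iso = rng.standard_normal((5000, n))                  # isotropic: PR near n
low = rng.standard_normal((5000, 1)) * np.ones(n)     # shared rank-1 signal
low = low + 0.05 * rng.standard_normal((5000, n))     # small private noise
pr_iso = participation_ratio(iso)   # close to 20
pr_low = participation_ratio(low)   # close to 1
```

The scale-dependent version of the paper evaluates this quantity on data restricted to neighborhoods of varying radius, so that local and global dimensionality can differ.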
Collapse
|
16
|
Weninger L, Srivastava P, Zhou D, Kim JZ, Cornblath EJ, Bertolero MA, Habel U, Merhof D, Bassett DS. Information content of brain states is explained by structural constraints on state energetics. Phys Rev E 2022; 106:014401. [PMID: 35974521 DOI: 10.1103/physreve.106.014401] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 04/27/2022] [Indexed: 06/15/2023]
Abstract
Signal propagation along the structural connectome of the brain induces changes in the patterns of activity. These activity patterns define global brain states and contain information in accordance with their expected probability of occurrence. Being the physical substrate upon which information propagates, the structural connectome, in conjunction with the dynamics, determines the set of possible brain states and constrains the transition between accessible states. Yet, precisely how these structural constraints on state transitions relate to their information content remains unexplored. To address this gap in knowledge, we defined the information content as a function of the activation distribution, where statistically rare values of activation correspond to high information content. With this numerical definition in hand, we studied the spatiotemporal distribution of information content in functional magnetic resonance imaging (fMRI) data from the Human Connectome Project during different tasks, and report four key findings. First, information content strongly depends on cognitive context; its absolute level and spatial distribution depend on the cognitive task. Second, while information content shows similarities to other measures of brain activity, it is distinct from both Neurosynth maps and task contrast maps generated by a general linear model applied to the fMRI data. Third, the brain's structural wiring constrains the cost to control its state, where the cost to transition into high information content states is larger than that to transition into low information content states. Finally, all state transitions, especially those to high information content states, are less costly than expected from random network null models, thereby indicating the brain's marked efficiency.
Taken together, our findings establish an explanatory link between the information contained in a brain state and the energetic cost of attaining that state, thereby laying important groundwork for our understanding of large-scale cognitive computations.
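As a rough numerical sketch of the definition above (statistically rare activation values carry high information content), one can compute the surprisal −log p(x) of an activation under a fit to its empirical distribution. The Gaussian fit below is our illustrative assumption, not necessarily the estimator used in the paper:

```python
import numpy as np

def information_content(x, mu, sigma):
    """Surprisal -log p(x) under a Gaussian fit to a region's activation
    distribution; rare (extreme) activations carry high information content."""
    z = (x - mu) / sigma
    return 0.5 * z ** 2 + 0.5 * np.log(2 * np.pi * sigma ** 2)

rng = np.random.default_rng(0)
activity = rng.standard_normal(10000)   # surrogate regional activation signal
mu, sigma = activity.mean(), activity.std()

ic_typical = information_content(mu, mu, sigma)            # most probable value
ic_rare = information_content(mu + 3 * sigma, mu, sigma)   # 3-sigma excursion
```

Under the Gaussian assumption the surprisal grows quadratically with the z-score, so a 3-sigma excursion carries exactly 4.5 nats more information than the distribution's mode.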
Collapse
Affiliation(s)
- Leon Weninger
- Department of Bioengineering, School of Engineering & Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Institute of Imaging & Computer Vision, RWTH Aachen University, 52072 Aachen, Germany
| | - Pragya Srivastava
- Department of Bioengineering, School of Engineering & Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Dale Zhou
- Department of Bioengineering, School of Engineering & Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Jason Z Kim
- Department of Bioengineering, School of Engineering & Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Eli J Cornblath
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Maxwell A Bertolero
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| | - Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Institute of Neuroscience and Medicine 10, Research Centre Jülich, 52428 Jülich, Germany
| | - Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, 52072 Aachen, Germany
| | - Dani S Bassett
- Department of Bioengineering, School of Engineering & Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Santa Fe Institute, Santa Fe, New Mexico 87501, USA
| |
Collapse
|
17
|
Hu Y, Sompolinsky H. The spectrum of covariance matrices of randomly connected recurrent neuronal networks with linear dynamics. PLoS Comput Biol 2022; 18:e1010327. [PMID: 35862445 PMCID: PMC9345493 DOI: 10.1371/journal.pcbi.1010327] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 08/02/2022] [Accepted: 06/24/2022] [Indexed: 11/18/2022] Open
Abstract
A key question in theoretical neuroscience is the relation between the connectivity structure and the collective dynamics of a network of neurons. Here we study the connectivity-dynamics relation as reflected in the distribution of eigenvalues of the covariance matrix of the dynamic fluctuations of the neuronal activities, which is closely related to the network dynamics’ Principal Component Analysis (PCA) and the associated effective dimensionality. We consider the spontaneous fluctuations around a steady state in a randomly connected recurrent network of stochastic neurons. An exact analytical expression for the covariance eigenvalue distribution in the large-network limit can be obtained using results from random matrices. The distribution has a finitely supported smooth bulk spectrum and exhibits an approximate power-law tail for coupling matrices near the critical edge. We generalize the results to include second-order connectivity motifs and discuss extensions to excitatory-inhibitory networks. The theoretical results are compared with those from finite-size networks and the effects of temporal and spatial sampling are studied. Preliminary application to whole-brain imaging data is presented. Using simple connectivity models, our work provides theoretical predictions for the covariance spectrum, a fundamental property of recurrent neuronal dynamics, that can be compared with experimental data. Here we study the distribution of eigenvalues, or spectrum, of the neuron-to-neuron covariance matrix in recurrently connected neuronal networks. The covariance spectrum is an important global feature of neuron population dynamics that requires simultaneous recordings of neurons. The spectrum is essential to the widely used Principal Component Analysis (PCA) and generalizes the dimensionality measure of population dynamics. 
We use a simple model to emulate the complex connections between neurons, where all pairs of neurons interact linearly at a strength specified randomly and independently. We derive a closed-form expression of the covariance spectrum, revealing an interesting long tail of large eigenvalues following a power law as the connection strength increases. To incorporate connectivity features important to biological neural circuits, we generalize the result to networks with an additional low-rank connectivity component that could come from learning and networks consisting of sparsely connected excitatory and inhibitory neurons. To facilitate comparing the theoretical results to experimental data, we derive the precise modifications needed to account for the effect of limited time samples and having unobserved neurons. Preliminary applications to large-scale calcium imaging data suggest that our model captures the high-dimensional population activity of neurons well.
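For a linear rate network driven by white noise, the stationary covariance can be written in the form C ∝ (I − gJ)⁻¹(I − gJ)⁻ᵀ. The sketch below, with an invented network size and coupling gains, computes this spectrum numerically for an i.i.d. Gaussian coupling matrix and shows the heavy upper tail and the drop in effective (participation-ratio) dimensionality as the gain approaches the critical edge:

```python
import numpy as np

def covariance_spectrum(g, n=400, seed=0):
    """Eigenvalues (descending) of C = (I - gJ)^{-1} (I - gJ)^{-T} for an
    i.i.d. random coupling matrix J with entry variance 1/n, so that the
    spectral radius of J is ~1 and g = 1 is the critical edge."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) / np.sqrt(n)
    A = np.linalg.inv(np.eye(n) - g * J)
    return np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]

weak = covariance_spectrum(g=0.2)    # narrow, near-isotropic bulk
strong = covariance_spectrum(g=0.9)  # heavy upper tail near criticality

# Effective dimensionality via the participation ratio of the spectrum
dim_weak = weak.sum() ** 2 / np.sum(weak ** 2)
dim_strong = strong.sum() ** 2 / np.sum(strong ** 2)
```

The largest covariance eigenvalues blow up as g → 1 because the smallest singular value of I − gJ approaches zero, which is the finite-size counterpart of the power-law tail derived analytically in the paper.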
Collapse
Affiliation(s)
- Yu Hu
- Department of Mathematics and Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong SAR, China
- * E-mail: (YH); (HS)
| | - Haim Sompolinsky
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America
- * E-mail: (YH); (HS)
| |
Collapse
|
18
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
19
|
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Collapse
Affiliation(s)
| | - H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
| | | | | | | | - J. Wang
- Authors to whom correspondence should be addressed.
| | - G. La Camera
- Authors to whom correspondence should be addressed.
| |
Collapse
|
20
|
Cognitive strategies shift information from single neurons to populations in prefrontal cortex. Neuron 2022; 110:709-721.e4. [PMID: 34932940 PMCID: PMC8857053 DOI: 10.1016/j.neuron.2021.11.021] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 09/27/2021] [Accepted: 11/19/2021] [Indexed: 11/24/2022]
Abstract
Neurons in primate lateral prefrontal cortex (LPFC) play a critical role in working memory (WM) and cognitive strategies. Consistent with adaptive coding models, responses of these neurons are not fixed but flexibly adjust on the basis of cognitive demands. However, little is known about how these adjustments affect population codes. Here, we investigated ensemble coding in LPFC while monkeys implemented different strategies in a WM task. Although single neurons were less tuned when monkeys used more stereotyped strategies, task information could still be accurately decoded from neural populations. This was due to changes in population codes that distributed information among a greater number of neurons, each contributing less to the overall population. Moreover, this shift occurred for task-relevant, but not irrelevant, information. These results demonstrate that cognitive strategies that impose structure on information held in mind rearrange population codes in LPFC, such that information becomes more distributed among neurons in an ensemble.
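The redistribution of information into a population code can be illustrated with synthetic data: when each neuron carries only a weak signal, a linear readout of the whole ensemble still decodes the task variable far better than the best single neuron. All sizes and signal strengths below are invented for illustration, and the least-squares readout is a generic stand-in for whatever decoder the study used.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 1000
labels = rng.integers(0, 2, n_trials)          # two task conditions

# Weak, distributed tuning: every neuron carries a small piece of the signal
w_signal = 0.15 * rng.standard_normal(n_neurons)
X = rng.standard_normal((n_trials, n_neurons)) + np.outer(2 * labels - 1, w_signal)

# Least-squares linear decoder with a train/test split
split = n_trials // 2
coef, *_ = np.linalg.lstsq(X[:split], 2 * labels[:split] - 1, rcond=None)
pred = (X[split:] @ coef > 0).astype(int)
accuracy = float((pred == labels[split:]).mean())

# The best single neuron decodes poorly by comparison
best_single = max(
    float(((np.sign(w) * X[split:, i] > 0).astype(int) == labels[split:]).mean())
    for i, w in enumerate(w_signal)
)
```

The population signal adds across neurons (its strength grows like the norm of the whole weight vector) while each neuron's private noise does not, which is why distributing task information over more, weakly tuned units need not cost decodability.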
Collapse
|
21
|
Dahmen D, Layer M, Deutz L, Dąbrowska PA, Voges N, von Papen M, Brochier T, Riehle A, Diesmann M, Grün S, Helias M. Global organization of neuronal activity only requires unstructured local connectivity. eLife 2022; 11:e68422. [PMID: 35049496 PMCID: PMC8776256 DOI: 10.7554/elife.68422] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination also among neurons that are most likely not even connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: We find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multi-synaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Collapse
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Moritz Layer
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Lukas Deutz
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- School of Computing, University of Leeds, Leeds, United Kingdom
| | - Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
| | - Nicole Voges
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Michael von Papen
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
| | - Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Alexa Riehle
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille University, Marseille, France
| | - Markus Diesmann
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Moritz Helias
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA Institut Brain Structure-Function Relationships, Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
22
|
Voina D, Recanatesi S, Hu B, Shea-Brown E, Mihalas S. Single Circuit in V1 Capable of Switching Contexts during Movement Using an Inhibitory Population as a Switch. Neural Comput 2022; 34:541-594. [PMID: 35016220 DOI: 10.1162/neco_a_01472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 09/21/2021] [Indexed: 11/04/2022]
Abstract
As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatiotemporal surround modulation, having superior denoising performance compared to circuits where only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.
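The disinhibitory motif described above (VIP inhibits SST, SST inhibits PYR, so driving VIP releases PYR from inhibition) can be sketched as a three-population linear-threshold rate model. The weights and drives below are illustrative choices, not values fitted to V1 data:

```python
import numpy as np

def simulate(vip_drive, T=2000, dt=0.1):
    """Rate dynamics dr/dt = -r + relu(W r + ext) for a PYR/SST/VIP motif.
    Returns the steady-state rates [PYR, SST, VIP]. Weights are illustrative."""
    relu = lambda x: np.maximum(x, 0.0)
    # rows: target population; columns: input from [PYR, SST, VIP]
    W = np.array([[0.0, -1.0,  0.0],    # PYR is inhibited by SST
                  [0.0,  0.0, -1.5],    # SST is inhibited by VIP
                  [0.0,  0.0,  0.0]])   # VIP is driven externally only
    ext = np.array([1.0, 1.0, vip_drive])   # feedforward drives
    r = np.zeros(3)
    for _ in range(T):                      # forward-Euler integration
        r += dt * (-r + relu(W @ r + ext))
    return r

r_static = simulate(vip_drive=0.0)  # VIP silent: SST suppresses PYR output
r_moving = simulate(vip_drive=1.0)  # VIP active: PYR disinhibited
```

With VIP silent the pyramidal population settles near zero because SST cancels its feedforward drive; activating VIP shuts SST down and the same feedforward input now drives PYR, which is the switch-like behavior the circuit model exploits.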
Collapse
Affiliation(s)
- Doris Voina
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
| | - Stefano Recanatesi
- Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, U.S.A.
| | - Brian Hu
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
| | - Eric Shea-Brown
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A., and Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
| | - Stefan Mihalas
- Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A., and Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
| |
Collapse
|
23
|
Metastable attractors explain the variable timing of stable behavioral action sequences. Neuron 2022; 110:139-153.e9. [PMID: 34717794 PMCID: PMC9194601 DOI: 10.1016/j.neuron.2021.10.011] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 08/30/2021] [Accepted: 10/05/2021] [Indexed: 01/07/2023]
Abstract
The timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same neural circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action-timing fluctuations. Using hidden Markov models, we established a dictionary between activity patterns and actions. We then showed that metastable attractors, representing activity patterns with a reliable sequential structure and large transition timing variability, could be produced by reciprocally coupling a high-dimensional recurrent network and a low-dimensional feedforward one. Transitions between attractors relied on correlated variability in this mesoscale feedback loop, predicting a specific structure of low-dimensional correlations that were empirically verified in M2 recordings. Our results suggest a novel mesoscale network motif based on correlated variability supporting naturalistic animal behavior.
Collapse
|
24
|
Osborne H, Deutz L, de Kamps M. Multidimensional Dynamical Systems with Noise. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1359:159-178. [DOI: 10.1007/978-3-030-89439-9_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
25
|
The Mean Field Approach for Populations of Spiking Neurons. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2022; 1359:125-157. [DOI: 10.1007/978-3-030-89439-9_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Mean field theory is a device to analyze the collective behavior of a dynamical system comprising many interacting particles. The theory allows one to reduce the behavior of the system to the properties of a handful of parameters. In neural circuits, these parameters are typically the firing rates of distinct, homogeneous subgroups of neurons. Knowledge of the firing rates under conditions of interest can reveal essential information on both the dynamics of neural circuits and the way they can subserve brain function. The goal of this chapter is to provide an elementary introduction to the mean field approach for populations of spiking neurons. We introduce the general idea in networks of binary neurons, starting from the most basic results and then generalizing to more relevant situations. This allows one to derive the mean field equations in a simplified setting. We then derive the mean field equations for populations of integrate-and-fire neurons. An effort is made to derive the main equations of the theory using only elementary methods from calculus and probability theory. The chapter ends with a discussion of the assumptions of the theory and some of the consequences of violating those assumptions. This discussion includes an introduction to balanced and metastable networks and a brief catalogue of successful applications of the mean field approach to the study of neural circuits.
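The self-consistency logic of the chapter can be sketched in its simplest setting, a homogeneous network of binary ±1 units, where the mean-field equation reads m = tanh(βJm); the integrate-and-fire case replaces tanh with the neuron's transfer function. Solving it by fixed-point iteration exposes the transition at βJ = 1:

```python
import numpy as np

def mean_field_rate(beta_J, m0=0.5, tol=1e-10, max_iter=10000):
    """Fixed-point iteration of the mean-field equation m = tanh(beta_J * m)
    for a homogeneous network of binary +/-1 units; the self-consistent
    magnetization m plays the role of the population activity level."""
    m = m0
    for _ in range(max_iter):
        m_new = np.tanh(beta_J * m)
        if abs(m_new - m) < tol:   # converged to a self-consistent solution
            break
        m = m_new
    return m

m_sub = mean_field_rate(beta_J=0.5)    # below the transition: only m = 0
m_super = mean_field_rate(beta_J=2.0)  # above: a nonzero self-consistent rate
```

Below the transition the map is a contraction toward zero, so the only self-consistent activity level is the silent one; above it, the iteration converges to one of two symmetric nonzero solutions, the binary-network analogue of a bistable (and, with noise, metastable) circuit.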
|
26
|
Guo X, Wang J. Low-Dimensional Dynamics of Brain Activity Associated with Manual Acupuncture in Healthy Subjects. SENSORS 2021; 21:s21227432. [PMID: 34833508 PMCID: PMC8619579 DOI: 10.3390/s21227432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 11/03/2021] [Accepted: 11/06/2021] [Indexed: 11/24/2022]
Abstract
Acupuncture is one of the oldest traditional medical treatments in Asian countries. However, a scientific explanation of the therapeutic effect of acupuncture is still lacking. The much-discussed hypothesis is that acupuncture’s effects are mediated via autonomic neural networks; nevertheless, the dynamic brain activity involved in the acupuncture response has still not been elucidated. In this work, we hypothesized that there exists a lower-dimensional subspace of dynamic brain activity across subjects, underpinning the brain’s response to manual acupuncture stimulation. To this end, we employed a variational auto-encoder to probe the latent variables from multichannel EEG signals associated with acupuncture stimulation at the ST36 acupoint. The experimental results demonstrate that manual acupuncture stimuli can reduce the dimensionality of brain activity, which results from the enhancement of oscillatory activity in the delta and alpha frequency bands induced by acupuncture. Moreover, it was found that large-scale brain activity could be constrained within a low-dimensional neural subspace, which is spanned by the “acupuncture mode”. In each neural subspace, the steady dynamics of the brain in response to acupuncture stimuli converge to topologically similar elliptic-shaped attractors across different subjects. The attractor morphology is closely related to the frequency of the acupuncture stimulation. These results shed light on the large-scale brain response to manual acupuncture stimuli.
Affiliation(s)
- Xinmeng Guo
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China;
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Correspondence:
| | - Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China;
| |
|
27
|
Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. [PMID: 34843461 PMCID: PMC8659648 DOI: 10.1371/journal.pcbi.1008591] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2020] [Revised: 12/09/2021] [Accepted: 11/11/2021] [Indexed: 01/07/2023] Open
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
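One widely used linear estimator in this literature is the participation ratio, (Σᵢλᵢ)²/Σᵢλᵢ², computed from the eigenvalues of the neural covariance matrix; it is a possible ingredient of such a pipeline, not necessarily the authors' choice. A minimal sketch with a hypothetical eigenvalue spectrum:

```python
def participation_ratio(eigenvalues):
    """Participation ratio (sum lambda)^2 / sum lambda^2: an effective
    count of how many covariance eigenvalues carry appreciable variance."""
    s1 = sum(eigenvalues)
    s2 = sum(lam * lam for lam in eigenvalues)
    return s1 * s1 / s2

# hypothetical spectrum: 3 dominant modes riding on 97 near-zero ones
spectrum = [10.0] * 3 + [0.01] * 97
dim = participation_ratio(spectrum)
```

With equal variance in every direction the estimate equals the full dimensionality, while a few dominant modes pull it down toward their count, matching the intuition that redundancy concentrates variance on a low-dimensional manifold.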
Affiliation(s)
- Ege Altan
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
| | - Sara A. Solla
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
| | - Lee E. Miller
- Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
| | - Eric J. Perreault
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America
- Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
| |
|
28
|
Cai Y, Wu T, Tao L, Xiao ZC. Model Reduction Captures Stochastic Gamma Oscillations on Low-Dimensional Manifolds. Front Comput Neurosci 2021; 15:678688. [PMID: 34489666 PMCID: PMC8418102 DOI: 10.3389/fncom.2021.678688] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Accepted: 07/23/2021] [Indexed: 12/02/2022] Open
Abstract
Gamma frequency oscillations (25–140 Hz), observed in the neural activities within many brain regions, have long been regarded as a physiological basis underlying many brain functions, such as memory and attention. Among numerous theoretical and computational modeling studies, gamma oscillations have been found in biologically realistic spiking network models of the primary visual cortex. However, due to its high dimensionality and strong non-linearity, it is generally difficult to perform detailed theoretical analysis of the emergent gamma dynamics. Here we propose a suite of Markovian model reduction methods with varying levels of complexity and apply it to spiking network models exhibiting heterogeneous dynamical regimes, ranging from nearly homogeneous firing to strong synchrony in the gamma band. The reduced models not only successfully reproduce gamma oscillations in the full model, but also exhibit the same dynamical features as we vary parameters. Most remarkably, the invariant measure of the coarse-grained Markov process reveals a two-dimensional surface in state space upon which the gamma dynamics mainly resides. Our results suggest that the statistical features of gamma oscillations strongly depend on the subthreshold neuronal distributions. Because of the generality of the Markovian assumptions, our dimensional reduction methods offer a powerful toolbox for theoretical examinations of other complex cortical spatio-temporal behaviors observed in both neurophysiological experiments and numerical simulations.
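The invariant measure of a coarse-grained Markov process, central to the analysis above, is the left eigenvector of the transition matrix with eigenvalue 1. A toy sketch (three states and hypothetical transition probabilities, not the paper's reduced model) finds it by power iteration:

```python
def stationary_distribution(P, iters=5000):
    """Invariant measure pi of a row-stochastic transition matrix P,
    computed by power iteration: repeatedly apply pi -> pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# toy 3-state coarse-grained chain (transition probabilities made up);
# self-loops make it aperiodic, so the iteration converges
P = [[0.9, 0.1, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]
pi = stationary_distribution(P)
```

For this chain the fixed point can be checked by hand: pi = (0.4, 0.4, 0.2) satisfies pi = pi P. In the paper's setting the support of such an invariant measure is what traces out the low-dimensional surface in state space.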
Affiliation(s)
- Yuhang Cai
- Department of Statistics, University of Chicago, Chicago, IL, United States
| | - Tianyi Wu
- School of Mathematical Sciences, Peking University, Beijing, China; Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China
| | - Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing, China; Center for Quantitative Biology, Peking University, Beijing, China
| | - Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
| |
|
29
|
Umakantha A, Morina R, Cowley BR, Snyder AC, Smith MA, Yu BM. Bridging neuronal correlations and dimensionality reduction. Neuron 2021; 109:2740-2754.e12. [PMID: 34293295 PMCID: PMC8505167 DOI: 10.1016/j.neuron.2021.06.028] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2020] [Revised: 05/05/2021] [Accepted: 06/25/2021] [Indexed: 01/01/2023]
Abstract
Two commonly used approaches to study interactions among neurons are spike count correlation, which describes pairs of neurons, and dimensionality reduction, applied to a population of neurons. Although both approaches have been used to study trial-to-trial neuronal variability correlated among neurons, they are often used in isolation and have not been directly related. We first established concrete mathematical and empirical relationships between pairwise correlation and metrics of population-wide covariability based on dimensionality reduction. Applying these insights to macaque V4 population recordings, we found that the previously reported decrease in mean pairwise correlation associated with attention stemmed from three distinct changes in population-wide covariability. Overall, our work builds the intuition and formalism to bridge between pairwise correlation and population-wide covariability and presents a cautionary tale about the inferences one can make about population activity by using a single statistic, whether it be mean pairwise correlation or dimensionality.
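A concrete instance of the bridge discussed above: in a one-shared-factor model (a simplification of the factor-analysis setting, with hypothetical loadings), each neuron's count is x_i = c_i·z + ε_i, every pairwise correlation follows in closed form from the loadings, and the mean pairwise correlation rises with the fraction of shared variance.

```python
import math

def mean_pairwise_corr(loadings, private_var, shared_var=1.0):
    """Mean pairwise correlation implied by a one-factor model
    x_i = c_i * z + eps_i, with Var(z) = shared_var and
    Var(eps_i) = private_var. All parameters hypothetical."""
    n = len(loadings)
    total = [loadings[i] ** 2 * shared_var + private_var for i in range(n)]
    corrs = []
    for i in range(n):
        for j in range(i + 1, n):
            cov_ij = loadings[i] * loadings[j] * shared_var  # shared variance only
            corrs.append(cov_ij / math.sqrt(total[i] * total[j]))
    return sum(corrs) / len(corrs)

# weaker vs. stronger coupling to the shared factor
weak = mean_pairwise_corr([0.3] * 20, private_var=1.0)
strong = mean_pairwise_corr([0.9] * 20, private_var=1.0)
```

This makes explicit why a single statistic is ambiguous: the same mean correlation can arise from different combinations of loading pattern, shared dimensionality, and private variance.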
Affiliation(s)
- Akash Umakantha
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Rudina Morina
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Benjamin R Cowley
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
| | - Adam C Snyder
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
| | - Matthew A Smith
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
| | - Byron M Yu
- Carnegie Mellon Neuroscience Institute, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
| |
|
30
|
Gozel O, Gerstner W. A functional model of adult dentate gyrus neurogenesis. eLife 2021; 10:e66463. [PMID: 34137370 PMCID: PMC8260225 DOI: 10.7554/elife.66463] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Accepted: 06/16/2021] [Indexed: 12/27/2022] Open
Abstract
In adult dentate gyrus neurogenesis, the link between maturation of newborn neurons and their function, such as behavioral pattern separation, has remained puzzling. By analyzing a theoretical model, we show that the switch from excitation to inhibition of the GABAergic input onto maturing newborn cells is crucial for their proper functional integration. When the GABAergic input is excitatory, cooperativity drives the growth of synapses such that newborn cells become sensitive to stimuli similar to those that activate mature cells. When GABAergic input switches to inhibitory, competition pushes the configuration of synapses onto newborn cells toward stimuli that are different from previously stored ones. This enables the maturing newborn cells to code for concepts that are novel, yet similar to familiar ones. Our theory of newborn cell maturation explains both how adult-born dentate granule cells integrate into the preexisting network and why they promote separation of similar but not distinct patterns.
Affiliation(s)
- Olivia Gozel
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Departments of Neurobiology and Statistics, University of Chicago, Chicago, United States; Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, United States
| | - Wulfram Gerstner
- School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
|
31
|
Wolff A, Chen L, Tumati S, Golesorkhi M, Gomez-Pilar J, Hu J, Jiang S, Mao Y, Longtin A, Northoff G. Prestimulus dynamics blend with the stimulus in neural variability quenching. Neuroimage 2021; 238:118160. [PMID: 34058331 DOI: 10.1016/j.neuroimage.2021.118160] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 04/30/2021] [Accepted: 05/09/2021] [Indexed: 01/08/2023] Open
Abstract
Neural responses to the same stimulus show significant variability over trials, with this variability typically reduced (quenched) after a stimulus is presented. This trial-to-trial variability (TTV) has been much studied; however, how this variability quenching is influenced by the ongoing dynamics of the prestimulus period remains unknown. Utilizing a human intracranial stereo-electroencephalography (sEEG) data set, we investigate how prestimulus dynamics, as operationalized by standard deviation (SD), shapes poststimulus activity through trial-to-trial variability (TTV). We first observed greater poststimulus variability quenching in those real trials exhibiting high prestimulus variability, as observed in all frequency bands. Next, we found that the relative effect of the stimulus was higher in the later (300-600 ms) than the earlier (0-300 ms) poststimulus period. Lastly, we replicate our findings in a separate EEG dataset and extend them by finding that trials with high prestimulus variability in the theta and alpha bands had faster reaction times. Together, our results demonstrate that stimulus-related activity, including its variability, is a blend of two factors: 1) the effects of the external stimulus itself, and 2) the effects of the ongoing dynamics spilling over from the prestimulus period - the state at stimulus onset - with the second dwarfing the influence of the first.
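The quenching measure itself is straightforward to sketch: compute the across-trial SD at each time point and compare pre- and poststimulus averages. Everything below is synthetic and hypothetical (not the paper's sEEG data or analysis pipeline): the stimulus is modeled as a shared evoked response that partially suppresses the ongoing fluctuations.

```python
import math
import random

def ttv(trials):
    """Trial-to-trial variability: across-trial SD at each time point."""
    n = len(trials)
    n_time = len(trials[0])
    out = []
    for t in range(n_time):
        vals = [trial[t] for trial in trials]
        mean = sum(vals) / n
        out.append(math.sqrt(sum((v - mean) ** 2 for v in vals) / n))
    return out

random.seed(0)
trials = []
for _ in range(200):
    # prestimulus: ongoing fluctuations (SD = 1); poststimulus: shared
    # evoked transient plus suppressed fluctuations (SD = 0.3), all made up
    pre = [random.gauss(0.0, 1.0) for _ in range(50)]
    post = [math.exp(-t / 10.0) + 0.3 * random.gauss(0.0, 1.0) for t in range(50)]
    trials.append(pre + post)

variability = ttv(trials)
pre_ttv = sum(variability[:50]) / 50
post_ttv = sum(variability[50:]) / 50
```

Note that the deterministic evoked transient changes the mean response but not the across-trial SD; the quenching here comes entirely from the suppressed fluctuation term, which is the distinction the abstract draws between stimulus effects and spillover of prestimulus dynamics.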
Affiliation(s)
- Annemarie Wolff
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada.
| | - Liang Chen
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China.
| | - Shankar Tumati
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada
| | - Mehrshad Golesorkhi
- School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada
| | - Javier Gomez-Pilar
- Biomedical Engineering Group, Higher Technical School of Telecommunications Engineering, University of Valladolid, Valladolid, Spain; Centro de Investigación Biomédica en Red-Bioingeniería, Biomateriales y Nanomedicina, (CIBER-BBN), Spain
| | - Jie Hu
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
| | - Shize Jiang
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
| | - Ying Mao
- Department of Neurological Surgery, Huashan Hospital, Fudan University, Wulumuqi Middle Rd, Shanghai, China
| | - André Longtin
- Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada; Physics Department, University of Ottawa, Ottawa, Canada
| | - Georg Northoff
- University of Ottawa Institute of Mental Health Research, Ottawa, Canada; Brain and Mind Research Institute, University of Ottawa, Ottawa, Canada
| |
|
32
|
Dąbrowska PA, Voges N, von Papen M, Ito J, Dahmen D, Riehle A, Brochier T, Grün S. On the Complexity of Resting State Spiking Activity in Monkey Motor Cortex. Cereb Cortex Commun 2021; 2:tgab033. [PMID: 34296183 PMCID: PMC8271144 DOI: 10.1093/texcom/tgab033] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Revised: 04/16/2021] [Accepted: 04/23/2021] [Indexed: 11/13/2022] Open
Abstract
Resting state has been established as a classical paradigm of brain activity studies, mostly based on large-scale measurements such as functional magnetic resonance imaging or magneto- and electroencephalography. This term typically refers to a behavioral state characterized by the absence of any task or stimuli. The corresponding neuronal activity is often called idle or ongoing. Numerous modeling studies on spiking neural networks claim to mimic such idle states, but compare their results with task- or stimulus-driven experiments, or to results from experiments with anesthetized subjects. Both approaches might lead to misleading conclusions. To provide a proper basis for comparing physiological and simulated network dynamics, we characterize simultaneously recorded single neurons' spiking activity in monkey motor cortex at rest and show the differences from spontaneous and task- or stimulus-induced movement conditions. We also distinguish between rest with open eyes and sleepy rest with eyes closed. The resting state with open eyes shows a significantly higher dimensionality, reduced firing rates, and less balance between population level excitation and inhibition than behavior-related states.
Affiliation(s)
- Paulina Anna Dąbrowska
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
| | - Nicole Voges
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany; RWTH Aachen University, Aachen 52062, Germany
| | - Michael von Papen
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
| | - Junji Ito
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
| | - David Dahmen
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
| | - Alexa Riehle
- Institut de Neurosciences de la Timone, CNRS-AMU, Marseille 13005, France; Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany
| | - Thomas Brochier
- Institut de Neurosciences de la Timone, CNRS-AMU, Marseille 13005, France
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6 and INM-10) and Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich 52425, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, Aachen 52056, Germany
| |
|
33
|
Wyrick D, Mazzucato L. State-Dependent Regulation of Cortical Processing Speed via Gain Modulation. J Neurosci 2021; 41:3988-4005. [PMID: 33858943 PMCID: PMC8176754 DOI: 10.1523/jneurosci.1895-20.2021] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 03/04/2021] [Accepted: 03/08/2021] [Indexed: 11/21/2022] Open
Abstract
To thrive in dynamic environments, animals must be capable of rapidly and flexibly adapting behavioral responses to a changing context and internal state. Examples of behavioral flexibility include faster stimulus responses when attentive and slower responses when distracted. Contextual or state-dependent modulations may occur early in the cortical hierarchy and may be implemented via top-down projections from corticocortical or neuromodulatory pathways. However, the computational mechanisms mediating the effects of such projections are not known. Here, we introduce a theoretical framework to classify the effects of cell type-specific top-down perturbations on the information processing speed of cortical circuits. Our theory demonstrates that perturbation effects on stimulus processing can be predicted by intrinsic gain modulation, which controls the timescale of the circuit dynamics. Our theory leads to counterintuitive effects, such as improved performance with increased input variance. We tested the model predictions using large-scale electrophysiological recordings from the visual hierarchy in freely running mice, where we found that a decrease in single-cell intrinsic gain during locomotion led to an acceleration of visual processing. Our results establish a novel theory of cell type-specific perturbations, applicable to top-down modulation as well as optogenetic and pharmacological manipulations. Our theory links connectivity, dynamics, and information processing via gain modulation.SIGNIFICANCE STATEMENT To thrive in dynamic environments, animals adapt their behavior to changing circumstances and different internal states. Examples of behavioral flexibility include faster responses to sensory stimuli when attentive and slower responses when distracted. Previous work suggested that contextual modulations may be implemented via top-down inputs to sensory cortex coming from higher brain areas or neuromodulatory pathways. Here, we introduce a theory explaining how the speed at which sensory cortex processes incoming information is adjusted by changes in these top-down projections, which control the timescale of neural activity. We tested our model predictions in freely running mice, revealing that locomotion accelerates visual processing. Our theory is applicable to internal modulation as well as optogenetic and pharmacological manipulations and links circuit connectivity, dynamics, and information processing.
Affiliation(s)
- David Wyrick
- Department of Biology and Institute of Neuroscience
| | - Luca Mazzucato
- Department of Biology and Institute of Neuroscience
- Departments of Mathematics and Physics, University of Oregon, Eugene, Oregon 97403
| |
|
34
|
Benozzo D, La Camera G, Genovesio A. Slower prefrontal metastable dynamics during deliberation predicts error trials in a distance discrimination task. Cell Rep 2021; 35:108934. [PMID: 33826896 PMCID: PMC8083966 DOI: 10.1016/j.celrep.2021.108934] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 01/10/2021] [Accepted: 03/11/2021] [Indexed: 11/20/2022] Open
Abstract
Cortical activity related to erroneous behavior in discrimination or decision-making tasks is rarely analyzed, yet it can help clarify which computations are essential during a specific task. Here, we use a hidden Markov model (HMM) to perform a trial-by-trial analysis of the ensemble activity of dorsolateral prefrontal cortex (PFdl) neurons of rhesus monkeys performing a distance discrimination task. By segmenting the neural activity into sequences of metastable states, HMM allows us to uncover modulations of the neural dynamics related to internal computations. We find that metastable dynamics slow down during error trials, while state transitions at a pivotal point during the trial take longer in difficult correct trials. Both these phenomena occur during the decision interval, with errors occurring in both easy and difficult trials. Our results provide further support for the emerging role of metastable cortical dynamics in mediating complex cognitive functions and behavior.
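Segmenting ensemble activity into sequences of metastable states with an HMM, as described above, rests on decoding the most likely hidden state sequence. The core Viterbi step can be sketched in a few lines; the example below is a toy (two "sticky" states standing in for metastable states, with made-up transition and emission numbers), not the paper's fitted model.

```python
import math

def viterbi(log_A, log_pi, log_lik):
    """Most likely hidden state path given a log transition matrix log_A,
    log initial distribution log_pi, and per-time-bin emission
    log-likelihoods log_lik (one row per time bin, one column per state)."""
    n = len(log_pi)
    T = len(log_lik)
    delta = [[0.0] * n for _ in range(T)]  # best log-prob ending in state s at t
    back = [[0] * n for _ in range(T)]     # best predecessor state
    for s in range(n):
        delta[0][s] = log_pi[s] + log_lik[0][s]
    for t in range(1, T):
        for s in range(n):
            best_prev = max(range(n), key=lambda p: delta[t - 1][p] + log_A[p][s])
            back[t][s] = best_prev
            delta[t][s] = delta[t - 1][best_prev] + log_A[best_prev][s] + log_lik[t][s]
    path = [max(range(n), key=lambda s: delta[T - 1][s])]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# sticky two-state chain: high self-transition probability is what makes
# the decoded states metastable (dwell times of many bins)
log_A = [[math.log(0.95), math.log(0.05)],
         [math.log(0.05), math.log(0.95)]]
log_pi = [math.log(0.5), math.log(0.5)]
# hypothetical emission log-likelihoods favoring state 0 then state 1
log_lik = [[-1.0, -3.0]] * 5 + [[-3.0, -1.0]] * 5
states = viterbi(log_A, log_pi, log_lik)
```

Because switching states costs log(0.05) - log(0.95) in log-probability, brief evidence fluctuations are smoothed over and only sustained evidence triggers a transition, which is how dwell times and transition timing of the kind analyzed in the paper become measurable.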
Affiliation(s)
- Danilo Benozzo
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
| | - Giancarlo La Camera
- Department of Neurobiology and Behavior, Center for Neural Circuit Dynamics and Institute for Advanced Computational Science, State University of New York at Stony Brook, Stony Brook, NY, USA.
| | - Aldo Genovesio
- Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy.
| |
|
35
|
Weidel P, Duarte R, Morrison A. Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks. Front Comput Neurosci 2021; 15:543872. [PMID: 33746728 PMCID: PMC7970044 DOI: 10.3389/fncom.2021.543872] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Accepted: 02/08/2021] [Indexed: 11/13/2022] Open
Abstract
Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
Affiliation(s)
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
| | - Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure-Function Relationship (JBI-1 / INM-10), Research Centre Jülich, Jülich, Germany; Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
| |
|
36
|
Recanatesi S, Farrell M, Lajoie G, Deneve S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520 PMCID: PMC7930246 DOI: 10.1038/s41467-021-21696-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Accepted: 01/22/2021] [Indexed: 01/02/2023] Open
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data. Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
Affiliation(s)
- Stefano Recanatesi
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA.
| | - Matthew Farrell
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | - Guillaume Lajoie
- Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
| | - Sophie Deneve
- Group for Neural Theory, Ecole Normal Superieur, Paris, France
| | | | - Eric Shea-Brown
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA.,Department of Applied Mathematics, University of Washington, Seattle, WA, USA.,Allen Institute for Brain Science, Seattle, WA, USA
| |
Collapse
|
37
|
Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700 PMCID: PMC7864452 DOI: 10.1371/journal.pcbi.1008621]
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called the neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to a change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom

38
Bernardi S, Benna MK, Rigotti M, Munuera J, Fusi S, Salzman CD. The Geometry of Abstraction in the Hippocampus and Prefrontal Cortex. Cell 2020; 183:954-967.e21. [PMID: 33058757 PMCID: PMC8451959 DOI: 10.1016/j.cell.2020.09.031]
Abstract
The curse of dimensionality plagues models of reinforcement learning and decision making. The process of abstraction solves this by constructing variables describing features shared by different instances, reducing dimensionality and enabling generalization in novel situations. Here, we characterized neural representations in monkeys performing a task described by different hidden and explicit variables. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training, which requires a particular geometry of neural representations. Neural ensembles in prefrontal cortex, hippocampus, and simulated neural networks simultaneously represented multiple variables in a geometry reflecting abstraction but that still allowed a linear classifier to decode a large number of other variables (high shattering dimensionality). Furthermore, this geometry changed in relation to task events and performance. These findings elucidate how the brain and artificial systems represent variables in an abstract format while preserving the advantages conferred by high shattering dimensionality.
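The operational definition of abstraction in this abstract (generalization of a decoder across task conditions not used for training) can be sketched as follows. The toy factorized geometry, the least-squares decoder, and all names are illustrative assumptions, not the study's recordings or analysis code.

```python
# Illustrative sketch (assumed setup): train a linear decoder for variable A on
# conditions where variable B = -1, then test it on the unseen B = +1
# conditions. High held-out accuracy indicates an "abstract" (factorized)
# representational geometry.
import numpy as np

rng = np.random.default_rng(1)
n_per, n_units = 200, 40

axis_a, axis_b = rng.normal(size=(2, n_units))  # coding directions for A and B

def condition(a, b):
    """Noisy population responses for one (A, B) task condition."""
    return a * axis_a + b * axis_b + 0.5 * rng.normal(size=(n_per, n_units))

# Train the A-decoder using only the B = -1 conditions...
X_train = np.vstack([condition(+1, -1), condition(-1, -1)])
y_train = np.hstack([np.ones(n_per), -np.ones(n_per)])
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ...and test on the B = +1 conditions never seen during training.
X_test = np.vstack([condition(+1, +1), condition(-1, +1)])
y_test = np.hstack([np.ones(n_per), -np.ones(n_per)])
ccgp = np.mean(np.sign(X_test @ w) == y_test)
print(ccgp)  # near 1.0 when the geometry is factorized
```

With an entangled (non-factorized) geometry the same procedure would yield near-chance accuracy, which is what makes cross-condition generalization a useful probe of representational geometry.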
Affiliation(s)
- Silvia Bernardi
- Department of Psychiatry, Columbia University, New York, NY, USA; Research Foundation for Mental Hygiene, Menands, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; New York State Psychiatric Institute, New York, NY, USA
- Marcus K Benna
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Neurobiology Section, Division of Biological Sciences, University of California, San Diego, La Jolla, CA, USA
- Jérôme Munuera
- Department of Neuroscience, Columbia University, New York, NY, USA
- Stefano Fusi
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Sciences, Columbia University, New York, NY, USA
- C Daniel Salzman
- Department of Neuroscience, Columbia University, New York, NY, USA; Department of Psychiatry, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Sciences, Columbia University, New York, NY, USA; New York State Psychiatric Institute, New York, NY, USA

39
Electrical coupling controls dimensionality and chaotic firing of inferior olive neurons. PLoS Comput Biol 2020; 16:e1008075. [PMID: 32730255 PMCID: PMC7419012 DOI: 10.1371/journal.pcbi.1008075]
Abstract
We previously proposed, on theoretical grounds, that the cerebellum must regulate the dimensionality of its neuronal activity during motor learning and control to cope with the low firing frequency of inferior olive neurons, which form one of two major inputs to the cerebellar cortex. Such dimensionality regulation is possible via modulation of electrical coupling through the gap junctions between inferior olive neurons by inhibitory GABAergic synapses. In addition, we previously showed in simulations that intermediate coupling strengths induce chaotic firing of inferior olive neurons and increase their information carrying capacity. However, there is no in vivo experimental data supporting these two theoretical predictions. Here, we computed the levels of synchrony, dimensionality, and chaos of the inferior olive code by analyzing in vivo recordings of Purkinje cell complex spike activity in three different coupling conditions: carbenoxolone (gap junctions blocker), control, and picrotoxin (GABA-A receptor antagonist). To examine the effect of electrical coupling on dimensionality and chaotic dynamics, we first determined the physiological range of effective coupling strengths between inferior olive neurons in the three conditions using a combination of a biophysical network model of the inferior olive and a novel Bayesian model averaging approach. We found that effective coupling co-varied with synchrony and was inversely related to the dimensionality of inferior olive firing dynamics, as measured via a principal component analysis of the spike trains in each condition. Furthermore, for both the model and the data, we found an inverted U-shaped relationship between coupling strengths and complexity entropy, a measure of chaos for spiking neural data. 
These results are consistent with our hypothesis that electrical coupling regulates the dimensionality and the complexity in the inferior olive neurons in order to optimize both motor learning and control of high dimensional motor systems by the cerebellum. Computational theory suggests that the cerebellum must decrease the dimensionality of its neuronal activity to learn and control high dimensional motor systems effectively, while being constrained by the low firing frequency of inferior olive neurons, one of the two major sources of input signals to the cerebellum. We previously proposed that the cerebellum adaptively controls the dimensionality of inferior olive firing by adjusting the level of synchrony and that such control is made possible by modulating the electrical coupling strength between inferior olive neurons. Here, we developed a novel method that uses a biophysical model of the inferior olive to accurately estimate the effective coupling strengths between inferior olive neurons from in vivo recordings of spike activity in three different coupling conditions. We found that high coupling strengths induce synchronous firing and decrease the dimensionality of inferior olive firing dynamics. In contrast, intermediate coupling strengths lead to chaotic firing and increase the dimensionality of the firing dynamics. Thus, electrical coupling is a feasible mechanism to control dimensionality and chaotic firing of inferior olive neurons. In sum, our results provide insights into possible mechanisms underlying cerebellar function and, in general, a biologically plausible framework to control the dimensionality of neural coding.
40
Zajzon B, Mahmoudian S, Morrison A, Duarte R. Passing the Message: Representation Transfer in Modular Balanced Networks. Front Comput Neurosci 2019; 13:79. [PMID: 31920605 PMCID: PMC6915101 DOI: 10.3389/fncom.2019.00079]
Abstract
Neurobiological systems rely on hierarchical and modular architectures to carry out intricate computations using minimal resources. A prerequisite for such systems to operate adequately is the capability to reliably and efficiently transfer information across multiple modules. Here, we study the features enabling a robust transfer of stimulus representations in modular networks of spiking neurons, tuned to operate in a balanced regime. To capitalize on the complex, transient dynamics that such networks exhibit during active processing, we apply reservoir computing principles and probe the systems' computational efficacy with specific tasks. Focusing on the comparison of random feed-forward connectivity and biologically inspired topographic maps, we find that, in a sequential set-up, structured projections between the modules are strictly necessary for information to propagate accurately to deeper modules. Such mappings not only improve computational performance and efficiency, they also reduce response variability, increase robustness against interference effects, and boost memory capacity. We further investigate how information from two separate input streams is integrated and demonstrate that it is more advantageous to perform non-linear computations on the input locally, within a given module, and subsequently transfer the result downstream, rather than transferring intermediate information and performing the computation downstream. Depending on how information is integrated early on in the system, the networks achieve similar task-performance using different strategies, indicating that the dimensionality of the neural responses does not necessarily correlate with nonlinear integration, as predicted by previous studies. These findings highlight a key role of topographic maps in supporting fast, robust, and accurate neural communication over longer distances. 
Given the prevalence of such structural features, particularly in sensory systems, elucidating their functional purpose remains an important challenge, toward which this work provides relevant new insights. At the same time, these results shed new light on important requirements for designing functional hierarchical spiking networks.
Affiliation(s)
- Barna Zajzon
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- Sepehr Mahmoudian
- Department of Data-Driven Analysis of Biological Networks, Campus Institute for Dynamics of Biological Networks, Georg August University Göttingen, Göttingen, Germany
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
- Abigail Morrison
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Renato Duarte
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany

41
Robinson S, Courtney MJ. Spatial quantification of the synaptic activity phenotype across large populations of neurons with Markov random fields. Bioinformatics 2019; 34:3196-3204. [PMID: 29897415 DOI: 10.1093/bioinformatics/bty322]
Abstract
Motivation: The collective and co-ordinated synaptic activity of large neuronal populations is relevant to neuronal development as well as a range of neurological diseases. Quantification of synaptically-mediated neuronal signalling permits further downstream analysis as well as potential application in target validation and in vitro screening assays. Our aim is to develop a phenotypic quantification for neuronal activity imaging data of large populations of neurons, in particular relating to the spatial component of the activity.
Results: We extend the use of Markov random field (MRF) models to achieve this aim. In particular, we consider Bayesian posterior densities of model parameters in Gaussian MRFs to directly model changes in calcium fluorescence intensity rather than using spike trains. The basis of our model is defining neuron 'neighbours' by the relative spatial positions of the neuronal somata as obtained from the image data, whereas previously this has been limited to defining an artificial square grid across the field of view and spike binning. We demonstrate that our spatial phenotypic quantification is applicable for both in vitro and in vivo data consisting of thousands of neurons over hundreds of time points. We show how our approach provides insight beyond that attained by conventional spike counting and discuss how it could be used to facilitate screening assays for modifiers of disease-associated defects of communication between cells.
Availability and implementation: We supply the MATLAB code and data to obtain all of the results in the paper.
Supplementary information: Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Sean Robinson
- Department of Mathematics and Statistics, University of Turku, Turku, Finland; Université Grenoble Alpes, CEA, INSERM, Biology of Cancer and Infection UMR S 1036, Grenoble, France
- Michael J Courtney
- Neuronal Signalling Lab, Turku Centre for Biotechnology, University of Turku and Åbo Akademi University, Turku, Finland; Screening Unit, Turku Centre for Biotechnology, University of Turku and Åbo Akademi University, and Institute of Biomedicine, University of Turku, Turku, Finland; Turku Brain and Mind Center, University of Turku and Åbo Akademi University, Turku, Finland

42
Dehaqani MRA, Vahabie AH, Parsa M, Noudoost B, Soltani A. Selective Changes in Noise Correlations Contribute to an Enhanced Representation of Saccadic Targets in Prefrontal Neuronal Ensembles. Cereb Cortex 2019; 28:3046-3063. [PMID: 29893800 PMCID: PMC6041979 DOI: 10.1093/cercor/bhy141]
Abstract
An ensemble of neurons can provide a dynamic representation of external stimuli, ongoing processes, or upcoming actions. This dynamic representation could be achieved by changes in the activity of individual neurons and/or their interactions. To investigate these possibilities, we simultaneously recorded from ensembles of prefrontal neurons in non-human primates during a memory-guided saccade task. Using both decoding and encoding methods, we examined changes in the information content of individual neurons and that of ensembles between visual encoding and saccadic target selection. We found that individual neurons maintained their limited spatial sensitivity between these cognitive states, whereas the ensemble selectively improved its encoding of spatial locations far from the neurons’ preferred locations. This population-level “encoding expansion” was not due to the ceiling effect at the preferred locations and was accompanied by selective changes in noise correlations for non-preferred locations. Moreover, the encoding expansion was observed for ensembles of different types of neurons and could not be explained by shifts in the preferred location of individual neurons. Our results demonstrate that the representation of space by neuronal ensembles is dynamically enhanced prior to saccades, and that this enhancement is driven more by changes in noise correlations than by changes in the activity of individual neurons.
Affiliation(s)
- Mohammad-Reza A Dehaqani
- Cognitive Systems Laboratory, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran; School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Abdol-Hossein Vahabie
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Alireza Soltani
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA

43
La Camera G, Fontanini A, Mazzucato L. Cortical computations via metastable activity. Curr Opin Neurobiol 2019; 58:37-45. [PMID: 31326722 DOI: 10.1016/j.conb.2019.06.007]
Abstract
Metastable brain dynamics are characterized by abrupt, jump-like modulations so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary 'states'. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at a fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberation, attention, and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial-by-trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
Affiliation(s)
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY 11794, United States; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY 11794, United States
- Luca Mazzucato
- Departments of Biology and Mathematics and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States

44
Recanatesi S, Ocker GK, Buice MA, Shea-Brown E. Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLoS Comput Biol 2019; 15:e1006446. [PMID: 31299044 PMCID: PMC6655892 DOI: 10.1371/journal.pcbi.1006446]
Abstract
The dimensionality of a network's collective activity is of increasing interest in neuroscience. This is because dimensionality provides a compact measure of how coordinated network-wide activity is, in terms of the number of modes (or degrees of freedom) that it can independently explore. A low number of modes suggests a compressed low dimensional neural code and reveals interpretable dynamics [1], while findings of high dimension may suggest flexible computations [2, 3]. Here, we address the fundamental question of how dimensionality is related to connectivity, in both autonomous and stimulus-driven networks. Working with a simple spiking network model, we derive three main findings. First, the dimensionality of global activity patterns can be strongly, and systematically, regulated by local connectivity structures. Second, the dimensionality is a better indicator than average correlations in determining how constrained neural activity is. Third, stimulus evoked neural activity interacts systematically with neural connectivity patterns, leading to network responses of either greater or lesser dimensionality than the stimulus.
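The dimensionality notion used throughout this abstract (the number of modes that population activity independently explores) is commonly summarized by the participation ratio of the covariance eigenvalues. Below is a minimal sketch of that quantity on toy data; it illustrates the measure discussed here, not the authors' exact estimator or network model.

```python
# Illustrative sketch (assumed estimator): participation ratio
# PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where lambda_i are eigenvalues of
# the population covariance matrix. PR is near N for uncorrelated activity and
# near the number of shared modes for strongly coordinated activity.
import numpy as np

def participation_ratio(X):
    """X: (timepoints, neurons) activity matrix."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(2)
T, N = 2000, 100

# Uncorrelated "network": dimensionality close to the number of neurons.
independent = rng.normal(size=(T, N))

# Coordinated "network": activity confined to ~3 shared modes plus weak noise.
modes = rng.normal(size=(T, 3)) @ rng.normal(size=(3, N))
coordinated = modes + 0.1 * rng.normal(size=(T, N))

print(participation_ratio(independent))   # close to N
print(participation_ratio(coordinated))   # close to the number of shared modes
```

The contrast between the two cases mirrors the paper's point that dimensionality, unlike average pairwise correlation, directly counts the degrees of freedom the population actually explores.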
Affiliation(s)
- Stefano Recanatesi
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael A. Buice
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
- Eric Shea-Brown
- Center for Computational Neuroscience, University of Washington, Seattle, Washington, United States of America
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America

45
Dahmen D, Grün S, Diesmann M, Helias M. Second type of criticality in the brain uncovers rich multiple-neuron dynamics. Proc Natl Acad Sci U S A 2019; 116:13051-13060. [PMID: 31189590 PMCID: PMC6600928 DOI: 10.1073/pnas.1818972116]
Abstract
Cortical networks that have been found to operate close to a critical point exhibit joint activations of large numbers of neurons. However, in motor cortex of the awake macaque monkey, we observe very different dynamics: massively parallel recordings of 155 single-neuron spiking activities show weak fluctuations on the population level. This a priori suggests that motor cortex operates in a noncritical regime, which in models, has been found to be suboptimal for computational performance. However, here, we show the opposite: The large dispersion of correlations across neurons is the signature of a second critical regime. This regime exhibits a rich dynamical repertoire hidden from macroscopic brain signals but essential for high performance in such concepts as reservoir computing. An analytical link between the eigenvalue spectrum of the dynamics, the heterogeneity of connectivity, and the dispersion of correlations allows us to assess the closeness to the critical point.
Affiliation(s)
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, 52074 Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, 52425 Jülich, Germany
- JARA Institute Brain Structure-Function Relationships (INM-10), Jülich-Aachen Research Alliance, Jülich Research Centre, 52425 Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany

46
Beyeler M, Rounds EL, Carlson KD, Dutt N, Krichmar JL. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput Biol 2019; 15:e1006908. [PMID: 31246948 PMCID: PMC6597036 DOI: 10.1371/journal.pcbi.1006908]
Abstract
Supported by recent computational studies, there is increasing evidence that a wide range of neuronal responses can be understood as an emergent property of nonnegative sparse coding (NSC), an efficient population coding scheme based on dimensionality reduction and sparsity constraints. We review evidence that NSC might be employed by sensory areas to efficiently encode external stimulus spaces, by some associative areas to conjunctively represent multiple behaviorally relevant variables, and possibly by the basal ganglia to coordinate movement. In addition, NSC might provide a useful theoretical framework under which to understand the often complex and nonintuitive response properties of neurons in other brain areas. Although NSC might not apply to all brain areas (for example, motor or executive function areas) the success of NSC-based models, especially in sensory areas, warrants further investigation for neural correlates in other regions.
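The core operation behind nonnegative sparse coding, as reviewed above, is a nonnegative low-rank factorization of population responses. The sketch below shows plain nonnegative matrix factorization with Lee-Seung multiplicative updates on toy data; it is a generic illustration under assumed parameters, not the reviewed studies' (typically sparsity-constrained) implementations.

```python
# Illustrative sketch (assumed setup): factorize nonnegative "responses"
# V (neurons x stimuli) into nonnegative basis patterns W and coefficients H,
# V ~ W @ H, using the classic multiplicative update rules.
import numpy as np

rng = np.random.default_rng(3)

# Toy data generated from 4 nonnegative basis patterns.
true_W = rng.random((60, 4))
true_H = rng.random((4, 300))
V = true_W @ true_H

k = 4  # assumed rank of the factorization
W = rng.random((60, k))
H = rng.random((k, 300))
for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)   # update coefficients
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)   # update basis patterns

err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # small relative reconstruction error at the true rank
```

Because the updates are multiplicative, W and H stay nonnegative throughout, which is what lets the recovered basis patterns be read as parts-based "neural response types" in the NSC framework.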
Affiliation(s)
- Michael Beyeler
- Department of Psychology, University of Washington, Seattle, Washington, United States of America
- Institute for Neuroengineering, University of Washington, Seattle, Washington, United States of America
- eScience Institute, University of Washington, Seattle, Washington, United States of America
- Department of Computer Science, University of California, Irvine, California, United States of America
- Emily L. Rounds
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Kristofor D. Carlson
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Sandia National Laboratories, Albuquerque, New Mexico, United States of America
- Nikil Dutt
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America
- Jeffrey L. Krichmar
- Department of Computer Science, University of California, Irvine, California, United States of America
- Department of Cognitive Sciences, University of California, Irvine, California, United States of America

47
Neural variability quenching during decision-making: Neural individuality and its prestimulus complexity. Neuroimage 2019; 192:1-14. [DOI: 10.1016/j.neuroimage.2019.02.070]
48
Wärnberg E, Kumar A. Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007074. [PMID: 31150376 PMCID: PMC6586365 DOI: 10.1371/journal.pcbi.1007074]
Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to be constraining learning so that only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Here, we use three different models of spiking neural networks (echo-state networks, the Neural Engineering Framework and Efficient Coding) to demonstrate how the intrinsic manifold can be made a direct consequence of the circuit connectivity. Using this relationship between the circuit connectivity and the intrinsic manifold, we show that learning of patterns outside the intrinsic manifold corresponds to much larger changes in synaptic weights than learning of patterns within the intrinsic manifold. Assuming larger changes to synaptic weights requires extensive learning, this observation provides an explanation of why learning is easier when it does not require the neural activity to leave its intrinsic manifold.
Collapse
Affiliation(s)
- Emil Wärnberg
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Dept. of Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Arvind Kumar
- Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
| |
Collapse
|
49
|
Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli. Nat Neurosci 2019; 22:787-796. [PMID: 30936557 PMCID: PMC6516078 DOI: 10.1038/s41593-019-0364-9] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Accepted: 02/15/2019] [Indexed: 11/22/2022]
Abstract
Sensory stimuli can be recognized more rapidly when they are expected. This phenomenon depends on expectation affecting the cortical processing of sensory information. However, the mechanisms responsible for the effects of expectation on sensory circuits remain elusive. Here, we report a novel computational mechanism underlying the expectation-dependent acceleration of coding observed in the gustatory cortex of alert rats. We use a recurrent spiking network model with a clustered architecture capturing essential features of cortical activity, such as its intrinsically generated metastable dynamics. Relying on network theory and computer simulations, we propose that expectation exerts its function by modulating the intrinsically generated dynamics preceding taste delivery. Our model’s predictions were confirmed in the experimental data, demonstrating how the modulation of ongoing activity can shape sensory coding. Altogether, these results provide a biologically plausible theory of expectation and ascribe a new functional role to intrinsically generated, metastable activity.
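The proposed mechanism, in which anticipatory input modulates intrinsically generated metastable dynamics so that the coding state is reached sooner, can be caricatured with a noisy double-well system rather than the authors' clustered spiking network: an "expectation" input tilts the landscape toward the barrier before the stimulus, shortening the first-passage time from the ongoing state to the coding state. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_time(anticipatory_input, n_trials=100, dt=0.01, sigma=0.35):
    """Mean time for a noisy double-well system (a stand-in for a metastable
    cortical state) to escape the 'ongoing' well near x = -1 and reach the
    'coding' well (x > 0.8) once the stimulus arrives at t = 0."""
    stimulus = 0.15                      # stimulus drive (assumed)
    times = []
    for _ in range(n_trials):
        x, t = -1.0, 0.0
        while x < 0.8 and t < 200.0:     # cap runaway trials
            drift = x - x**3 + stimulus + anticipatory_input
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        times.append(t)
    return float(np.mean(times))

t_unexpected = first_passage_time(0.0)
t_expected = first_passage_time(0.15)    # expectation biases ongoing dynamics
print(t_unexpected, t_expected)
```

The anticipatory drive lowers the escape barrier of the ongoing state, so the expected condition crosses into the coding state faster on average, qualitatively matching the expectation-dependent acceleration of coding.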
Collapse
|
50
|
de Kamps M, Lepperød M, Lai YM. Computational geometry for modeling neural populations: From visualization to simulation. PLoS Comput Biol 2019; 15:e1006729. [PMID: 30830903 PMCID: PMC6417745 DOI: 10.1371/journal.pcbi.1006729] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2018] [Revised: 03/14/2019] [Accepted: 11/26/2018] [Indexed: 11/18/2022] Open
Abstract
The importance of a mesoscopic description level of the brain has now been well established. Rate-based models are widely used, but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond the characterization of a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs from these in that we do not make the diffusion approximation, nor do we reduce the state space to a single dimension (1D). We do not hard-code the neural model, but read in a grid describing its state space in the relevant simulation region. Novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and the stochastic process can be investigated independently. Currently, there is a trend to reduce complex high-dimensional neuron models to 2D ones as they offer a rich dynamical repertoire that is not available in 1D, such as limit cycles. We will demonstrate that our method is ideally suited to investigate noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we will argue that the study of 2D systems subject to noise is an important complement to the study of 1D systems.
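A toy stand-in for the approach can be sketched in Python. Instead of evolving the density on a state-space mesh as the paper's method does, the sketch below tracks a Monte Carlo ensemble of two-dimensional neurons (membrane potential plus adaptation) receiving finite Poisson synaptic jumps, with no diffusion approximation, and then histograms the joint density over the 2D state space. All model parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5000                       # ensemble size (particle stand-in for a density)
v = np.zeros(n)                # membrane potential
w = np.zeros(n)                # adaptation variable (second state-space dim)
dt, T = 1e-4, 0.2              # time step and duration (s)
tau_m, tau_w, b = 0.02, 0.1, 0.05
jump, in_rate = 0.05, 1500.0   # finite synaptic jump size and Poisson rate (Hz)

spikes = 0
for _ in range(int(T / dt)):
    # deterministic 2D flow of the neuron model
    v += dt * (-v - w) / tau_m
    w += dt * (-w) / tau_w
    # finite-size Poisson input jumps: no diffusion approximation
    v += jump * rng.poisson(in_rate * dt, size=n)
    fired = v >= 1.0
    spikes += int(fired.sum())
    v[fired] = 0.0             # reset
    w[fired] += b              # spike-triggered adaptation
rate = spikes / (n * T)        # population firing rate (Hz)

# joint density over the 2D state space, the method's central object
density, v_edges, w_edges = np.histogram2d(v, w, bins=40, density=True)
print(rate)
```

The joint (v, w) histogram is far more informative than the marginal over v alone, which is the abstract's argument for working with the full 2D density rather than a 1D reduction.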
Collapse
Affiliation(s)
- Marc de Kamps
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom
| | - Mikkel Lepperød
- Institute of Basic Medical Sciences, and Center for Integrative Neuroplasticity, University of Oslo, Oslo, Norway
| | - Yi Ming Lai
- Institute for Artificial and Biological Intelligence, University of Leeds, Leeds, West Yorkshire, United Kingdom; currently at the School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
| |
Collapse
|