1. Gort J. Emergence of Universal Computations Through Neural Manifold Dynamics. Neural Comput 2024;36:227-270. PMID: 38101328. DOI: 10.1162/neco_a_01631.
Abstract
There is growing evidence that many forms of neural computation may be implemented by low-dimensional dynamics unfolding at the population scale. However, neither the connectivity structure nor the general capabilities of these embedded dynamical processes are currently understood. In this work, the two most common formalisms of firing-rate models are evaluated using tools from analysis, topology, and nonlinear dynamics in order to provide plausible explanations for these problems. It is shown that low-rank structured connectivities predict the formation of invariant and globally attracting manifolds in all these models. Regarding the dynamics arising in these manifolds, it is proved they are topologically equivalent across the considered formalisms. This letter also shows that under the low-rank hypothesis, the flows emerging in neural manifolds, including input-driven systems, are universal, which broadens previous findings. It explores how low-dimensional orbits can bear the production of continuous sets of muscular trajectories, the implementation of central pattern generators, and the storage of memory states. These dynamics can robustly simulate any Turing machine over arbitrary bounded memory strings, virtually endowing rate models with the power of universal computation. In addition, the letter shows how the low-rank hypothesis predicts the parsimonious correlation structure observed in cortical activity. Finally, it discusses how this theory could provide a useful tool from which to study neuropsychological phenomena using mathematical methods.
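The low-rank mechanism summarized in this abstract can be illustrated with a toy simulation (a sketch under simplifying assumptions, not the letter's actual analysis): with rank-one connectivity J = m nᵀ/N, every component of activity orthogonal to m decays, so trajectories are attracted onto the one-dimensional manifold spanned by m.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_low_rank(N=200, T=20.0, dt=0.01):
    """Rate network  dx/dt = -x + (m n^T / N) tanh(x).

    The recurrent term always points along m, so all components of x
    orthogonal to m decay exponentially: activity is attracted to the
    one-dimensional manifold span{m}.
    """
    m = rng.standard_normal(N)
    n = rng.standard_normal(N)
    x = rng.standard_normal(N)        # random initial state, off the manifold
    for _ in range(int(T / dt)):
        kappa = n @ np.tanh(x) / N    # one-dimensional latent variable
        x += dt * (-x + kappa * m)
    return x, m

x, m = simulate_low_rank()
```

Starting from a random state, the final state lies (numerically) in span{m}; rank-R connectivities produce R-dimensional attracting manifolds in the same way.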
Affiliation(s)
- Joan Gort
- Facultat de Psicologia, Universitat Autònoma de Barcelona, 08193, Bellaterra, Barcelona, Spain
2. Ramon C, Graichen U, Gargiulo P, Zanow F, Knösche TR, Haueisen J. Spatiotemporal phase slip patterns for visual evoked potentials, covert object naming tasks, and insight moments extracted from 256 channel EEG recordings. Front Integr Neurosci 2023;17:1087976. PMID: 37384237. PMCID: PMC10293627. DOI: 10.3389/fnint.2023.1087976.
Abstract
Phase slips arise from state transitions in the coordinated activity of cortical neurons and can be extracted from EEG data. Phase slip rates (PSRs) were studied in high-density (256 channel) EEG data, sampled at 16.384 kHz, from five adult subjects during covert visual object naming tasks. Artifact-free data from 29 trials were averaged for each subject. The analysis looked for phase slips in the theta (4-7 Hz), alpha (7-12 Hz), beta (12-30 Hz), and low gamma (30-49 Hz) bands. The phase was calculated with the Hilbert transform, then unwrapped and detrended, and phase slip rates were computed in a 1.0 ms wide stepping window with a step size of 0.06 ms. Spatiotemporal plots of the PSRs were made using a montage layout of 256 equidistant electrode positions. The spatiotemporal profiles of EEG and PSRs during the stimulus and the first second of the post-stimulus period were examined in detail to study the visual evoked potentials and the different stages of visual object recognition in the visual, language, and memory areas. The activity areas of the PSRs differed from the EEG activity areas during both the stimulus and post-stimulus periods. Different stages of the insight moments during the covert object naming tasks were examined from the PSRs; the 'Eureka' moment occurred at about 512 ± 21 ms. Overall, these results indicate that information about cortical phase transitions can be derived from measured EEG data and used in a complementary fashion to study the cognitive behavior of the brain.
Affiliation(s)
- Ceon Ramon
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Regional Epilepsy Center, Harborview Medical Center, University of Washington, Seattle, WA, United States
- Uwe Graichen
- Department of Biostatistics and Data Science, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Reykjavik University, Reykjavik, Iceland
- Department of Science, Landspitali University Hospital, Reykjavik, Iceland
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
3. Coppolino S, Giacopelli G, Migliore M. Sequence Learning in a Single Trial: A Spiking Neurons Model Based on Hippocampal Circuitry. IEEE Trans Neural Netw Learn Syst 2022;33:3178-3183. PMID: 33481720. DOI: 10.1109/tnnls.2021.3049281.
Abstract
In contrast with our everyday experience using brain circuits, it can take a prohibitively long time to train a computational system to produce the correct sequence of outputs in the presence of a series of inputs. This suggests that something important is missing in the way in which models are trying to reproduce basic cognitive functions. In this work, we introduce a new neuronal network architecture that is able to learn, in a single trial, an arbitrarily long sequence of known objects. The key point of the model is the explicit use of mechanisms and circuitry observed in the hippocampus, which allow the model to reach a level of efficiency and accuracy that, to the best of our knowledge, is not possible with abstract network implementations. By directly following the natural system's layout and circuitry, this type of implementation has an additional advantage: the results can be more easily compared to experimental data, allowing a deeper and more direct understanding of the mechanisms underlying cognitive functions and dysfunctions, and opening the way to a new generation of learning architectures.
4. Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020;16:e1007790. PMID: 32841234. PMCID: PMC7505475. DOI: 10.1371/journal.pcbi.1007790.
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated with synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered an indication of reduced computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, to good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
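A minimal sketch of the manipulation studied here, under the simplifying assumption that homeostasis restores each neuron's summed E-to-E input weight (the paper implements homeostasis via the firing rate itself, not a direct weight-sum constraint):

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_compensate(W, exc, loss_frac):
    """Delete a random fraction of excitatory-excitatory synapses, then
    up-scale the survivors so each neuron's summed E->E input weight is
    restored. A simple stand-in for firing rate homeostasis.
    """
    W = W.copy()
    ee = np.ix_(exc, exc)                       # E->E block; rows index targets
    block = W[ee]
    before = block.sum(axis=1)                  # summed E->E input per neuron
    lost = (block != 0) & (rng.random(block.shape) < loss_frac)
    block[lost] = 0.0
    after = block.sum(axis=1)
    scale = np.divide(before, after, out=np.ones_like(before), where=after > 0)
    W[ee] = block * scale[:, None]
    return W

exc = np.arange(80)                             # 80 excitatory, 20 inhibitory
W = (rng.random((100, 100)) < 0.1) * rng.uniform(0.1, 1.0, (100, 100))
W2 = prune_and_compensate(W, exc, loss_frac=0.3)
```

After pruning, each excitatory neuron has fewer but stronger E-to-E synapses; connections involving inhibitory neurons are untouched.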
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
5. Heiberg T, Kriener B, Tetzlaff T, Einevoll GT, Plesser HE. Firing-rate models for neurons with a broad repertoire of spiking behaviors. J Comput Neurosci 2018;45:103-132. PMID: 30146661. PMCID: PMC6208914. DOI: 10.1007/s10827-018-0693-9.
Abstract
Capturing the response behavior of spiking neuron models with rate-based models facilitates the investigation of neuronal networks using powerful methods for rate-based network dynamics. To this end, we investigate the responses of two widely used neuron model types, the Izhikevich and augmented multi-adaptive threshold (AMAT) models, to spiking inputs ranging from steps to natural spike data. We find (i) that linear-nonlinear firing rate models fitted to test data can describe the firing-rate responses of AMAT and Izhikevich spiking neuron models in many cases; (ii) that firing-rate responses are generally too complex to be captured by first-order low-pass filters and require band-pass filters instead; (iii) that linear-nonlinear models capture the responses of AMAT models better than those of Izhikevich models; (iv) that the wide range of response types evoked by current-injection experiments collapses to a few response types when neurons are driven by stationary or sinusoidally modulated Poisson input; and (v) that AMAT and Izhikevich models show different responses to spike input despite identical responses to current injections. Together, these findings suggest that rate-based models of network dynamics may capture a wider range of neuronal response properties by incorporating second-order band-pass filters fitted to responses of spiking model neurons. Such models may help bring rate-based network modeling closer to the reality of biological neuronal networks.
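Finding (ii) can be sketched with a toy linear-nonlinear rate model whose linear stage is a second-order band-pass (zero-area, biphasic) kernel rather than a first-order low-pass; all parameter values below are illustrative, not fits from the paper. For a step input, the band-pass stage produces a transient and a near-zero sustained response:

```python
import numpy as np

def bandpass_kernel(t, tau_r, tau_d):
    """Biphasic kernel with (approximately) zero area: a second-order
    band-pass filter, in contrast to a single-exponential low-pass."""
    return np.exp(-t / tau_r) / tau_r - np.exp(-t / tau_d) / tau_d

def ln_rate(stim, dt, tau_r=0.002, tau_d=0.02, gain=1.0, theta=0.0):
    """Linear-nonlinear rate model: band-pass filter the input drive,
    then apply a rectifying static nonlinearity."""
    t = np.arange(0.0, 5 * tau_d, dt)
    h = bandpass_kernel(t, tau_r, tau_d)
    drive = np.convolve(stim, h)[: len(stim)] * dt
    return gain * np.maximum(drive - theta, 0.0)

dt = 1e-4
r = ln_rate(np.ones(3000), dt)   # response to a step input
```

The rate peaks shortly after step onset and relaxes back toward zero, the step-response signature that a first-order low-pass filter cannot reproduce.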
Affiliation(s)
- Thomas Heiberg
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Birgit Kriener
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany; Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany; JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Gaute T Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
- Hans E Plesser
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
6. Cain N, Iyer R, Koch C, Mihalas S. The Computational Properties of a Simplified Cortical Column Model. PLoS Comput Biol 2016;12:e1005045. PMID: 27617444. PMCID: PMC5019422. DOI: 10.1371/journal.pcbi.1005045.
Abstract
The mammalian neocortex has a repetitious, laminar structure and performs functions integral to higher cognitive processes, including sensory perception, memory, and coordinated motor output. What computations does this circuitry subserve that link these unique structural elements to their function? Potjans and Diesmann (2014) parameterized a four-layer, two-cell-type (i.e., excitatory and inhibitory) model of a cortical column with homogeneous populations and cell-type-dependent connection probabilities. We implement a version of their model using a displacement integro-partial differential equation (DiPDE) population density model. This approach, exact in the limit of large homogeneous populations, provides a fast numerical method to solve equations describing the full probability density distribution of neuronal membrane potentials, and lends itself to quickly analyzing the mean response properties of population-scale firing-rate dynamics. We use this strategy to examine the input-output relationship of the Potjans and Diesmann cortical column model to understand its computational properties. When inputs are constrained to jointly and equally target excitatory and inhibitory neurons, we find a large linear regime in which the effect of a multi-layer input signal can be reduced to a linear combination of component signals. One of these, a simple subtractive operation, can act as an error signal passed between hierarchical processing stages.

What computations do existing biophysically plausible models of cortex perform on their inputs, and how do these computations relate to theories of cortical processing? We begin with a computational model of cortical tissue and seek to understand its input/output transformations. This approach limits confirmation bias and differs from the more constructionist approach of starting with a computational theory and then creating a model that implements its necessary features. Here we choose a population-level modeling technique that does not sacrifice accuracy, as it well approximates the mean firing rate of a population of leaky integrate-and-fire neurons. We extend this approach to simulate recurrently coupled neural populations, and characterize the computational properties of the Potjans and Diesmann cortical column model. We find that this model is capable of computing linear operations and naturally generates a subtraction operation implicated in theories of predictive coding. Although our quantitative findings are restricted to this particular model, we demonstrate that these conclusions are not highly sensitive to the model parameterization.
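The "large linear regime" claim can be made concrete with a superposition test. The sketch below applies that test to a deliberately simple linear rate network (where superposition holds exactly), not to the DiPDE column model itself:

```python
import numpy as np

rng = np.random.default_rng(4)

def steady_rates(W, ext, tau=0.01, dt=0.001, steps=3000):
    """Steady-state rates of the linear network  tau dr/dt = -r + W r + ext.
    In a linear regime, responses superpose: r(a + b) = r(a) + r(b)."""
    r = np.zeros(W.shape[0])
    for _ in range(steps):
        r += dt / tau * (-r + W @ r + ext)
    return r

n = 20
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for stability
a, b = rng.standard_normal(n), rng.standard_normal(n)
```

The same check with `a - b` exhibits the subtractive operation highlighted above: the response to a difference of inputs equals the difference of the responses.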
Affiliation(s)
- Nicholas Cain
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ramakrishnan Iyer
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Stefan Mihalas
- Allen Institute for Brain Science, Seattle, Washington, United States of America
7. Mannella F, Baldassarre G. Selection of cortical dynamics for motor behaviour by the basal ganglia. Biol Cybern 2015;109:575-595. PMID: 26537483. PMCID: PMC4656718. DOI: 10.1007/s00422-015-0662-6.
Abstract
The basal ganglia and cortex are strongly implicated in the control of motor preparation and execution. Re-entrant loops between these two brain areas are thought to determine the selection of motor repertoires for instrumental action. The nature of neural encoding and processing in the motor cortex, as well as the way in which selection by the basal ganglia acts on them, is currently debated. The classic view of the motor cortex as implementing a direct mapping from perception to muscular responses is challenged by proposals viewing it as a set of dynamical systems controlling muscles. Consequently, the common idea that a competition between relatively segregated cortico-striato-nigro-thalamo-cortical channels selects patterns of activity in the motor cortex is no longer sufficient to explain how action selection works. Here, we contribute to developing the dynamical view of the basal ganglia-cortical system by proposing a computational model in which a thalamo-cortical dynamical neural reservoir is modulated by disinhibitory selection of the basal ganglia, guided by top-down information, so that it responds with different dynamics to the same bottom-up input. The model shows how different motor trajectories can thus be produced by controlling the same set of joint actuators. Furthermore, the model shows how the basal ganglia might modulate cortical dynamics by preserving coarse-grained spatiotemporal information throughout cortico-cortical pathways.
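The selection mechanism, disinhibition releasing different subsets of a fixed reservoir so that identical bottom-up input yields different dynamics, can be caricatured as follows (a toy echo-state sketch; the gate vectors, sizes, and sinusoidal input are assumptions, not the authors' thalamo-cortical model):

```python
import numpy as np

def reservoir_run(gate, steps=300, n=100, seed=5):
    """Leaky-tanh reservoir driven by a fixed input sequence.

    'gate' is a binary vector standing in for basal ganglia
    disinhibition: units with gate 0 are held inhibited (clamped to
    zero), so the same bottom-up input unfolds into different dynamics.
    """
    rng = np.random.default_rng(seed)           # same weights for every call
    W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)
    w_in = rng.standard_normal(n)
    x = np.zeros(n)
    traj = []
    for t in range(steps):
        u = np.sin(0.1 * t)                     # identical input in all runs
        x = gate * np.tanh(W @ x + w_in * u)
        traj.append(x.copy())
    return np.asarray(traj)

full = reservoir_run(np.ones(100))                       # all channels released
half = reservoir_run(np.r_[np.ones(50), np.zeros(50)])   # half inhibited
```

Even the units that remain active in both runs follow different trajectories, because the recurrent input they receive depends on which subset was released.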
Affiliation(s)
- Francesco Mannella
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Via San Martino della Battaglia 44, 00185, Rome, Italy
- Gianluca Baldassarre
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Via San Martino della Battaglia 44, 00185, Rome, Italy
8. Einevoll GT, Kayser C, Logothetis NK, Panzeri S. Modelling and analysis of local field potentials for studying the function of cortical circuits. Nat Rev Neurosci 2013;14:770-785. PMID: 24135696. DOI: 10.1038/nrn3599.
Abstract
The past decade has witnessed a renewed interest in cortical local field potentials (LFPs), that is, extracellularly recorded potentials with frequencies of up to ~500 Hz. This is due to both the advent of multielectrodes, which has enabled recording of LFPs at tens to hundreds of sites simultaneously, and the insight that LFPs offer a unique window into key integrative synaptic processes in cortical populations. However, owing to its numerous potential neural sources, the LFP is more difficult to interpret than are spikes. Careful mathematical modelling and analysis are needed to take full advantage of the opportunities that this signal offers in understanding signal processing in cortical circuits and, ultimately, the neural basis of perception and cognition.
Affiliation(s)
- Gaute T Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1432 Ås, Norway
9. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of neural-network activity by inhibitory feedback. PLoS Comput Biol 2012;8:e1002596. PMID: 23133368. PMCID: PMC3487539. DOI: 10.1371/journal.pcbi.1002596.
Abstract
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population-averaged correlations in the linear network model: in purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations; in excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
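The core effect, inhibitory feedback of the population activity suppressing population-rate fluctuations, can be reproduced in a few lines with a linear toy model (a caricature; the paper analyzes spiking and linear network models in far more detail):

```python
import numpy as np

def population_variance(g, n=100, dt=0.01, steps=50_000, seed=6):
    """Variance of the population-averaged activity of n linear rate
    units sharing a common noise source.

    g scales feedback of the population mean onto every unit; g < 0
    models inhibitory feedback, which deepens the effective leak of the
    compound activity and so suppresses population-rate fluctuations.
    """
    rng = np.random.default_rng(seed)
    a = np.zeros(n)
    mean_trace = np.empty(steps)
    for i in range(steps):
        shared = rng.standard_normal()          # common presynaptic input
        private = rng.standard_normal(n)        # independent per-unit noise
        a += dt * (-a + g * a.mean()) + np.sqrt(dt) * (shared + private)
        mean_trace[i] = a.mean()
    return mean_trace[steps // 10:].var()       # discard the transient

v_open = population_variance(0.0)    # no feedback
v_fb = population_variance(-4.0)     # inhibitory feedback
```

With feedback gain g, the population mean obeys an Ornstein-Uhlenbeck process with leak (1 - g), so its stationary variance shrinks by roughly a factor 1/(1 - g).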
Affiliation(s)
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Research Center Jülich, Jülich, Germany