1. Antolík J, Cagnol R, Rózsa T, Monier C, Frégnac Y, Davison AP. A comprehensive data-driven model of cat primary visual cortex. PLoS Comput Biol 2024; 20:e1012342. PMID: 39167628. DOI: 10.1371/journal.pcbi.1012342.
Abstract
Knowledge integration based on the relationship between structure and function of the neural substrate is one of the main targets of neuroinformatics and data-driven computational modeling. However, the multiplicity of data sources, the diversity of benchmarks, the mixing of observables of different natures, and the necessity of a long-term, systematic approach make such a task challenging. Here we present a first snapshot of a long-term integrative modeling program designed to address this issue in the domain of the visual system: a comprehensive spiking model of cat primary visual cortex. The presented model satisfies an extensive range of anatomical, statistical and functional constraints under a wide range of visual input statistics. In the presence of physiological levels of tonic stochastic bombardment by spontaneous thalamic activity, the modeled cortical reverberations self-generate a sparse asynchronous ongoing activity that quantitatively matches a range of experimentally measured statistics. When integrating feed-forward drive elicited by a high diversity of visual contexts, the simulated network produces a realistic, quantitatively accurate interplay between visually evoked excitatory and inhibitory conductances; contrast-invariant orientation-tuning width; center-surround interactions; and stimulus-dependent changes in the precision of the neural code. This integrative model offers insights into how the studied properties interact, contributing to a better understanding of visual cortical dynamics. It provides a basis for future development towards a comprehensive model of low-level perception.
Affiliation(s)
- Ján Antolík: Faculty of Mathematics and Physics, Charles University, Malostranské nám. 25, Prague 1, Czechia; Unit of Neuroscience, Information and Complexity (UNIC), CNRS FRE 3693, Gif-sur-Yvette, France; INSERM UMRI S 968, Sorbonne Université, UPMC Univ Paris 06, UMR S 968, CNRS, UMR 7210, Institut de la Vision, Paris, France
- Rémy Cagnol: Faculty of Mathematics and Physics, Charles University, Malostranské nám. 25, Prague 1, Czechia
- Tibor Rózsa: Faculty of Mathematics and Physics, Charles University, Malostranské nám. 25, Prague 1, Czechia
- Cyril Monier: Unit of Neuroscience, Information and Complexity (UNIC), CNRS FRE 3693, Gif-sur-Yvette, France; Institut des neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, Saclay, France
- Yves Frégnac: Unit of Neuroscience, Information and Complexity (UNIC), CNRS FRE 3693, Gif-sur-Yvette, France; Institut des neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, Saclay, France
- Andrew P Davison: Unit of Neuroscience, Information and Complexity (UNIC), CNRS FRE 3693, Gif-sur-Yvette, France; Institut des neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, Saclay, France
2. Kusch L, Diaz-Pier S, Klijn W, Sontheimer K, Bernard C, Morrison A, Jirsa V. Multiscale co-simulation design pattern for neuroscience applications. Front Neuroinform 2024; 18:1156683. PMID: 38410682. PMCID: PMC10895016. DOI: 10.3389/fninf.2024.1156683.
Abstract
Integration of information across heterogeneous sources creates added scientific value. Interoperability of data, tools and models is, however, difficult to accomplish across spatial and temporal scales. Here we introduce the toolbox Parallel Co-Simulation, which enables the interoperation of simulators operating at different scales. We provide a software co-design pattern and illustrate its functioning with a neuroscience example, in which individual regions of interest are simulated at the cellular level, allowing detailed mechanisms to be studied, while the remaining network is efficiently simulated at the population level. A workflow is illustrated for the use case of The Virtual Brain and NEST, in which a cellular-level model of the mouse hippocampal CA1 region is embedded into a full brain network, with micro- and macro-electrode recordings. This new tool allows knowledge to be integrated across scales in the same simulation framework and validated against multiscale experiments, thereby greatly widening the explanatory power of computational models.
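The synchronization loop at the heart of such a co-simulation can be conveyed by a toy sketch. This is not the paper's actual TVB-NEST coupling; the window length, baseline rate, relaxation dynamics, and coupling gain below are invustrative assumptions invented for illustration: a coarse population-rate model and a fine spiking model advance in lock-step and exchange signals at every synchronization window.

```python
import numpy as np

rng = np.random.default_rng(7)

dt_window = 10e-3   # synchronization window (s); an assumed value
n_cells = 100       # size of the toy spiking population
rate = 5.0          # Hz, state of the population-rate model
spike_feedback = 0.0

for window in range(100):
    # Coarse-model step: relax toward a 5 Hz baseline, driven by the
    # firing rate measured in the spiking model during the last window.
    rate += dt_window * (-(rate - 5.0) + 0.5 * spike_feedback)
    # Fine-model step: draw Poisson spikes at the current rate for
    # n_cells neurons over the window, then report the rate back.
    n_spikes = rng.poisson(rate * dt_window * n_cells)
    spike_feedback = n_spikes / (n_cells * dt_window)
```

The essential design choice mirrored here is that neither model ever sees the other's internal state: only aggregate signals cross the interface at window boundaries.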
Affiliation(s)
- Lionel Kusch: Institut de Neurosciences des Systèmes (INS), UMR1106, Aix-Marseille Université, Marseilles, France
- Sandra Diaz-Pier: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Wouter Klijn: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Kim Sontheimer: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Christophe Bernard: Institut de Neurosciences des Systèmes (INS), UMR1106, Aix-Marseille Université, Marseilles, France
- Abigail Morrison: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany; Forschungszentrum Jülich GmbH, IAS-6/INM-6, JARA, Jülich, Germany; Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Viktor Jirsa: Institut de Neurosciences des Systèmes (INS), UMR1106, Aix-Marseille Université, Marseilles, France
3. Skaar JEW, Haug N, Stasik AJ, Einevoll GT, Tøndel K. Metamodelling of a two-population spiking neural network. PLoS Comput Biol 2023; 19:e1011625. PMID: 38032904. PMCID: PMC10688753. DOI: 10.1371/journal.pcbi.1011625.
Abstract
In computational neuroscience, hypotheses are often formulated as bottom-up mechanistic models of the systems in question, consisting of differential equations that can be numerically integrated forward in time. Candidate models can then be validated by comparison against experimental data. The outputs of neural network models depend on neuron parameters, connectivity parameters, and other model inputs. Successful model fitting requires sufficient exploration of the model parameter space, which can be computationally demanding. Additionally, identifying degeneracy in the parameters, i.e. different combinations of parameter values that produce similar outputs, is of interest, as these define the subset of parameter values consistent with the data. In this computational study, we apply metamodels to a two-population recurrent spiking network of point neurons, the so-called Brunel network. Metamodels are data-driven approximations to more complex models with more desirable computational properties, which can be run considerably faster than the original model. Specifically, we apply and compare two different metamodelling techniques, masked autoregressive flows (MAF) and deep Gaussian process regression (DGPR), to estimate the power spectra of two different signals: the population spiking activities and the local field potential (LFP). We find that the metamodels are able to accurately model the power spectra in the asynchronous irregular regime, and that the DGPR metamodel provides a more accurate representation of the simulator than the MAF metamodel. Using the metamodels, we estimate the posterior probability distributions over parameters given observed simulator outputs, separately for the LFP and the population spiking activities. We find that these distributions correctly identify parameter combinations that give similar model outputs, and that some parameters are significantly more constrained by observing the LFP than by observing the population spiking activities.
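The core idea of a metamodel, a cheap data-driven stand-in for an expensive simulator, can be sketched in a few lines. The snippet below is a deliberately simplified illustration using a polynomial fit to a made-up one-parameter "simulator"; it is not the paper's MAF or DGPR machinery, and the Gaussian-bump target and grid sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(g):
    # Toy stand-in for an expensive spiking-network simulation:
    # maps one parameter g to a scalar summary output, with noise.
    return np.exp(-(g - 4.0) ** 2) + 0.01 * rng.standard_normal()

# Run the expensive simulator on a coarse grid of parameter values.
g_train = np.linspace(2.0, 6.0, 21)
y_train = np.array([simulator(g) for g in g_train])

# Fit a cheap metamodel to the (parameter, output) pairs; here a
# polynomial stands in for the paper's flow / deep-GP surrogates.
metamodel = np.poly1d(np.polyfit(g_train, y_train, deg=6))

# The metamodel can now be evaluated densely at negligible cost,
# e.g. to locate the parameter value maximizing the summary output.
g_dense = np.linspace(2.0, 6.0, 401)
g_best = g_dense[np.argmax(metamodel(g_dense))]
```

Once fitted, dense parameter-space exploration or posterior estimation touches only the surrogate, never the original simulator.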
Affiliation(s)
- Jan-Eirik W. Skaar: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Nicolai Haug: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander J. Stasik: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
- Gaute T. Einevoll: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
- Kristin Tøndel: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
4. Manninen T, Aćimović J, Linne ML. Analysis of Network Models with Neuron-Astrocyte Interactions. Neuroinformatics 2023; 21:375-406. PMID: 36959372. PMCID: PMC10085960. DOI: 10.1007/s12021-023-09622-w.
Abstract
Neural networks, composed of many neurons and governed by complex interactions between them, are a widely accepted formalism for modeling and exploring global dynamics and emergent properties in brain systems. In the past decades, experimental evidence of computationally relevant neuron-astrocyte interactions, as well as of the astrocytic modulation of global neural dynamics, has accumulated. These findings motivated advances in computational glioscience and inspired several models integrating mechanisms of neuron-astrocyte interactions into the standard neural network formalism. These models were developed to study, for example, synchronization, information transfer, synaptic plasticity, and hyperexcitability, as well as classification tasks and hardware implementations. Here we focus on network models of at least two neurons interacting bidirectionally with at least two astrocytes that include explicitly modeled astrocytic calcium dynamics. In this study, we analyze the evolution of these models and the biophysical, biochemical, cellular, and network mechanisms used to construct them. Based on our analysis, we propose how to systematically describe and categorize interaction schemes between cells in neuron-astrocyte networks. We additionally study the models in view of the existing experimental data and present future perspectives. Our analysis is an important first step towards understanding the astrocytic contribution to brain functions. However, more advances are needed to collect comprehensive data about astrocyte morphology and physiology in vivo and to better integrate them into data-driven computational models. Broadening the discussion about theoretical approaches and expanding the computational tools are necessary to better understand astrocytes' roles in brain functions.
Affiliation(s)
- Tiina Manninen: Faculty of Medicine and Health Technology, Tampere University, Korkeakoulunkatu 3, FI-33720, Tampere, Finland
- Jugoslava Aćimović: Faculty of Medicine and Health Technology, Tampere University, Korkeakoulunkatu 3, FI-33720, Tampere, Finland
- Marja-Leena Linne: Faculty of Medicine and Health Technology, Tampere University, Korkeakoulunkatu 3, FI-33720, Tampere, Finland
5. Garnier Artiñano T, Andalibi V, Atula I, Maestri M, Vanni S. Biophysical parameters control signal transfer in spiking network. Front Comput Neurosci 2023; 17:1011814. PMID: 36761840. PMCID: PMC9905747. DOI: 10.3389/fncom.2023.1011814.
Abstract
Introduction: Information transmission and representation in both natural and artificial networks depend on connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and post-synaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on their model to explore how basic membrane properties and synaptic delays affect information transfer.
Methods: The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive exponential integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three different parameter regimes, which comprised either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime with output units optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error.
Results: Biophysical parameters had a major impact on the information transfer metrics. Classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall, and particularly low reconstruction error, was more dependent on higher firing rates in LIF units. In AdEx units, the firing rates were lower and less information was transferred, but interestingly the highest information transmission rates no longer overlapped with the highest firing rates.
Discussion: Our findings can be related to the predictive coding theory of the cerebral cortex and may suggest information transfer qualities as a phenomenological quality of biological cells.
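How firing depends on the basic membrane properties that the study manipulates can be seen in a minimal leaky integrate-and-fire unit. This sketch uses generic textbook parameter values, not the paper's, and a simple Euler integration:

```python
import numpy as np

def lif_spike_count(i_ext, tau_m=20e-3, v_rest=-70e-3, v_thresh=-50e-3,
                    v_reset=-70e-3, r_m=100e6, dt=1e-4, t_sim=1.0):
    """Spike count of an Euler-integrated leaky integrate-and-fire
    unit driven by a constant current i_ext (amperes)."""
    v = v_rest
    spikes = 0
    for _ in range(int(round(t_sim / dt))):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:
            v = v_reset
            spikes += 1
    return spikes

# Subthreshold drive produces no spikes; stronger drive produces
# regular firing; a slower membrane (larger tau_m) fires less.
quiet = lif_spike_count(0.15e-9)
fast = lif_spike_count(0.40e-9)
slow = lif_spike_count(0.40e-9, tau_m=40e-3)
```

Already in this single unit, changing one biophysical parameter (here the membrane time constant) changes the firing rate, and hence how much signal can be carried downstream.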
Affiliation(s)
- Tomás Garnier Artiñano: Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland; Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Vafa Andalibi: Department of Computer Science, Indiana University Bloomington, Bloomington, IN, United States
- Iiris Atula: Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland; Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Matteo Maestri: Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland; Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Simo Vanni (corresponding author): Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland; Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland; Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
6. Beggs JM. Addressing skepticism of the critical brain hypothesis. Front Comput Neurosci 2022; 16:703865. PMID: 36185712. PMCID: PMC9520604. DOI: 10.3389/fncom.2022.703865.
Abstract
The hypothesis that living neural networks operate near a critical phase transition point has received substantial discussion. This "criticality hypothesis" is potentially important because experiments and theory show that optimal information processing and health are associated with operating near the critical point. Despite the promise of this idea, there have been several objections to it. While earlier objections have already been addressed, the more recent critiques of Touboul and Destexhe have not yet been fully met. The purpose of this paper is to describe their objections and offer responses. Their first objection is that the well-known Brunel model for cortical networks does not display a peak in mutual information near its phase transition, in apparent contradiction to the criticality hypothesis. In response I show that it does have such a peak near the phase transition point, provided it is not strongly driven by random inputs. Their second objection is that even simple models like a coin flip can satisfy multiple criteria of criticality. This suggests that the emergent criticality claimed to exist in cortical networks is just the consequence of a random walk put through a threshold. In response I show that while such processes can produce many signatures of criticality, these signatures (1) do not emerge from collective interactions, (2) do not support information processing, and (3) do not have long-range temporal correlations. Because experiments show these three features are consistently present in living neural networks, such random walk models are inadequate. Nevertheless, I conclude that these objections have been valuable for refining research questions and should always be welcomed as a part of the scientific process.
Affiliation(s)
- John M. Beggs (corresponding author): Department of Physics, Indiana University Bloomington, Bloomington, IN, United States; Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
7. Connectivity concepts in neuronal network modeling. PLoS Comput Biol 2022; 18:e1010086. PMID: 36074778. PMCID: PMC9455883. DOI: 10.1371/journal.pcbi.1010086.
Abstract
Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent, and one reason is a lack of readily applicable standards and tools for model description. Our work aims to advance complete and concise descriptions of network connectivity but also to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
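Two of the basic connectivity concepts at issue, a probabilistic pairwise-Bernoulli rule and a deterministic fixed in-degree rule, can be contrasted in a short sketch. The population sizes and parameters below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_source, n_target, k_in, p = 100, 80, 10, 0.1

# Probabilistic concept: every source-target pair is connected
# independently with probability p (pairwise Bernoulli).
bernoulli = rng.random((n_target, n_source)) < p

# Deterministic-degree concept: every target neuron draws exactly
# k_in distinct sources (fixed in-degree).
fixed_in = np.zeros((n_target, n_source), dtype=bool)
for t in range(n_target):
    srcs = rng.choice(n_source, size=k_in, replace=False)
    fixed_in[t, srcs] = True

# Same expected in-degree (p * n_source == k_in here), but only the
# Bernoulli rule produces in-degree variability across targets.
in_deg_bernoulli = bernoulli.sum(axis=1)
in_deg_fixed = fixed_in.sum(axis=1)
```

A published description that says only "neurons are connected with probability 0.1" leaves exactly this distinction, and hence the in-degree statistics, ambiguous, which is the kind of ambiguity the proposed concepts are meant to eliminate.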
8. Linne ML, Aćimović J, Saudargiene A, Manninen T. Neuron-Glia Interactions and Brain Circuits. Adv Exp Med Biol 2022; 1359:87-103. PMID: 35471536. DOI: 10.1007/978-3-030-89439-9_4.
Abstract
Recent evidence suggests that glial cells take an active role in a number of brain functions that were previously attributed solely to neurons. For example, astrocytes, one type of glial cells, have been shown to promote coordinated activation of neuronal networks, modulate sensory-evoked neuronal network activity, and influence brain state transitions during development. This reinforces the idea that astrocytes not only provide the "housekeeping" for the neurons, but that they also play a vital role in supporting and expanding the functions of brain circuits and networks. Despite this accumulated knowledge, the field of computational neuroscience has mostly focused on modeling neuronal functions, ignoring the glial cells and the interactions they have with the neurons. In this chapter, we introduce the biology of neuron-glia interactions, summarize the existing computational models and tools, and emphasize the glial properties that may be important in modeling brain functions in the future.
Affiliation(s)
- Marja-Leena Linne: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Jugoslava Aćimović: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Ausra Saudargiene: Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania; Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Tiina Manninen: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
9. Herbers P, Calvo I, Diaz-Pier S, Robles OD, Mata S, Toharia P, Pastor L, Peyser A, Morrison A, Klijn W. ConGen—A Simulator-Agnostic Visual Language for Definition and Generation of Connectivity in Large and Multiscale Neural Networks. Front Neuroinform 2022; 15:766697. PMID: 35069166. PMCID: PMC8777257. DOI: 10.3389/fninf.2021.766697.
Abstract
An open challenge on the road to unraveling the brain's multilevel organization is establishing techniques to investigate connectivity and dynamics at different scales in time and space, as well as the links between them. This work focuses on the design of a framework, ConGen, that facilitates the generation of multiscale connectivity in large neural networks using a symbolic visual language capable of representing the model at different structural levels. This symbolic language allows researchers to create and visually analyze the generated networks independently of the simulator to be used, since the visual model is translated into a simulator-independent language. The simplicity of the front-end visual representation, together with the simulator independence provided by the back-end translation, combine into a framework to enhance collaboration among scientists with expertise at different scales of abstraction and from different fields. On the basis of two use cases, we introduce the features and possibilities of our proposed visual language and associated workflow. We demonstrate that ConGen enables the creation, editing, and visualization of multiscale biological neural networks and provides a whole workflow to produce simulation scripts from the visual representation of the model.
Affiliation(s)
- Patrick Herbers: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Iago Calvo: Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain
- Sandra Diaz-Pier: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Oscar D. Robles: Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain; Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Susana Mata: Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain; Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Pablo Toharia: Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain; DATSI, ETSIINF, Universidad Politécnica de Madrid, Madrid, Spain
- Luis Pastor: Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain; Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Alexander Peyser: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Wouter Klijn: Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
10. Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. PMID: 35002599. PMCID: PMC8740282. DOI: 10.3389/fnins.2021.757790.
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for computational neuroscience and neuromorphic computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the numerical resolution of synaptic weights appears to be a natural strategy to reduce memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
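The abstract's central point, that naive rounding distorts the weight statistics while a moment-matched discretization need not, can be illustrated with a toy weight distribution. The grid spacing and the two-point scheme below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference: high-resolution synaptic weights, normally distributed.
w = rng.normal(loc=0.1, scale=0.01, size=100_000)
mu, sigma = w.mean(), w.std()

# Naive discretization: round every weight to a coarse grid. This
# generally distorts the variance of the summed synaptic input.
grid = 0.02
w_naive = np.round(w / grid) * grid

# Moment-matched two-point discretization: place alternate synapses
# at mu - sigma and mu + sigma, preserving mean and variance exactly.
signs = np.where(np.arange(w.size) % 2 == 0, -1.0, 1.0)
w_matched = mu + signs * sigma
```

With only two weight values, the moment-matched scheme keeps the first two moments of the weight distribution intact, which is the condition the study identifies for leaving the firing statistics unaffected.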
Affiliation(s)
- Stefan Dasbach: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
11. Jordan J, Schmidt M, Senn W, Petrovici MA. Evolving interpretable plasticity for spiking networks. eLife 2021; 10:e66273. PMID: 34709176. PMCID: PMC8553337. DOI: 10.7554/eLife.66273.
Abstract
Continuous adaptation allows survival in an ever-changing world. Adjustments in the synaptic coupling strength between neurons are essential for this capability, setting us apart from simpler, hard-wired organisms. How these changes can be mathematically described at the phenomenological level, as so-called ‘plasticity rules’, is essential both for understanding biological information processing and for developing cognitively performant artificial systems. We suggest an automated approach for discovering biophysically plausible plasticity rules based on the definition of task families, associated performance measures and biophysical constraints. By evolving compact symbolic expressions, we ensure the discovered plasticity rules are amenable to intuitive understanding, fundamental for successful communication and human-guided generalization. We successfully apply our approach to typical learning scenarios and discover previously unknown mechanisms for learning efficiently from rewards, recover efficient gradient-descent methods for learning from target signals, and uncover various functionally equivalent STDP-like rules with tuned homeostatic mechanisms. Our brains are incredibly adaptive. Every day we form memories, acquire new knowledge or refine existing skills. This stands in contrast to our current computers, which typically can only perform pre-programmed actions. Our own ability to adapt is the result of a process called synaptic plasticity, in which the strength of the connections between neurons can change. To better understand brain function and build adaptive machines, researchers in neuroscience and artificial intelligence (AI) are modeling the underlying mechanisms. So far, most work towards this goal was guided by human intuition – that is, by the strategies scientists think are most likely to succeed. Despite the tremendous progress, this approach has two drawbacks. First, human time is limited and expensive. 
And second, researchers have a natural – and reasonable – tendency to incrementally improve upon existing models, rather than starting from scratch. Jordan, Schmidt et al. have now developed a new approach based on ‘evolutionary algorithms’. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of survival of the fittest. The approach exploits the increasing availability of cheap but powerful computers. Compared to its predecessors (or indeed human brains), it also uses search strategies that are less biased by previous models. The evolutionary algorithms were presented with three typical learning scenarios. In the first, the computer had to spot a repeating pattern in a continuous stream of input without receiving feedback on how well it was doing. In the second scenario, the computer received virtual rewards whenever it behaved in the desired manner – an example of reinforcement learning. Finally, in the third ‘supervised learning’ scenario, the computer was told exactly how much its behavior deviated from the desired behavior. For each of these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity to solve the new task successfully. Using evolutionary algorithms to study how computers ‘learn’ will provide new insights into how brains function in health and disease. It could also pave the way for developing intelligent machines that can better adapt to the needs of their users.
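The evolutionary search described above can be caricatured in a few lines. This is a deliberately minimal sketch, not the authors' method: the paper evolves compact symbolic expressions, whereas here only four coefficients of a fixed-form rule dw = a·pre·post + b·pre + c·post + d are evolved on a toy correlation-detection task. Every name, constant, and the fitness function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(coeffs, n_steps=200):
    """Score a candidate rule dw = a*pre*post + b*pre + c*post + d.
    Toy task: input 0 co-fires with the postsynaptic cell, input 1 fires
    at random; a good rule should end with w[0] > w[1]."""
    a, b, c, d = coeffs
    w = np.zeros(2)
    task_rng = np.random.default_rng(1)   # identical task for every candidate
    for _ in range(n_steps):
        post = task_rng.integers(0, 2)
        pre = np.array([post, task_rng.integers(0, 2)])
        w += 0.01 * (a * pre * post + b * pre + c * post + d)
    return w[0] - w[1]

def evolve(pop_size=20, n_gen=30):
    """Truncation-selection evolution over rule coefficients."""
    pop = rng.normal(size=(pop_size, 4))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]         # keep best half
        children = parents + 0.1 * rng.normal(size=parents.shape)  # mutate
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

best_rule, best_score = evolve()
print(best_rule, best_score)
```

The discovered coefficients favour the Hebbian pre·post term, which is the only term that separates the correlated from the uncorrelated synapse on this task.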
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Maximilian Schmidt
- Ascent Robotics, Tokyo, Japan; RIKEN Center for Brain Science, Tokyo, Japan
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland; Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
12
Martínez-Cañada P, Ness TV, Einevoll GT, Fellin T, Panzeri S. Computation of the electroencephalogram (EEG) from network models of point neurons. PLoS Comput Biol 2021; 17:e1008893. [PMID: 33798190 PMCID: PMC8046357 DOI: 10.1371/journal.pcbi.1008893] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 11/02/2020] [Revised: 04/14/2021] [Accepted: 03/18/2021] [Indexed: 12/28/2022]
Abstract
The electroencephalogram (EEG) is a major tool for non-invasively studying brain function and dysfunction. Comparing experimentally recorded EEGs with neural network models is important to better interpret EEGs in terms of neural mechanisms. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point neurons cannot generate an EEG, as EEG generation requires spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of a rodent's EEG with quantities defined in point-neuron network models. We constructed different approximations (or proxies) of the EEG signal that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and combinations of synaptic currents. We then evaluated how well each proxy reconstructed a ground-truth EEG obtained when the synaptic currents of the LIF model network were fed into a three-dimensional network model of multicompartmental neurons with realistic morphologies. Proxies based on linear combinations of AMPA and GABA currents performed better than proxies based on firing rates or membrane potentials. A new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states. The new linear proxies explained 85-95% of the variance of the ground-truth EEG for a wide range of network configurations including different cell morphologies, distributions of presynaptic inputs, positions of the recording electrode, and spatial extensions of the network. Non-linear EEG proxies using a convolutional neural network (CNN) on synaptic currents increased proxy performance by a further 2-8%. 
Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations thus facilitating a quantitative comparison between computational models and experimental EEG recordings.
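The core idea of the best-performing proxy — an optimized linear combination of time-shifted AMPA and GABA currents — can be sketched with synthetic data. Everything below is a stand-in: the currents are random traces rather than simulation output, and the "ground-truth EEG" is constructed by hand so that the fitting procedure has a known answer.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Stand-ins for population-summed synaptic currents from a point-neuron model.
ampa = rng.normal(size=n).cumsum() * 0.01 + rng.normal(size=n)
gaba = rng.normal(size=n).cumsum() * 0.01 + rng.normal(size=n)

# Synthetic "ground-truth EEG": the two currents enter with opposite signs
# and a common lag of 3 samples, plus measurement noise.
true_shift = 3
eeg = np.roll(ampa, true_shift) - 1.6 * np.roll(gaba, true_shift) \
      + 0.1 * rng.normal(size=n)

def fit_proxy(shift):
    """Least-squares weights for a proxy w1*AMPA(t-s) + w2*GABA(t-s);
    returns the variance explained and the fitted weights."""
    X = np.column_stack([np.roll(ampa, shift), np.roll(gaba, shift)])
    w, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    resid = eeg - X @ w
    return 1 - resid.var() / eeg.var(), w

# Scan candidate time shifts and keep the best-performing proxy.
scores = {s: fit_proxy(s)[0] for s in range(0, 8)}
best_shift = max(scores, key=scores.get)
print(best_shift, round(scores[best_shift], 3))
```

The scan recovers the built-in lag because only the correctly shifted regressors leave residual variance at the noise floor; with wrong shifts the currents decorrelate from the target.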
Affiliation(s)
- Pablo Martínez-Cañada
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Torbjørn V. Ness
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Gaute T. Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
- Tommaso Fellin
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Optical Approaches to Brain Function Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Stefano Panzeri
- Neural Coding Laboratory, Istituto Italiano di Tecnologia, Genova, Italy
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
13
Zachariou M, Roberts MJ, Lowet E, De Weerd P, Hadjipapas A. Empirically constrained network models for contrast-dependent modulation of gamma rhythm in V1. Neuroimage 2021; 229:117748. [PMID: 33460798 DOI: 10.1016/j.neuroimage.2021.117748] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 07/30/2020] [Revised: 11/28/2020] [Accepted: 01/07/2021] [Indexed: 11/29/2022]
Abstract
Gamma oscillations are thought to play a key role in neuronal network function and neuronal communication, yet the underlying generating mechanisms have not been fully elucidated to date. At least partly, this may be due to the fact that even in simple network models of interconnected inhibitory (I) and excitatory (E) neurons, many parameters remain unknown and are set based on practical considerations or by convention. Here, we mitigate this problem by requiring PING (Pyramidal Interneuron Network Gamma) models to simultaneously satisfy a broad set of criteria for realistic behaviour based on empirical data spanning both the single unit (spikes) and local population (LFP) levels while unknown parameters are varied. By doing so, we were able to constrain the parameter ranges and select empirically valid models. The derived model constraints implied weak rather than strong PING as the generating mechanism for gamma, connectivity between E and I neurons within specific bounds, and variations of the external input to E but not I neurons. Constrained models showed valid behaviours, including gamma frequency increases with contrast and power saturation or decay at high contrasts. Using an empirically-validated model we studied the route to gamma instability at high contrasts. This involved increased heterogeneity of E neurons with increasing input triggering a breakdown of I neuron pacemaker function. Further, we illustrate the model's capacity to resolve disputes in the literature concerning gamma oscillation properties and GABA conductance proxies. We propose that the models derived in our study will be useful for other modelling studies, and that our approach to the empirical constraining of PING models can be expanded when richer empirical datasets become available. 
As local gamma networks are the building blocks of larger networks that aim to understand complex cognition through their interactions, there is considerable value in improving our models of these building blocks.
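The constraining procedure — vary unknown parameters, simulate, and accept only draws that simultaneously satisfy all empirical criteria — can be illustrated abstractly. The "simulator" below is a closed-form stand-in with hypothetical parameter names, not a PING network; only the accept/reject workflow reflects the paper's approach.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(params):
    """Stand-in for a PING simulation returning summary statistics.
    The closed form is purely illustrative, not a network model."""
    w_ei, w_ie, drive = params
    base = 20.0 + 25.0 * np.tanh(w_ei * w_ie)   # loop gain sets base frequency
    freq_low = base + 5.0 * 1.0                  # response at fixed low contrast
    freq_high = base + 5.0 * drive               # response at high contrast
    power_high = float(np.exp(-0.5 * (drive - 2.0) ** 2))  # saturating power
    return freq_low, freq_high, power_high

def satisfies_criteria(stats):
    """Empirical criteria all valid models must satisfy simultaneously."""
    freq_low, freq_high, power_high = stats
    return (30.0 <= freq_low <= 80.0      # gamma-band rhythm at low contrast
            and freq_high > freq_low      # frequency grows with contrast
            and power_high < 1.0)         # power saturates or decays

# Sample the unknown parameters and keep only empirically valid draws.
draws = rng.uniform([0.1, 0.1, 1.2], [3.0, 3.0, 4.0], size=(2000, 3))
accepted = [p for p in draws if satisfies_criteria(simulate(p))]
print(len(accepted), "of", len(draws), "draws accepted")
```

With a real spiking simulator in place of `simulate`, the accepted set is exactly the empirically constrained parameter region the paper derives.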
Affiliation(s)
- Margarita Zachariou
- Medical School, University of Nicosia, Nicosia 2408, Cyprus; Bioinformatics Department, Cyprus Institute of Neurology and Genetics, Nicosia 1683, Cyprus
- Mark J Roberts
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 ER, The Netherlands
- Eric Lowet
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Peter De Weerd
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6229 ER, The Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Faculty of Science and Engineering, Maastricht University, Maastricht 6229 ER, The Netherlands
14
Berga D, Otazu X. Modeling bottom-up and top-down attention with a neurodynamic model of V1. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.047] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 11/27/2022]
15
Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020; 16:e1007790. [PMID: 32841234 PMCID: PMC7505475 DOI: 10.1371/journal.pcbi.1007790] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 03/26/2019] [Revised: 09/21/2020] [Accepted: 03/17/2020] [Indexed: 11/19/2022]
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated to synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered as an indication of a reduction of computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, in good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
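The key mechanism — synapse loss lowers the firing rate, and up-scaling the surviving synapses restores it — has a one-population mean-field caricature. This is a toy sketch under assumed parameter values, not the paper's spiking network: a single self-consistent rate equation in which only the total effective coupling matters, which is why restoring total drive restores the rate exactly.

```python
import numpy as np

def fixed_point_rate(w_eff, ext=0.2, n_iter=500):
    """Self-consistent rate of a one-population mean-field model
    nu = tanh(w_eff * nu + ext), found by fixed-point iteration."""
    nu = 0.0
    for _ in range(n_iter):
        nu = np.tanh(w_eff * nu + ext)
    return nu

w_ee = 0.8      # total effective E->E coupling in the healthy network (assumed)
p_loss = 0.3    # fraction of excitatory synapses lost

rate_healthy = fixed_point_rate(w_ee)
rate_lesioned = fixed_point_rate(w_ee * (1 - p_loss))

# Firing-rate homeostasis: up-scale the surviving synapses so that the
# total excitatory drive, and hence the firing rate, is restored.
scale = 1.0 / (1 - p_loss)
rate_recovered = fixed_point_rate(w_ee * (1 - p_loss) * scale)

print(rate_healthy, rate_lesioned, rate_recovered)
```

In the full network model the same compensation also restores the sensitivity to perturbations, because (as the abstract notes) the linearised stability is in good approximation a function of the firing rate alone.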
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
16
Berteau S, Bullock D. Simulations reveal how M-currents and memory-based inputs from CA3 enable single neuron mismatch detection for EC3 inputs to the CA1 subfield of hippocampus. J Neurophysiol 2020; 124:544-556. [PMID: 32609564 DOI: 10.1152/jn.00238.2019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 11/22/2022]
Abstract
Significant evidence has accumulated to support the hypothesis that hippocampal region CA1 operates as an associative mismatch detector (e.g., Hasselmo ME, Schnell E, Barkai E. J Neurosci 15: 5249-5262, 1995; Duncan K, Curtis C, Davachi L. J Neurosci 29: 131-139, 2009; Kumaran D, Maguire EA. J Neurosci 27: 8517-8524, 2007; Lisman JE, Grace AA. Neuron 46: 703-713, 2005; Lisman JE, Otmakhova NA. Hippocampus 11: 551-568 2001; Lörincz A, Buzsáki G. Ann N Y Acad Sci 911: 83-111, 2000; Meeter M, Murre JMJ, Talamini LM. Hippocampus 14: 722-741, 2004; Schiffer AM, Ahlheim C, Wurm MF, Schubotz RI. PLoS One 7: e36445, 2012; Vinogradova OS. Hippocampus 11: 578-598 2001). CA1 compares predictive synaptic signals from CA3 with synaptic signals from EC3, which reflect actual sensory inputs. The new CA1 pyramidal model presented here shows that the distal-proximal segregation of synaptic inputs from EC3 versus CA3, along with other biophysical features, enable such pyramids to serve as comparators that switch output encoding from a brief burst, for a match, to prolonged tonic spiking, for a mismatch. By including often-overlooked features of CA1 pyramidal neurons, this new model allows simulation of pharmacological effects that can eliminate either the match (phasic mode) response or the mismatch (tonic mode) response. These simulations reveal that dysfunctions can arise from either too much or too little ACh stimulation of the muscarinic receptors that control KCNQ channels. Additionally, a dysfunction caused by administration of an N-methyl-d-aspartate antagonist could be rescued by simultaneous administration of a KCNQ channel agonist, such as retigabine.NEW & NOTEWORTHY Hippocampal region CA1 operates as an associative mismatch detector, comparing predictive signals from CA3 with signals from EC3 reflecting sensory inputs. 
This new CA1 pyramidal model shows that biophysical features enable these comparators to switch output between brief bursts for matches and tonic spiking for mismatches. This suggests that cognitive learning models (e.g., predictive coding) may require much less match/mismatch circuitry than commonly assumed. Additional simulations illuminate deficits seen in psychiatric disorders and drug-induced states.
Affiliation(s)
- Stefan Berteau
- Cognitive & Neural Systems Program, Boston University, Boston, Massachusetts
- Daniel Bullock
- Cognitive & Neural Systems Program, Boston University, Boston, Massachusetts
17
Blanco W, Lopes PH, de S Souza AA, Mascagni M. Non-replicability circumstances in a neural network model with Hodgkin-Huxley-type neurons. J Comput Neurosci 2020; 48:357-363. [PMID: 32519227 DOI: 10.1007/s10827-020-00748-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 08/05/2019] [Revised: 03/30/2020] [Accepted: 04/22/2020] [Indexed: 11/24/2022]
Abstract
Building upon previous experiments can be used to accomplish new goals. In computing, it is imperative to reuse computer code to continue development on specific projects. Reproducibility is a fundamental building block in science, and experimental reproducibility issues have recently been of great concern. It may be surprising that reproducibility is also of concern in computational science. In this study, we used a previously published code to investigate neural network activity and we were unable to replicate our original results. This led us to investigate the code in question, and we found that several different aspects, attributable to floating-point arithmetic, were the cause of these replicability issues. Furthermore, we uncovered other manifestations of this lack of replicability in other parts of the computation with this model. The simulated model is a standard system of ordinary differential equations, very much like those commonly used in computational neuroscience. Thus, we believe that other researchers in the field should be vigilant when using such models and avoid drawing conclusions from calculations if their qualitative results can be substantially modified through non-reproducible circumstances.
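The floating-point root cause described above is easy to reproduce. The classic example below shows that IEEE-754 addition is not associative, so the order in which synaptic contributions are summed can change the result; in a simulation, a threshold test (such as a spike condition) can then amplify such a one-ulp difference into qualitatively different trajectories. The larger float32 demonstration is illustrative of the same effect at scale.

```python
import numpy as np

# IEEE-754 addition is not associative: regrouping the same three terms
# changes the result by one unit in the last place.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
print(left - right)

# The same effect at scale: accumulating many float32 terms in forward
# versus reverse order typically yields slightly different sums, which a
# downstream threshold comparison can turn into divergent dynamics.
vals = np.random.default_rng(42).normal(size=10000).astype(np.float32)
print(vals.sum() - vals[::-1].sum())   # typically non-zero
```

This is why reordered reductions (e.g. from parallelism, compiler flags, or library updates) can break bitwise replicability of ODE-based network simulations even when the code is unchanged.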
Affiliation(s)
- Wilfredo Blanco
- Computer Science Department, State University of Rio Grande do Norte, Natal, RN, Brazil; Bioinformatics Department, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Paulo H Lopes
- Computer Science Department, State University of Rio Grande do Norte, Natal, RN, Brazil
- Michael Mascagni
- Computer Science Department, Florida State University, Tallahassee, FL, USA; Applied and Computational Mathematics Division, National Institute of Standards and Technology, Gaithersburg, MD, USA
18
Skaar JEW, Stasik AJ, Hagen E, Ness TV, Einevoll GT. Estimation of neural network model parameters from local field potentials (LFPs). PLoS Comput Biol 2020; 16:e1007725. [PMID: 32155141 PMCID: PMC7083334 DOI: 10.1371/journal.pcbi.1007725] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 03/12/2019] [Revised: 03/20/2020] [Accepted: 02/12/2020] [Indexed: 11/20/2022]
Abstract
Most modeling in systems neuroscience has been descriptive where neural representations such as ‘receptive fields’, have been found by statistically correlating neural activity to sensory input. In the traditional physics approach to modelling, hypotheses are represented by mechanistic models based on the underlying building blocks of the system, and candidate models are validated by comparing with experiments. Until now validation of mechanistic cortical network models has been based on comparison with neuronal spikes, found from the high-frequency part of extracellular electrical potentials. In this computational study we investigated to what extent the low-frequency part of the signal, the local field potential (LFP), can be used to validate and infer properties of mechanistic cortical network models. In particular, we asked the question whether the LFP can be used to accurately estimate synaptic connection weights in the underlying network. We considered the thoroughly analysed Brunel network comprising an excitatory and an inhibitory population of recurrently connected integrate-and-fire (LIF) neurons. This model exhibits a high diversity of spiking network dynamics depending on the values of only three network parameters. The LFP generated by the network was computed using a hybrid scheme where spikes computed from the point-neuron network were replayed on biophysically detailed multicompartmental neurons. We assessed how accurately the three model parameters could be estimated from power spectra of stationary ‘background’ LFP signals by application of convolutional neural nets (CNNs). All network parameters could be very accurately estimated, suggesting that LFPs indeed can be used for network model validation. Most of what we have learned about brain networks in vivo has come from the measurement of spikes (action potentials) recorded by extracellular electrodes. 
The low-frequency part of these signals, the local field potential (LFP), contains unique information about how dendrites in neuronal populations integrate synaptic inputs, but has so far played a lesser role. To investigate whether the LFP can be used to validate network models, we computed LFP signals for a recurrent network model (the Brunel network) for which the ground-truth parameters are known. By application of convolutional neural nets (CNNs) we found that the synaptic weights indeed could be accurately estimated from ‘background’ LFP signals, suggesting a future key role for LFP in development of network models.
Affiliation(s)
- Jan-Eirik W. Skaar
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Espen Hagen
- Department of Physics, University of Oslo, Oslo, Norway
- Torbjørn V. Ness
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Gaute T. Einevoll
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway
19
Dai K, Hernando J, Billeh YN, Gratiy SL, Planas J, Davison AP, Dura-Bernal S, Gleeson P, Devresse A, Dichter BK, Gevaert M, King JG, Van Geit WAH, Povolotsky AV, Muller E, Courcol JD, Arkhipov A. The SONATA data format for efficient description of large-scale network models. PLoS Comput Biol 2020; 16:e1007696. [PMID: 32092054 PMCID: PMC7058350 DOI: 10.1371/journal.pcbi.1007696] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Received: 09/18/2019] [Revised: 03/05/2020] [Accepted: 01/28/2020] [Indexed: 12/04/2022]
Abstract
Increasing availability of comprehensive experimental datasets and of high-performance computing resources are driving rapid growth in scale, complexity, and biological realism of computational models in neuroscience. To support construction and simulation, as well as sharing of such large-scale models, a broadly applicable, flexible, and high-performance data format is necessary. To address this need, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. It is designed for memory and computational efficiency and works across multiple platforms. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is used in multiple modeling and visualization tools, and we also provide reference Application Programming Interfaces and model examples to catalyze further adoption. SONATA format is free and open for the community to use and build upon with the goal of enabling efficient model building, sharing, and reproducibility. Neuroscience is experiencing a rapid growth of data streams characterizing composition, connectivity, and activity of brain networks in ever increasing details. Data-driven modeling will be essential to integrate these multimodal and complex data into predictive simulations to advance our understanding of brain function and mechanisms. To enable efficient development and sharing of such large-scale models utilizing diverse data types, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is already supported by several popular tools for model building, simulations, and visualization. 
It is free and open for everyone to use and build upon and will enable increased efficiency, reproducibility, and scientific exchange in the community.
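The organizing idea — circuit structure (node and edge tables with shared type tables) kept separate from simulation settings — can be mimicked with plain dictionaries. This is an illustrative sketch only: real SONATA stores node/edge populations in HDF5 with CSV type tables and JSON configuration files, and the field names below are hypothetical stand-ins, not the normative schema.

```python
import json

# Hypothetical, simplified mimic of SONATA's separation of concerns:
# per-instance tables reference shared type tables, and the simulation
# config is a separate document pointing at the circuit files.
circuit = {
    "nodes": [
        {"node_id": 0, "node_type_id": 100},   # type table holds shared params
        {"node_id": 1, "node_type_id": 101},
    ],
    "node_types": {
        "100": {"model_template": "excitatory_lif"},
        "101": {"model_template": "inhibitory_lif"},
    },
    "edges": [
        {"source_node_id": 0, "target_node_id": 1,
         "syn_weight": 0.5, "delay": 1.5},
    ],
}
simulation = {"run": {"tstop": 1000.0, "dt": 0.1},
              "network": "circuit.json",
              "output": {"spikes_file": "spikes.h5"}}

# Round-trip through JSON as a stand-in for writing and re-reading files.
restored = json.loads(json.dumps({"circuit": circuit, "simulation": simulation}))
n_nodes = len(restored["circuit"]["nodes"])
print(n_nodes, restored["simulation"]["run"]["dt"])
```

The type-table indirection is the part that scales: millions of node instances can share a handful of fully specified model descriptions, which is what keeps large SONATA circuits compact.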
Affiliation(s)
- Kael Dai
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Juan Hernando
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Yazan N. Billeh
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Sergey L. Gratiy
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Judit Planas
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Andrew P. Davison
- Paris-Saclay Institute of Neuroscience UMR, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
- Salvador Dura-Bernal
- State University of New York Downstate Medical Center, Brooklyn, New York, United States of America
- Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Adrien Devresse
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Benjamin K. Dichter
- Department of Neurosurgery, Stanford University, Stanford, California, United States of America
- Biological Systems and Engineering, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Michael Gevaert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- James G. King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Werner A. H. Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Arseny V. Povolotsky
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Eilif Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Jean-Denis Courcol
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Anton Arkhipov
- Allen Institute for Brain Science, Seattle, Washington, United States of America
20
Bardin JB, Spreemann G, Hess K. Topological exploration of artificial neuronal network dynamics. Netw Neurosci 2019; 3:725-743. [PMID: 31410376 PMCID: PMC6663191 DOI: 10.1162/netn_a_00080] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 09/24/2018] [Accepted: 01/10/2019] [Indexed: 11/04/2022]
Abstract
One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike train distances systematically improves the performance of our method.
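The starting point of the pipeline above is a matrix of pairwise spike-train distances, from which persistent homology then extracts topological features. A self-contained sketch of one standard distance (a van Rossum-style metric: L2 distance between exponentially filtered trains) is shown below; the trains, parameters, and grid are illustrative, and the homology step itself (which needs a persistent-homology library) is omitted.

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=10.0, dt=0.5, t_max=200.0):
    """Spike-train distance: L2 norm of the difference between the two
    trains after convolution with a causal exponential kernel."""
    grid = np.arange(0.0, t_max, dt)

    def filtered(spikes):
        trace = np.zeros_like(grid)
        for s in spikes:
            trace += np.where(grid >= s, np.exp(-(grid - s) / tau), 0.0)
        return trace

    diff = filtered(t1) - filtered(t2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

rng = np.random.default_rng(0)
base = np.sort(rng.uniform(0.0, 200.0, size=20))
train_a = base + rng.normal(0.0, 1.0, size=20)       # jittered copy of base
train_b = base + rng.normal(0.0, 1.0, size=20)       # another jittered copy
train_c = np.sort(rng.uniform(0.0, 200.0, size=20))  # unrelated train

d_ab = van_rossum_distance(train_a, train_b)
d_ac = van_rossum_distance(train_a, train_c)
print(d_ab, d_ac)
```

Trains that share an underlying pattern end up close in this metric while unrelated trains are far apart; the study builds spaces from several such distances and classifies network regimes from their persistent-homology signatures.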
Affiliation(s)
- Jean-Baptiste Bardin
- Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Gard Spreemann
- Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Kathryn Hess
- Laboratory for Topology and Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
21
Gu Y, Qi Y, Gong P. Rich-club connectivity, diverse population coupling, and dynamical activity patterns emerging from local cortical circuits. PLoS Comput Biol 2019; 15:e1006902. [PMID: 30939135 PMCID: PMC6461296 DOI: 10.1371/journal.pcbi.1006902] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Received: 06/20/2018] [Revised: 04/12/2019] [Accepted: 02/25/2019] [Indexed: 11/19/2022]
Abstract
Experimental studies have begun revealing essential properties of the structural connectivity and the spatiotemporal activity dynamics of cortical circuits. To integrate these properties from anatomy and physiology, and to elucidate the links between them, we develop a novel cortical circuit model that captures a range of realistic features of synaptic connectivity. We show that the model accounts for the emergence of higher-order connectivity structures, including highly connected hub neurons that form an interconnected rich-club. The circuit model exhibits a rich repertoire of dynamical activity states, ranging from asynchronous to localized and global propagating wave states. We find that around the transition between asynchronous and localized propagating wave states, our model quantitatively reproduces a variety of major empirical findings regarding neural spatiotemporal dynamics, which otherwise remain disjointed in existing studies. These dynamics include diverse coupling (correlation) between spiking activity of individual neurons and the population, dynamical wave patterns with variable speeds and precise temporal structures of neural spikes. We further illustrate how these neural dynamics are related to the connectivity properties by analysing structural contributions to variable spiking dynamics and by showing that the rich-club structure is related to the diverse population coupling. These findings establish an integrated account of structural connectivity and activity dynamics of local cortical circuits, and provide new insights into understanding their working mechanisms. To integrate essential anatomical and physiological properties of local cortical circuits and to elucidate mechanistic links between them, we develop a novel circuit model capturing key synaptic connectivity features. 
We show that the model explains the emergence of a range of connectivity patterns such as rich-club connectivity, and gives rise to a rich repertoire of cortical states. We identify both the anatomical and physiological mechanisms underlying the transition of these cortical states, and show that our model reconciles an otherwise disparate set of key physiological findings on neural activity dynamics. We further illustrate how these neural dynamics are related to the connectivity properties by analysing structural contributions to variable spiking dynamics and by showing that the rich-club structure is related to diverse neural population correlations as observed recently. Our model thus provides a framework for integrating and explaining a variety of neural connectivity properties and spatiotemporal activity dynamics observed in experimental studies, and provides novel experimentally testable predictions.
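The rich-club structure referred to above has a simple quantitative definition: the connection density among nodes whose degree exceeds a threshold, which rises with the threshold when hubs preferentially interconnect. The sketch below generates a toy directed graph with heterogeneous connection propensities (a stand-in for the paper's data-constrained circuit model) and measures that coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Directed random graph in which a lognormal "propensity" makes a minority
# of neurons far more likely to form connections (candidate hubs).
propensity = rng.lognormal(mean=0.0, sigma=1.0, size=n)
p_connect = np.minimum(0.02 * np.outer(propensity, propensity), 1.0)
adj = (rng.random((n, n)) < p_connect).astype(int)
np.fill_diagonal(adj, 0)

degree = adj.sum(axis=0) + adj.sum(axis=1)   # total degree: in + out

def rich_club_coefficient(adj, degree, k):
    """Connection density among the nodes whose total degree exceeds k."""
    rich = np.flatnonzero(degree > k)
    if len(rich) < 2:
        return float("nan")
    sub = adj[np.ix_(rich, rich)]
    return sub.sum() / (len(rich) * (len(rich) - 1))   # directed: m(m-1) slots

phi_mid = rich_club_coefficient(adj, degree, np.percentile(degree, 50))
phi_top = rich_club_coefficient(adj, degree, np.percentile(degree, 90))
print(round(phi_mid, 3), round(phi_top, 3))
```

The density among the top-degree nodes exceeds the density among merely above-median nodes, the signature of a rich club; in practice one also normalizes against degree-preserving random surrogates before claiming the effect.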
Collapse
Affiliation(s)
- Yifan Gu
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
- Yang Qi
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
- Pulin Gong
- School of Physics, University of Sydney, New South Wales, Australia
- ARC Centre of Excellence for Integrative Brain Function, University of Sydney, New South Wales, Australia
22
Duarte R, Morrison A. Leveraging heterogeneity for neural computation with fading memory in layer 2/3 cortical microcircuits. PLoS Comput Biol 2019; 15:e1006781. [PMID: 31022182 PMCID: PMC6504118 DOI: 10.1371/journal.pcbi.1006781] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Revised: 05/07/2019] [Accepted: 01/09/2019] [Indexed: 11/24/2022] Open
Abstract
Complexity and heterogeneity are intrinsic to neurobiological systems, manifest in every process, at every scale, and are inextricably linked to the systems' emergent collective behaviours and function. However, the majority of studies addressing the dynamics and computational properties of biologically inspired cortical microcircuits tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and synaptic/connectivity parameters. While simplification and reductionism are necessary to understand the brain's functional principles, disregarding the existence of the multiple heterogeneities in the cortical composition, which may be at the core of its computational proficiency, will inevitably fail to account for important phenomena and limit the scope and generalizability of cortical models. We address these issues by studying the individual and composite functional roles of heterogeneities in neuronal, synaptic and structural properties in a biophysically plausible layer 2/3 microcircuit model, built and constrained by multiple sources of empirical data. This approach was made possible by the emergence of large-scale, well curated databases, as well as the substantial improvements in experimental methodologies achieved over the last few years. Our results show that variability in single neuron parameters is the dominant source of functional specialization, leading to highly proficient microcircuits with much higher computational power than their homogeneous counterparts. We further show that fully heterogeneous circuits, which are closest to the biophysical reality, owe their response properties to the differential contribution of different sources of heterogeneity.
Affiliation(s)
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
23
Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinform 2018; 12:90. [PMID: 30618696 PMCID: PMC6305903 DOI: 10.3389/fninf.2018.00090] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 11/14/2018] [Indexed: 11/13/2022] Open
Abstract
Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires validation of model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
Affiliation(s)
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael von Papen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
24
Khalil R, Karim AA, Khedr E, Moftah M, Moustafa AA. Dynamic Communications Between GABA A Switch, Local Connectivity, and Synapses During Cortical Development: A Computational Study. Front Cell Neurosci 2018; 12:468. [PMID: 30618625 PMCID: PMC6304749 DOI: 10.3389/fncel.2018.00468] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 11/16/2018] [Indexed: 11/13/2022] Open
Abstract
Several factors regulate cortical development, such as changes in local connectivity and the influences of dynamical synapses. In this study, we simulated various factors affecting the regulation of neural network activity during cortical development. Previous studies have shown that during early cortical development, the reversal potential of GABAA shifts from depolarizing to hyperpolarizing. Here we provide the first integrative computational model to simulate the combined effects of these factors in a unified framework (building on our prior work: Khalil et al., 2017a,b). In the current study, we extend our model to monitor firing activity in response to the excitatory action of GABAA. Specifically, we created a spiking neural network model that included certain biophysical parameters for lateral connectivity (distance between adjacent neurons) and nearby local connectivity (complex connections involving those between neuronal groups). We simulated different network scenarios (for immature and mature conditions) based on these biophysical parameters. Then, we implemented two forms of short-term synaptic plasticity (STP): depression and facilitation. Each form has two distinct kinds according to its synaptic time constant value. Finally, in both sets of networks, we compared firing rate activity responses before and after simulating dynamical synapses. Based on simulation results, we found that the modulation effect of dynamical synapses in shaping the firing activity of the neural network is strongly dependent on the physiological state of GABAA. Moreover, the STP mechanism acts differently in every network scenario, mirroring the crucial modulating roles of these critical parameters during cortical development. Clinical implications for pathological alterations of GABAergic signaling in neurological and psychiatric disorders are discussed.
Affiliation(s)
- Radwa Khalil
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
- Ahmed A Karim
- Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
- University Clinic of Psychiatry and Psychotherapy, Tübingen, Germany
- Eman Khedr
- Department of Neuropsychiatry, Faculty of Medicine, Assiut University, Assiut, Egypt
- Marie Moftah
- Zoology Department, Faculty of Science, Alexandria University, Alexandria, Egypt
- Ahmed A Moustafa
- MARCS Institute for Brain and Behaviour, Western Sydney University, Sydney, NSW, Australia
- Department of Social Sciences, College of Arts and Sciences, Qatar University, Doha, Qatar
25
Miłkowski M, Hensel WM, Hohol M. Replicability or reproducibility? On the replication crisis in computational neuroscience and sharing only relevant detail. J Comput Neurosci 2018; 45:163-172. [PMID: 30377880 PMCID: PMC6306493 DOI: 10.1007/s10827-018-0702-z] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2018] [Revised: 10/05/2018] [Accepted: 10/17/2018] [Indexed: 01/25/2023]
Abstract
Replicability and reproducibility of computational models has been somewhat understudied by "the replication movement." In this paper, we draw on methodological studies into the replicability of psychological experiments and on the mechanistic account of explanation to analyze the functions of model replications and model reproductions in computational neuroscience. We contend that model replicability, or independent researchers' ability to obtain the same output using original code and data, and model reproducibility, or independent researchers' ability to recreate a model without original code, serve different functions and fail for different reasons. This means that measures designed to improve model replicability may not enhance (and, in some cases, may actually damage) model reproducibility. We claim that although both are undesirable, low model reproducibility poses more of a threat to long-term scientific progress than low model replicability. In our opinion, low model reproducibility stems mostly from authors' omitting to provide crucial information in scientific papers and we stress that sharing all computer code and data is not a solution. Reports of computational studies should remain selective and include all and only relevant bits of code.
Affiliation(s)
- Marcin Miłkowski
- Institute of Philosophy and Sociology, Polish Academy of Sciences, Nowy Świat 72, 00-330, Warsaw, Poland
- Witold M Hensel
- Faculty of History and Sociology, University of Białystok, Plac NZS 1, 15-420, Białystok, Poland
- Mateusz Hohol
- Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Szczepańska 1/5, 31-011, Kraków, Poland
26
Senk J, Carde C, Hagen E, Kuhlen TW, Diesmann M, Weyers B. VIOLA-A Multi-Purpose and Web-Based Visualization Tool for Neuronal-Network Simulation Output. Front Neuroinform 2018; 12:75. [PMID: 30467469 PMCID: PMC6236002 DOI: 10.3389/fninf.2018.00075] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Accepted: 10/10/2018] [Indexed: 11/13/2022] Open
Abstract
Neuronal network models and corresponding computer simulations are invaluable tools to aid the interpretation of the relationship between neuron properties, connectivity, and measured activity in cortical tissue. Spatiotemporal patterns of activity propagating across the cortical surface as observed experimentally can for example be described by neuronal network models with layered geometry and distance-dependent connectivity. In order to cover the surface area captured by today's experimental techniques and to achieve sufficient self-consistency, such models contain millions of nerve cells. The interpretation of the resulting stream of multi-modal and multi-dimensional simulation data calls for integrating interactive visualization steps into existing simulation-analysis workflows. Here, we present a set of interactive visualization concepts called views for the visual analysis of activity data in topological network models, and a corresponding reference implementation VIOLA (VIsualization Of Layer Activity). The software is a lightweight, open-source, web-based, and platform-independent application combining and adapting modern interactive visualization paradigms, such as coordinated multiple views, for massively parallel neurophysiological data. For a use-case demonstration we consider spiking activity data of a two-population, layered point-neuron network model incorporating distance-dependent connectivity subject to a spatially confined excitation originating from an external population. With the multiple coordinated views, an explorative and qualitative assessment of the spatiotemporal features of neuronal activity can be performed upfront of a detailed quantitative data analysis of specific aspects of the data. Interactive multi-view analysis therefore assists existing data analysis workflows. 
Furthermore, ongoing efforts including the European Human Brain Project aim at providing online user portals for integrated model development, simulation, analysis, and provenance tracking, wherein interactive visual analysis tools are one component. Browser-compatible, web-technology based solutions are therefore required. Within this scope, with VIOLA we provide a first prototype.
Affiliation(s)
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Corto Carde
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
- IMT Atlantique Bretagne-Pays de la Loire, Brest, France
- Espen Hagen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, University of Oslo, Oslo, Norway
- Torsten W. Kuhlen
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Benjamin Weyers
- Visual Computing Institute, RWTH Aachen University, Aachen, Germany
- JARA - High-Performance Computing, Aachen, Germany
27
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. [PMID: 30335761 PMCID: PMC6193609 DOI: 10.1371/journal.pcbi.1006359] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 07/12/2018] [Indexed: 11/28/2022] Open
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability. The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex hitherto tend to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. 
We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution with population-specific connectivity based on extensive experimental data which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
- Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
28
Pauli R, Weidel P, Kunkel S, Morrison A. Reproducing Polychronization: A Guide to Maximizing the Reproducibility of Spiking Network Models. Front Neuroinform 2018; 12:46. [PMID: 30123121 PMCID: PMC6085985 DOI: 10.3389/fninf.2018.00046] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Accepted: 06/26/2018] [Indexed: 01/02/2023] Open
Abstract
Any modeler who has attempted to reproduce a spiking neural network model from its description in a paper has discovered what a painful endeavor this is. Even when all parameters appear to have been specified, which is rare, typically the initial attempt to reproduce the network does not yield results that are recognizably akin to those in the original publication. Causes include inaccurately reported or hidden parameters (e.g., wrong unit or the existence of an initialization distribution), differences in implementation of model dynamics, and ambiguities in the text description of the network experiment. The very fact that adequate reproduction often cannot be achieved until a series of such causes have been tracked down and resolved is in itself disconcerting, as it reveals unreported model dependencies on specific implementation choices that either were not clear to the original authors, or that they chose not to disclose. In either case, such dependencies diminish the credibility of the model's claims about the behavior of the target system. To demonstrate these issues, we provide a worked example of reproducing a seminal study for which, unusually, source code was provided at time of publication. Despite this seemingly optimal starting position, reproducing the results was time consuming and frustrating. Further examination of the correctly reproduced model reveals that it is highly sensitive to implementation choices such as the realization of background noise, the integration timestep, and the thresholding parameter of the analysis algorithm. From this process, we derive a guideline of best practices that would substantially reduce the investment in reproducing neural network studies, whilst simultaneously increasing their scientific quality. We propose that this guideline can be used by authors and reviewers to assess and improve the reproducibility of future network models.
Affiliation(s)
- Robin Pauli
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
29
Ongoing brain rhythms shape I-wave properties in a computational model. Brain Stimul 2018; 11:828-838. [DOI: 10.1016/j.brs.2018.03.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2017] [Revised: 03/07/2018] [Accepted: 03/12/2018] [Indexed: 01/27/2023] Open
30
Manninen T, Aćimović J, Havela R, Teppola H, Linne ML. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures. Front Neuroinform 2018; 12:20. [PMID: 29765315 PMCID: PMC5938413 DOI: 10.3389/fninf.2018.00020] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Accepted: 04/06/2018] [Indexed: 01/26/2023] Open
Abstract
The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models just because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement on experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. 
Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards which underline replicability and reproducibility of research results.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Jugoslava Aćimović
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Riikka Havela
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Heidi Teppola
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Laboratory of Signal Processing, Tampere University of Technology, Tampere, Finland
31
Manninen T, Havela R, Linne ML. Computational Models for Calcium-Mediated Astrocyte Functions. Front Comput Neurosci 2018; 12:14. [PMID: 29670517 PMCID: PMC5893839 DOI: 10.3389/fncom.2018.00014] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2018] [Accepted: 02/28/2018] [Indexed: 12/16/2022] Open
Abstract
The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, hundreds of models have been developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate a better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models, to ease their use in future modeling projects. We characterized the models based on which earlier models were used as building blocks and which types of biological entities were described. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were essentially generated from a small set of previously published models with small variations. However, neither citations to all the previous models with a similar core structure nor explanations of what was built on top of them were provided, which made it possible, in some cases, for the same models to be published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online, which makes it difficult to reproduce the simulation results and further develop the models. Thus, we would like to emphasize that only via reproducible research are we able to build better computational models for astrocytes that truly advance science. Our study is the first to characterize in detail the biophysical and biochemical mechanisms that have been modeled for astrocytes.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, BioMediTech Institute and Faculty of Biomedical Sciences and Engineering, Tampere University of Technology, Tampere, Finland

32
Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front Neuroinform 2018; 12:2. [PMID: 29503613 PMCID: PMC5820465 DOI: 10.3389/fninf.2018.00002] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Accepted: 01/18/2018] [Indexed: 11/13/2022] Open
Abstract
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Itaru Kitayama
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Mitsuhisa Sato
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Jun Igarashi
- Computational Engineering Applications Unit, RIKEN, Wako, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden; Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany

33
33
|
Ashida G, Tollin DJ, Kretzberg J. Physiological models of the lateral superior olive. PLoS Comput Biol 2017; 13:e1005903. [PMID: 29281618 PMCID: PMC5744914 DOI: 10.1371/journal.pcbi.1005903] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2017] [Accepted: 11/28/2017] [Indexed: 01/09/2023] Open
Abstract
In computational biology, modeling is a fundamental tool for formulating, analyzing and predicting complex phenomena. Most neuron models, however, are designed to reproduce certain small sets of empirical data. Hence their outcome is usually not compatible or comparable with other models or datasets, making it unclear how widely applicable such models are. In this study, we investigate these aspects of modeling, namely credibility and generalizability, with a specific focus on auditory neurons involved in the localization of sound sources. The primary cues for binaural sound localization are interaural time and level differences (ITD/ILD), the timing and intensity differences of the sound waves arriving at the two ears. The lateral superior olive (LSO) in the auditory brainstem is one of the locations where such acoustic information is first computed. An LSO neuron receives temporally structured excitatory and inhibitory synaptic inputs that are driven by ipsi- and contralateral sound stimuli, respectively, and changes its spike rate according to binaural acoustic differences. Here we examine seven contemporary models of LSO neurons with different levels of biophysical complexity, from predominantly functional ones (‘shot-noise’ models) to those with more detailed physiological components (variations of integrate-and-fire and Hodgkin-Huxley-type models). These models, calibrated to reproduce known monaural and binaural characteristics of LSO, generate largely similar results to each other in simulating ITD and ILD coding. Our comparisons of physiological detail, computational efficiency, predictive performance, and further expandability of the models demonstrate (1) that the simplistic, functional LSO models are suitable for applications where low computational costs and mathematical transparency are needed, (2) that more complex models with detailed membrane potential dynamics are necessary for simulation studies where sub-neuronal nonlinear processes play important roles, and (3) that, for general purposes, intermediate models might be a reasonable compromise between simplicity and biological plausibility.

Computational models help our understanding of complex biological systems by identifying their key elements and revealing their operational principles. Close comparisons between model predictions and empirical observations ensure our confidence in a model as a building block for further applications. Most current neuronal models, however, are constructed to replicate only a small specific set of experimental data. Thus, it is usually unclear how these models can be generalized to different datasets and how they compare with each other. In this paper, seven neuronal models are examined that are designed to reproduce known physiological characteristics of auditory neurons involved in the detection of sound source location. Despite their different levels of complexity, the models generate largely similar results when their parameters are tuned with common criteria. Comparisons show that simple models are computationally more efficient and theoretically transparent, and therefore suitable for rigorous mathematical analyses and engineering applications including real-time simulations. In contrast, complex models are necessary for investigating the relationship between underlying biophysical processes and sub- and suprathreshold spiking properties, although they have a large number of unconstrained, unverified parameters. Having identified their advantages and drawbacks, these auditory neuron models may readily be used for future studies and applications.
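The integrate-and-fire end of the model spectrum compared in this paper can be illustrated with a toy sketch: an LSO-like unit excited by ipsilateral input and inhibited by contralateral input, whose output rate falls as the contralateral level rises. All parameter values below are invented for illustration and are not taken from any of the seven models examined.

```python
import random

def lif_lso_rate(exc_rate, inh_rate, t_sim=1.0, dt=1e-4, seed=1):
    """Leaky integrate-and-fire caricature of an LSO-like unit:
    ipsilateral excitation raises, contralateral inhibition lowers
    the output spike rate. All values are illustrative only."""
    rng = random.Random(seed)
    tau_m, v_rest, v_th, v_reset = 2e-3, 0.0, 1.0, 0.0
    w_exc, w_inh = 0.35, -0.35          # synaptic jumps (arbitrary units)
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        if rng.random() < exc_rate * dt:   # Poisson excitatory input
            v += w_exc
        if rng.random() < inh_rate * dt:   # Poisson inhibitory input
            v += w_inh
        v += dt * (v_rest - v) / tau_m     # leak toward rest
        if v >= v_th:                      # threshold crossing -> spike
            spikes += 1
            v = v_reset
    return spikes / t_sim                  # output rate in Hz
```

Sweeping `inh_rate` for a fixed `exc_rate` reproduces the qualitative LSO signature: the output rate decreases as the contralateral (inhibitory) drive grows.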
Affiliation(s)
- Go Ashida
- Cluster of Excellence "Hearing4all", Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
- Daniel J Tollin
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado, United States of America
- Jutta Kretzberg
- Cluster of Excellence "Hearing4all", Department of Neuroscience, University of Oldenburg, Oldenburg, Germany

34
Abstract
There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing, and multi-scale model integration (combining existing models into a larger system), and that can simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems; however, they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate-coded neurons without requiring programming. We demonstrate the tool through the reproduction and visualisation of published models, and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.
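The layered-XML idea can be made concrete with a tiny stand-in document. The element and attribute names below are invented for illustration (the real tool targets the SpineML family of schemas); the point is simply that such declarative descriptions are machine-readable but tedious to write by hand, which is why graphical tooling helps.

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal stand-in for a layered XML network description;
# tag and attribute names are invented, not an actual SpineML document.
doc = """<network>
  <population name="exc" neurons="80" model="lif"/>
  <population name="inh" neurons="20" model="lif"/>
  <projection src="exc" dst="inh" weight="0.1"/>
</network>"""

root = ET.fromstring(doc)
# recover the network layer as plain data
populations = {p.get("name"): int(p.get("neurons"))
               for p in root.iter("population")}
```

A simulator backend would walk the same tree to instantiate populations and projections, which is the separation of model description from execution that the abstract describes.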
35
Buxton D, Bracci E, Overton PG, Gurney K. Striatal Neuropeptides Enhance Selection and Rejection of Sequential Actions. Front Comput Neurosci 2017; 11:62. [PMID: 28798678 PMCID: PMC5529366 DOI: 10.3389/fncom.2017.00062] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 06/27/2017] [Indexed: 12/05/2022] Open
Abstract
The striatum is the primary input nucleus for the basal ganglia, and receives glutamatergic afferents from the cortex. Under the hypothesis that the basal ganglia perform action selection, these cortical afferents encode potential “action requests.” Previous studies have suggested the striatum may utilize a mutually inhibitory network of medium spiny neurons (MSNs) to filter these requests so that only those of high salience are selected. However, the mechanisms enabling the striatum to perform clean, rapid switching between distinct actions that form part of a learned action sequence are still poorly understood. Substance P (SP) and enkephalin are neuropeptides co-released with GABA in MSNs preferentially expressing D1 or D2 dopamine receptors, respectively. SP has a facilitatory effect on subsequent glutamatergic inputs to target MSNs, while enkephalin has an inhibitory effect. Blocking the action of SP in the striatum is also known to affect behavioral transitions. We constructed phenomenological models of the effects of SP and enkephalin, and integrated these into a hybrid model of the basal ganglia comprising a spiking striatal microcircuit and rate-coded populations representing other major structures. We demonstrated that diffuse neuropeptide connectivity enhanced the selection of unordered action requests, and that for true action sequences, where action semantics define a fixed structure, a patterning of the SP connectivity reflecting this ordering enhanced selection of actions presented in the correct sequential order and suppressed incorrect ordering. We also showed that selective pruning of SP connections allowed context-sensitive inhibition of specific undesirable requests that otherwise interfered with selection of an action group. Our model suggests that the interaction of SP and enkephalin enhances the contrast between selection and rejection of action requests, and that patterned SP connectivity in the striatum allows the “chunking” of actions and improves selection of sequences. Efficient execution of action sequences may therefore result from a combination of ordered cortical inputs and patterned neuropeptide connectivity within the striatum.
Affiliation(s)
- David Buxton
- Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Enrico Bracci
- Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Paul G Overton
- Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Kevin Gurney
- Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom

36
Caligiore D, Mannella F, Arbib MA, Baldassarre G. Dysfunctions of the basal ganglia-cerebellar-thalamo-cortical system produce motor tics in Tourette syndrome. PLoS Comput Biol 2017; 13:e1005395. [PMID: 28358814 PMCID: PMC5373520 DOI: 10.1371/journal.pcbi.1005395] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2016] [Accepted: 02/01/2017] [Indexed: 12/24/2022] Open
Abstract
Motor tics are a cardinal feature of Tourette syndrome and are traditionally associated with an excess of striatal dopamine in the basal ganglia. Recent evidence increasingly supports a more articulated view where cerebellum and cortex, working closely in concert with basal ganglia, are also involved in tic production. Building on such evidence, this article proposes a computational model of the basal ganglia-cerebellar-thalamo-cortical system to study how motor tics are generated in Tourette syndrome. In particular, the model: (i) reproduces the main results of recent experiments about the involvement of the basal ganglia-cerebellar-thalamo-cortical system in tic generation; (ii) suggests an explanation of the system-level mechanisms underlying motor tic production: in this respect, the model predicts that the interplay between dopaminergic signal and cortical activity contributes to triggering the tic event and that the recently discovered basal ganglia-cerebellar anatomical pathway may support the involvement of the cerebellum in tic production; (iii) furnishes predictions on the amount of tics generated when striatal dopamine increases and when the cortex is externally stimulated. These predictions could be important in identifying new brain target areas for future therapies. Finally, the model represents the first computational attempt to study the role of the recently discovered basal ganglia-cerebellar anatomical links. Studying this non-cortex-mediated basal ganglia-cerebellar interaction could radically change our perspective about how these areas interact with each other and with the cortex. Overall, the model also shows the utility of casting Tourette syndrome within a system-level perspective rather than viewing it as related to the dysfunction of a single brain area.

Tourette syndrome is a neuropsychiatric disorder characterized by vocal and motor tics. Tics represent a cardinal symptom traditionally associated with a dysfunction of the basal ganglia leading to an excess of the dopamine neurotransmitter. This view gives a restricted clinical picture and limits therapeutic approaches because it ignores the influence of altered interactions between the basal ganglia and other brain areas. In this respect, recent evidence supports a more articulated framework where cerebellum and cortex are also involved in tic production. Building on these data, we propose a computational model of the basal ganglia-cerebellar-thalamo-cortical network to investigate the specific mechanisms underlying motor tic production. The model reproduces the results of recent experiments and suggests an explanation of the system-level processes underlying tic production. Moreover, it furnishes predictions related to the amount of tics generated when there are dysfunctions in the basal ganglia-cerebellar-thalamo-cortical circuits. These predictions could be important in identifying new brain target areas for future therapies based on a system-level view of Tourette syndrome.
Affiliation(s)
- Daniele Caligiore
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Roma, Italy
- Francesco Mannella
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Roma, Italy
- Michael A. Arbib
- Neuroscience Program, USC Brain Project, Computer Science Department, University of Southern California, Los Angeles, California, United States of America
- Gianluca Baldassarre
- Laboratory of Computational Embodied Neuroscience, Institute of Cognitive Sciences and Technologies, National Research Council (CNR-ISTC-LOCEN), Roma, Italy

37
Manninen T, Havela R, Linne ML. Reproducibility and Comparability of Computational Models for Astrocyte Calcium Excitability. Front Neuroinform 2017; 11:11. [PMID: 28270761 PMCID: PMC5318440 DOI: 10.3389/fninf.2017.00011] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2016] [Accepted: 01/25/2017] [Indexed: 11/13/2022] Open
Abstract
The scientific community across all disciplines faces the same challenges of ensuring accessibility, reproducibility, and efficient comparability of scientific results. Computational neuroscience is a rapidly developing field, where reproducibility and comparability of research results have gained increasing interest over the past years. As the number of computational models of brain functions is increasing, we chose to address reproducibility using four previously published computational models of astrocyte excitability as an example. Although not conventionally taken into account when modeling neuronal systems, astrocytes have been shown to take part in a variety of in vitro and in vivo phenomena including synaptic transmission. Two of the selected astrocyte models describe spontaneous calcium excitability, and the other two neurotransmitter-evoked calcium excitability. We specifically addressed how well the original simulation results can be reproduced with a reimplementation of the models. Additionally, we studied how well the selected models can be reused and whether they are comparable in other stimulation conditions and research settings. Unexpectedly, we found that three of the model publications did not give all the necessary information required to reimplement the models. In addition, we were able to reproduce the original results of only one of the models completely, based on the information given in the original publications and in the errata. We actually found errors in the equations provided by two of the model publications; after modifying the equations accordingly, the original results were reproduced more accurately. Even though the selected models were developed to describe the same biological event, namely astrocyte calcium excitability, the models behaved quite differently compared to one another. Our findings on a specific set of published astrocyte models stress the importance of proper validation of the models against experimental wet-lab data from astrocytes, as well as a careful review process for models. A variety of aspects of model development could be improved, including the presentation of models in publications and databases. Specifically, all necessary mathematical equations, as well as parameter values, initial values of variables, and stimuli used, should be given precisely for successful reproduction of scientific results.
Affiliation(s)
- Tiina Manninen
- Computational Neuroscience Group, Faculty of Biomedical Sciences and Engineering and BioMediTech Institute, Tampere University of Technology, Tampere, Finland
- Riikka Havela
- Computational Neuroscience Group, Faculty of Biomedical Sciences and Engineering and BioMediTech Institute, Tampere University of Technology, Tampere, Finland
- Marja-Leena Linne
- Computational Neuroscience Group, Faculty of Biomedical Sciences and Engineering and BioMediTech Institute, Tampere University of Technology, Tampere, Finland

38
Hagen E, Dahmen D, Stavrinou ML, Lindén H, Tetzlaff T, van Albada SJ, Grün S, Diesmann M, Einevoll GT. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks. Cereb Cortex 2016; 26:4461-4496. [PMID: 27797828 PMCID: PMC6193674 DOI: 10.1093/cercor/bhw237] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Revised: 05/31/2016] [Accepted: 07/12/2016] [Indexed: 12/21/2022] Open
Abstract
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
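The decoupling idea, simulate the point-neuron network once, then predict the LFP from its spiking activity afterwards, can be caricatured in a few lines. The real scheme forward-models transmembrane currents in multicompartment neurons; the sketch below replaces that biophysical step with a fixed exponential kernel per population (kernel shapes and amplitudes are invented), and is in no way the hybridLFPy API.

```python
import numpy as np

def lfp_proxy(spike_counts, kernels, dt=1e-3):
    """Toy stand-in for decoupled LFP prediction: per-population spike
    counts per time bin (from an already-run network simulation) are
    convolved with fixed causal kernels and summed into one channel.
    kernels is a list of (amplitude, tau) pairs, one per population."""
    t_k = np.arange(0.0, 0.05, dt)             # 50 ms kernel support
    n_t = len(spike_counts[0])
    lfp = np.zeros(n_t)
    for counts, (amp, tau) in zip(spike_counts, kernels):
        kernel = amp * np.exp(-t_k / tau)      # invented kernel shape
        lfp += np.convolve(counts, kernel)[:n_t]  # causal contribution
    return lfp
```

Because the spiking activity is precomputed, the same network run can be reused to test different kernel assumptions, which is the separation of network dynamics from LFP generation that the abstract emphasizes.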
Affiliation(s)
- Espen Hagen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Maria L Stavrinou
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway; Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Henrik Lindén
- Department of Neuroscience and Pharmacology, University of Copenhagen, 2200 Copenhagen, Denmark; Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, 100 44 Stockholm, Sweden
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Theoretical Systems Neurobiology, RWTH Aachen University, 52056 Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, 52425 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, 52062 Aachen, Germany
- Gaute T Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, 1430 Ås, Norway; Department of Physics, University of Oslo, 0316 Oslo, Norway

39
McDougal RA, Bulanova AS, Lytton WW. Reproducibility in Computational Neuroscience Models and Simulations. IEEE Trans Biomed Eng 2016; 63:2021-35. [PMID: 27046845 PMCID: PMC5016202 DOI: 10.1109/tbme.2016.2539602] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
OBJECTIVE Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. METHODS Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. RESULTS Building on these standard practices, model-sharing sites and tools have been developed that fit into several categories: 1) standardized neural simulators; 2) shared computational resources; 3) declarative model descriptors, ontologies, and standardized annotations; and 4) model-sharing repositories and sharing standards. CONCLUSION A number of complementary innovations have been proposed to enhance sharing, transparency, and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation, and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. SIGNIFICANCE Model management will become increasingly important as multiscale models become larger, more detailed, and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment.
40
Grajski KA. Emergent Spatial Patterns of Excitatory and Inhibitory Synaptic Strengths Drive Somatotopic Representational Discontinuities and their Plasticity in a Computational Model of Primary Sensory Cortical Area 3b. Front Comput Neurosci 2016; 10:72. [PMID: 27504086 PMCID: PMC4958931 DOI: 10.3389/fncom.2016.00072] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 06/29/2016] [Indexed: 11/13/2022] Open
Abstract
Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties.
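The two plasticity ingredients named in the abstract, a simple covariance rule plus competitive normalization, can be sketched directly. The learning rate, non-negativity clipping, and row-normalization choices below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def covariance_update(w, pre, post, lr=0.01):
    """One step of toy covariance-rule plasticity with competitive
    normalization. w is (n_post, n_pre); pre/post are activity vectors
    for one stimulus presentation. Parameter choices are illustrative."""
    # covariance rule: strengthen a weight when pre- and postsynaptic
    # activities deviate from their means in the same direction
    dw = lr * np.outer(post - post.mean(), pre - pre.mean())
    w = np.clip(w + dw, 0.0, None)       # keep weights non-negative
    # competitive normalization: each postsynaptic cell's total incoming
    # weight is held constant, so strengthened inputs win at the
    # expense of the others
    return w / w.sum(axis=1, keepdims=True)
```

Repeatedly co-stimulating a patch of the input lattice with this update concentrates weight onto the co-active inputs, which is the mechanism the model uses to shape receptive fields and representational boundaries.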
41
Neymotin SA, Dura-Bernal S, Lakatos P, Sanger TD, Lytton WW. Multitarget Multiscale Simulation for Pharmacological Treatment of Dystonia in Motor Cortex. Front Pharmacol 2016; 7:157. [PMID: 27378922 PMCID: PMC4906029 DOI: 10.3389/fphar.2016.00157] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Accepted: 05/30/2016] [Indexed: 12/20/2022] Open
Abstract
A large number of physiomic pathologies can produce hyperexcitability in cortex. Depending on severity, cortical hyperexcitability may manifest clinically as a hyperkinetic movement disorder or as epilepsy. We focus here on dystonia, a movement disorder that produces involuntary muscle contractions and involves pathology in multiple brain areas including basal ganglia, thalamus, cerebellum, and sensory and motor cortices. Most research in dystonia has focused on basal ganglia, while much pharmacological treatment is provided directly at muscles to prevent contraction. Motor cortex is another potential target for therapy that exhibits pathological dynamics in dystonia, including heightened activity and altered beta oscillations. We developed a multiscale model of primary motor cortex, ranging from molecular, up to cellular, and network levels, containing 1715 compartmental model neurons with multiple ion channels and intracellular molecular dynamics. We wired the model based on electrophysiological data obtained from mouse motor cortex circuit mapping experiments. We used the model to reproduce patterns of heightened activity seen in dystonia by applying independent random variations in parameters to identify pathological parameter sets. These models demonstrated degeneracy, meaning that there were many ways of obtaining the pathological syndrome. There was no single parameter alteration which would consistently distinguish pathological from physiological dynamics. At higher dimensions in parameter space, we were able to use support vector machines to distinguish the two patterns in different regions of space and thereby trace multitarget routes from dystonic to physiological dynamics. These results suggest the use of in silico models for discovery of multitarget drug cocktails.
Affiliation(s)
- Samuel A Neymotin
- Department of Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New York, Brooklyn, NY, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT, USA
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New York, Brooklyn, NY, USA
- Peter Lakatos
- Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Terence D Sanger
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA; Division of Neurology, Child Neurology and Movement Disorders, Children's Hospital Los Angeles, Los Angeles, CA, USA
- William W Lytton
- Department of Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New York, Brooklyn, NY, USA; Department of Neurology, SUNY Downstate Medical Center, Brooklyn, NY, USA; Department of Neurology, Kings County Hospital Center, Brooklyn, NY, USA; The Robert F. Furchgott Center for Neural and Behavioral Science, Brooklyn, NY, USA
42
Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat 2016; 10:37. [PMID: 27092061 PMCID: PMC4823276 DOI: 10.3389/fnana.2016.00037] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2015] [Accepted: 03/18/2016] [Indexed: 11/17/2022] Open
Abstract
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
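The event-based idea mentioned in the abstract, replacing per-timestep state updates with a closed-form solution applied only at spike events, can be illustrated with a single exponentially decaying trace. This is a minimal sketch of the general technique, not the actual SpiNNaker BCPNN implementation (which chains several such traces and uses fixed-point arithmetic); class and parameter names are invented.

```python
import math


class EventDrivenTrace:
    """Exponentially decaying trace updated only at spike events.

    Between events the ODE  dz/dt = -z/tau  has the closed-form solution
    z(t) = z(t0) * exp(-(t - t0)/tau), so no per-timestep update is needed:
    the trace is stored as (value, time of last update) and decayed
    analytically whenever it is read or incremented.
    """

    def __init__(self, tau):
        self.tau = tau
        self.z = 0.0
        self.t_last = 0.0

    def value_at(self, t):
        # Analytically decay the stored value to time t.
        return self.z * math.exp(-(t - self.t_last) / self.tau)

    def on_spike(self, t, increment=1.0):
        # Decay to the event time, then apply the spike-triggered jump.
        self.z = self.value_at(t) + increment
        self.t_last = t
        return self.z


trace = EventDrivenTrace(tau=10.0)
trace.on_spike(0.0)
print(trace.value_at(10.0))  # one time constant later: e^-1 ≈ 0.368
```

The efficiency argument is that for sparse spiking the trace is touched only a few times per second of simulated time, instead of once per time-step.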
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Bernhard A Kaplan
- Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
43
Tomková M, Tomek J, Novák O, Zelenka O, Syka J, Brom C. Formation and disruption of tonotopy in a large-scale model of the auditory cortex. J Comput Neurosci 2015; 39:131-53. [PMID: 26344164 DOI: 10.1007/s10827-015-0568-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2014] [Revised: 05/15/2015] [Accepted: 05/19/2015] [Indexed: 12/19/2022]
Abstract
There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100,000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP), in an architecture consistent with existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena by analysing the activity of neuronal subtypes and testing different causal interventions in the simulation. Our model is able to produce experimental predictions on a cell-type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available.
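The only plasticity mechanism in the model is STDP. A minimal pair-based form of the rule can be written down directly; the amplitudes and time constants below are illustrative textbook values, not those used in the paper.

```python
import math

# Illustrative pair-based STDP parameters (not the paper's values):
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # trace time constants (ms)


def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair.

    Pre-before-post (causal) pairings potentiate the synapse; the reverse
    order depresses it, with exponentially decaying dependence on the lag.
    """
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:        # post before (or at) pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)


print(stdp_dw(0.0, 10.0))  # positive: causal pairing
print(stdp_dw(10.0, 0.0))  # negative: anti-causal pairing
```

Under correlated tone-evoked input, repeated causal pairings of this kind are what strengthen frequency-matched pathways, which is the basic mechanism behind the tonotopy results the abstract describes.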
Affiliation(s)
- Markéta Tomková
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic; Life Sciences Interface Doctoral Training Centre, University of Oxford, Oxford, UK
- Jakub Tomek
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic; Life Sciences Interface Doctoral Training Centre, University of Oxford, Oxford, UK
- Ondřej Novák
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
- Ondřej Zelenka
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
- Josef Syka
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
- Cyril Brom
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic
44
Vitay J, Dinkelbach HÜ, Hamker FH. ANNarchy: a code generation approach to neural simulations on parallel hardware. Front Neuroinform 2015; 9:19. [PMID: 26283957 PMCID: PMC4521356 DOI: 10.3389/fninf.2015.00019] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2015] [Accepted: 07/13/2015] [Indexed: 11/22/2022] Open
Abstract
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
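The code-generation idea, turning an equation-oriented model description into compiled update code, can be sketched in miniature. This toy generates a Python Euler step from an equation string rather than C++ as ANNarchy does, and the equation syntax here is a simplified stand-in, not ANNarchy's actual grammar.

```python
def generate_euler_update(equation, state_var):
    """Compile an ODE string like 'dv/dt = (E - v) / tau' into an explicit
    Euler update function. A toy stand-in for simulator code generation."""
    rhs = equation.split("=", 1)[1].strip()
    code = compile(rhs, "<generated>", "eval")

    def update(state, dt, **params):
        # Evaluate the right-hand side with the state variable and any
        # model parameters bound by name, then take one Euler step.
        env = dict(params)
        env[state_var] = state
        return state + dt * eval(code, {}, env)

    return update


# Leaky relaxation towards a resting potential E (hypothetical parameters):
update_v = generate_euler_update("dv/dt = (E - v) / tau", "v")
v = 0.0
for _ in range(1000):
    v = update_v(v, dt=0.1, E=-65.0, tau=10.0)
print(round(v, 3))  # after 100 ms with tau = 10 ms, v has relaxed close to E
```

A real simulator would parse the equation properly, choose among several integration schemes, and emit vectorised C++ for whole populations; the point here is only that the model description is data from which code is generated.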
Affiliation(s)
- Julien Vitay
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Helge Ü Dinkelbach
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Fred H Hamker
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany; Bernstein Center for Computational Neuroscience, Charité University Medicine, Berlin, Germany
45
Topalidou M, Leblois A, Boraud T, Rougier NP. A long journey into reproducible computational neuroscience. Front Comput Neurosci 2015; 9:30. [PMID: 25798104 PMCID: PMC4350388 DOI: 10.3389/fncom.2015.00030] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2015] [Accepted: 02/19/2015] [Indexed: 11/13/2022] Open
Affiliation(s)
- Meropi Topalidou
- INRIA Bordeaux Sud-Ouest, Bordeaux, France; Institut des Maladies Neurodégénératives, Université de Bordeaux, UMR 5293, Bordeaux, France; Institut des Maladies Neurodégénératives, Centre National de la Recherche Scientifique, UMR 5293, Bordeaux, France; LaBRI, Université de Bordeaux, Institut Polytechnique de Bordeaux, Centre National de la Recherche Scientifique, UMR 5800, Talence, France
- Arthur Leblois
- Centre de Neurophysique, Physiologie et Pathologies, Université Paris Descartes, Centre National de la Recherche Scientifique, UMR 8119, Paris, France
- Thomas Boraud
- Institut des Maladies Neurodégénératives, Université de Bordeaux, UMR 5293, Bordeaux, France; Institut des Maladies Neurodégénératives, Centre National de la Recherche Scientifique, UMR 5293, Bordeaux, France
- Nicolas P Rougier
- INRIA Bordeaux Sud-Ouest, Bordeaux, France; Institut des Maladies Neurodégénératives, Université de Bordeaux, UMR 5293, Bordeaux, France; Institut des Maladies Neurodégénératives, Centre National de la Recherche Scientifique, UMR 5293, Bordeaux, France; LaBRI, Université de Bordeaux, Institut Polytechnique de Bordeaux, Centre National de la Recherche Scientifique, UMR 5800, Talence, France
46
Wimmer K, Compte A, Roxin A, Peixoto D, Renart A, de la Rocha J. Sensory integration dynamics in a hierarchical network explains choice probabilities in cortical area MT. Nat Commun 2015; 6:6177. [PMID: 25649611 PMCID: PMC4347303 DOI: 10.1038/ncomms7177] [Citation(s) in RCA: 90] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2014] [Accepted: 12/29/2014] [Indexed: 11/09/2022] Open
Abstract
Neuronal variability in sensory cortex predicts perceptual decisions. This relationship, termed choice probability (CP), can arise from sensory variability biasing behaviour and from top-down signals reflecting behaviour. To investigate the interaction of these mechanisms during the decision-making process, we use a hierarchical network model composed of reciprocally connected sensory and integration circuits. Consistent with monkey behaviour in a fixed-duration motion discrimination task, the model integrates sensory evidence transiently, giving rise to a decaying bottom-up CP component. However, the dynamics of the hierarchical loop recruits a concurrently rising top-down component, resulting in sustained CP. We compute the CP time-course of neurons in the middle temporal area (MT) and find an early transient component and a separate late contribution reflecting decision build-up. The stability of individual CPs and the dynamics of noise correlations further support this decomposition. Our model provides a unified understanding of the circuit dynamics linking neural and behavioural variability.
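Choice probability is conventionally estimated as the ROC area between the two choice-conditioned response distributions, i.e. the probability that a response drawn from preferred-choice trials exceeds one drawn from null-choice trials. A sketch with synthetic spike counts (the rates and trial counts below are invented for illustration):

```python
import numpy as np


def choice_probability(rates_pref, rates_null):
    """Nonparametric ROC-area estimate of choice probability.

    CP = P(sample from preferred-choice trials > sample from null-choice
    trials), counting ties as 0.5. Computed here by brute-force pairwise
    comparison; equivalent to the Mann-Whitney U statistic normalised by
    the number of trial pairs.
    """
    a = np.asarray(rates_pref)[:, None]
    b = np.asarray(rates_null)[None, :]
    return float(np.mean(a > b) + 0.5 * np.mean(a == b))


rng = np.random.default_rng(1)
# Hypothetical spike counts on trials grouped by the animal's choice:
pref = rng.poisson(22, 300)  # trials ending in the neuron's preferred choice
null = rng.poisson(20, 300)
cp = choice_probability(pref, null)
print(cp)  # somewhat above 0.5: weak choice-related variability
```

The time-course analysis in the paper amounts to computing such a CP value in successive time windows, which is how the early transient and late decision-build-up components can be separated.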
Affiliation(s)
- Klaus Wimmer
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), C/Rosselló 149, 08036 Barcelona, Spain
- Albert Compte
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), C/Rosselló 149, 08036 Barcelona, Spain
- Alex Roxin
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), C/Rosselló 149, 08036 Barcelona, Spain; Centre de Recerca Matemàtica (CRM), Campus de Bellaterra, Edifici C, 08193 Barcelona, Spain
- Diogo Peixoto
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal; Department of Neurobiology, Stanford University, 299 Campus Drive West, Stanford, California 94305-5125, USA
- Alfonso Renart
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- Jaime de la Rocha
- Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), C/Rosselló 149, 08036 Barcelona, Spain
47
Pannunzi M, Pérez-Bellido A, Pereda-Baños A, López-Moliner J, Deco G, Soto-Faraco S. Deconstructing multisensory enhancement in detection. J Neurophysiol 2014; 113:1800-18. [PMID: 25520431 DOI: 10.1152/jn.00341.2014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performances surpass the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for detection performances in multimodal detection tasks but for which surpassing of the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
Affiliation(s)
- Joan López-Moliner
- Universitat de Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behaviour (IR3C), Barcelona, Spain
- Gustavo Deco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Salvador Soto-Faraco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
48
Abstract
A declarative extensible markup language (SpineML) for describing the dynamics, network and experiments of large-scale spiking neural network simulations is described, which builds upon the NineML standard. It utilises a level of abstraction which targets point-neuron representation but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human-readable, and allows collaborative working. The syntax uses a high-level, self-explanatory format which allows straightforward code generation or translation of a model description to a native simulator format. This paper demonstrates the use of code generation in order to translate, simulate and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML .
49
Kriener B, Enger H, Tetzlaff T, Plesser HE, Gewaltig MO, Einevoll GT. Dynamics of self-sustained asynchronous-irregular activity in random networks of spiking neurons with strong synapses. Front Comput Neurosci 2014; 8:136. [PMID: 25400575 PMCID: PMC4214205 DOI: 10.3389/fncom.2014.00136] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Accepted: 10/10/2014] [Indexed: 11/13/2022] Open
Abstract
Random networks of integrate-and-fire neurons with strong current-based synapses can, contrary to previous belief, assume stable states of sustained asynchronous and irregular firing, even without external random background or pacemaker neurons. We analyze the mechanisms underlying the emergence, lifetime and irregularity of such self-sustained activity states. We first demonstrate how the competition between the mean and the variance of the synaptic input leads to a non-monotonic firing-rate transfer in the network. Thus, by increasing the synaptic coupling strength, the system can become bistable: in addition to the quiescent state, a second stable fixed point at moderate firing rates can emerge by a saddle-node bifurcation. Inherently generated fluctuations of the population firing rate around this non-trivial fixed point can trigger transitions into the quiescent state. Hence, the trade-off between the magnitude of the population-rate fluctuations and the size of the basin of attraction of the non-trivial rate fixed point determines the onset and the lifetime of self-sustained activity states. During self-sustained activity, individual neuronal activity is moreover highly irregular, switching between long periods of low firing rate and short burst-like states. We show that this is an effect of the strong synaptic weights and the finite time constant of synaptic and neuronal integration, and can actually serve to stabilize the self-sustained state.
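The bistability argument can be illustrated by finding the fixed points of a self-consistent rate equation r = φ(r). The transfer function below is a toy sigmoidal stand-in (not the paper's fluctuation-driven rate function, and the parameter values are invented), chosen so that a quiescent fixed point coexists with a stable high-rate fixed point and an unstable one between them, as after a saddle-node bifurcation.

```python
import numpy as np


def transfer(r, r_max=40.0, k=10.0):
    """Toy supralinear-then-saturating population transfer function
    (illustrative only; not the paper's derived rate function)."""
    return r_max * r ** 2 / (r ** 2 + k ** 2)


# Fixed points of the self-consistency condition r = transfer(r) are the
# sign changes of g(r) = transfer(r) - r on a fine grid.
rs = np.linspace(0.0, 50.0, 50001)
g = transfer(rs) - rs
fixed = rs[:-1][np.sign(g[:-1]) != np.sign(g[1:])]

# Three fixed points: r = 0 (stable quiescent state), an intermediate
# unstable point (~2.68 spikes/s here) marking the edge of the basin of
# attraction, and a stable high-rate point (~37.32 spikes/s) representing
# self-sustained activity.
print(fixed)
```

Fluctuations large enough to push the population rate below the middle (unstable) fixed point kick the network back to quiescence, which is exactly the lifetime trade-off the abstract describes.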
Affiliation(s)
- Birgit Kriener
- Neural Coding and Dynamics, Center for Learning and Memory, University of Texas at Austin, Austin, TX, USA; Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Håkon Enger
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway; Simula Research Laboratory, Kalkulo AS, Fornebu, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, and Institute for Advanced Simulation (IAS-6), Theoretical Neuroscience, Jülich Research Centre and JARA, Jülich, Germany
- Hans E Plesser
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Marc-Oliver Gewaltig
- Blue Brain Project, In-Silico Neuroscience - Cognitive Architectures, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Gaute T Einevoll
- Computational Neuroscience, Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway; Department of Physics, University of Oslo, Oslo, Norway
50
A neuroinformatics of brain modeling and its implementation in the Brain Operation Database BODB. Neuroinformatics 2014; 12:5-26. [PMID: 24234915 DOI: 10.1007/s12021-013-9209-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
We present principles for an integrated neuroinformatics framework which makes explicit how models are grounded on empirical evidence, explain (or not) existing empirical results and make testable predictions. The new ontological framework makes explicit how models bring together structural, functional, and related empirical observations. We emphasize schematics of the model’s operation linked to summaries of empirical data (SEDs) used in both the design and testing of the model, with tests comparing SEDs to summaries of simulation results (SSRs) from the model. We stress the importance of protocols for models as well as experiments. We complement the structural ontology of nested brain structures with a functional ontology of Brain Operating Principles (BOPs) for observed neural function and an ontological framework for grounding models in empirical data. We present an implementation of this ontological framework in the Brain Operation Database (BODB), an environment in which modelers and experimentalists can work together by making use of their shared empirical data, models and expertise.