1. Munn BR, Müller EJ, Aru J, Whyte CJ, Gidon A, Larkum ME, Shine JM. A thalamocortical substrate for integrated information via critical synchronous bursting. Proc Natl Acad Sci U S A 2023; 120:e2308670120. PMID: 37939085. PMCID: PMC10655573. DOI: 10.1073/pnas.2308670120.
Abstract
Understanding the neurobiological mechanisms underlying consciousness remains a significant challenge. Recent evidence suggests that the coupling between distal-apical and basal-somatic dendrites in thick-tufted layer 5 pyramidal neurons (L5PN), regulated by the nonspecific-projecting thalamus, is crucial for consciousness. Yet, it is uncertain whether this thalamocortical mechanism can support emergent signatures of consciousness, such as integrated information. To address this question, we constructed a biophysical network of dual-compartment thick-tufted L5PN, with dendrosomatic coupling controlled by thalamic inputs. Our findings demonstrate that integrated information is maximized when nonspecific thalamic inputs drive the system into a regime of time-varying synchronous bursting. Here, the system exhibits variable spiking dynamics with broad pairwise correlations, supporting the enhanced integrated information. Further, the observed peak in integrated information aligns with criticality signatures and empirically observed layer 5 pyramidal bursting rates. These results suggest that the thalamocortical core of the mammalian brain may be evolutionarily configured to optimize effective information processing, providing a potential neuronal mechanism that integrates microscale theories with macroscale signatures of consciousness.
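The gating idea in this abstract, apical-dendritic drive reaching the soma only when a thalamically controlled coupling is switched on, can be caricatured in a few lines. The sketch below is a deliberately minimal pair of leaky integrators, not the authors' biophysical L5PN model; the `coupling` parameter stands in for nonspecific thalamic input, and all constants are illustrative.

```python
def simulate(coupling, t_max=1000.0, dt=0.1):
    """Toy two-compartment neuron: the somatic drive alone is subthreshold;
    apical (dendritic) drive reaches the soma only through `coupling`."""
    v_soma, v_dend = 0.0, 0.0
    tau_s, tau_d = 5.0, 10.0      # membrane time constants (ms), illustrative
    i_soma, i_dend = 0.8, 2.0     # steady drives; soma alone stays below threshold
    spikes = 0
    for _ in range(int(t_max / dt)):
        # forward-Euler integration of both leaky compartments
        v_dend += (-v_dend + i_dend) * dt / tau_d
        v_soma += (-v_soma + i_soma + coupling * v_dend) * dt / tau_s
        if v_soma >= 1.0:          # threshold crossing: count a spike and reset
            spikes += 1
            v_soma = 0.0
    return spikes
```

With `coupling = 0` the soma settles below threshold and never fires; raising the coupling lets the apical drive push it across threshold, which is the qualitative switch the paper attributes to thalamic control.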
Affiliation(s)
- Brandon R. Munn
  - Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
  - Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Eli J. Müller
  - Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
  - Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Jaan Aru
  - Institute of Computer Science, University of Tartu, Tartu 51009, Estonia
- Christopher J. Whyte
  - Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
  - Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Albert Gidon
  - Institute of Biology, Humboldt University of Berlin, Berlin 10099, Germany
  - NeuroCure Center of Excellence, Charité Universitätsmedizin Berlin, Berlin 10099, Germany
- Matthew E. Larkum
  - Institute of Biology, Humboldt University of Berlin, Berlin 10099, Germany
  - NeuroCure Center of Excellence, Charité Universitätsmedizin Berlin, Berlin 10099, Germany
- James M. Shine
  - Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
  - Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
2. Munn BR, Müller EJ, Medel V, Naismith SL, Lizier JT, Sanders RD, Shine JM. Neuronal connected burst cascades bridge macroscale adaptive signatures across arousal states. Nat Commun 2023; 14:6846. PMID: 37891167. PMCID: PMC10611774. DOI: 10.1038/s41467-023-42465-2.
Abstract
The human brain displays a rich repertoire of states that emerge from the microscopic interactions of cortical and subcortical neurons. Difficulties inherent within large-scale simultaneous neuronal recording limit our ability to link biophysical processes at the microscale to emergent macroscopic brain states. Here we introduce a microscale biophysical network model of layer-5 pyramidal neurons that display graded coarse-sampled dynamics matching those observed in macroscale electrophysiological recordings from macaques and humans. We invert our model to identify the neuronal spike and burst dynamics that differentiate unconscious, dreaming, and awake arousal states and provide insights into their functional signatures. We further show that neuromodulatory arousal can mediate different modes of neuronal dynamics around a low-dimensional energy landscape, which in turn changes the response of the model to external stimuli. Our results highlight the promise of multiscale modelling to bridge theories of consciousness across spatiotemporal scales.
Affiliation(s)
- Brandon R Munn
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
  - Complex Systems, School of Physics, University of Sydney, Sydney, NSW, Australia
  - Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Eli J Müller
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
  - Complex Systems, School of Physics, University of Sydney, Sydney, NSW, Australia
  - Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Vicente Medel
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
  - Latin American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Santiago, Chile
- Sharon L Naismith
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
  - School of Psychology, Faculty of Science & Charles Perkins Centre, The University of Sydney, Sydney, NSW, Australia
- Joseph T Lizier
  - Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
  - School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Robert D Sanders
  - Department of Anaesthetics & Institute of Academic Surgery, Royal Prince Alfred Hospital, Camperdown, Australia
  - Central Clinical School & NHMRC Clinical Trials Centre, The University of Sydney, Sydney, NSW, Australia
- James M Shine
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
  - Complex Systems, School of Physics, University of Sydney, Sydney, NSW, Australia
  - Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
3. Zajzon B, Duarte R, Morrison A. Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. Front Integr Neurosci 2023; 17:935177. PMID: 37396571. PMCID: PMC10310927. DOI: 10.3389/fnint.2023.935177.
Abstract
To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on this, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We identify a limitation of the model, namely that the sequence order is hard-wired into the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically plausible constraints.
Affiliation(s)
- Barna Zajzon
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
  - Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Abigail Morrison
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
4. Lehnigk R, Bruschewski M, Huste T, Lucas D, Rehm M, Schlegel F. Sustainable development of simulation setups and addons for OpenFOAM for nuclear reactor safety research. Kerntechnik 2023. DOI: 10.1515/kern-2022-0107.
Abstract
Open-source environments such as the Computational Fluid Dynamics software OpenFOAM are very appealing for research groups since they allow for an efficient prototyping of new models or concepts. However, for downstream developments to be sustainable, i.e. reproducible and reusable in the long term, a significant amount of maintenance work must be accounted for. To allow for growth and extensibility, the maintenance work should be underpinned by a high degree of automation for repetitive tasks such as build tests, code deployment and validation runs, in order to keep the focus on scientific work. Here, an information technology environment referred to as OpenFOAM_RCS is presented that aids the centralized maintenance of simulation code and setup files for OpenFOAM developments concerned with reactor coolant system safety research. It fosters collaborative developments and review processes. State-of-the-art tools for managing software developments are adapted to meet the requirements of OpenFOAM. A flexible approach for upgrading the underlying installation is proposed, based on snapshots of the OpenFOAM development line rather than yearly version releases, to make new functionality available when needed by associated research projects. The process of upgrading within so-called sprint cycles is accompanied by several checks to ensure compatibility of downstream code and simulation setups. Furthermore, the foundation for building a validation data base from contributed simulation setups is laid, creating a basis for continuous quality assurance.
Affiliation(s)
- Ronald Lehnigk
  - Helmholtz-Zentrum Dresden-Rossendorf e.V., Bautzner Landstraße 400, Dresden, Germany
- Martin Bruschewski
  - University of Rostock, Chair of Fluid Mechanics, Albert-Einstein-Straße 2, Rostock, Germany
- Tobias Huste
  - Helmholtz-Zentrum Dresden-Rossendorf e.V., Bautzner Landstraße 400, Dresden, Germany
- Dirk Lucas
  - Helmholtz-Zentrum Dresden-Rossendorf e.V., Bautzner Landstraße 400, Dresden, Germany
- Markus Rehm
  - Framatome GmbH, Paul-Gossen-Straße 100, Erlangen, Germany
- Fabian Schlegel
  - Helmholtz-Zentrum Dresden-Rossendorf e.V., Bautzner Landstraße 400, Dresden, Germany
5. Zajzon B, Dahmen D, Morrison A. Signal denoising through topographic modularity of neural circuits. eLife 2023; 12:e77009. PMID: 36700545. PMCID: PMC9981157. DOI: 10.7554/elife.77009.
Abstract
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
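The central claim, that the sharpness of topographic projections acts as a control parameter for denoising, can be illustrated with a toy rate model. This is not the authors' spiking network: the module count, initial rates, and the subtractive-inhibition-plus-rectification step below are illustrative assumptions that merely echo the mechanism described in the abstract.

```python
def propagate(sharpness, n_layers=5, n_modules=10):
    """Feedforward modular toy: `sharpness` is the fraction of each module's
    output that stays on-topic; the rest is spread diffusely across modules.
    A subtractive 'inhibition' step rectifies activity below the mean."""
    rates = [0.1] * n_modules          # background activity (noise)
    rates[0] = 0.2                     # module 0 carries the stimulus
    for _ in range(n_layers):
        diffuse = sum(rates) / n_modules
        mixed = [sharpness * r + (1.0 - sharpness) * diffuse for r in rates]
        mean = sum(mixed) / n_modules
        rates = [max(r - mean, 0.0) for r in mixed]    # inhibition + rectify
        norm = sum(rates) or 1.0
        rates = [r / norm for r in rates]              # keep activity bounded
    return rates
```

With sharp projections the stimulus module ends up carrying all of the activity, while with fully diffuse projections the stimulus is washed out by mixing, a cartoon of the bifurcation-like behavior the paper analyzes.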
Affiliation(s)
- Barna Zajzon
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- David Dahmen
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
6. Grimaldi A, Gruel A, Besnainou C, Jérémie JN, Martinet J, Perrinet LU. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022; 13:68. PMID: 36672049. PMCID: PMC9856822. DOI: 10.3390/brainsci13010068.
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on the one hand, binary (a spike either occurs or it does not, with no further detail) and, on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing that underlie classical digital processing and modern-day neural networks. Because neural systems in the living world almost universally use this so-called event-based representation, a better understanding of it remains a fundamental challenge in neurobiology and is needed to interpret the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could yield significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory, and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
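A minimal sketch of what detecting a "precise spiking motif" means in an event-based representation: spikes are plain lists of event times, and a motif is a vector of fixed lags relative to an onset. The function name, the example trains, and the tolerance below are illustrative, not taken from the paper.

```python
def find_motif(spike_trains, motif, tol=0.5):
    """Scan for onset times t such that neuron i spikes near t + motif[i]
    for every i. Times are continuous (ms); no global clock is assumed."""
    onsets = []
    for t_ref in spike_trains[0]:
        onset = t_ref - motif[0]       # candidate onset anchored on neuron 0
        if all(any(abs(t - (onset + lag)) <= tol for t in train)
               for train, lag in zip(spike_trains, motif)):
            onsets.append(onset)
    return onsets

trains = [[10.0, 30.0, 61.0],          # neuron 0
          [12.0, 25.0, 32.0],          # neuron 1
          [15.0, 35.0, 50.0]]          # neuron 2
hits = find_motif(trains, motif=[0.0, 2.0, 5.0])   # -> [10.0, 30.0]
```

The motif (lags 0, 2, 5 ms) occurs at onsets 10 and 30 but not at 61, where neuron 1 is silent at the required lag; this is the kind of temporal structure that rate-based descriptions discard.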
Affiliation(s)
- Antoine Grimaldi
  - INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Amélie Gruel
  - SPARKS, Côte d'Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Camille Besnainou
  - INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean-Nicolas Jérémie
  - INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
- Jean Martinet
  - SPARKS, Côte d'Azur, CNRS, I3S, 2000 Rte des Lucioles, 06900 Sophia-Antipolis, France
- Laurent U. Perrinet
  - INT UMR 7289, Aix Marseille Univ, CNRS, 27 Bd Jean Moulin, 13005 Marseille, France
7. Oberländer J, Bouhadjar Y, Morrison A. Learning and replaying spatiotemporal sequences: A replication study. Front Integr Neurosci 2022; 16:974177. PMID: 36310714. PMCID: PMC9614051. DOI: 10.3389/fnint.2022.974177.
Abstract
Learning and replaying spatiotemporal sequences are fundamental computations performed by the brain and specifically the neocortex. These features are critical for a wide variety of cognitive functions, including sensory perception and the execution of motor and language skills. Although several computational models demonstrate this capability, many are either hard to reconcile with biological findings or have limited functionality. To address this gap, a recent study proposed a biologically plausible model based on a spiking recurrent neural network supplemented with read-out neurons. After learning, the recurrent network develops precise switching dynamics by successively activating and deactivating small groups of neurons. The read-out neurons are trained to respond to particular groups and can thereby reproduce the learned sequence. For the model to serve as the basis for further research, it is important to determine its replicability. In this Brief Report, we give a detailed description of the model and identify missing details, inconsistencies or errors in or between the original paper and its reference implementation. We re-implement the full model in the neural simulator NEST in conjunction with the NESTML modeling language and confirm the main findings of the original work.
Affiliation(s)
- Jette Oberländer
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Younes Bouhadjar
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
  - Jülich Research Centre and JARA, Peter Grünberg Institute (PGI-7, 10), Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Abigail Morrison
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
8. Schulte to Brinke T, Duarte R, Morrison A. Characteristic columnar connectivity caters to cortical computation: Replication, simulation, and evaluation of a microcircuit model. Front Integr Neurosci 2022; 16:923468. PMID: 36310713. PMCID: PMC9615567. DOI: 10.3389/fnint.2022.923468.
Abstract
The neocortex, and with it the mammalian brain, achieves a level of computational efficiency unmatched by any other existing computational engine. A deeper understanding of its building blocks (cortical microcircuits) and their underlying computational principles is thus of paramount interest. To this end, we need reproducible computational models that can be analyzed, modified, extended, and quantitatively compared. In this study, we further that aim by providing a replication of a seminal cortical column model. This model consists of noisy Hodgkin-Huxley neurons connected by dynamic synapses, whose connectivity scheme is based on empirical findings from intracellular recordings. Our analysis confirms the key original finding that the specific, data-based connectivity structure enhances the computational performance compared to a variety of alternatively structured control circuits. For this comparison, we use tasks based on spike patterns and rates that require the systems not only to have simple classification capabilities, but also to retain information over time and to compute nonlinear functions. Going beyond the scope of the original study, we demonstrate that this finding is independent of the complexity of the neuron model, which further strengthens the argument that it is the connectivity that is crucial. Finally, a detailed analysis of the memory capabilities of the circuits reveals a stereotypical memory profile common across all circuit variants. Notably, the circuit with laminar structure does not retain stimulus information for longer than any other circuit type. We therefore conclude that the model's computational advantage lies in a sharper representation of the stimuli.
Affiliation(s)
- Tobias Schulte to Brinke
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Abigail Morrison
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
9. Trensch G, Morrison A. A System-on-Chip Based Hybrid Neuromorphic Compute Node Architecture for Reproducible Hyper-Real-Time Simulations of Spiking Neural Networks. Front Neuroinform 2022; 16:884033. PMID: 35846779. PMCID: PMC9277345. DOI: 10.3389/fninf.2022.884033.
Abstract
Despite the great strides neuroscience has made in recent decades, the underlying principles of brain function remain largely unknown. Advancing the field strongly depends on the ability to study large-scale neural networks and perform complex simulations. In this context, simulations in hyper-real-time are of high interest, as they would enable both comprehensive parameter scans and the study of slow processes, such as learning and long-term memory. Not even the fastest supercomputer available today is able to meet the challenge of accurate and reproducible simulation with hyper-real acceleration. The development of novel neuromorphic computer architectures holds out promise, but the high costs and long development cycles for application-specific hardware solutions make it difficult to keep pace with the rapid developments in neuroscience. However, advances in System-on-Chip (SoC) device technology and tools are now providing interesting new design possibilities for application-specific implementations. Here, we present a novel hybrid software-hardware architecture approach for a neuromorphic compute node intended to work in a multi-node cluster configuration. The node design builds on the Xilinx Zynq-7000 SoC device architecture that combines a powerful field-programmable gate array (FPGA) and a dual-core ARM Cortex-A9 processor extension on a single chip. Our proposed architecture makes use of both and takes advantage of their tight coupling. We show that available SoC device technology can be used to build smaller neuromorphic computing clusters that enable hyper-real-time simulation of networks consisting of tens of thousands of neurons, and are thus capable of meeting the high demands for modeling and simulation in neuroscience.
Affiliation(s)
- Guido Trensch
  - Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
  - Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
  - Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
  - Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA-Institute Brain Structure-Function Relationship (JBI-1/INM-10), Research Centre Jülich, Jülich, Germany
10. Albers J, Pronold J, Kurth AC, Vennemo SB, Haghighi Mood K, Patronis A, Terhorst D, Jordan J, Kunkel S, Tetzlaff T, Diesmann M, Senk J. A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front Neuroinform 2022; 16:837549. PMID: 35645755. PMCID: PMC9131021. DOI: 10.3389/fninf.2022.837549.
Abstract
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
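The core workflow idea, measuring time-to-solution at each network scale while recording enough metadata to make the benchmark reproducible, can be sketched in a few lines of plain Python. This is not the beNNch API; `run_benchmark` and the metadata fields below are hypothetical stand-ins for the concept the paper describes.

```python
import time
import platform

def run_benchmark(model_fn, scales, metadata=None):
    """Run `model_fn` at each scale and record time-to-solution together
    with environment metadata, one record per benchmark configuration."""
    records = []
    for scale in scales:
        t0 = time.perf_counter()
        result = model_fn(scale)
        records.append({
            "scale": scale,
            "time_to_solution_s": time.perf_counter() - t0,
            "result": result,
            "env": {"python": platform.python_version(),
                    "machine": platform.machine()},
            **(metadata or {}),        # e.g. simulator name and revision
        })
    return records

# Stand-in "simulation" whose cost grows with network size.
bench = run_benchmark(lambda n: sum(i % 7 for i in range(n)),
                      scales=[10_000, 100_000],
                      metadata={"simulator": "toy", "revision": "0.1"})
```

Keeping the measurement loop, the model, and the metadata capture in separate pieces mirrors the modular decomposition the paper argues for: any segment can be swapped (different simulator, different hardware) while the records stay comparable.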
Affiliation(s)
- Jasper Albers
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
  - *Correspondence: Jasper Albers
- Jari Pronold
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Anno Christopher Kurth
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - RWTH Aachen University, Aachen, Germany
- Stine Brekke Vennemo
  - Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander Patronis
  - Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany
- Dennis Terhorst
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Jakob Jordan
  - Department of Physiology, University of Bern, Bern, Switzerland
- Susanne Kunkel
  - Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Tom Tetzlaff
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  - Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
  - Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Johanna Senk
  - Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
11. Herbers P, Calvo I, Diaz-Pier S, Robles OD, Mata S, Toharia P, Pastor L, Peyser A, Morrison A, Klijn W. ConGen—A Simulator-Agnostic Visual Language for Definition and Generation of Connectivity in Large and Multiscale Neural Networks. Front Neuroinform 2022; 15:766697. PMID: 35069166. PMCID: PMC8777257. DOI: 10.3389/fninf.2021.766697.
Abstract
An open challenge on the road to unraveling the brain's multilevel organization is establishing techniques to research connectivity and dynamics at different scales in time and space, as well as the links between them. This work focuses on the design of a framework that facilitates the generation of multiscale connectivity in large neural networks using a symbolic visual language capable of representing the model at different structural levels—ConGen. This symbolic language allows researchers to create and visually analyze the generated networks independently of the simulator to be used, since the visual model is translated into a simulator-independent language. The simplicity of the front end visual representation, together with the simulator independence provided by the back end translation, combine into a framework to enhance collaboration among scientists with expertise at different scales of abstraction and from different fields. On the basis of two use cases, we introduce the features and possibilities of our proposed visual language and associated workflow. We demonstrate that ConGen enables the creation, editing, and visualization of multiscale biological neural networks and provides a whole workflow to produce simulation scripts from the visual representation of the model.
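The back-end idea, translating a simulator-independent network description into a simulator-specific script, can be sketched as below. The dictionary format and the `to_nest_script` helper are hypothetical and are not ConGen's actual representation; only the emitted `nest.Create`/`nest.Connect` calls follow the real PyNEST API. The script is generated as text, not executed here.

```python
def to_nest_script(network):
    """Emit a PyNEST script (as text) from a simulator-independent
    network description. The input dict format is illustrative only."""
    lines = ["import nest"]
    for pop in network["populations"]:
        lines.append('{} = nest.Create("{}", {})'.format(
            pop["name"], pop["model"], pop["size"]))
    for src, tgt in network["connections"]:
        lines.append("nest.Connect({}, {})".format(src, tgt))
    return "\n".join(lines)

net = {"populations": [{"name": "exc", "model": "iaf_psc_alpha", "size": 80},
                       {"name": "inh", "model": "iaf_psc_alpha", "size": 20}],
       "connections": [("exc", "inh"), ("inh", "exc")]}
script = to_nest_script(net)
```

Because the intermediate description carries no simulator detail, a second translator targeting a different back end could consume the same `net` unchanged, which is the simulator independence the abstract emphasizes.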
Affiliation(s)
- Patrick Herbers
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Iago Calvo
- Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain
- Sandra Diaz-Pier
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- *Correspondence: Wouter Klijn
- Oscar D. Robles
- Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain
- Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Susana Mata
- Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain
- Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Pablo Toharia
- Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- DATSI, ETSIINF, Universidad Politécnica de Madrid, Madrid, Spain
- Luis Pastor
- Department of Computer Science and Computer Architecture, Lenguajes y Sistemas Informáticos y Estadística e Investigación Operativa, Rey Juan Carlos University, Madrid, Spain
- Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Alexander Peyser
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine and Institute for Advanced Simulation and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Sandra Diaz-Pier
12
Dasbach S, Tetzlaff T, Diesmann M, Senk J. Dynamical Characteristics of Recurrent Neuronal Networks Are Robust Against Low Synaptic Weight Resolution. Front Neurosci 2021; 15:757790. [PMID: 35002599 PMCID: PMC8740282 DOI: 10.3389/fnins.2021.757790] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 11/03/2021] [Indexed: 11/13/2022] Open
Abstract
The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for Computational Neuroscience and Neuromorphic Computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the numerical resolution of synaptic weights appears to be a natural strategy for reducing memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits and develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with the reference solution. We show that a naive discretization of synaptic weights generally leads to a distortion of the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
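The moment-preserving discretization described above can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical weight parameters, not the study's actual procedure: naive rounding to a few levels distorts the weight statistics, while an affine rescaling of the discretized weights restores the mean and variance (and hence, for a fixed in-degree, the mean and variance of the summed input current).

```python
import numpy as np

rng = np.random.default_rng(42)

# High-resolution reference: K synaptic weights onto one neuron
# (normal distribution; scale is hypothetical, chosen for illustration).
K = 10_000
w = rng.normal(0.1, 0.01, size=K)

# Naive discretization: snap each weight to the nearest of L levels.
L = 4
levels = np.linspace(w.min(), w.max(), L)
w_naive = levels[np.abs(w[:, None] - levels).argmin(axis=1)]

# Moment-matched discretization: an affine transform of the naive weights
# that restores the original mean and standard deviation. The result is
# still discrete (at most L distinct values, just shifted level positions).
w_matched = (w_naive - w_naive.mean()) * (w.std() / w_naive.std()) + w.mean()

print(w.mean(), w_naive.mean(), w_matched.mean())
print(w.std(), w_naive.std(), w_matched.std())
```

For a neuron receiving K such weights, preserving the per-weight mean and variance preserves the mean and variance of the total input current, which is the condition the study identifies as sufficient to leave the firing statistics unaffected.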
Affiliation(s)
- Stefan Dasbach
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
13
Vieth M, Stöber TM, Triesch J. PymoNNto: A Flexible Modular Toolbox for Designing Brain-Inspired Neural Networks. Front Neuroinform 2021; 15:715131. [PMID: 34790108 PMCID: PMC8591031 DOI: 10.3389/fninf.2021.715131] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/07/2021] [Indexed: 11/13/2022] Open
Abstract
The Python Modular Neural Network Toolbox (PymoNNto) provides a versatile and adaptable Python-based framework to develop and investigate brain-inspired neural networks. In contrast to other commonly used simulators such as Brian2 and NEST, PymoNNto imposes only minimal restrictions for implementation and execution. The basic structure of PymoNNto consists of one network class with several neuron- and synapse-groups. The behaviour of each group can be flexibly defined by exchangeable modules. The implementation of these modules is up to the user and only limited by Python itself. Behaviours can be implemented in Python, Numpy, Tensorflow, and other libraries to perform computations on CPUs and GPUs. PymoNNto comes with convenient high level behaviour modules, allowing differential equation-based implementations similar to Brian2, and an adaptable modular Graphical User Interface for real-time observation and modification of the simulated network and its parameters.
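The modular structure described here (one network class holding neuron groups whose behaviour is defined by exchangeable modules) can be illustrated with a toy framework. This is a conceptual sketch in plain Python, not the actual PymoNNto API; all class and attribute names are invented for illustration.

```python
import numpy as np

class Behaviour:
    """Exchangeable module: hooks into a group's initialization and update."""
    def initialize(self, group): pass
    def step(self, group): pass

class LeakyIntegrate(Behaviour):
    """Example behaviour: simple leaky integration toward the group's input."""
    def initialize(self, group):
        group.v = np.zeros(group.size)
    def step(self, group):
        group.v += 0.1 * (group.input - group.v)

class NeuronGroup:
    def __init__(self, size, behaviours):
        self.size = size
        self.input = np.ones(size)          # constant drive for the demo
        self.behaviours = behaviours
        for b in behaviours:
            b.initialize(self)
    def step(self):
        for b in self.behaviours:
            b.step(self)

class Network:
    def __init__(self, groups):
        self.groups = groups
    def simulate(self, steps):
        for _ in range(steps):
            for g in self.groups:
                g.step()

g = NeuronGroup(5, [LeakyIntegrate()])
Network([g]).simulate(100)
print(g.v)   # converges toward the input value
```

Because each behaviour is an ordinary Python object, a NumPy-based module could be swapped for one backed by another array library without touching the network or group classes, which is the flexibility the abstract emphasizes.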
Affiliation(s)
- Marius Vieth
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
14
Knight JC, Komissarov A, Nowotny T. PyGeNN: A Python Library for GPU-Enhanced Neural Networks. Front Neuroinform 2021; 15:659005. [PMID: 33967731 PMCID: PMC8100330 DOI: 10.3389/fninf.2021.659005] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 03/15/2021] [Indexed: 11/23/2022] Open
Abstract
More than half of the Top 10 supercomputing sites worldwide use GPU accelerators and they are becoming ubiquitous in workstations and edge computing devices. GeNN is a C++ library for generating efficient spiking neural network simulation code for GPUs. However, until now, the full flexibility of GeNN could only be harnessed by writing model descriptions and simulation code in C++. Here we present PyGeNN, a Python package which exposes all of GeNN's functionality to Python with minimal overhead. This provides an alternative, arguably more user-friendly, way of using GeNN and allows modelers to use GeNN within the growing Python-based machine learning and computational neuroscience ecosystems. In addition, we demonstrate that, in both Python and C++ GeNN simulations, the overheads of recording spiking data can strongly affect runtimes and show how a new spike recording system can reduce these overheads by up to 10×. Using the new recording system, we demonstrate that by using PyGeNN on a modern GPU, we can simulate a full-scale model of a cortical column faster than even real-time neuromorphic systems. Finally, we show that long simulations of a smaller model with complex stimuli and a custom three-factor learning rule defined in PyGeNN can be simulated almost two orders of magnitude faster than real-time.
Affiliation(s)
- James C. Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Anton Komissarov
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Department of Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Thomas Nowotny
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
15
Abstract
Brain dynamics show a rich spatiotemporal behavior whose stability is neither ordered nor chaotic, indicating that neural networks operate at intermediate stability regimes including critical dynamics, represented by a negative power-law distribution of avalanche sizes with exponent α=−1.5. However, it is unknown which stability regime allows global and local information transmission with reduced metabolic costs, which are measured in terms of synaptic potentials and action potentials. In this work, using a hierarchical neuron model with rich-club organization, we measure the average number of action potentials required to activate n different neurons (avalanche size). In addition, we develop a mathematical formula to represent the metabolic synaptic potential cost. We run simulations varying the synaptic amplitude, synaptic time course (ms), and hub excitatory/inhibitory ratio, and compare different dynamic regimes in terms of avalanche sizes vs. metabolic cost. We also implement the dynamic model in Drosophila and Erdős–Rényi networks to compute dynamics and metabolic costs. The results show that the synaptic amplitude and time course play a key role in information propagation: they can drive the system from subcritical to supercritical regimes. The latter result promotes the coexistence of critical regimes with a wide range of excitation/inhibition hub ratios. Moreover, subcritical or silent regimes minimize metabolic cost for local avalanche sizes, whereas critical and intermediate stability regimes show the best compromise between information propagation and reduced metabolic consumption, also minimizing metabolic cost for a wide range of avalanche sizes.
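The subcritical/critical/supercritical distinction above can be illustrated with a minimal branching-process sketch (plain Python/NumPy; the paper's model is a hierarchical spiking network, so the process and parameters here are purely illustrative). Each active unit triggers on average σ descendants: below σ=1 avalanches stay local, at σ=1 sizes become heavy-tailed, and above σ=1 activity tends to run away.

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, max_size=10_000):
    """Total number of activations in one avalanche of a branching process
    where each active unit spawns Poisson(sigma) descendants per generation."""
    active, size = 1, 1
    while active and size < max_size:
        active = rng.poisson(sigma * active)  # descendants of this generation
        size += active
    return size

# Sample avalanche sizes in the subcritical, critical, and supercritical regimes.
sizes = {s: [avalanche_size(s) for _ in range(5000)] for s in (0.8, 1.0, 1.2)}

# Subcritical (0.8): small, local avalanches; critical (1.0): heavy-tailed
# sizes; supercritical (1.2): a large fraction of runaway avalanches.
print({s: round(float(np.mean(v)), 1) for s, v in sizes.items()})
```

In the critical regime the size distribution of such a process follows the familiar power law with exponent near −1.5, matching the signature quoted in the abstract.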
16
Zajzon B, Mahmoudian S, Morrison A, Duarte R. Passing the Message: Representation Transfer in Modular Balanced Networks. Front Comput Neurosci 2019; 13:79. [PMID: 31920605 PMCID: PMC6915101 DOI: 10.3389/fncom.2019.00079] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 10/29/2019] [Indexed: 01/08/2023] Open
Abstract
Neurobiological systems rely on hierarchical and modular architectures to carry out intricate computations using minimal resources. A prerequisite for such systems to operate adequately is the capability to reliably and efficiently transfer information across multiple modules. Here, we study the features enabling a robust transfer of stimulus representations in modular networks of spiking neurons, tuned to operate in a balanced regime. To capitalize on the complex, transient dynamics that such networks exhibit during active processing, we apply reservoir computing principles and probe the systems' computational efficacy with specific tasks. Focusing on the comparison of random feed-forward connectivity and biologically inspired topographic maps, we find that, in a sequential set-up, structured projections between the modules are strictly necessary for information to propagate accurately to deeper modules. Such mappings not only improve computational performance and efficiency, they also reduce response variability, increase robustness against interference effects, and boost memory capacity. We further investigate how information from two separate input streams is integrated and demonstrate that it is more advantageous to perform non-linear computations on the input locally, within a given module, and subsequently transfer the result downstream, rather than transferring intermediate information and performing the computation downstream. Depending on how information is integrated early on in the system, the networks achieve similar task-performance using different strategies, indicating that the dimensionality of the neural responses does not necessarily correlate with nonlinear integration, as predicted by previous studies. These findings highlight a key role of topographic maps in supporting fast, robust, and accurate neural communication over longer distances. Given the prevalence of such structural features, particularly in the sensory systems, elucidating their functional purpose remains an important challenge, toward which this work provides relevant new insights. At the same time, these results shed new light on important requirements for designing functional hierarchical spiking networks.
Affiliation(s)
- Barna Zajzon
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- Sepehr Mahmoudian
- Department of Data-Driven Analysis of Biological Networks, Campus Institute for Dynamics of Biological Networks, Georg August University Göttingen, Göttingen, Germany
- MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
- Abigail Morrison
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Renato Duarte
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1/INM-10), Jülich, Germany
17
Stimberg M, Brette R, Goodman DFM. Brian 2, an intuitive and efficient neural simulator. eLife 2019; 8:e47314. [PMID: 31429824 PMCID: PMC6786860 DOI: 10.7554/elife.47314] [Citation(s) in RCA: 203] [Impact Index Per Article: 40.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 08/19/2019] [Indexed: 01/20/2023] Open
Abstract
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
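The runtime code-generation idea at the heart of the abstract can be illustrated with a toy translator. This is a conceptual sketch, not Brian 2's actual code-generation pipeline: a high-level equation string (here for a hypothetical variable `v` with input `I` and time constant `tau`) is turned at runtime into a compiled forward-Euler update function.

```python
def make_stepper(rhs, dt):
    """Generate a forward-Euler update function for 'dv/dt = <rhs>'.
    The equation string is compiled into ordinary Python at runtime."""
    src = f"def step(v, I, tau):\n    return v + ({dt}) * ({rhs})"
    namespace = {}
    exec(src, {}, namespace)      # runtime code generation
    return namespace["step"]

# High-level description: dv/dt = (I - v) / tau
step = make_stepper("(I - v) / tau", dt=0.1)

v = 0.0
for _ in range(1000):
    v = step(v, I=1.0, tau=10.0)
print(v)   # relaxes toward I = 1.0
```

Brian 2 goes much further, generating efficient low-level code from such descriptions, but the principle is the same: the scientist writes the equations once, and executable update code is derived from them automatically.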
Affiliation(s)
- Marcel Stimberg
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Romain Brette
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Dan FM Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
18
Duarte R, Morrison A. Leveraging heterogeneity for neural computation with fading memory in layer 2/3 cortical microcircuits. PLoS Comput Biol 2019; 15:e1006781. [PMID: 31022182 PMCID: PMC6504118 DOI: 10.1371/journal.pcbi.1006781] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Revised: 05/07/2019] [Accepted: 01/09/2019] [Indexed: 11/24/2022] Open
Abstract
Complexity and heterogeneity are intrinsic to neurobiological systems, manifest in every process, at every scale, and are inextricably linked to the systems' emergent collective behaviours and function. However, the majority of studies addressing the dynamics and computational properties of biologically inspired cortical microcircuits tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and synaptic/connectivity parameters. While simplification and reductionism are necessary to understand the brain's functional principles, disregarding the existence of the multiple heterogeneities in the cortical composition, which may be at the core of its computational proficiency, will inevitably fail to account for important phenomena and limit the scope and generalizability of cortical models. We address these issues by studying the individual and composite functional roles of heterogeneities in neuronal, synaptic and structural properties in a biophysically plausible layer 2/3 microcircuit model, built and constrained by multiple sources of empirical data. This approach was made possible by the emergence of large-scale, well curated databases, as well as the substantial improvements in experimental methodologies achieved over the last few years. Our results show that variability in single neuron parameters is the dominant source of functional specialization, leading to highly proficient microcircuits with much higher computational power than their homogeneous counterparts. We further show that fully heterogeneous circuits, which are closest to the biophysical reality, owe their response properties to the differential contribution of different sources of heterogeneity.
Affiliation(s)
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
19
Aguilar-Velázquez D, Guzmán-Vargas L. Critical synchronization and 1/f noise in inhibitory/excitatory rich-club neural networks. Sci Rep 2019; 9:1258. [PMID: 30718817 PMCID: PMC6361933 DOI: 10.1038/s41598-018-37920-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2018] [Accepted: 12/17/2018] [Indexed: 12/16/2022] Open
Abstract
In recent years, diverse studies have reported that different brain regions, which are internally densely connected, are also highly connected to each other. This configuration seems to play a key role in integrating and interchanging information between brain areas. Also, changes in rich-club connectivity and the shift from inhibitory to excitatory behavior of hub neurons have been associated with several diseases. However, there is no clear understanding of the role of the proportion of inhibitory/excitatory hub neurons, the dynamic consequences of rich-club disconnection, and hub inhibitory/excitatory shifts. Here, we study the synchronization and temporal correlations in the neural Izhikevich model, which comprises excitatory and inhibitory neurons located in a scale-free hierarchical network with rich-club connectivity. We evaluated the temporal autocorrelations and global synchronization dynamics displayed by the system in terms of rich-club connectivity and hub inhibitory/excitatory population. We evaluated the synchrony between pairs of sets of neurons by means of the global lability of synchronization, based on the rate of change in the total number of synchronized signals. The results show that for a wide range of excitatory/inhibitory hub ratios the network displays 1/f dynamics with critical synchronization, concordant with numerous healthy-brain recordings, while a network configuration with a vast majority of excitatory hubs mostly exhibits short-term autocorrelations with numerous large avalanches. Furthermore, rich-club connectivity promotes the increase of the global lability of synchrony and the temporal persistence of the system.
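The Izhikevich model named above follows two well-known update equations (Izhikevich, 2003): dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches 30 mV. A minimal single-neuron sketch, using the standard regular-spiking parameters and an input current chosen arbitrarily for illustration:

```python
import numpy as np

# Regular-spiking parameter set from Izhikevich (2003).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0        # initial membrane potential and recovery variable
dt, I = 0.5, 10.0              # time step (ms) and constant input (illustrative)

spikes = []
for t in np.arange(0.0, 1000.0, dt):
    if v >= 30.0:              # spike: record time, then reset v and bump u
        spikes.append(t)
        v, u = c, u + d
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)   # membrane update
    u += dt * a * (b * v - u)                       # recovery-variable update

print(len(spikes))   # a regular spike train over the 1 s simulation
```

The paper embeds thousands of such neurons, excitatory and inhibitory, in a scale-free hierarchical graph with rich-club hubs; the single-neuron dynamics are exactly the loop above.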
Affiliation(s)
- Daniel Aguilar-Velázquez
- Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas, Instituto Politécnico Nacional, Av. IPN No. 2580, L. Ticomán, Ciudad de México, 07340, Mexico
- Lev Guzmán-Vargas
- Unidad Profesional Interdisciplinaria en Ingeniería y Tecnologías Avanzadas, Instituto Politécnico Nacional, Av. IPN No. 2580, L. Ticomán, Ciudad de México, 07340, Mexico.
20
Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinform 2018; 12:90. [PMID: 30618696 PMCID: PMC6305903 DOI: 10.3389/fninf.2018.00090] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2018] [Accepted: 11/14/2018] [Indexed: 11/13/2022] Open
Abstract
Computational neuroscience relies on simulations of neural network models to bridge the gap between the theory of neural networks and the experimentally observed activity dynamics in the brain. The rigorous validation of simulation results against reference data is thus an indispensable part of any simulation workflow. Moreover, the availability of different simulation environments and levels of model description also requires validating model implementations against each other to evaluate their equivalence. Despite rapid advances in the formalized description of models, data, and analysis workflows, there is no accepted consensus regarding the terminology and practical implementation of validation workflows in the context of neural simulations. This situation prevents the generic, unbiased comparison between published models, which is a key element of enhancing reproducibility of computational research in neuroscience. In this study, we argue for the establishment of standardized statistical test metrics that enable the quantitative validation of network models on the level of the population dynamics. Despite the importance of validating the elementary components of a simulation, such as single cell dynamics, building networks from validated building blocks does not entail the validity of the simulation on the network scale. Therefore, we introduce a corresponding set of validation tests and present an example workflow that practically demonstrates the iterative model validation of a spiking neural network model against its reproduction on the SpiNNaker neuromorphic hardware system. We formally implement the workflow using a generic Python library that we introduce for validation tests on neural network activity data. Together with the companion study (Trensch et al., 2018), the work presents a consistent definition, formalization, and implementation of the verification and validation process for neural network simulations.
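The kind of population-level statistical test argued for above can be sketched as a two-sample comparison of firing-rate distributions. This is a self-contained NumPy illustration, not the study's own library, and the gamma-distributed rates below are synthetic stand-ins for the outputs of a reference simulation and its reproductions.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(1)
# Synthetic per-neuron firing rates (spikes/s) for three simulation runs.
rates_ref = rng.gamma(shape=3.0, scale=2.0, size=1000)    # reference model
rates_same = rng.gamma(shape=3.0, scale=2.0, size=1000)   # faithful reproduction
rates_off = rng.gamma(shape=3.0, scale=2.5, size=1000)    # distorted reproduction

print(ks_statistic(rates_ref, rates_same))  # small: same underlying dynamics
print(ks_statistic(rates_ref, rates_off))   # clearly larger: validation fails
```

The same pattern extends to other population-level observables the study discusses, such as spike-train irregularity and pairwise correlation coefficients: compute the distribution of the statistic in both implementations and quantify their distance with a standardized test.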
Affiliation(s)
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael von Papen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Pietro Quaglio
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Sonja Grün
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
21
Knight JC, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Front Neurosci 2018; 12:941. [PMID: 30618570 PMCID: PMC6299048 DOI: 10.3389/fnins.2018.00941] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2018] [Accepted: 11/29/2018] [Indexed: 11/15/2022] Open
Abstract
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimization for specific types of models makes them unwieldy tools for developing them. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the Top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neo-cortex-inspired, circuit-scale, point neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator, faster than is currently possible using a CPU based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than either on SpiNNaker or in CPU-based simulations. Besides performance in terms of speed and energy consumption of the simulation, efficient initialization of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. Therefore, we also introduce in this paper some of the novel parallel initialization methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
Affiliation(s)
- James C. Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
22
Trensch G, Gutzen R, Blundell I, Denker M, Morrison A. Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data. Front Neuroinform 2018; 12:81. [PMID: 30534066 PMCID: PMC6275234 DOI: 10.3389/fninf.2018.00081] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2018] [Accepted: 10/22/2018] [Indexed: 01/29/2023] Open
Abstract
The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results vary from insufficient model implementations and data analysis methods, deficiencies in workmanship (e.g., simulation planning, setup, and execution) to errors induced by hardware constraints (e.g., limitations in numerical precision). In order to build credibility, methods such as verification and validation have been developed, but they are not yet well-established in the field of neural network modeling and simulation, partly due to ambiguity concerning the terminology. In this manuscript, we propose a terminology for model verification and validation in the field of neural network modeling and simulation. We outline a rigorous workflow derived from model verification and validation methodologies for increasing model credibility when it is not possible to validate against experimental data. We compare a published minimal spiking network model capable of exhibiting the development of polychronous groups, to its reproduction on the SpiNNaker neuromorphic system, where we consider the dynamics of several selected network states. As a result, by following a formalized process, we show that numerical accuracy is critically important, and even small deviations in the dynamics of individual neurons are expressed in the dynamics at network level.
Affiliation(s)
- Guido Trensch
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Robin Gutzen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Inga Blundell
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Michael Denker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, JARA, Jülich Research Centre, Jülich, Germany
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
23
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. [PMID: 30335761 PMCID: PMC6193609 DOI: 10.1371/journal.pcbi.1006359] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 07/12/2018] [Indexed: 11/28/2022] Open
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of the populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability.
The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales, from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex have hitherto tended to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution, with population-specific connectivity based on extensive experimental data, which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
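The stabilization mechanism in this abstract (cortico-cortical synapses that are stronger onto inhibitory neurons) can be caricatured in a linearized two-area E-I rate model. The weights below are made up for illustration and are not the parameters of the Schmidt et al. model: routing part of the inter-area drive onto inhibition pulls the leading eigenvalue back below zero.

```python
import numpy as np

def max_eigen_real(g_EE, g_IE):
    """Spectral abscissa of a linearized two-area E-I rate model.

    Each area has excitatory (E) and inhibitory (I) populations with
    fixed local weights; g_EE and g_IE scale the inter-area projection
    onto the remote E and I populations, respectively.
    """
    local = np.array([[0.8, -1.2],    # E <- E, E <- I
                      [1.0, -0.8]])   # I <- E, I <- I
    inter = np.array([[g_EE, 0.0],    # remote E -> local E
                      [g_IE, 0.0]])   # remote E -> local I
    # Two areas; off-diagonal blocks carry the inter-area input.
    W = np.block([[local, inter], [inter, local]])
    # Linear dynamics dr/dt = (W - I) r; stable iff all eigenvalues < 0.
    return np.max(np.real(np.linalg.eigvals(W - np.eye(4))))

unbalanced = max_eigen_real(g_EE=1.5, g_IE=0.0)  # targets E only: unstable
balanced = max_eigen_real(g_EE=1.5, g_IE=1.5)    # also targets I: stable
print(unbalanced, balanced)
```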
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
- Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
24
Blundell I, Plotnikov D, Eppler JM, Morrison A. Automatically Selecting a Suitable Integration Scheme for Systems of Differential Equations in Neuron Models. Front Neuroinform 2018; 12:50. [PMID: 30349471 PMCID: PMC6186990 DOI: 10.3389/fninf.2018.00050] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Accepted: 07/23/2018] [Indexed: 12/17/2022] Open
Abstract
On the level of spiking activity, the integrate-and-fire neuron is one of the most commonly used descriptions of neural activity. A multitude of variants has been proposed to cope with the huge diversity of behaviors observed in biological nerve cells. The main appeal of this class of models is that it can be defined as a hybrid model: a set of mathematical equations describes the sub-threshold dynamics of the membrane potential, while the generation of action potentials is often added only algorithmically, without the shape of spikes being part of the equations. In contrast to more detailed biophysical models, this simple description allows the routine simulation of large biological neuronal networks on the standard hardware widely available in most laboratories. The time evolution of the relevant state variables is usually defined by a small set of ordinary differential equations (ODEs). A small number of evolution schemes for the corresponding systems of ODEs are applied to many neuron models, and form the basis of the neuron model implementations built into widely used simulators such as Brian, NEST, and NEURON. However, an often neglected problem is that the implemented evolution schemes are only rarely selected through a structured process based on numerical criteria. This practice cannot guarantee accurate and stable solutions for the equations, and the actual quality of the solution depends largely on the parametrization of the model. In this article, we give an overview of typical equations and state descriptions for the dynamics of the relevant variables in integrate-and-fire models. We then describe a formal mathematical process to automate the design or selection of a suitable evolution scheme for this large class of models. Finally, we present the reference implementation of our symbolic analysis toolbox for ODEs, which can guide modelers during the implementation of custom neuron models.
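The structured scheme selection this abstract argues for can be illustrated in the simplest case: when the sub-threshold dynamics are linear, as in a leaky integrate-and-fire neuron with an exponential synaptic current, an exact matrix-exponential propagator exists, and any generic scheme is strictly an approximation. A minimal sketch with illustrative parameters (not the paper's toolbox):

```python
import numpy as np
from scipy.linalg import expm

# Linear sub-threshold LIF dynamics with an exponential synaptic current:
#   tau_m * dV/dt = -V + R * I_syn,   tau_s * dI_syn/dt = -I_syn
# Written as dy/dt = A @ y, the exact update over a step h is
# y(t + h) = expm(A * h) @ y(t): a propagator computed once, then reused.
tau_m, tau_s, R = 10.0, 2.0, 1.0
A = np.array([[-1.0 / tau_m, R / tau_m],
              [0.0, -1.0 / tau_s]])
h = 0.1
P_exact = expm(A * h)            # exact propagator for step h
P_euler = np.eye(2) + A * h      # forward-Euler approximation of it

y = np.array([0.0, 5.0])         # V = 0 mV, I_syn = 5 (arbitrary units)
y_exact = P_exact @ y
y_euler = P_euler @ y
# The Euler step already deviates from the exact solution within one step.
print(y_exact, y_euler)
```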
Affiliation(s)
- Inga Blundell
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Aachen Research Alliance BRAIN Institute I, Forschungszentrum Jülich, Jülich, Germany
- Dimitri Plotnikov
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Chair of Software Engineering, Jülich Aachen Research Alliance, RWTH Aachen University, Aachen, Germany
- Jochen M Eppler
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Aachen Research Alliance BRAIN Institute I, Forschungszentrum Jülich, Jülich, Germany
- Simulation Lab Neuroscience, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany