1
Otero M, Lea-Carnall C, Prado P, Escobar MJ, El-Deredy W. Modelling neural entrainment and its persistence: influence of frequency of stimulation and phase at the stimulus offset. Biomed Phys Eng Express 2022; 8:045014. PMID: 35320793. DOI: 10.1088/2057-1976/ac605a.
Abstract
Neural entrainment, the synchronization of brain oscillations to the frequency of an external stimulus, is a key mechanism that shapes perceptual and cognitive processes. Objective. Using simulations, we investigated the dynamics of neural entrainment, particularly during the period following the end of stimulation, since the persistence (reverberation) of neural entrainment may condition future sensory representations based on predictions about stimulus rhythmicity. Methods. Neural entrainment was assessed using a modified Jansen-Rit neural mass model (NMM) of coupled cortical columns, in which the spectral features of the output resembled those of the electroencephalogram (EEG). We evaluated spectro-temporal features of entrainment as a function of the stimulation frequency, the resonant frequency of the neural populations comprising the NMM, and the coupling strength between cortical columns. Furthermore, we tested whether the persistence of entrainment depended on the phase of the EEG-like oscillation at the time the stimulus ended. Main Results. The entrainment of the column that received the stimulation was maximal when the frequency of the entrainer was within a narrow range around the resonant frequency of the column. When this occurred, entrainment persisted for several cycles after the stimulus terminated, and the propagation of the entrainment to other columns was facilitated. Propagation also depended on the resonant frequency of the second column and the coupling strength between columns.
The duration of the persistence of the entrainment depended on the phase of the neural oscillation at the time the entrainer terminated, such that falling phases (from π/2 to 3π/2 in a sine function) led to longer persistence than rising phases (from 0 to π/2 and from 3π/2 to 2π). Significance. The study bridges models of neural oscillations and empirical electrophysiology, providing insights into the mechanisms underlying neural entrainment and the use of rhythmic sensory stimulation for neuroenhancement.
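The frequency selectivity and post-offset persistence summarized in this abstract can be illustrated with a far simpler system than a Jansen-Rit NMM: a damped oscillator driven at different frequencies. This is a minimal sketch of the resonance effect under invented parameters, not the authors' model; driving at the resonant frequency yields both stronger entrainment and stronger reverberation after the stimulus ends.

```python
import math

def driven_oscillator(f_drive, f_res, t_stim=2.0, t_total=3.0, dt=1e-3, zeta=0.05):
    """Drive a damped oscillator at f_drive until t_stim, then let it ring down.

    Returns the peak amplitude during stimulation and after stimulus offset.
    """
    w0 = 2 * math.pi * f_res            # resonant angular frequency
    x, v = 0.0, 0.0
    amp_during = amp_after = 0.0
    for i in range(int(t_total / dt)):
        t = i * dt
        drive = math.sin(2 * math.pi * f_drive * t) if t < t_stim else 0.0
        a = drive - 2 * zeta * w0 * v - w0 ** 2 * x
        v += a * dt                     # semi-implicit Euler: update v first...
        x += v * dt                     # ...then x with the new v (stable scheme)
        if t < t_stim:
            amp_during = max(amp_during, abs(x))
        else:
            amp_after = max(amp_after, abs(x))
    return amp_during, amp_after

on_res = driven_oscillator(f_drive=10.0, f_res=10.0)    # entrainer at resonance
off_res = driven_oscillator(f_drive=14.0, f_res=10.0)   # entrainer off resonance
```

Both the entrained amplitude and the post-offset ringing are larger at resonance, mirroring the narrow-band entrainment and persistence effects the paper reports for the full NMM.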
Affiliation(s)
- Mónica Otero
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile
- Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile
- Caroline Lea-Carnall
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Pavel Prado
- Latin-American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Chile
- Wael El-Deredy
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile
- Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
2
Pronold J, Jordan J, Wylie BJN, Kitayama I, Diesmann M, Kunkel S. Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring. Front Neuroinform 2022; 15:785068. PMID: 35300490. PMCID: PMC8921864. DOI: 10.3389/fninf.2021.785068.
Abstract
Generic simulation code for spiking neuronal networks spends the major part of its time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. To find those targets, the spikes need to be dispatched to a three-dimensional data structure, with decisions on target thread and synapse type made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons, until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes, from a hundred thousand to a billion neurons. By profiling a production code we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant, but the absolute number of rejections grows. Our new alternative algorithm divides the spikes equally among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely for the section of spikes destined for its own neurons. Independent of the number of threads, every spike is examined only twice. The new algorithm halves the number of instructions in spike delivery, which reduces simulation time by up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and is thereby well suited for many-core systems.
Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
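The gist of the revised algorithm, sort once by (target thread, synapse type), then let each thread deliver only its own contiguous section, can be sketched serially as follows. The dictionary-based bucketing and field names are illustrative, not the data structures of the production code; the point is that each spike is touched exactly twice, once while sorting and once while delivering.

```python
from collections import defaultdict

def deliver_spikes(spikes, n_threads):
    """Two-pass delivery sketch: bucket spikes by (target thread, synapse type),
    then walk each thread's bucket once. Every spike is touched exactly twice,
    independent of the number of threads."""
    buckets = defaultdict(list)
    for spike in spikes:                                # pass 1: sort by target
        buckets[(spike["thread"], spike["syn_type"])].append(spike)
    delivered = {t: [] for t in range(n_threads)}
    for (thread, _syn_type), group in buckets.items():  # pass 2: per-thread delivery
        delivered[thread].extend(s["source"] for s in group)
    return delivered

# nine spikes spread round-robin over three target threads
spikes = [{"source": i, "thread": i % 3, "syn_type": 0} for i in range(9)]
out = deliver_spikes(spikes, n_threads=3)
```

In the original scheme every thread would scan all nine spikes and reject six; here each "thread" only ever sees its own three.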
Affiliation(s)
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Brian J. N. Wylie
- Jülich Supercomputing Centre, Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
3
A Hierarchical Attractor Network Model of perceptual versus intentional decision updates. Nat Commun 2021; 12:2020. PMID: 33795665. PMCID: PMC8016916. DOI: 10.1038/s41467-021-22017-2.
Abstract
Changes of Mind are a striking example of our ability to flexibly reverse decisions and change our own actions. Previous studies largely focused on Changes of Mind in decisions about perceptual information. Here we report reversals of decisions that require integrating multiple classes of information: (1) perceptual evidence, (2) higher-order, voluntary intentions, and (3) motor costs. In an adapted version of the random-dot motion task, participants moved to a target that matched both the external (exogenous) evidence about dot-motion direction and a preceding internally generated (endogenous) intention about which colour to paint the dots. Movement trajectories revealed whether and when participants changed their mind about the dot-motion direction, or additionally changed their mind about which colour to choose. Our results show that decision reversals about colour intentions are less frequent in participants with stronger intentions (Exp. 1) and when motor costs of intention pursuit are lower (Exp. 2). We further show that these findings can be explained by a hierarchical, multimodal Attractor Network Model that continuously integrates higher-order voluntary intentions with perceptual evidence and motor costs. Our model thus provides a unifying framework in which voluntary actions emerge from a dynamic combination of internal action tendencies and external environmental factors, each of which can be subject to a Change of Mind.
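A minimal flavor of attractor-based decision dynamics, not the hierarchical, multimodal model itself, is two rate units with self-excitation and mutual inhibition: the unit receiving stronger evidence settles into the high-activity state. All weights and the gain function below are invented for illustration.

```python
import math

def attractor_decision(evidence, T=4.0, dt=0.01, w_exc=1.5, w_inh=2.0):
    """Two rate units with self-excitation and mutual inhibition.
    evidence(t) returns the external input (e1, e2) at time t; the unit
    with stronger input wins the competition."""
    f = lambda x: 1 / (1 + math.exp(-4 * (x - 0.5)))   # saturating gain
    r1 = r2 = 0.1
    for i in range(int(T / dt)):
        e1, e2 = evidence(i * dt)
        r1 += dt * (-r1 + f(w_exc * r1 - w_inh * r2 + e1))
        r2 += dt * (-r2 + f(w_exc * r2 - w_inh * r1 + e2))
    return r1, r2

# constant evidence favoring option 1, then favoring option 2
win1 = attractor_decision(lambda t: (1.0, 0.0))
win2 = attractor_decision(lambda t: (0.0, 1.0))
```

In this picture, a Change of Mind corresponds to a mid-trial reversal of evidence strong enough to push the state across the boundary between the two attractor basins.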
4
Giannakakis E, Han CE, Weber B, Hutchings F, Kaiser M. Towards simulations of long-term behavior of neural networks: Modeling synaptic plasticity of connections within and between human brain regions. Neurocomputing 2020; 416:38-44. PMID: 33250573. PMCID: PMC7598092. DOI: 10.1016/j.neucom.2020.01.050.
Abstract
Simulations of neural networks can be used to study the direct effect of internal or external changes on brain dynamics. However, some changes are not immediate but occur on the timescale of weeks, months, or years. Examples include the effects of strokes, surgical tissue removal, or traumatic brain injury, but also gradual changes during brain development. Simulating network activity over a long time, even for a small number of nodes, is a computational challenge. Here, we model a coupled network of human brain regions with a modified Wilson-Cowan model representing the dynamics of each region and with synaptic plasticity adjusting connection weights within and between regions. Using strategies including alternative plasticity models, vectorization, and a different differential-equation solver setup, we achieved a runtime of one second per second of biological time.
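The two ingredients named in the abstract, rate dynamics per region plus a slowly plastic inter-regional weight, can be sketched with a generic Wilson-Cowan-style pair of excitatory rates. The equations, parameters, and the Hebbian-like rule below are illustrative assumptions, not the authors' modified model or their plasticity rules.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def simulate(T=1.0, dt=1e-3, tau=0.01, eta=0.1):
    """Two excitatory rates E1, E2 (one per 'region') coupled by a weight w
    that drifts on a slower timescale under a Hebbian-like rule with decay."""
    E1, E2, w = 0.1, 0.1, 0.2
    for _ in range(int(T / dt)):
        dE1 = (-E1 + sigmoid(1.5 * E1 + w * E2 + 0.2)) / tau
        dE2 = (-E2 + sigmoid(1.5 * E2 + w * E1 - 0.2)) / tau
        E1 += dt * dE1
        E2 += dt * dE2
        w += dt * eta * (E1 * E2 - 0.5 * w)   # slow Hebbian drift with decay
    return E1, E2, w

E1, E2, w = simulate()
```

The separation of timescales (fast rates, slow weights) is exactly what makes long simulated durations expensive, and what vectorizing over regions is meant to amortize.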
Affiliation(s)
- Emmanouil Giannakakis
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Cheol E Han
- Department of Electronics and Information Engineering, Korea University, Sejong, Republic of Korea
- Bernd Weber
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Germany
- Frances Hutchings
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Marcus Kaiser
- Interdisciplinary Computing and Complex BioSystems (ICOS) research group, School of Computing, Newcastle University, Newcastle upon Tyne NE4 5TG, United Kingdom
- Institute of Neuroscience, Newcastle University, the Henry Wellcome Building, Newcastle upon Tyne NE2 4HH, United Kingdom
- Department of Functional Neurosurgery, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, China
5
Jordan J, Helias M, Diesmann M, Kunkel S. Efficient Communication in Distributed Simulations of Spiking Neuronal Networks With Gap Junctions. Front Neuroinform 2020; 14:12. PMID: 32431602. PMCID: PMC7214808. DOI: 10.3389/fninf.2020.00012.
Abstract
Investigating the dynamics and function of large-scale spiking neuronal networks with realistic numbers of synapses is made possible today by state-of-the-art simulation code that scales to the largest contemporary supercomputers. However, simulations that involve electrical interactions, also called gap junctions, besides chemical synapses scale only poorly due to a communication scheme that collects global data on each compute node. In comparison to chemical synapses, gap junctions are far less abundant. To improve scalability we exploit this sparsity by integrating an existing framework for continuous interactions with a recently proposed directed communication scheme for spikes. Using a reference implementation in the NEST simulator we demonstrate excellent scalability of the integrated framework, accelerating large-scale simulations with gap junctions by more than an order of magnitude. This allows, for the first time, the efficient exploration of the interactions of chemical and electrical coupling in large-scale neuronal network models with natural synapse density distributed across thousands of compute nodes.
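The continuous quantity being exchanged is, at its core, the ohmic gap-junction current I_i = Σ_j g_ij (V_j − V_i). A minimal sketch with invented conductance values shows the two properties that matter for simulation: the coupling is symmetric pairwise, and the total current sums to zero (charge is conserved across the junction).

```python
def gap_junction_currents(V, g):
    """Current into each neuron from its gap junctions.

    V: list of membrane potentials; g: dict mapping a neuron-index pair to its
    conductance. The sparse dict reflects that gap junctions are far less
    abundant than chemical synapses."""
    currents = [0.0] * len(V)
    for (i, j), gij in g.items():
        currents[i] += gij * (V[j] - V[i])   # current flows down the
        currents[j] += gij * (V[i] - V[j])   # potential difference, symmetrically
    return currents

currents = gap_junction_currents([-70.0, -65.0, -60.0], {(0, 1): 0.5, (1, 2): 0.2})
```

Because these interactions are continuous rather than event-based, each coupled pair must exchange state every step, which is why restricting the exchange to nodes that actually share a junction pays off.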
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
- Institute for Advanced Simulation (IAS-6), Jülich Research Centre, Jülich, Germany
- JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
6
Stimberg M, Brette R, Goodman DFM. Brian 2, an intuitive and efficient neural simulator. eLife 2019; 8:e47314. PMID: 31429824. PMCID: PMC6786860. DOI: 10.7554/elife.47314.
Abstract
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write simple and concise high-level descriptions, and Brian transforms these into efficient low-level code that can run interleaved with their own code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
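The core idea of runtime code generation, turning a concise equation string into executable update code on the fly, can be caricatured in a few lines. This toy uses Python's `compile`/`exec` and an explicit Euler step; it is in no way Brian 2's actual code-generation pipeline, and the equation syntax is only loosely Brian-like.

```python
def make_stepper(eq, params, dt=1e-4):
    """Compile a one-line derivative expression such as 'dv/dt = (I - v) / tau'
    into an update function via runtime code generation (a toy sketch)."""
    lhs, rhs = eq.split("=", 1)
    var = lhs.strip().split("/")[0][1:]            # 'dv/dt' -> 'v'
    args = ", ".join([var] + sorted(params))
    src = f"def step({args}):\n    return {var} + {dt} * ({rhs})\n"
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

# leaky integration toward I with time constant tau
step = make_stepper("dv/dt = (I - v) / tau", params={"I", "tau"})
v = 0.0
for _ in range(1000):                              # 0.1 s of simulated time
    v = step(v, I=1.0, tau=0.01)
```

After 10 time constants, v has converged to the drive I, confirming that the generated function integrates the stated equation.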
Affiliation(s)
- Marcel Stimberg
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Romain Brette
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Dan FM Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
7
Jordan J, Weidel P, Morrison A. A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents. Front Comput Neurosci 2019; 13:46. PMID: 31427939. PMCID: PMC6687756. DOI: 10.3389/fncom.2019.00046.
Abstract
Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
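As a point of reference for the kind of baseline the abstract mentions, here is a self-contained tabular Q-learning loop on an invented chain environment. It is a generic algorithm sketch, not the neuronal actor-critic architecture, not NEST, and not an actual OpenAI Gym task, but it shows the closed loop the toolchain standardizes: observe state, act, receive reward, update.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a toy chain world (actions: 0 = left, 1 = right;
    reward 1 on reaching the rightmost state)."""
    random.seed(0)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):                          # step cap per episode
            if random.random() < eps or Q[s][0] == Q[s][1]:
                a = random.randrange(2)              # explore / break ties
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1    # exploit
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            reward = 1.0 if s_next == n_states - 1 else 0.0
            Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
            if reward:
                break                                # episode ends at the goal
    return Q

Q = q_learning()
```

After training, moving right dominates moving left in every non-goal state, which is the optimal policy for this chain; a spiking actor-critic would be evaluated on the same environment interface.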
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Philipp Weidel
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- aiCTX, Zurich, Switzerland
- Department of Computer Science, RWTH Aachen University, Aachen, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
8
Verduzco-Flores S, De Schutter E. Draculab: A Python Simulator for Firing Rate Neural Networks With Delayed Adaptive Connections. Front Neuroinform 2019; 13:18. PMID: 31001101. PMCID: PMC6454197. DOI: 10.3389/fninf.2019.00018.
Abstract
Draculab is a neural simulator with a particular use scenario: firing rate units with delayed connections, using custom-made unit and synapse models, possibly controlling simulated physical systems. Draculab also has a particular design philosophy: it aims to blur the line between users and developers. Three factors help achieve this: a simple design using Python's data structures, extensive use of standard libraries, and profusely commented source code. This paper is an introduction to Draculab's architecture and philosophy. After presenting some example networks, it explains the basic algorithms and data structures that constitute the essence of this approach. The relation to other simulators is discussed, as well as the reasons why connection delays and interaction with simulated physical systems are emphasized.
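The emphasized feature, connections whose inputs take effect only after a transmission delay, is naturally implemented with a ring buffer. The class and parameter names below are invented for illustration and are not Draculab's API.

```python
from collections import deque

class DelayedUnit:
    """Firing-rate unit whose input arrives after a fixed transmission delay,
    buffered in a fixed-length deque (a sketch, not Draculab's classes)."""

    def __init__(self, tau=0.1, delay_steps=5, dt=0.01):
        self.rate = 0.0
        self.tau, self.dt = tau, dt
        # pre-filled with zeros: nothing has arrived yet
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)

    def step(self, external_input):
        delayed = self.buffer[0]            # input emitted delay_steps ago
        self.buffer.append(external_input)  # maxlen deque drops the oldest entry
        self.rate += self.dt * (-self.rate + delayed) / self.tau
        return self.rate

unit = DelayedUnit()
rates = [unit.step(1.0) for _ in range(100)]
```

The unit stays silent for exactly the delay (five steps here) before relaxing toward the input, which is the behavior that matters when such units drive a simulated physical plant.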
Affiliation(s)
- Sergio Verduzco-Flores
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
- Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
9
Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLoS Comput Biol 2018; 14:e1006359. PMID: 30335761. PMCID: PMC6193609. DOI: 10.1371/journal.pcbi.1006359.
Abstract
Cortical activity has distinct features across scales, from the spiking statistics of individual cells to global resting-state networks. We here describe the first full-density multi-area spiking network model of cortex, using macaque visual cortex as a test system. The model represents each area by a microcircuit with area-specific architecture and features layer- and population-resolved connectivity between areas. Simulations reveal a structured asynchronous irregular ground state. In a metastable regime, the network reproduces spiking statistics from electrophysiological recordings and cortico-cortical interaction patterns in fMRI functional connectivity under resting-state conditions. Stable inter-area propagation is supported by cortico-cortical synapses that are moderately strong onto excitatory neurons and stronger onto inhibitory neurons. Causal interactions depend on both cortical structure and the dynamical state of populations. Activity propagates mainly in the feedback direction, similar to experimental results associated with visual imagery and sleep. The model unifies local and large-scale accounts of cortex, and clarifies how the detailed connectivity of cortex shapes its dynamics on multiple scales. Based on our simulations, we hypothesize that in the spontaneous condition the brain operates in a metastable regime where cortico-cortical projections target excitatory and inhibitory populations in a balanced manner that produces substantial inter-area interactions while maintaining global stability. The mammalian cortex fulfills its complex tasks by operating on multiple temporal and spatial scales from single cells to entire areas comprising millions of cells. These multi-scale dynamics are supported by specific network structures at all levels of organization. Since models of cortex hitherto tend to concentrate on a single scale, little is known about how cortical structure shapes the multi-scale dynamics of the network. 
We here present dynamical simulations of a multi-area network model at neuronal and synaptic resolution with population-specific connectivity based on extensive experimental data which accounts for a wide range of dynamical phenomena. Our model elucidates relationships between local and global scales in cortex and provides a platform for future studies of cortical function.
Affiliation(s)
- Maximilian Schmidt
- Laboratory for Neural Coding and Brain Computing, RIKEN Center for Brain Science, Wako-Shi, Saitama, Japan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Rembrandt Bakker
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Gleb Bezgin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, RWTH Aachen University, Aachen, Germany
- Sacha Jennifer van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
10
Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front Neuroinform 2018; 12:2. PMID: 29503613. PMCID: PMC5820465. DOI: 10.3389/fninf.2018.00002.
Abstract
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
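The directed-communication idea, exploiting sparsity so that a spike travels only to the compute nodes that host one of its targets, can be sketched as follows. The data layout is illustrative and is not NEST's two-tier connection infrastructure.

```python
def route_spikes_directed(spikes, target_nodes_by_source, n_nodes):
    """Fill per-node send buffers so each spike goes only to nodes hosting one
    of its targets, instead of being broadcast to every node."""
    send_buffers = [[] for _ in range(n_nodes)]
    for source in spikes:
        # sparse map: which compute nodes host targets of this source neuron
        for node in target_nodes_by_source.get(source, ()):
            send_buffers[node].append(source)
    return send_buffers

# three spikes with sparse connectivity across four compute nodes
buffers = route_spikes_directed(
    spikes=[7, 9, 42],
    target_nodes_by_source={7: {0, 2}, 9: {2}, 42: {3}},
    n_nodes=4,
)
```

A global broadcast would transmit 3 spikes to all 4 nodes (12 messages); the directed scheme transmits only 4, and the gap widens as networks grow sparser with size.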
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Itaru Kitayama
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Mitsuhisa Sato
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Jun Igarashi
- Computational Engineering Applications Unit, RIKEN, Wako, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany