1
Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022;20:1013-1039. PMID: 35486347; PMCID: PMC9588483; DOI: 10.1007/s12021-022-09576-5. Accepted February 19, 2022.
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond brute-force modulation of the activity of small groups of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the granular and supragranular layers of the somatosensory cortex, built after reconstructing the barrel cortex at soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and the changes in synaptic strength induced in vivo by whisker deprivation. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a postsynaptic action potential; this spatial filtering might be critical for synaptic integration of top-down and bottom-up information.
Affiliation(s)
- Chao Huang
- Department of Biology, University of Leipzig, Leipzig, Germany
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tansu Celikel
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands; School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
2
Ma L, Patel M. A model of lateral interactions as the origin of multiwhisker receptive fields in rat barrel cortex. J Comput Neurosci 2021;50:181-201. PMID: 34854018; DOI: 10.1007/s10827-021-00804-6. Received June 25, 2021; revised September 3, 2021; accepted November 10, 2021.
Abstract
While cells within barrel cortex respond primarily to deflections of their principal whisker (PW), they also exhibit responses to non-principal, or adjacent, whiskers (AWs), albeit responses with diminished amplitudes and longer latencies. The origin of multiwhisker receptive fields of barrel cells remains a point of controversy within the experimental literature, with three contending possibilities: (i) barrel cells inherit their AW responses from the AW responses of thalamocortical (TC) cells within their aligned barreloid; (ii) the axons of TC cells within a barreloid ramify to innervate multiple barrels, rather than only terminating within their aligned barrel; (iii) lateral intracortical transmission between barrels conveys AW responsivity to barrel cells. In this work, we develop a detailed, biologically plausible model of multiple barrels in order to examine possibility (iii); in order to isolate the dynamics that possibility (iii) entails, we incorporate lateral connections between barrels while assuming that TC cells respond only to their PW and that TC cell axons are confined to their home barrel. We show that our model is capable of capturing a broad swath of experimental observations on multiwhisker receptive field dynamics within barrels, and we compare and contrast the dynamics of this model with the dynamics of prior work that employed a similar general modeling strategy to examine possibility (i).
Affiliation(s)
- Linda Ma
- Department of Mathematics, 200 Ukrop Way, Jones Hall, William & Mary, Williamsburg, VA 23185, USA
- Mainak Patel
- Department of Mathematics, 200 Ukrop Way, Jones Hall, William & Mary, Williamsburg, VA 23185, USA
3
Knight JC, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Front Neurosci 2018;12:941. PMID: 30618570; PMCID: PMC6299048; DOI: 10.3389/fnins.2018.00941. Received September 27, 2018; accepted November 29, 2018. Open access.
Abstract
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimization for specific types of models makes them unwieldy tools for model development. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neocortex-inspired, circuit-scale, point neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator, faster than is currently possible using a CPU-based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than either on SpiNNaker or in CPU-based simulations. Besides performance in terms of speed and energy consumption of the simulation, efficient initialization of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. Therefore, we also introduce in this paper some of the novel parallel initialization methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
Affiliation(s)
- James C. Knight
- Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
4
van Albada SJ, Rowley AG, Senk J, Hopkins M, Schmidt M, Stokes AB, Lester DR, Diesmann M, Furber SB. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Front Neurosci 2018;12:291. PMID: 29875620; PMCID: PMC5974216; DOI: 10.3389/fnins.2018.00291. Received September 12, 2017; accepted April 13, 2018. Open access.
Abstract
The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power consumption. Real-time performance is achieved with 1 ms integration time steps, and thus applies to neural networks for which faster time scales of the dynamics can be neglected. By slowing down the simulation, shorter integration time steps and hence faster time scales, which are often biologically relevant, can be incorporated. We here describe the first full-scale simulations of a cortical microcircuit with biological time scales on SpiNNaker. Since about half the synapses onto the neurons arise within the microcircuit, larger cortical circuits have only moderately more synapses per neuron. Therefore, the full-scale microcircuit paves the way for simulating cortical circuits of arbitrary size. With approximately 80,000 neurons and 0.3 billion synapses, this model is the largest simulated on SpiNNaker to date. The scale-up is enabled by recent developments in the SpiNNaker software stack that allow simulations to be spread across multiple boards. Comparison with simulations using the NEST software on a high-performance cluster shows that both simulators can reach a similar accuracy, despite the fixed-point arithmetic of SpiNNaker, demonstrating the usability of SpiNNaker for computational neuroscience applications with biological time scales and large network size. The runtime and power consumption are also assessed for both simulators on the example of the cortical microcircuit model. To obtain an accuracy similar to that of NEST with 0.1 ms time steps, SpiNNaker requires a slowdown factor of around 20 compared to real time. The runtime for NEST saturates around 3 times real time using hybrid parallelization with MPI and multi-threading. However, achieving this runtime comes at the cost of increased power and energy consumption.
The lowest total energy consumption for NEST is reached at around 144 parallel threads and 4.6 times slowdown. At this setting, NEST and SpiNNaker have a comparable energy consumption per synaptic event. Our results widen the application domain of SpiNNaker and help guide its development, showing that further optimizations such as synapse-centric network representation are necessary to enable real-time simulation of large biological neural networks.
Affiliation(s)
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Andrew G Rowley
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Michael Hopkins
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Alan B Stokes
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- David R Lester
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
5
Serruya MD. Connecting the Brain to Itself through an Emulation. Front Neurosci 2017;11:373. PMID: 28713235; PMCID: PMC5492113; DOI: 10.3389/fnins.2017.00373. Received February 1, 2017; accepted June 15, 2017. Open access.
Abstract
Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.
Affiliation(s)
- Mijail D Serruya
- Neurology, Thomas Jefferson University, Philadelphia, PA, United States
6
Knight JC, Furber SB. Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture. Front Neurosci 2016;10:420. PMID: 27683540; PMCID: PMC5022244; DOI: 10.3389/fnins.2016.00420. Received June 8, 2016; accepted August 29, 2016. Open access.
Abstract
While the adult human brain has approximately 8.8 × 10^10 neurons, this number is dwarfed by its 1 × 10^15 synapses. From the point of view of neuromorphic engineering and neural simulation in general this makes the simulation of these synapses a particularly complex problem. SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Current solutions for simulating spiking neural networks on SpiNNaker are heavily inspired by work on distributed high-performance computing. However, while SpiNNaker shares many characteristics with such distributed systems, its component nodes have much more limited resources and, as the system lacks global synchronization, the computation performed on each node must complete within a fixed time step. We first analyze the performance of the current SpiNNaker neural simulation software and identify several problems that occur when it is used to simulate networks of the type often used to model the cortex, which contain large numbers of sparsely connected synapses. We then present a new, more flexible approach for mapping the simulation of such networks to SpiNNaker which solves many of these problems. Finally we analyze the performance of our new approach using both benchmarks, designed to represent cortical connectivity, and larger, functional cortical models. In a benchmark network where neurons receive input from 8000 STDP synapses, our new approach allows 4× more neurons to be simulated on each SpiNNaker core than has been previously possible. We also demonstrate that the largest plastic neural network previously simulated on neuromorphic hardware can be run in real time using our new approach: double the speed that was previously achieved. Additionally this network contains two types of plastic synapse which previously had to be trained separately but, using our new approach, can be trained simultaneously.
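The core idea of a synapse-centric mapping, as the abstract describes it, can be sketched in a few lines: rather than assigning whole neurons (with all their incoming synapses) to one core, each core processes only a slice of a neuron's incoming synapses and emits a partial input, which a combiner stage sums before the neuron update. The sketch below is purely illustrative Python (the function names and structure are assumptions, not the SpiNNaker tool chain's actual API):

```python
def partition_synapses(items, n_cores):
    """Split one neuron's incoming synapses into roughly equal per-core slices."""
    chunk = (len(items) + n_cores - 1) // n_cores
    return [items[i:i + chunk] for i in range(0, len(items), chunk)]

def core_contribution(weight_slice, spiked_slice):
    """Partial input current computed independently on one core for one time step."""
    return sum(w for w, s in zip(weight_slice, spiked_slice) if s)

def total_input(weights, spiked, n_cores):
    """Sum the per-core partial currents, as a combiner stage would."""
    weight_slices = partition_synapses(weights, n_cores)
    spike_slices = partition_synapses(spiked, n_cores)
    return sum(core_contribution(w, s)
               for w, s in zip(weight_slices, spike_slices))
```

Because summation is associative, splitting, say, 8000 incoming synapses across several cores yields the same total input as processing them on one core; the gain is that each core's per-time-step workload stays within its fixed budget.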
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
7
Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat 2016;10:37. PMID: 27092061; PMCID: PMC4823276; DOI: 10.3389/fnana.2016.00037. Received November 30, 2015; accepted March 18, 2016. Open access.
Abstract
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10^4 neurons and 5.1 × 10^7 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
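The event-based strategy mentioned above can be illustrated with a much simpler object than the full BCPNN rule (this is a generic sketch of event-driven trace updating, not the paper's actual equations): instead of decaying a synaptic trace on every simulation time step, store its value together with the time it was last touched, and apply the closed-form exponential decay only when a spike event arrives.

```python
import math

class EventDrivenTrace:
    """Exponentially decaying trace updated only at spike events.

    Between events the trace is never touched; its value at any time t
    follows analytically from the last stored (value, time) pair.
    """

    def __init__(self, tau):
        self.tau = tau        # decay time constant
        self.value = 0.0      # trace value at time self.last_t
        self.last_t = 0.0

    def at(self, t):
        """Closed-form value of the trace at time t >= last_t."""
        return self.value * math.exp(-(t - self.last_t) / self.tau)

    def spike(self, t, increment=1.0):
        """Decay analytically up to t, then add the spike's contribution."""
        self.value = self.at(t) + increment
        self.last_t = t
```

The saving is exactly the one the abstract points to: with many state variables per synapse, replacing a per-time-step update with an update per (much rarer) spike event is what makes large plastic networks fit the architecture's compute budget.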
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Bernhard A Kaplan
- Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
8
Cheung K, Schultz SR, Luk W. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors. Front Neurosci 2016;9:516. PMID: 26834542; PMCID: PMC4712299; DOI: 10.3389/fnins.2015.00516. Received November 24, 2015; accepted December 22, 2015. Open access.
Abstract
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation, for example in the degree of parallelism employed, to deliver optimized performance. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
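For readers unfamiliar with the model families named above, the leaky integrate-and-fire neuron is the simplest of them and is easy to state concretely. The sketch below is a plain forward-Euler integration with a hard threshold and reset; the parameter values are illustrative textbook-style numbers, not NeuroFlow's defaults:

```python
def simulate_lif(i_ext, dt=0.1, tau_m=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return spike times (ms) of a leaky integrate-and-fire neuron.

    i_ext: external input current per time step (arbitrary units such that
           r_m * i has units of mV).
    """
    v = v_rest
    spikes = []
    for step, i in enumerate(i_ext):
        # Membrane equation: tau_m * dv/dt = -(v - v_rest) + r_m * i
        v += dt * (-(v - v_rest) + r_m * i) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)  # record spike time
            v = v_reset               # hard reset after the spike
    return spikes
```

With a constant drive of 2.0 (so the steady-state potential of -45 mV sits above the -50 mV threshold), the neuron fires regularly; with zero drive it stays silent. In NeuroFlow, as described above, such a model would be declared through PyNN rather than hand-coded, with the FPGA handling the per-step arithmetic.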
Affiliation(s)
- Kit Cheung
- Custom Computing Research Group, Department of Computing, Imperial College London, London, UK
- Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Simon R. Schultz
- Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Wayne Luk
- Custom Computing Research Group, Department of Computing, Imperial College London, London, UK
9
Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLoS Comput Biol 2015;11:e1004490. PMID: 26325661; PMCID: PMC4556689; DOI: 10.1371/journal.pcbi.1004490. Received November 18, 2014; accepted August 5, 2015. Open access.
Abstract
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
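The kind of weight scaling the abstract refers to can be made concrete with a simplified first-and-second-moment argument (a sketch of the idea, not the paper's full derivation): for Poisson-like inputs, the summed input from k synapses of weight j at rate ν has mean ∝ k·j·ν and variance ∝ k·j²·ν. Reducing the in-degree from k to k_new while scaling each weight by √(k/k_new) therefore preserves the variance, and a constant external drive can restore the mean:

```python
import math

def downscale(j, k, k_new, rate):
    """Return (j_new, dc_compensation) when in-degree is reduced k -> k_new.

    j:    original synaptic weight
    rate: mean presynaptic firing rate

    j_new preserves the input variance k * j**2 * rate; dc_compensation is the
    constant drive that restores the original mean input k * j * rate.
    """
    j_new = j * math.sqrt(k / k_new)
    dc = (k * j - k_new * j_new) * rate
    return j_new, dc
```

For example, quartering the in-degree doubles each weight and leaves half the original mean input to be supplied as a DC term. The paper's central point is the limit of this recipe: the compensating drive changes the effective connectivity, so pairwise correlations are only preserved over a restricted range of synapse numbers.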