1. Gong B, Wang J, Chang S, Xue G, Wei X. A multiscale distributed neural computing model database (NCMD) for neuromorphic architecture. Neural Netw 2024;180:106727. PMID: 39288643. DOI: 10.1016/j.neunet.2024.106727.
Abstract
Distributed neuromorphic architecture is a promising technique for on-chip processing of multiple tasks. Deploying a constructed model in a distributed neuromorphic system, however, remains time-consuming and challenging because of considerations such as network topology, connection rules, and compatibility with multiple programming languages. We propose a multiscale distributed neural computing model database (NCMD), a framework designed for ARM-based multi-core hardware. NCMD encompasses various neural computing components, including ion channels, synapses, and neurons. We demonstrate how NCMD constructs and deploys multi-compartmental detailed neuron models as well as spiking neural networks (SNNs) in BrainS, a distributed multi-ARM neuromorphic system, and show that the electrodiffusive Pinsky-Rinzel (edPR) model developed with NCMD is well suited to BrainS. All dynamic properties, such as changes in membrane potential and ion concentrations, can be easily explored. In addition, SNNs constructed with NCMD achieve an accuracy of 86.67% on the test set of the Iris dataset. The proposed NCMD offers an innovative approach to applying BrainS in neuroscience, cognitive decision-making, and artificial intelligence research.
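As a toy illustration of the kind of neuron-level component such a model database catalogues, the sketch below simulates a leaky integrate-and-fire neuron in Python. All names and parameter values are illustrative assumptions, not taken from NCMD or BrainS.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of the
# kind of neuron component a neural computing model database might catalogue.
# All parameter values are illustrative assumptions.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_thresh=-50.0, r_m=10.0):
    """Euler integration of tau*dv/dt = -(v - v_rest) + r_m*I; returns spike times (ms)."""
    v = v_rest
    spikes = []
    for step, i_ext in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset            # hard reset after the spike
    return spikes

spike_times = simulate_lif([2.0] * 1000)  # constant drive for 100 ms of model time
```

With this constant drive the steady-state potential sits above threshold, so the neuron fires periodically, the simplest dynamic a component library must reproduce before more detailed multi-compartment models are attempted.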
Affiliation(s)
- Bo Gong
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, PR China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, PR China
- Siyuan Chang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, PR China
- Gang Xue
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, PR China
- Xile Wei
- School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, PR China
2. Liu F, Zheng H, Ma S, Zhang W, Liu X, Chua Y, Shi L, Zhao R. Advancing brain-inspired computing with hybrid neural networks. Natl Sci Rev 2024;11:nwae066. PMID: 38577666. PMCID: PMC10989656. DOI: 10.1093/nsr/nwae066.
Abstract
Brain-inspired computing, drawing inspiration from the fundamental structure and information-processing mechanisms of the human brain, has gained significant momentum in recent years. It has emerged as a research paradigm centered on brain-computer dual-driven and multi-network integration. One noteworthy instance of this paradigm is the hybrid neural network (HNN), which integrates computer-science-oriented artificial neural networks (ANNs) with neuroscience-oriented spiking neural networks (SNNs). HNNs exhibit distinct advantages in various intelligent tasks, including perception, cognition and learning. This paper presents a comprehensive review of HNNs with an emphasis on their origin, concepts, biological perspective, construction framework and supporting systems. Furthermore, insights and suggestions for potential research directions are provided aiming to propel the advancement of the HNN paradigm.
Affiliation(s)
- Faqiang Liu
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Hao Zheng
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Songchen Ma
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Weihao Zhang
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Xue Liu
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yansong Chua
- Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology, Jiaxing 314001, China
- Luping Shi
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Rong Zhao
- Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University-China Electronics Technology HIK Group Co. Joint Research Center for Brain-inspired Computing, IDG/McGovern Institute for Brain Research, Department of Precision Instrument, Tsinghua University, Beijing 100084, China
3. Siddique MAB, Zhang Y, An H. Monitoring time domain characteristics of Parkinson's disease using 3D memristive neuromorphic system. Front Comput Neurosci 2023;17:1274575. PMID: 38162516. PMCID: PMC10754992. DOI: 10.3389/fncom.2023.1274575.
Abstract
Introduction: Parkinson's disease (PD) is a neurodegenerative disorder affecting millions of patients. Closed-loop deep brain stimulation (CL-DBS) is a therapy that can alleviate the symptoms of PD. A CL-DBS system consists of an electrode that delivers electrical stimulation to a specific region of the brain and a battery-powered stimulator implanted in the chest. The electrical stimuli must be adjusted in real time according to the state of the PD symptoms, so fast and precise symptom monitoring is a critical function of CL-DBS systems. Current techniques, however, impose computational demands for real-time PD symptom monitoring that are not feasible for implanted and wearable medical devices.

Methods: In this paper, we present an energy-efficient neuromorphic PD symptom detector using memristive three-dimensional integrated circuits (3D-ICs). Excessive oscillation at beta frequencies (13-35 Hz) in the subthalamic nucleus (STN) is used as a biomarker of PD symptoms.

Results: Simulation results demonstrate that our neuromorphic PD detector, implemented with an 8-layer spiking Long Short-Term Memory (S-LSTM) network, excels at recognizing PD symptoms, achieving a training accuracy of 99.74% and a validation accuracy of 99.52% for a 75%-25% data split. We further evaluated the improvement of our CL-DBS detector using NeuroSIM: for monolithic 3D-ICs, the chip area, latency, energy, and power consumption were reduced by 47.4%, 66.63%, 65.6%, and 67.5%, respectively. Similarly, for heterogeneous 3D-ICs, replacing traditional static random-access memory (SRAM) with memristive synapses reduced chip area, latency, energy, and power usage by 44.8%, 64.75%, 65.28%, and 67.7%, respectively.

Discussion: This study introduces a novel approach to PD symptom evaluation that directly uses spiking signals from neural activity in the time domain, significantly reducing the time and energy required for signal conversion compared with traditional frequency-domain approaches. It pioneers the use of neuromorphic computing and memristors in the design of CL-DBS systems, surpassing SRAM-based designs in chip area, latency, and energy efficiency. Lastly, the proposed neuromorphic PD detector demonstrates high resilience to timing variations in brain neural signals, as confirmed by robustness analysis.
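The beta-band biomarker can be illustrated with a toy spectral computation: the fraction of signal power falling in 13-35 Hz. The naive DFT sketch below is purely illustrative and bears no relation to the paper's spiking, memristive implementation.

```python
import math

def band_power(signal, fs, f_lo=13.0, f_hi=35.0):
    """Fraction of total spectral power inside [f_lo, f_hi] Hz, via a naive DFT.
    Illustrative only; a real detector would use a streaming/spiking front end."""
    n = len(signal)
    total, band = 0.0, 0.0
    for k in range(1, n // 2):          # skip DC, positive frequencies only
        freq = k * fs / n
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        p = re * re + im * im
        total += p
        if f_lo <= freq <= f_hi:
            band += p
    return band / total if total else 0.0

fs = 200.0  # Hz sampling rate (illustrative)
beta = [math.sin(2 * math.pi * 20 * t / fs) for t in range(400)]   # 20 Hz tone: in band
alpha = [math.sin(2 * math.pi * 8 * t / fs) for t in range(400)]   # 8 Hz tone: out of band
```

A pure 20 Hz tone puts essentially all of its power inside the beta band, while an 8 Hz tone puts essentially none, which is the contrast such a biomarker thresholds on.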
Affiliation(s)
- Md Abu Bakr Siddique
- Department of Electrical and Computer Engineering, Michigan Technological University, Houghton, MI, United States
- Yan Zhang
- Department of Biological Sciences, Michigan Technological University, Houghton, MI, United States
- Hongyu An
- Department of Electrical and Computer Engineering, Michigan Technological University, Houghton, MI, United States
4. Michaelis C, Lehr AB, Oed W, Tetzlaff C. Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian. Front Neuroinform 2022;16:1015624. PMID: 36439945. PMCID: PMC9682266. DOI: 10.3389/fninf.2022.1015624.
Abstract
Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware's fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open-source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented; the remaining discrepancies are modest and stem from Loihi's stochastic rounding. This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim of helping to streamline the conceptualization and deployment of new algorithms.
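Stochastic rounding, the source of the reported discrepancies, is easy to sketch in isolation. The function below is a generic illustration of the rounding rule itself, not Brian2Loihi's implementation.

```python
import random

def stochastic_round(x, rng=random):
    """Round x to an integer probabilistically: 2.3 -> 3 with p = 0.3, else 2.
    Hardware applies this kind of rounding to low-precision weight updates so
    that the rounding error is zero in expectation."""
    floor_x = int(x // 1)
    frac = x - floor_x
    return floor_x + (1 if rng.random() < frac else 0)

random.seed(0)
samples = [stochastic_round(2.3) for _ in range(10000)]
mean = sum(samples) / len(samples)   # unbiased: expectation equals 2.3
```

Each individual call lands on 2 or 3, but the average converges to the unrounded value, which is why emulation discrepancies from this mechanism stay bounded rather than accumulating.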
Affiliation(s)
- Carlo Michaelis
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Correspondence: Carlo Michaelis
- Andrew B. Lehr
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Winfried Oed
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
5. Müller E, Schmitt S, Mauch C, Billaudelle S, Grübl A, Güttler M, Husmann D, Ilmberger J, Jeltsch S, Kaiser J, Klähn J, Kleider M, Koke C, Montes J, Müller P, Partzsch J, Passenberg F, Schmidt H, Vogginger B, Weidner J, Mayr C, Schemmel J. The operating system of the neuromorphic BrainScaleS-1 system. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.05.081.
6. Dai K, Hernando J, Billeh YN, Gratiy SL, Planas J, Davison AP, Dura-Bernal S, Gleeson P, Devresse A, Dichter BK, Gevaert M, King JG, Van Geit WAH, Povolotsky AV, Muller E, Courcol JD, Arkhipov A. The SONATA data format for efficient description of large-scale network models. PLoS Comput Biol 2020;16:e1007696. PMID: 32092054. PMCID: PMC7058350. DOI: 10.1371/journal.pcbi.1007696.
Abstract
Increasing availability of comprehensive experimental datasets and of high-performance computing resources is driving rapid growth in the scale, complexity, and biological realism of computational models in neuroscience. To support the construction, simulation, and sharing of such large-scale models, a broadly applicable, flexible, and high-performance data format is necessary. To address this need, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. It is designed for memory and computational efficiency and works across multiple platforms. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is used in multiple modeling and visualization tools, and we also provide reference Application Programming Interfaces and model examples to catalyze further adoption. The format is free and open for the community to use and build upon, with the goal of enabling efficient model building, sharing, and reproducibility: as neuroscience experiences rapid growth of data streams characterizing the composition, connectivity, and activity of brain networks in ever-increasing detail, data-driven modeling will be essential for integrating these multimodal, complex data into predictive simulations that advance our understanding of brain function and mechanisms.
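One core idea of the format, separating per-node instance data from shared per-type parameters, can be illustrated with plain Python dicts. This is a simplified sketch only: the actual SONATA specification stores node instances in HDF5 and type tables in CSV, and the attribute names below are assumptions for illustration.

```python
# Simplified illustration of a SONATA-style split: node *instances* carry only
# a type id plus per-node values, while parameters shared by many nodes live
# in a separate node-types table. Attribute names here are illustrative.

node_types = {  # shared, per-type parameters (one row per type)
    100: {"model_template": "point_neuron_a", "ei": "e"},
    101: {"model_template": "point_neuron_b", "ei": "i"},
}

nodes = [  # per-instance data stays small: type id + instance attributes
    {"node_id": 0, "node_type_id": 100, "x": 0.0},
    {"node_id": 1, "node_type_id": 100, "x": 1.0},
    {"node_id": 2, "node_type_id": 101, "x": 2.0},
]

def resolve(node):
    """Merge a node instance with its type row; instance values take precedence."""
    merged = dict(node_types[node["node_type_id"]])
    merged.update(node)
    return merged

full = [resolve(n) for n in nodes]
```

The memory saving comes from the same place as in the real format: parameters common to thousands of nodes are stored once per type rather than once per node.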
Affiliation(s)
- Kael Dai
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Juan Hernando
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Yazan N. Billeh
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Sergey L. Gratiy
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Judit Planas
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Andrew P. Davison
- Paris-Saclay Institute of Neuroscience UMR, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
- Salvador Dura-Bernal
- State University of New York Downstate Medical Center, Brooklyn, New York, United States of America
- Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Adrien Devresse
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Benjamin K. Dichter
- Department of Neurosurgery, Stanford University, Stanford, California, United States of America
- Biological Systems and Engineering, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Michael Gevaert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- James G. King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Werner A. H. Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Arseny V. Povolotsky
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Eilif Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Jean-Denis Courcol
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Anton Arkhipov
- Allen Institute for Brain Science, Seattle, Washington, United States of America
7. Pfeiffer M, Pfeil T. Deep Learning With Spiking Neurons: Opportunities and Challenges. Front Neurosci 2018;12:774. PMID: 30410432. PMCID: PMC6209684. DOI: 10.3389/fnins.2018.00774.
Abstract
Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning while still allowing for efficient mapping to hardware. A wide range of training methods for SNNs is presented, from conversion of conventional deep networks into SNNs and constrained training before conversion to spiking variants of backpropagation and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods and to summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors and to exploit temporal codes and local on-chip learning; we have so far only scratched the surface of realizing these advantages in practical applications.
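One conversion idea surveyed in such reviews, approximating a ReLU activation by the firing rate of a non-leaky integrate-and-fire neuron, can be sketched in a few lines. This is a generic illustration, not any specific conversion toolchain.

```python
def if_rate(drive, steps=1000, threshold=1.0):
    """Firing rate of a non-leaky integrate-and-fire neuron under constant drive.
    For drive in [0, 1], the rate approximates relu(drive) -- the relation
    that ANN-to-SNN conversion exploits."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            v -= threshold        # soft reset preserves the residual charge
            spikes += 1
    return spikes / steps

relu = lambda x: max(0.0, x)
approx = if_rate(0.3)            # spike rate close to relu(0.3) = 0.3
```

The soft reset (subtracting the threshold instead of zeroing the potential) matters: it keeps the residual charge, so the rate tracks the activation value instead of systematically undershooting it.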
Affiliation(s)
- Michael Pfeiffer
- Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany
8. Neftci EO. Data and Power Efficient Intelligence with Neuromorphic Learning Machines. iScience 2018;5:52-68. PMID: 30240646. PMCID: PMC6123858. DOI: 10.1016/j.isci.2018.06.010.
Abstract
The success of deep networks and recent industry involvement in brain-inspired computing are igniting widespread interest in neuromorphic hardware that emulates the biological processes of the brain on an electronic substrate. This review explores interdisciplinary approaches anchored in machine learning theory that enable the applicability of neuromorphic technologies to real-world, human-centric tasks. We find that (1) recent work on binary deep networks and approximate gradient descent learning is strikingly compatible with a neuromorphic substrate; (2) where real-time adaptability and autonomy are necessary, neuromorphic technologies can achieve significant advantages over mainstream ones; and (3) challenges in memory technologies, compounded by a tradition of bottom-up approaches in the field, block the road to major breakthroughs. We suggest that a neuromorphic learning framework, tuned specifically for the spatial and temporal constraints of the neuromorphic substrate, will help guide hardware-algorithm co-design and the deployment of neuromorphic hardware for proactive learning of real-world data.
Affiliation(s)
- Emre O Neftci
- Department of Cognitive Sciences, UC Irvine, Irvine, CA 92697-5100, USA; Department of Computer Science, UC Irvine, Irvine, CA 92697-5100, USA.
9. Stöckel A, Jenzen C, Thies M, Rückert U. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware. Front Comput Neurosci 2017;11:71. PMID: 28878642. PMCID: PMC5572441. DOI: 10.3389/fncom.2017.00071.
Abstract
Large-scale neuromorphic hardware platforms, specialized computer systems for energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy, and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
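The underlying network is the classic Willshaw-style binary associative memory: pattern pairs are stored in clipped-Hebbian binary weights, and recall thresholds each output at the number of active inputs. The sketch below shows that textbook network in plain Python; it is not the authors' spiking benchmark implementation.

```python
# Binary (Willshaw-style) associative memory. Weights are clipped-Hebbian
# bits; recall thresholds the weighted sum at the number of active inputs.

def train(pairs, n_in, n_out):
    w = [[0] * n_out for _ in range(n_in)]
    for x, y in pairs:                       # clipped Hebbian learning:
        for i in range(n_in):                # w[i][j] = 1 if units i and j
            for j in range(n_out):           # were ever co-active in a pair
                if x[i] and y[j]:
                    w[i][j] = 1
    return w

def recall(w, x, n_out):
    theta = sum(x)                           # threshold = number of active inputs
    return [1 if sum(x[i] * w[i][j] for i in range(len(x))) >= theta else 0
            for j in range(n_out)]

pairs = [([1, 1, 0, 0], [1, 0, 1]),          # two stored input/output pairs
         ([0, 0, 1, 1], [0, 1, 0])]
w = train(pairs, 4, 3)
```

The appeal as a benchmark is exactly this determinism: the expected recall output is known analytically, so any deviation observed on hardware can be attributed to the neuron-model implementation rather than to the network.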
Affiliation(s)
- Andreas Stöckel
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Christoph Jenzen
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Michael Thies
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Ulrich Rückert
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
10.
11. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016;12:e1004954. PMID: 27213810. PMCID: PMC4877102. DOI: 10.1371/journal.pcbi.1004954.
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits.
Given that the structure of our actions along with the organization of the environment in which they are performed can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitute a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
12. Transient response characteristic of memristor circuits and biological-like current spikes. Neural Comput Appl 2016. DOI: 10.1007/s00521-016-2248-1.
13. Cheung K, Schultz SR, Luk W. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors. Front Neurosci 2016;9:516. PMID: 26834542. PMCID: PMC4712299. DOI: 10.3389/fnins.2015.00516.
Abstract
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high-performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example in the degree of parallelism employed. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as integrate-and-fire and Izhikevich models, as well as the spike-timing-dependent plasticity (STDP) rule for learning. A six-FPGA system can simulate a network of up to ~600,000 neurons and achieves real-time performance for networks of up to 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
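The Izhikevich model mentioned above has a compact two-variable update rule, which is part of why it maps well to hardware; the sketch below simulates it with Euler steps. The regular-spiking parameters follow Izhikevich's published values, while the drive current and duration are arbitrary illustrative choices.

```python
# Izhikevich neuron model (regular-spiking parameters a, b, c, d from
# Izhikevich 2003). Drive current and duration are illustrative choices.

def izhikevich(i_ext, steps, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    v, u = c, b * c                      # membrane potential and recovery variable
    spike_steps = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike cutoff, then reset
            spike_steps.append(t)
            v, u = c, u + d
    return spike_steps

spikes = izhikevich(i_ext=10.0, steps=500)   # tonic firing under constant drive
```

Two state variables and a quadratic update are all it takes to get biologically plausible spike patterns, which is the trade-off that makes this model a staple of FPGA and neuromorphic platforms.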
Affiliation(s)
- Kit Cheung
- Custom Computing Research Group, Department of Computing, Imperial College London, London, UK; Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Simon R Schultz
- Centre for Neurotechnology, Department of Bioengineering, Imperial College London, London, UK
- Wayne Luk
- Custom Computing Research Group, Department of Computing, Imperial College London, London, UK
14. Almási AD, Woźniak S, Cristea V, Leblebici Y, Engbersen T. Review of advances in neural networks: Neural design technology stack. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.02.092.
15. Jordan J, Tetzlaff T, Petrovici M, Breitwieser O, Bytschok I, Bill J, Schemmel J, Meier K, Diesmann M. Deterministic neural networks as sources of uncorrelated noise for probabilistic computations. BMC Neurosci 2015. PMCID: PMC4697608. DOI: 10.1186/1471-2202-16-s1-p62.
16. Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks. Neural Netw 2015;72:152-67. DOI: 10.1016/j.neunet.2015.07.004.
17. Partzsch J, Schüffny R. Network-driven design principles for neuromorphic systems. Front Neurosci 2015;9:386. PMID: 26539079. PMCID: PMC4611986. DOI: 10.3389/fnins.2015.00386.
Abstract
Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density; they guarantee faithful reproduction of the model on chip while requiring less total silicon area. Overall, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems.
Affiliation(s)
- Johannes Partzsch, Chair for Highly Parallel VLSI Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Rene Schüffny, Chair for Highly Parallel VLSI Systems and Neuromorphic Circuits, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany

18
Bekolay T, Stewart TC, Eliasmith C. Benchmarking neuromorphic systems with Nengo. Front Neurosci 2015; 9:380. [PMID: 26539076 PMCID: PMC4609756 DOI: 10.3389/fnins.2015.00380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 10/02/2015] [Indexed: 11/20/2022] Open
Abstract
Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly.
19
Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLoS Comput Biol 2015; 11:e1004490. [PMID: 26325661 PMCID: PMC4556689 DOI: 10.1371/journal.pcbi.1004490] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Accepted: 08/05/2015] [Indexed: 11/19/2022] Open
Abstract
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if also second-order statistics are to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases when this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
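The scaling recipe this abstract alludes to can be sketched numerically: when the number of synapses per neuron is reduced from K to K', multiplying the surviving weights by sqrt(K/K') preserves the input variance, and a compensating DC drive must make up the lost mean input. The sketch below is illustrative (hypothetical homogeneous weights and unit rates), not the paper's exact derivation:

```python
import math

def downscale_synapses(weights, k_new):
    """Reduce the in-degree from len(weights) to k_new while preserving the
    input variance (~ sum of w^2): scale the surviving weights by
    sqrt(K / K'), and report the DC current needed to restore the mean
    input (~ sum of w). Illustrative sketch, not the paper's exact scheme."""
    k = len(weights)
    s = math.sqrt(k / k_new)
    scaled = [w * s for w in weights[:k_new]]
    dc_compensation = sum(weights) - sum(scaled)
    return scaled, dc_compensation
```

For homogeneous weights this keeps the summed squared weight exactly constant; the paper's point is that the compensating DC term only works over a limited range of K' before the variance contributed by external inputs is distorted.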
20
Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. [PMID: 25972778 PMCID: PMC4413675 DOI: 10.3389/fnins.2015.00141] [Citation(s) in RCA: 159] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2014] [Accepted: 04/06/2015] [Indexed: 11/13/2022] Open
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliation(s)
- Ning Qiao, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Hesham Mostafa, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Federico Corradi, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Marc Osswald, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Fabio Stefanini, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Sumislawska, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

21
Clady X, Ieng SH, Benosman R. Asynchronous event-based corner detection and matching. Neural Netw 2015; 66:91-106. [PMID: 25828960 DOI: 10.1016/j.neunet.2015.02.013] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 02/11/2015] [Accepted: 02/12/2015] [Indexed: 10/23/2022]
Abstract
This paper introduces an event-based luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas. The method relies on the use of space-time properties of moving edges. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in pixels' illumination at high temporal resolutions. Corner events are defined as the spatiotemporal locations where the aperture problem can be solved using the intersection of several geometric constraints in events' spatiotemporal spaces. A regularization process provides the required constraints, i.e. the motion attributes of the edges with respect to their spatiotemporal locations using local geometric properties of visual events. Experimental results are presented on several real scenes showing the stability and robustness of the detection and matching.
Affiliation(s)
- Xavier Clady, Vision Institute, Pierre and Marie Curie University, Paris, France
- Sio-Hoi Ieng, Vision Institute, Pierre and Marie Curie University, Paris, France
- Ryad Benosman, Vision Institute, Pierre and Marie Curie University, Paris, France

22
Casellato C, Antonietti A, Garrido JA, Carrillo RR, Luque NR, Ros E, Pedrocchi A, D'Angelo E. Adaptive robotic control driven by a versatile spiking cerebellar network. PLoS One 2014; 9:e112265. [PMID: 25390365 PMCID: PMC4229206 DOI: 10.1371/journal.pone.0112265] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2014] [Accepted: 09/11/2014] [Indexed: 11/29/2022] Open
Abstract
The cerebellum is involved in a large number of different neural processes, especially in associative learning and in fine motor control. To develop a comprehensive theory of sensorimotor learning and control, it is crucial to determine the neural basis of coding and plasticity embedded into the cerebellar neural circuit and how they are translated into behavioral outcomes in learning paradigms. Learning has to be inferred from the interaction of an embodied system with its real environment, and the same cerebellar principles derived from cell physiology have to be able to drive a variety of tasks of different nature, calling for complex timing and movement patterns. We have coupled a realistic cerebellar spiking neural network (SNN) with a real robot and challenged it in multiple diverse sensorimotor tasks. Encoding and decoding strategies based on neuronal firing rates were applied. Adaptive motor control protocols with acquisition and extinction phases have been designed and tested, including an associative Pavlovian task (eye-blink classical conditioning), a vestibulo-ocular task and a perturbed arm-reaching task operating in closed loop. The SNN processed mossy fiber inputs in real time as arbitrary contextual signals, irrespective of whether they conveyed a tone, a vestibular stimulus or the position of a limb. A bidirectional long-term plasticity rule implemented at parallel fiber-Purkinje cell synapses modulated the output activity in the deep cerebellar nuclei. In all tasks, the neurorobot learned to adjust the timing and gain of the motor responses by tuning its output discharge. It succeeded in reproducing how human biological systems acquire, extinguish and express knowledge of a noisy and changing world. By varying stimulus and perturbation patterns, real-time control robustness and generalizability were validated. The implicit spiking dynamics of the cerebellar model fulfill timing, prediction and learning functions.
Affiliation(s)
- Claudia Casellato, NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Alberto Antonietti, NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy; Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico Istituto Neurologico Nazionale Casimiro Mondino, Pavia, Italy
- Jesus A Garrido, Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico Istituto Neurologico Nazionale Casimiro Mondino, Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Richard R Carrillo, Department of Computer Architecture and Technology, Escuela Técnica Superior de Ingenierías Informática y de Telecomunicación, University of Granada, Granada, Spain
- Niceto R Luque, Department of Computer Architecture and Technology, Escuela Técnica Superior de Ingenierías Informática y de Telecomunicación, University of Granada, Granada, Spain
- Eduardo Ros, Department of Computer Architecture and Technology, Escuela Técnica Superior de Ingenierías Informática y de Telecomunicación, University of Granada, Granada, Spain
- Alessandra Pedrocchi, NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Egidio D'Angelo, Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico Istituto Neurologico Nazionale Casimiro Mondino, Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy

23
Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. [PMID: 25303102 PMCID: PMC4193761 DOI: 10.1371/journal.pone.0108590] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 08/22/2014] [Indexed: 11/18/2022] Open
Abstract
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Affiliation(s)
- Mihai A. Petrovici, Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Bernhard Vogginger, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Paul Müller, Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Oliver Breitwieser, Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Mikael Lundqvist, Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller, CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Alain Destexhe, CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner, Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny, Institute of Circuits and Systems, Technische Universität Dresden, Dresden, Germany
- Johannes Schemmel, Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Karlheinz Meier, Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany

24
Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. [PMID: 25232314 PMCID: PMC4152885 DOI: 10.3389/fninf.2014.00073] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2013] [Accepted: 08/01/2014] [Indexed: 11/13/2022] Open
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini, Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci, Department of Bioengineering, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Sadique Sheik, Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri, Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland

25
Sharp T, Petersen R, Furber S. Real-time million-synapse simulation of rat barrel cortex. Front Neurosci 2014; 8:131. [PMID: 24910593 PMCID: PMC4038760 DOI: 10.3389/fnins.2014.00131] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2014] [Accepted: 05/13/2014] [Indexed: 11/13/2022] Open
Abstract
Simulations of neural circuits are bounded in scale and speed by available computing resources, and particularly by the differences in parallelism and communication patterns between the brain and high-performance computers. SpiNNaker is a computer architecture designed to address this problem by emulating the structure and function of neural tissue, using very many low-power processors and an interprocessor communication mechanism inspired by axonal arbors. Here we demonstrate that thousand-processor SpiNNaker prototypes can simulate models of the rodent barrel system comprising 50,000 neurons and 50 million synapses. We use the PyNN library to specify models, and the intrinsic features of Python to control experimental procedures and analysis. The models reproduce known thalamocortical response transformations, exhibit known, balanced dynamics of excitation and inhibition, and show a spatiotemporal spread of activity through the superficial cortical layers. These demonstrations are a significant step toward tractable simulations of entire cortical areas on the million-processor SpiNNaker machines in development.
Affiliation(s)
- Thomas Sharp, School of Computer Science, The University of Manchester, Manchester, UK; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako-shi, Saitama, Japan
- Rasmus Petersen, Faculty of Life Sciences, The University of Manchester, Manchester, UK
- Steve Furber, School of Computer Science, The University of Manchester, Manchester, UK

26
Clady X, Clercq C, Ieng SH, Houseini F, Randazzo M, Natale L, Bartolozzi C, Benosman R. Asynchronous visual event-based time-to-contact. Front Neurosci 2014; 8:9. [PMID: 24570652 PMCID: PMC3916774 DOI: 10.3389/fnins.2014.00009] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2013] [Accepted: 01/16/2014] [Indexed: 11/13/2022] Open
Abstract
Reliable and fast sensing of the environment is a fundamental requirement for autonomous mobile robotic platforms. Unfortunately, the frame-based acquisition paradigm at the basis of mainstream artificial perception systems is limited by low temporal dynamics and redundant data flow, leading to high computational costs. Hence, conventional sensing and the associated computation are incompatible with the design of high-speed sensor-based reactive control for mobile applications, which pose strict limits on energy consumption and computational load. This paper introduces a fast obstacle avoidance method based on the output of an asynchronous event-based time-encoded imaging sensor. The proposed method relies on an event-based Time To Contact (TTC) computation based on visual event-based motion flows. The approach is event-based in the sense that every incoming event adds to the computation process, thus allowing fast avoidance responses. The method is validated indoors on a mobile robot, comparing the event-based TTC with a laser range finder TTC, showing that event-based sensing offers new perspectives for mobile robotics sensing.
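The core computation is simple to state: for an approaching surface, each event's motion-flow vector points away from the focus of expansion (FOE), and the time to contact is the event's distance from the FOE divided by its radial flow speed. A minimal per-event sketch (the FOE is assumed known here, whereas the paper estimates the motion flow from the event stream itself; units are illustrative):

```python
import math

def time_to_contact(x, y, vx, vy, foe=(0.0, 0.0)):
    """Per-event time-to-contact estimate: radial distance from the focus
    of expansion divided by the radial component of the local motion flow.
    Illustrative sketch; coordinates in pixels, flow in pixels/s."""
    rx, ry = x - foe[0], y - foe[1]
    r = math.hypot(rx, ry)                 # distance from the FOE
    v_radial = (vx * rx + vy * ry) / r     # flow component along the radius
    return r / v_radial
```

Because each incoming event updates its own local estimate, the avoidance response can be computed incrementally rather than once per frame.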
Affiliation(s)
- Xavier Clady, Vision Institute, Université Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts, Paris, France
- Charles Clercq, Vision Institute, Université Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts, Paris, France; iCub Facility, Istituto Italiano di Tecnologia, Genova, Italy
- Sio-Hoi Ieng, Vision Institute, Université Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts, Paris, France
- Marco Randazzo, iCub Facility, Istituto Italiano di Tecnologia, Genova, Italy
- Lorenzo Natale, iCub Facility, Istituto Italiano di Tecnologia, Genova, Italy
- Ryad Benosman, Vision Institute, Université Pierre et Marie Curie, UMR S968 Inserm, UPMC, CNRS UMR 7210, CHNO des Quinze-Vingts, Paris, France; iCub Facility, Istituto Italiano di Tecnologia, Genova, Italy

27
Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. [PMID: 24550771 PMCID: PMC3912986 DOI: 10.3389/fnins.2014.00010] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2013] [Accepted: 01/17/2014] [Indexed: 11/13/2022] Open
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
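Stripped of the GPU batching and the SNN itself, the tuning loop this abstract describes is a standard evolutionary search. The sketch below tunes a hypothetical parameter vector against a user-supplied fitness (error) function; it is a generic illustration, not the authors' framework:

```python
import random

def tune_parameters(fitness, bounds, pop_size=20, generations=50, seed=1):
    """Generic evolutionary parameter search: evaluate a population of
    candidate parameter vectors, keep the best quarter (truncation
    selection), and refill the population with Gaussian-mutated copies
    of the survivors. Lower fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 4]
        pop = survivors + [
            [min(hi, max(lo, g + rng.gauss(0.0, 0.1 * (hi - lo))))
             for g, (lo, hi) in zip(rng.choice(survivors), bounds)]
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=fitness)
```

In the paper's setting the fitness call is the expensive part (a full SNN simulation), which is why evaluating the population in parallel on a GPU yields the reported 65x speedup.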
Affiliation(s)
- Kristofor D Carlson, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Nikil Dutt, Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jeffrey L Krichmar, Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA; Department of Computer Science, University of California Irvine, Irvine, CA, USA

28
Abstract
Computational neuroscience has uncovered a number of computational principles used by nervous systems. At the same time, neuromorphic hardware has matured to a state where fast silicon implementations of complex neural networks have become feasible. En route to future technical applications of neuromorphic computing the current challenge lies in the identification and implementation of functional brain algorithms. Taking inspiration from the olfactory system of insects, we constructed a spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. In this model, real-valued multivariate data are converted into spike trains using "virtual receptors" (VRs). Their output is processed by lateral inhibition and drives a winner-take-all circuit that supports supervised learning. VRs are conveniently implemented in software, whereas the lateral inhibition and classification stages run on accelerated neuromorphic hardware. When trained and tested on real-world datasets, we find that the classification performance is on par with a naïve Bayes classifier. An analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, matching the time-to-decision reported for the insect nervous system. Through leveraging a population code, the network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems.
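The lateral-inhibition / winner-take-all readout at the heart of this classifier can be sketched in a few lines. This is a rate-based caricature with a hypothetical inhibition strength, not the spiking hardware implementation:

```python
def winner_take_all(rates, inhibition=0.9):
    """Soft winner-take-all via lateral inhibition: each output population
    is suppressed in proportion to the summed activity of its rivals; the
    class label is the index of the surviving (largest) population."""
    total = sum(rates)
    out = [max(0.0, r - inhibition * (total - r)) for r in rates]
    return out.index(max(out)), out
```

In the spiking version, the same competition plays out over time, which is why the network settles on a stable decision within the ~100 ms reported above.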
29
Kaplan BA, Lansner A, Masson GS, Perrinet LU. Anisotropic connectivity implements motion-based prediction in a spiking neural network. Front Comput Neurosci 2013; 7:112. [PMID: 24062680 PMCID: PMC3775506 DOI: 10.3389/fncom.2013.00112] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2013] [Accepted: 07/25/2013] [Indexed: 12/28/2022] Open
Abstract
Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.
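The model neurons are conductance-based integrate-and-fire units; a single-neuron forward-Euler sketch (illustrative parameters in mV and ms, not the paper's exact values) shows the basic update the network iterates:

```python
def run_lif(g_exc, dt=0.1, tau_m=20.0, v_rest=-70.0, e_exc=0.0,
            v_thresh=-50.0, v_reset=-70.0):
    """Forward-Euler simulation of one conductance-based integrate-and-fire
    neuron driven by an excitatory conductance trace g_exc (one sample per
    time step of length dt ms); returns the spike times in ms."""
    v, spikes = v_rest, []
    for step, g in enumerate(g_exc):
        dv = (-(v - v_rest) - g * (v - e_exc)) / tau_m  # leak + synaptic drive
        v += dt * dv
        if v >= v_thresh:                               # threshold crossing
            spikes.append(step * dt)
            v = v_reset                                 # reset after spike
    return spikes
```

The network-level model adds connection delays and anisotropic (tuning-dependent) connectivity on top of this basic unit; only the conductance input changes per neuron.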
Affiliation(s)
- Bernhard A. Kaplan, Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Anders Lansner, Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Guillaume S. Masson, Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France
- Laurent U. Perrinet, Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France

30
Cassidy AS, Georgiou J, Andreou AG. Design of silicon brains in the nano-CMOS era: spiking neurons, learning synapses and neural architecture optimization. Neural Netw 2013; 45:4-26. [PMID: 23886551 DOI: 10.1016/j.neunet.2013.05.011] [Citation(s) in RCA: 80] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2012] [Revised: 05/20/2013] [Accepted: 05/21/2013] [Indexed: 11/30/2022]
Abstract
We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization.
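The STDP learning circuits referenced above implement spike-timing dependent plasticity; a common pair-based software formulation (the exponential windows and constants below are textbook assumptions, not the paper's circuit design) looks like:

```python
import math

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic one, depress otherwise. A_plus, A_minus, and the time
# constants are generic textbook assumptions.
def stdp_dw(dt_spike, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """dt_spike = t_post - t_pre in ms; returns the weight change."""
    if dt_spike > 0:                                   # pre before post -> LTP
        return A_plus * math.exp(-dt_spike / tau_plus)
    return -A_minus * math.exp(dt_spike / tau_minus)   # post before pre -> LTD

w = 0.5
for dt_spike in (5.0, -5.0, 15.0):      # a few pre/post timing differences (ms)
    w = min(1.0, max(0.0, w + stdp_dw(dt_spike)))     # clip weight to [0, 1]
```

The exponential windows mean closely paired spikes change the weight most, which is the property digital STDP circuits typically approximate with counters or lookup tables.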
Affiliation(s)
- Andrew S Cassidy
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
31
Pfeil T, Grübl A, Jeltsch S, Müller E, Müller P, Petrovici MA, Schmuker M, Brüderle D, Schemmel J, Meier K. Six networks on a universal neuromorphic computing substrate. Front Neurosci 2013; 7:11. [PMID: 23423583 PMCID: PMC3575075 DOI: 10.3389/fnins.2013.00011] [Citation(s) in RCA: 115] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2012] [Accepted: 01/18/2013] [Indexed: 11/28/2022] Open
Abstract
In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.
Affiliation(s)
- Thomas Pfeil
- Kirchhoff-Institute for Physics, Universität Heidelberg, Heidelberg, Germany
32
Pickett MD, Medeiros-Ribeiro G, Williams RS. A scalable neuristor built with Mott memristors. NATURE MATERIALS 2013; 12:114-7. [PMID: 23241533 DOI: 10.1038/nmat3510] [Citation(s) in RCA: 258] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2012] [Accepted: 10/30/2012] [Indexed: 05/22/2023]
Abstract
The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.
33
Pfeil T, Potjans TC, Schrader S, Potjans W, Schemmel J, Diesmann M, Meier K. Is a 4-bit synaptic weight resolution enough? - constraints on enabling spike-timing dependent plasticity in neuromorphic hardware. Front Neurosci 2012; 6:90. [PMID: 22822388 PMCID: PMC3398398 DOI: 10.3389/fnins.2012.00090] [Citation(s) in RCA: 68] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2012] [Accepted: 06/04/2012] [Indexed: 11/13/2022] Open
Abstract
Large-scale neuromorphic hardware systems typically face a trade-off between level of detail and required chip resources. Especially when implementing spike-timing-dependent plasticity, a reduction in resources leads to limitations compared with floating-point precision. By design, a natural modification that saves resources is to reduce the synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists.
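To get a feel for the discretization studied above, continuous weights can be rounded to the nearest of the 2^4 = 16 levels of a 4-bit synapse; this sketch (the weight range and rounding scheme are illustrative assumptions, not the FACETS implementation) bounds the resulting error:

```python
def discretize(w, n_bits=4, w_max=1.0):
    """Round a weight in [0, w_max] to the nearest of 2**n_bits levels."""
    n_levels = 2 ** n_bits - 1                  # 15 steps -> 16 levels incl. 0
    step = w_max / n_levels
    return round(w / step) * step

weights = [i / 100 for i in range(101)]         # continuous weights 0.00 .. 1.00
errors = [abs(w - discretize(w)) for w in weights]
max_err = max(errors)                           # bounded by half a step
```

The worst-case error is half a quantization step (about 3.3% of the weight range for 4 bits), which is the kind of perturbation whose network-level impact the study evaluates.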
Affiliation(s)
- Thomas Pfeil
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
34
Perrinet LU, Masson GS. Motion-based prediction is sufficient to solve the aperture problem. Neural Comput 2012; 24:2726-50. [PMID: 22734489 DOI: 10.1162/neco_a_00332] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independent of their texture. Second, we observe that incoherent features are explained away, while coherent information diffuses progressively to the global scale. Most previous models included ad hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights into the role of prediction underlying a large class of sensory computations.
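The context-dependent diffusion of a probabilistic motion representation can be caricatured in one dimension: a distribution over (position, velocity) states is pushed forward along each velocity (prediction) and reweighted by noisy position observations, so that temporally coherent motion hypotheses accumulate evidence. Grid size, noise level, and the likelihood function are illustrative assumptions, not the paper's implementation:

```python
# 1-D sketch of motion-based prediction over a (position, velocity) grid.
N_POS, VELS = 20, (-1, 0, 1)            # positions 0..19, integer velocities

def predict(p):
    """Prediction step: shift probability mass at each velocity by that velocity."""
    q = {(x, v): 0.0 for x in range(N_POS) for v in VELS}
    for (x, v), mass in p.items():
        q[((x + v) % N_POS, v)] += mass
    return q

def update(p, obs, noise=0.1):
    """Reweight by a simple observation likelihood, then renormalize."""
    q = {s: m * (1.0 if s[0] == obs else noise) for s, m in p.items()}
    z = sum(q.values())
    return {s: m / z for s, m in q.items()}

# uniform prior; the dot actually moves with velocity +1 from position 0
p = {(x, v): 1.0 / (N_POS * len(VELS)) for x in range(N_POS) for v in VELS}
for t in range(5):
    p = update(predict(p), obs=(t + 1) % N_POS)
# marginal belief over velocity: the coherent hypothesis v = +1 dominates
v_belief = {v: sum(m for (x, u), m in p.items() if u == v) for v in VELS}
```

Only the hypothesis whose predicted position tracks the observations keeps full likelihood at every step, so incoherent hypotheses are explained away, in the spirit of the mechanism described above.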
Affiliation(s)
- Laurent U Perrinet
- Institut de Neurosciences de la Timone, CNRS/Aix-Marseille University, 13385 Marseille Cedex 5, France.
35
Impact of adaptation currents on synchronization of coupled exponential integrate-and-fire neurons. PLoS Comput Biol 2012; 8:e1002478. [PMID: 22511861 PMCID: PMC3325187 DOI: 10.1371/journal.pcbi.1002478] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2011] [Accepted: 02/27/2012] [Indexed: 11/19/2022] Open
Abstract
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks.
Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies. Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
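The aEIF model discussed above couples an exponential spike-initiation term with an adaptation current w (subthreshold coupling a, spike-triggered increment b); a minimal forward-Euler sketch showing spike frequency adaptation (parameter values are common textbook choices, not necessarily those of the study) is:

```python
import math

# Adaptive exponential integrate-and-fire (aEIF) neuron, forward Euler.
# Units: pF, nS, mV, pA, ms. Parameter values are generic assumptions
# in the style of Brette & Gerstner, not the exact values of the paper.
def simulate_aeif(I, t_end=500.0, dt=0.01):
    """I: step input current (pA); returns spike times (ms)."""
    C, g_L, E_L = 200.0, 10.0, -70.0        # capacitance, leak, resting potential
    V_T, Delta_T = -50.0, 2.0               # exponential threshold and slope factor
    a, b, tau_w = 2.0, 60.0, 200.0          # subthreshold (nS), spike-triggered (pA), ms
    V_reset, V_spike = -58.0, 0.0
    V, w, spikes, t = E_L, 0.0, [], 0.0
    while t < t_end:
        dV = (-g_L * (V - E_L) + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                    # spike: reset V, increment adaptation w
            spikes.append(t)
            V, w = V_reset, w + b
        t += dt
    return spikes

spikes = simulate_aeif(I=500.0)             # 500 pA step current
```

Because w builds up with each spike and decays slowly, successive inter-spike intervals lengthen, which is the spike frequency adaptation the study relates to PRC shape and synchronization.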
36
Russell A, Mazurek K, Mihalaş S, Niebur E, Etienne-Cummings R. Parameter estimation of a spiking silicon neuron. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2012; 6:133-41. [PMID: 23852978 PMCID: PMC3712290 DOI: 10.1109/tbcas.2011.2182650] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Spiking neuron models are used in a multitude of tasks, ranging from understanding neural behavior at its most basic level to neuroprosthetics. Estimating the parameters of a single neuron model such that the model's output matches that of a biological neuron is an extremely important task. Hand-tuning parameters to obtain such behaviors is a difficult and time-consuming process. This is further complicated when the neuron is instantiated in silicon (an attractive medium in which to implement these models), as fabrication imperfections make the task of parameter configuration more complex. In this paper we show two methods to automate the configuration of a silicon (hardware) neuron's parameters. First, we show how a maximum-likelihood method can be applied to a leaky integrate-and-fire silicon neuron with spike-induced currents to fit the neuron's output to desired spike times. We then show how a distance-based method, which approximates the negative log-likelihood of the lognormal distribution, can also be used to tune the neuron's parameters. We conclude that the distance-based method is better suited for parameter configuration of silicon neurons due to its superior optimization speed.
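A toy version of such distance-based fitting can be written as a search over a parameter grid minimizing a spike-time distance between model output and a target train; the simplified LIF neuron, the crude distance measure, and the grid below are illustrative assumptions, not the authors' method:

```python
# Toy distance-based parameter fitting: grid-search the LIF threshold that
# best reproduces a target spike train. Neuron, distance, and grid are
# deliberately simplified for illustration.

def lif_spikes(V_th, I=1.5, tau=20.0, dt=0.1, t_end=200.0):
    """Leaky integrate-and-fire with dV/dt = (-V + I)/tau; returns spike times (ms)."""
    V, t, out = 0.0, 0.0, []
    while t < t_end:
        V += dt * (-V + I) / tau
        if V >= V_th:
            out.append(t)
            V = 0.0                          # reset after a spike
        t += dt
    return out

def spike_distance(a, b):
    """Crude distance: spike-count mismatch plus timing error of aligned spikes."""
    n = min(len(a), len(b))
    timing = sum(abs(x - y) for x, y in zip(a, b)) / max(n, 1)
    return abs(len(a) - len(b)) * 100.0 + timing

target = lif_spikes(V_th=1.0)                    # stand-in for a recorded train
candidates = [0.6 + 0.1 * i for i in range(8)]   # threshold grid 0.6 .. 1.3
best = min(candidates, key=lambda th: spike_distance(lif_spikes(th), target))
```

Because the distance needs only simulated and target spike times (not a full likelihood model), it is cheap to evaluate, which reflects the optimization-speed advantage the authors report for their distance-based method.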
Affiliation(s)
- Alexander Russell
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218 USA
- Kevin Mazurek
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Stefan Mihalaş
- Department of Neuroscience and the Zanvyl Krieger Mind Brain Institute, The Johns Hopkins University, Baltimore, MD 21218, USA
- Ernst Niebur
- Department of Neuroscience and the Zanvyl Krieger Mind Brain Institute, The Johns Hopkins University, Baltimore, MD 21218, USA
- Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
37
Grassia F, Buhry L, Lévi T, Tomas J, Destexhe A, Saïghi S. Tunable neuromimetic integrated system for emulating cortical neuron models. Front Neurosci 2011; 5:134. [PMID: 22163213 PMCID: PMC3233664 DOI: 10.3389/fnins.2011.00134] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2011] [Accepted: 11/18/2011] [Indexed: 11/13/2022] Open
Abstract
Many software solutions are currently available for simulating neuron models. Less conventional than software-based systems, hardware-based solutions generally combine digital and analog forms of computation. In previous work, we designed several neuromimetic chips, including the Galway chip used for this paper. These silicon neurons are based on the Hodgkin–Huxley formalism and are optimized to reproduce a large variety of neuron behaviors thanks to tunable parameters. Because of process variation and device mismatch in analog chips, we use a full-custom fitting method in voltage-clamp mode to tune our neuromimetic integrated circuits. By comparing them with experimental electrophysiological data, we show that the circuits can reproduce the main firing features of cortical cell types. In this paper, we present experimental measurements of our system mimicking the four most prominent biological cell classes: fast spiking, regular spiking, intrinsically bursting, and low-threshold spiking neurons, in an analog neuromimetic integrated circuit dedicated to cortical neuron simulation. This hardware and software platform will make it possible to improve the hybrid technique, also called "dynamic clamp," which consists of connecting artificial and biological neurons to study the function of neuronal circuits.
Affiliation(s)
- Filippo Grassia
- Laboratoire d'Intégration du Matériau au Système, UMR CNRS 5218, Université de Bordeaux, Talence, France