1. Zareh M, Toulabinejad E, Manshaei MH, Zahabi SJ. A deep learning model of dorsal and ventral visual streams for DVSD. Sci Rep 2024; 14:27464. PMID: 39523365; PMCID: PMC11551208; DOI: 10.1038/s41598-024-78304-7. Received 09/11/2023; accepted 10/30/2024. Open access.
Abstract
Artificial intelligence (AI) methods attempt to simulate the behavior and neural activity of the brain. In particular, convolutional neural networks (CNNs) offer state-of-the-art models of the ventral visual stream. However, no proposed model estimates the distance between objects as a function of the dorsal stream. In this paper, we present a quantitatively accurate model of the visual system. Specifically, we propose VeDo-Net, a model comprising both ventral and dorsal branches. Like the ventral visual stream, our model recognizes objects; it also locates objects and estimates the distance between them, a spatial-relationship task performed by the dorsal stream. One application of the proposed model is the simulation of visual impairments: we show how it can simulate dorsal-stream impairments such as those seen in autism spectrum disorder (ASD) and cerebral visual impairment (CVI). Finally, we explore the impact of learning on recovery from synaptic disruptions of the dorsal visual stream. Results indicate a direct relationship between positive and negative changes in the weights of the dorsal stream's last layers and the dorsal stream's output under an allocentric condition. Our results also suggest that visual-spatial perception impairments in ASD may be caused by a disturbance in the last layers of the dorsal stream.
Affiliation(s)
- Masoumeh Zareh
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, 84156-83111, Iran
- Elaheh Toulabinejad
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, 84156-83111, Iran
- Department of Computing Science, University of Alberta, Edmonton, AB, T6G 2E8, Canada
- Mohammad Hossein Manshaei
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, 84156-83111, Iran
- Sayed Jalal Zahabi
- Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, 84156-83111, Iran
2. Lee J, Lee J, Bang H, Yoon TW, Ko JH, Zhang G, Park JS, Jeon I, Lee S, Kang B. One-Shot Remote Integration of Macromolecular Synaptic Elements on a Chip for Ultrathin Flexible Neural Network System. Adv Mater 2024:e2402361. PMID: 38762775; DOI: 10.1002/adma.202402361. Received 02/15/2024; revised 04/23/2024.
Abstract
The field of biomimetic electronics that mimic synaptic functions has expanded significantly in the effort to overcome the von Neumann bottleneck. However, scaling down the technology has made the manufacturing process increasingly intricate. To address this issue, this work presents a one-shot integrable electropolymerization (OSIEP) method with remote controllability for depositing synaptic elements on a chip by exploiting bipolar electrochemistry. Synthesis, deposition, and patterning are condensed into a single fabrication step by combining bipolar electropolymerization, driven by an alternating-current voltage superimposed on a direct-current voltage, with specially designed dual source/drain bipolar electrodes. As a result, uniform 6 × 5 arrays of poly(3,4-ethylenedioxythiophene) channels are successfully fabricated on flexible ultrathin parylene substrates in a one-shot process. The channels exhibit highly uniform characteristics and are used directly as electrochemical synaptic transistors with synaptic plasticity lasting over 100 s. The synaptic transistors show promising performance in an artificial neural network (NN) simulation, achieving a high recognition accuracy of 95.20%. Additionally, the transistor array is easily reconfigured into a multi-gate synaptic circuit that implements the principles of operant conditioning. These results provide a compelling fabrication strategy for realizing cost-effective and disposable NN systems with high integration density.
Affiliation(s)
- Jiyun Lee
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Jaehoon Lee
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Hyeonsu Bang
- Department of Electrical and Computer Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Tae Woong Yoon
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Jong Hwan Ko
- Department of Electrical and Computer Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- College of Information and Communication Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Guobing Zhang
- Special Display and Imaging Innovation Center of Anhui Province, National Engineering Lab of Special Display Technology, Academy of Opto-Electronic Technology, Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Chemistry and Chemical Engineering, Hefei University of Technology, Key Laboratory of Advance Functional Materials and Devices of Anhui Province, Hefei, 230009, China
- Ji-Sang Park
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Department of Nano Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Il Jeon
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Department of Nano Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Sungjoo Lee
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Department of Nano Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Boseok Kang
- SKKU Advanced Institute of Nanotechnology (SAINT) and Department of Nano Science and Technology, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
- Department of Nano Engineering, Sungkyunkwan University, 2066 Seobu-ro, Jangan-gu, Suwon, 16419, South Korea
3. Senden M, van Albada SJ, Pezzulo G, Falotico E, Hashim I, Kroner A, Kurth AC, Lanillos P, Narayanan V, Pennartz C, Petrovici MA, Steffen L, Weidler T, Goebel R. Modular-integrative modeling: a new framework for building brain models that blend biological realism and functional performance. Natl Sci Rev 2024; 11:nwad318. PMID: 38577673; PMCID: PMC10989280; DOI: 10.1093/nsr/nwad318. Received 08/19/2023; revised 12/04/2023; accepted 12/18/2023. Open access.
Abstract
This Perspective presents the Modular-Integrative Modeling approach, a novel framework in neuroscience for developing brain models that blend biological realism with functional performance, providing a holistic view of brain function in interaction with the body and environment.
Affiliation(s)
- Mario Senden
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
- Sacha J van Albada
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Center, Germany
- Institute of Zoology, University of Cologne, Germany
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Italy
- Egidio Falotico
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Italy
- Ibrahim Hashim
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
- Alexander Kroner
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
- Anno C Kurth
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Center, Germany
- RWTH Aachen University, Germany
- Pablo Lanillos
- Donders Institute for Brain, Cognition and Behavior, Radboud University, The Netherlands
- Vaishnavi Narayanan
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
- Cyriel Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, The Netherlands
- Lea Steffen
- FZI Research Center of Information Technology, Germany
- Tonio Weidler
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Maastricht Brain Imaging Centre, Maastricht University, The Netherlands
4. Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. PMID: 38658028; DOI: 10.1162/neco_a_01658. Received 07/21/2023; accepted 01/02/2024.
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial to understanding how real neural circuits operate. Toward this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, while those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions, which makes them capable of approximating arbitrary nonlinear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate-network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
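The difference-of-convex idea in the abstract can be illustrated with a minimal numerical sketch (our illustration, not the authors' code): two convex piecewise-linear functions built from rectified affine terms, whose difference reproduces a non-convex "hat" function, i.e., the kind of nonlinear input-output mapping such a decomposition can approximate.

```python
import numpy as np

# Difference-of-convex sketch (illustrative only, not the paper's model).
# g and h are convex piecewise-linear functions built from rectified
# affine terms; their difference g - h is a non-convex "hat" function.

def relu(x):
    return np.maximum(0.0, x)

def g(x):
    # convex: sum of rectified affine terms
    return relu(x) + relu(x - 2.0)

def h(x):
    # convex
    return 2.0 * relu(x - 1.0)

def hat(x):
    # non-convex target expressed as a difference of two convex functions:
    # zero outside [0, 2], rising linearly to 1 at x = 1
    return g(x) - h(x)
```

Evaluating hat on a grid confirms the non-convex bump: zero outside [0, 2] with a peak of 1 at x = 1, even though each of g and h alone is convex.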
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
5. Ramaswamy S. Data-driven multiscale computational models of cortical and subcortical regions. Curr Opin Neurobiol 2024; 85:102842. PMID: 38320453; DOI: 10.1016/j.conb.2024.102842. Received 03/23/2023; revised 01/04/2024; accepted 01/05/2024.
Abstract
Data-driven computational models of neurons, synapses, microcircuits, and mesocircuits have become essential tools in modern brain research. The goal of these multiscale models is to integrate and synthesize information from different levels of brain organization, from cellular properties, dendritic excitability, and synaptic dynamics to microcircuits, mesocircuits, and ultimately behavior. This article surveys recent advances in the genesis of data-driven computational models of mammalian neural networks in cortical and subcortical areas. I discuss the challenges and opportunities in developing data-driven multiscale models, including the need for interdisciplinary collaborations, the importance of model validation and comparison, and the potential impact on basic and translational neuroscience research. Finally, I highlight future directions and emerging technologies that will enable more comprehensive and predictive data-driven models of brain function and dysfunction.
Affiliation(s)
- Srikanth Ramaswamy
- Neural Circuits Laboratory, Biosciences Institute, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom
6. Stiefel KM, Coggan JS. The energy challenges of artificial superintelligence. Front Artif Intell 2023; 6:1240653. PMID: 37941679; PMCID: PMC10629395; DOI: 10.3389/frai.2023.1240653. Received 06/15/2023; accepted 10/05/2023. Open access.
Abstract
We argue here that contemporary semiconductor computing technology poses a significant, if not insurmountable, barrier to the emergence of any artificial general intelligence system, let alone one anticipated by many to be "superintelligent". This limit on artificial superintelligence (ASI) emerges from the energy requirements of a system that would be more intelligent than, but orders of magnitude less energy-efficient than, human brains. An ASI would have to supersede not only a single brain but a large population, given the effects of collective behavior on the advancement of societies, further multiplying the energy requirement. A hypothetical ASI would likely consume orders of magnitude more energy than is available in highly industrialized nations. We estimate the energy use of ASI with an equation we term the "Erasi equation" (Energy Requirement for Artificial SuperIntelligence). Additional efficiency penalties will emerge from the current unfocused and scattered developmental trajectory of AI research. Taken together, these arguments suggest that the emergence of an ASI on current computer architectures is highly unlikely in the foreseeable future, primarily due to energy constraints, with biomimicry or other new technologies being possible solutions.
Collapse
Affiliation(s)
| | - Jay S. Coggan
- NeuroLinx Research Institute, La Jolla, CA, United States
| |
7. Wilbers R, Metodieva VD, Duverdin S, Heyer DB, Galakhova AA, Mertens EJ, Versluis TD, Baayen JC, Idema S, Noske DP, Verburg N, Willemse RB, de Witt Hamer PC, Kole MH, de Kock CP, Mansvelder HD, Goriounova NA. Human voltage-gated Na+ and K+ channel properties underlie sustained fast AP signaling. Sci Adv 2023; 9:eade3300. PMID: 37824607; PMCID: PMC10569700; DOI: 10.1126/sciadv.ade3300. Received 08/09/2022; accepted 01/09/2023.
Abstract
Human cortical pyramidal neurons are large, have extensive dendritic trees, and yet have unexpectedly fast input-output properties: Rapid subthreshold synaptic membrane potential changes are reliably encoded in timing of action potentials (APs). Here, we tested whether biophysical properties of voltage-gated sodium (Na+) and potassium (K+) currents in human pyramidal neurons can explain their fast input-output properties. Human Na+ and K+ currents exhibited more depolarized voltage dependence, slower inactivation, and faster recovery from inactivation compared with their mouse counterparts. Computational modeling showed that despite lower Na+ channel densities in human neurons, the biophysical properties of Na+ channels resulted in higher channel availability and contributed to fast AP kinetics stability. Last, human Na+ channel properties also resulted in a larger dynamic range for encoding of subthreshold membrane potential changes. Thus, biophysical adaptations of voltage-gated Na+ and K+ channels enable fast input-output properties of large human pyramidal neurons.
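The channel-availability argument can be sketched with a toy Boltzmann steady-state inactivation curve. The functional form is standard; the v_half and k values below are illustrative placeholders, not values measured in this paper. The point is simply that a depolarized shift of the inactivation curve leaves a larger fraction of channels available at a fixed resting potential.

```python
import math

# Toy steady-state inactivation curve (Boltzmann form). The v_half and k
# values are illustrative placeholders, not fitted values from the paper.

def availability(v_m, v_half, k=6.0):
    """Fraction of non-inactivated channels at membrane potential v_m (mV)."""
    return 1.0 / (1.0 + math.exp((v_m - v_half) / k))

v_rest = -65.0                                 # example resting potential (mV)
avail_a = availability(v_rest, v_half=-70.0)   # more hyperpolarized curve
avail_b = availability(v_rest, v_half=-60.0)   # depolarized-shifted curve
# avail_b > avail_a: the depolarized shift increases channel availability
```

In this toy picture, the depolarized-shifted curve corresponds to the reported human channel properties: more channels remain recruitable at rest, supporting sustained fast AP signaling.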
Affiliation(s)
- René Wilbers
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Verjinia D. Metodieva
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Sarah Duverdin
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Djai B. Heyer
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Anna A. Galakhova
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Eline J. Mertens
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Tamara D. Versluis
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Johannes C. Baayen
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- Sander Idema
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- David P. Noske
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- Niels Verburg
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- Ronald B. Willemse
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- Philip C. de Witt Hamer
- Department of Neurosurgery, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam Neuroscience, VUmc Cancer Center, Amsterdam Brain Tumor Center, Amsterdam 1081 HV, Netherlands
- Maarten H. P. Kole
- Department of Axonal Signaling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, Amsterdam 1105 BA, Netherlands
- Cell Biology, Neurobiology and Biophysics, Department of Biology, Faculty of Science, Utrecht University, Utrecht 3584 CH, Netherlands
- Christiaan P. J. de Kock
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Huibert D. Mansvelder
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
- Natalia A. Goriounova
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, Amsterdam Neuroscience, Amsterdam 1081 HV, Netherlands
8. Borst JP, Aubin S, Stewart TC. A whole-task brain model of associative recognition that accounts for human behavior and neuroimaging data. PLoS Comput Biol 2023; 19:e1011427. PMID: 37682986; PMCID: PMC10511112; DOI: 10.1371/journal.pcbi.1011427. Received 02/23/2023; revised 09/20/2023; accepted 08/10/2023. Open access.
Abstract
Brain models typically focus either on low-level biological detail or on qualitative behavioral effects. In contrast, we present a biologically-plausible spiking-neuron model of associative learning and recognition that accounts for both human behavior and low-level brain activity across the whole task. Based on cognitive theories and insights from machine-learning analyses of M/EEG data, the model proceeds through five processing stages: stimulus encoding, familiarity judgement, associative retrieval, decision making, and motor response. The results matched human response times and source-localized MEG data in occipital, temporal, prefrontal, and precentral brain regions; as well as a classic fMRI effect in prefrontal cortex. This required two main conceptual advances: a basal-ganglia-thalamus action-selection system that relies on brief thalamic pulses to change the functional connectivity of the cortex, and a new unsupervised learning rule that causes very strong pattern separation in the hippocampus. The resulting model shows how low-level brain activity can result in goal-directed cognitive behavior in humans.
Affiliation(s)
- Jelmer P. Borst
- Bernoulli Institute, University of Groningen, Groningen, The Netherlands
- Sean Aubin
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Terrence C. Stewart
- National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, Ontario, Canada
9. Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. PMID: 36844916; PMCID: PMC9950635; DOI: 10.3389/fninf.2023.941696. Received 05/23/2022; accepted 01/16/2023. Open access.
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. Systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, in comparison to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used to simulate networks with up to 3.5 × 10⁶ neurons (>3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be achieved efficiently using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
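The scaling behavior described in the abstract can be condensed into a toy cost model (the constants below are placeholders, not the measured costs): wall-clock time is a fixed setup cost plus a term proportional to the number of synapses and the simulated biological time.

```python
# Toy cost model for the linear scaling described above. The constants
# (fixed_s, per_syn_per_s) are illustrative placeholders, not measured values.

def wall_clock_s(bio_time_s, n_synapses, fixed_s=10.0, per_syn_per_s=1e-9):
    """Predicted wall-clock seconds for one simulation run."""
    return fixed_s + per_syn_per_s * n_synapses * bio_time_s

def realtime_factor(bio_time_s, n_synapses, **kwargs):
    """Wall-clock seconds per simulated biological second (1.0 = real time)."""
    return wall_clock_s(bio_time_s, n_synapses, **kwargs) / bio_time_s
```

In this picture, a GeNN-like backend corresponds to a roughly size-independent fixed_s, whereas a NEST-like backend's fixed cost itself grows with model size; doubling either the biological time or the synapse count doubles the variable term.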
Affiliation(s)
- Martin Paul Nawrot
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
10. Naudin L, Raison-Aubry L, Buhry L. A general pattern of non-spiking neuron dynamics under the effect of potassium and calcium channel modifications. J Comput Neurosci 2023; 51:173-186. PMID: 36371576; DOI: 10.1007/s10827-022-00840-w. Received 07/04/2022; revised 10/08/2022; accepted 11/02/2022.
Abstract
Electrical activity of excitable cells results from ion exchanges through cell membranes, so genetic or epigenetic changes in genes encoding ion channels are likely to affect neuronal electrical signaling throughout the brain. There is a large literature on the effect of variations in ion channels on the dynamics of spiking neurons, the main type of neuron found in vertebrate nervous systems. Nevertheless, non-spiking neurons are also ubiquitous in many nervous tissues and play a critical role in the processing of some sensory systems. To our knowledge, however, how conductance variations affect the dynamics of non-spiking neurons has never been assessed. Based on experimental observations reported in the biological literature and on mathematical considerations, we first propose a phenotypic classification of non-spiking neurons. We then determine a general pattern of phenotypic evolution in non-spiking neurons as a function of changes in calcium and potassium conductances. Furthermore, we study the homeostatic compensatory mechanisms of ion channels in a well-posed non-spiking retinal cone model. We show that there is a restricted range of ion conductance values for which the behavior and phenotype of the neuron are maintained. Finally, we discuss the implications of phenotypic changes in individual cells for the network-level functioning of the C. elegans worm and the retina, two non-spiking nervous tissues composed of neurons with various phenotypes.
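How potassium and calcium conductances set a non-spiking neuron's steady-state potential can be sketched with a generic graded membrane model (our toy sketch with illustrative parameters, not the authors' cone model): with instantaneous gating, raising g_K hyperpolarizes the steady state while raising g_Ca depolarizes it, shifting the cell's graded-response phenotype.

```python
import math

# Generic graded (non-spiking) membrane sketch: leak, potassium, and calcium
# conductances with instantaneous Boltzmann gating, integrated with forward
# Euler. All parameter values are illustrative, not the authors' cone model.

E_L, E_K, E_Ca = -60.0, -80.0, 40.0   # reversal potentials (mV)
C_m = 1.0                             # membrane capacitance (arbitrary units)

def gate(v, v_half, k):
    """Instantaneous steady-state activation (Boltzmann)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def simulate(g_l=0.3, g_k=2.0, g_ca=1.0, i_ext=0.0, dt=0.01, t_end=200.0):
    """Return the steady-state membrane potential (mV) after t_end ms."""
    v = E_L
    for _ in range(int(t_end / dt)):
        i_ion = (g_l * (v - E_L)
                 + g_k * gate(v, -30.0, 5.0) * (v - E_K)
                 + g_ca * gate(v, -40.0, 5.0) * (v - E_Ca))
        v += dt * (i_ext - i_ion) / C_m
    return v
```

Sweeping g_k and g_ca in such a model traces out how a graded neuron's resting behavior changes with channel expression, which is the kind of conductance-dependent phenotype map the paper constructs for its cone model.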
Affiliation(s)
- Loïs Naudin
- Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
- Laetitia Raison-Aubry
- Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
- Laure Buhry
- Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
11. Chlasta K, Sochaczewski P, Wójcik GM, Krejtz I. Neural simulation pipeline: Enabling container-based simulations on-premise and in public clouds. Front Neuroinform 2023; 17:1122470. PMID: 37025550; PMCID: PMC10070792; DOI: 10.3389/fninf.2023.1122470. Received 12/13/2022; accepted 02/06/2023. Open access.
Abstract
In this study, we explore the simulation setup in computational neuroscience. We use GENESIS, a general-purpose simulation engine for subcellular components and biochemical reactions, realistic neuron models, large neural networks, and system-level models. GENESIS supports developing and running computer simulations but leaves a gap in setting up today's larger and more complex models. The field of realistic brain-network models has outgrown the simplicity of the earliest models. The challenges include managing the complexity of software dependencies and various models, setting up model parameter values, storing the input parameters alongside the results, and providing execution statistics. Moreover, in the high-performance computing (HPC) context, public cloud resources are becoming an alternative to expensive on-premises clusters. We present the Neural Simulation Pipeline (NSP), which facilitates large-scale computer simulations and their deployment to multiple computing infrastructures using an infrastructure-as-code (IaC) containerization approach. We demonstrate the effectiveness of NSP in a pattern recognition task programmed with GENESIS, through a custom-built visual system called RetNet(8 × 5,1) that uses biologically plausible Hodgkin-Huxley spiking neurons. We evaluate the pipeline by performing 54 simulations executed on premises, at the Hasso Plattner Institute's (HPI) Future Service-Oriented Computing (SOC) Lab, and through Amazon Web Services (AWS), the largest public cloud service provider in the world. We report on non-containerized and containerized execution with Docker, and present the cost per simulation in AWS. The results show that our neural simulation pipeline can reduce entry barriers to neural simulations, making them more practical and cost-effective.
Affiliation(s)
- Karol Chlasta
- Department of Computer Science, Polish-Japanese Academy of Information Technology, Warsaw, Poland
- Department of Management in Networked and Digital Societies, Kozminski University, Warsaw, Poland
- Correspondence: Karol Chlasta
- Paweł Sochaczewski
- Department of Management in Networked and Digital Societies, Kozminski University, Warsaw, Poland
- Grzegorz M. Wójcik
- Department of Neuroinformatics and Biomedical Engineering, Institute of Computer Science, Maria Curie-Sklodowska University in Lublin, Lublin, Poland
- Izabela Krejtz
- Eye Tracking Research Center, SWPS University, Warsaw, Poland
12. Dvoretskii S, Gong Z, Gupta A, Parent J, Alicea B. Braitenberg Vehicles as Developmental Neurosimulation. Artif Life 2022; 28:369-395. PMID: 35881679; DOI: 10.1162/artl_a_00384.
Abstract
Connecting brain and behavior is a longstanding issue in the areas of behavioral science, artificial intelligence, and neurobiology. As is standard among models of artificial and biological neural networks, an analogue of the fully mature brain is presented as a blank slate. However, this does not consider the realities of biological development and developmental learning. Our purpose is to model the development of an artificial organism that exhibits complex behaviors. We introduce three alternate approaches to demonstrate how developmental embodied agents can be implemented. The resulting developmental Braitenberg vehicles (dBVs) will generate behaviors ranging from stimulus responses to group behavior that resembles collective motion. We will situate this work in the domain of artificial brain networks along with broader themes such as embodied cognition, feedback, and emergence. Our perspective is exemplified by three software instantiations that demonstrate how a BV-genetic algorithm hybrid model, a multisensory Hebbian learning model, and multi-agent approaches can be used to approach BV development. We introduce use cases such as optimized spatial cognition (vehicle-genetic algorithm hybrid model), hinges connecting behavioral and neural models (multisensory Hebbian learning model), and cumulative classification (multi-agent approaches). In conclusion, we consider future applications of the developmental neurosimulation approach.
Affiliation(s)
- Bradly Alicea
- Orthogonal Research and Education Laboratory
- OpenWorm Foundation.
13
Mattera A, Cavallo A, Granato G, Baldassarre G, Pagani M. A Biologically Inspired Neural Network Model to Gain Insight Into the Mechanisms of Post-Traumatic Stress Disorder and Eye Movement Desensitization and Reprocessing Therapy. Front Psychol 2022; 13:944838. [PMID: 35911047 PMCID: PMC9326218 DOI: 10.3389/fpsyg.2022.944838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 06/06/2022] [Indexed: 01/09/2023] Open
Abstract
Eye movement desensitization and reprocessing (EMDR) therapy is a well-established therapeutic method to treat post-traumatic stress disorder (PTSD). However, although how EMDR exerts its therapeutic action has been studied extensively, it remains incompletely understood. This is due in part to limited knowledge of the neurobiological mechanisms underlying EMDR, and in part to our incomplete understanding of PTSD. In order to model PTSD, we used a biologically inspired computational model based on firing rate units, encompassing the cortex, hippocampus, and amygdala. Through the modulation of its parameters, we fitted real data from patients treated with EMDR or classical exposure therapy. This allowed us to gain insights into PTSD mechanisms and to investigate how EMDR achieves trauma remission.
14
Eriksson O, Bhalla US, Blackwell KT, Crook SM, Keller D, Kramer A, Linne ML, Saudargienė A, Wade RC, Hellgren Kotaleski J. Combining hypothesis- and data-driven neuroscience modeling in FAIR workflows. eLife 2022; 11:e69013. [PMID: 35792600 PMCID: PMC9259018 DOI: 10.7554/elife.69013] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Accepted: 05/13/2022] [Indexed: 12/22/2022] Open
Abstract
Modeling in neuroscience occurs at the intersection of different points of view and approaches. Typically, hypothesis-driven modeling brings a question into focus so that a model is constructed to investigate a specific hypothesis about how the system works or why certain phenomena are observed. Data-driven modeling, on the other hand, follows a more unbiased approach, with model construction informed by the computationally intensive use of data. At the same time, researchers employ models at different biological scales and at different levels of abstraction. Combining these models while validating them against experimental data increases understanding of the multiscale brain. However, a lack of interoperability, transparency, and reusability of both models and the workflows used to construct them creates barriers for the integration of models representing different biological scales and built using different modeling philosophies. We argue that the same imperatives that drive resources and policy for data - such as the FAIR (Findable, Accessible, Interoperable, Reusable) principles - also support the integration of different modeling approaches. The FAIR principles require that data be shared in formats that are Findable, Accessible, Interoperable, and Reusable. Applying these principles to models and modeling workflows, as well as the data used to constrain and validate them, would allow researchers to find, reuse, question, validate, and extend published models, regardless of whether they are implemented phenomenologically or mechanistically, as a few equations or as a multiscale, hierarchical system. To illustrate these ideas, we use a classical synaptic plasticity model, the Bienenstock-Cooper-Munro rule, as an example due to its long history, different levels of abstraction, and implementation at many scales.
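The Bienenstock-Cooper-Munro rule used as the running example in this entry can be stated in a few lines: the weight change depends on postsynaptic activity relative to a sliding threshold that itself tracks recent activity. A minimal rate-based sketch (our own variable names and learning rates, illustrating just one of the levels of abstraction the authors discuss):

```python
import numpy as np

def bcm_step(w, x, theta, lr=0.01, tau_theta=50.0):
    # One update of the Bienenstock-Cooper-Munro (BCM) plasticity rule:
    # synaptic change depends on postsynaptic activity y relative to a
    # sliding threshold theta, which tracks the running average of y**2.
    y = w @ x                                      # postsynaptic activity
    w = w + lr * y * (y - theta) * x               # LTP above theta, LTD below
    theta = theta + (y ** 2 - theta) / tau_theta   # sliding threshold update
    return w, theta

rng = np.random.default_rng(0)
w, theta = rng.uniform(0.1, 0.5, size=4), 1.0
for _ in range(500):
    x = rng.uniform(0.0, 1.0, size=4)              # random presynaptic input
    w, theta = bcm_step(w, x, theta)
```

The sliding threshold is what keeps the rule stable: sustained high activity raises theta and converts further potentiation into depression.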
Affiliation(s)
- Olivia Eriksson
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Upinder Singh Bhalla
- National Center for Biological Sciences, Tata Institute of Fundamental Research, Bangalore, India
- Kim T Blackwell
- Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, United States
- Sharon M Crook
- School of Mathematical and Statistical Sciences, Arizona State University, Tempe, United States
- Daniel Keller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Andrei Kramer
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Marja-Leena Linne
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Ausra Saudargienė
- Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
- Rebecca C Wade
- Molecular and Cellular Modeling Group, Heidelberg Institute for Theoretical Studies (HITS), Heidelberg, Germany
- Center for Molecular Biology (ZMBH), ZMBH-DKFZ Alliance, University of Heidelberg, Heidelberg, Germany
- Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Jeanette Hellgren Kotaleski
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
15
Ladd A, Kim KG, Balewski J, Bouchard K, Ben-Shalom R. Scaling and Benchmarking an Evolutionary Algorithm for Constructing Biophysical Neuronal Models. Front Neuroinform 2022; 16:882552. [PMID: 35784184 PMCID: PMC9248031 DOI: 10.3389/fninf.2022.882552] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 05/18/2022] [Indexed: 11/28/2022] Open
Abstract
Single-neuron models are fundamental for computational modeling of the brain's neuronal networks and for understanding how ion channel dynamics mediate neural function. A challenge in defining such models is determining biophysically realistic channel distributions. Here, we present an efficient, highly parallel evolutionary algorithm for developing such models, named NeuroGPU-EA. NeuroGPU-EA uses CPUs and GPUs concurrently to simulate and evaluate neuron membrane potentials with respect to multiple stimuli. We demonstrate a logarithmic cost for scaling the stimuli used in the fitting procedure. NeuroGPU-EA outperforms the typically used CPU-based evolutionary algorithm by a factor of 10 on a series of scaling benchmarks. We report observed performance bottlenecks and propose mitigation strategies. Finally, we discuss the potential of this method for efficient simulation and evaluation of electrophysiological waveforms.
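The fitting loop this abstract describes is, at its core, an evolutionary search over channel parameters. A minimal single-threaded sketch with a toy fitness function (our own names and setup, not the NeuroGPU-EA code, which evaluates simulated membrane potentials against stimuli on GPUs):

```python
import random

def evaluate(params, target):
    # Toy fitness: squared distance to a "target" parameter set, standing in
    # for comparing simulated membrane potentials to recorded waveforms.
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(target, pop_size=64, n_params=4, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 2.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda ind: evaluate(ind, target))
        elite = scored[: pop_size // 4]           # truncation selection
        pop = list(elite)
        while len(pop) < pop_size:                # crossover + mutation
            a, b = rng.sample(elite, 2)
            pop.append([(x + y) / 2 + rng.gauss(0, 0.05)
                        for x, y in zip(a, b)])
    return min(pop, key=lambda ind: evaluate(ind, target))

best = evolve(target=[1.0, 0.5, 1.5, 0.2])
```

In the real setting each `evaluate` call is a full neuron simulation, which is why parallel CPU/GPU evaluation dominates the algorithm's cost profile.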
Affiliation(s)
- Alexander Ladd
- Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
- *Correspondence: Alexander Ladd
- Kyung Geun Kim
- Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, United States
- Jan Balewski
- NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
- Kristofer Bouchard
- Helen Wills Neuroscience Institute & Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA, United States
- Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
- Roy Ben-Shalom
- Neurology Department, MIND Institute, University of California, Davis, Sacramento, CA, United States
16
17
Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022; 20:1013-1039. [PMID: 35486347 PMCID: PMC9588483 DOI: 10.1007/s12021-022-09576-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/19/2022] [Indexed: 12/31/2022]
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond the brute-force modulation of the neural activity of a small group of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the somatosensory cortex's granular and supragranular layers after reconstructing the barrel cortex in soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and whisker deprivation-induced changes in synaptic strength observed in vivo. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a post-synaptic action potential; this spatial filtering might be critical for synaptic integration of top-down and bottom-up information.
Affiliation(s)
- Chao Huang
- Department of Biology, University of Leipzig, Leipzig, Germany
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tansu Celikel
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
18
Hazan A, Ezra Tsur E. Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation. Front Neurosci 2021; 15:627221. [PMID: 33692670 PMCID: PMC7937893 DOI: 10.3389/fnins.2021.627221] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Accepted: 01/25/2021] [Indexed: 11/13/2022] Open
Abstract
Brain-inspired hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for artificial intelligence. The Neural Engineering Framework (NEF) brings forth a theoretical framework for representing high-dimensional mathematical constructs with spiking neurons to implement functional large-scale neural networks. Here, we present OZ, a programmable analog implementation of NEF-inspired spiking neurons. OZ neurons can be dynamically programmed to feature varying high-dimensional response curves with positive and negative encoders for a neuromorphic distributed representation of normalized input data. Our hardware design demonstrates full correspondence with NEF across firing rates, encoding vectors, and intercepts. OZ neurons can be independently configured in real-time to allow efficient spanning of a representation space, thus using fewer neurons and therefore less power for neuromorphic data representation.
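The NEF response curves that OZ realizes in analog hardware follow a simple encoding equation: each neuron's input current is a gain times the dot product of the stimulus with that neuron's preferred-direction ("encoder") vector, plus a bias. A software sketch under assumed names, with a rectified-linear nonlinearity standing in for the spiking neuron:

```python
import numpy as np

def nef_rates(x, encoders, gains, biases):
    # NEF-style encoding: project the stimulus x onto each neuron's
    # encoder vector, scale by a gain, shift by a bias, and pass the
    # result through a static nonlinearity (rectified-linear here).
    j = gains * (encoders @ x) + biases   # input current per neuron
    return np.maximum(j, 0.0)             # firing rates (arbitrary units)

rng = np.random.default_rng(0)
n_neurons, dims = 8, 2
encoders = rng.normal(size=(n_neurons, dims))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)  # unit length
rates = nef_rates(np.array([0.5, -0.3]), encoders,
                  gains=np.ones(n_neurons), biases=np.zeros(n_neurons))
```

Positive and negative encoders, as in the abstract, simply point in opposite directions in stimulus space, so paired neurons respond to opposite ends of the represented range.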
Affiliation(s)
- Avi Hazan
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
19
Nadji-Tehrani M, Eslami A. A Brain-Inspired Framework for Evolutionary Artificial General Intelligence. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:5257-5271. [PMID: 32175876 DOI: 10.1109/tnnls.2020.2965567] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
From the medical field to agriculture, from energy to transportation, every industry is going through a revolution by embracing artificial intelligence (AI); nevertheless, AI is still in its infancy. Inspired by the evolution of the human brain, this article demonstrates a novel method and framework to synthesize an artificial brain with cognitive abilities by taking advantage of the same process responsible for the growth of the biological brain called "neuroembryogenesis." This framework shares some of the key behavioral aspects of the biological brain, such as spiking neurons, neuroplasticity, neuronal pruning, and excitatory and inhibitory interactions between neurons, together making it capable of learning and memorizing. One of the highlights of the proposed design is its potential to incrementally improve itself over generations based on system performance, using genetic algorithms. A proof of concept at the end of this article demonstrates how a simplified implementation of the human visual cortex using the proposed framework is capable of character recognition. Our framework is open source, and the code is shared with the scientific community at http://www.feagi.org.
20
Azevedo Carvalho N, Contassot-Vivier S, Buhry L, Martinez D. Simulation of Large Scale Neural Models With Event-Driven Connectivity Generation. Front Neuroinform 2020; 14:522000. [PMID: 33154719 PMCID: PMC7591773 DOI: 10.3389/fninf.2020.522000] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Accepted: 08/31/2020] [Indexed: 11/15/2022] Open
Abstract
Accurate simulation of brain structures is a major challenge in neuroscience. Many works are dedicated to designing better models or to developing more efficient simulation schemes. In this paper, we propose a hybrid simulation scheme that combines time-stepping second-order integration of Hodgkin-Huxley (HH) type neurons with event-driven updating of the synaptic currents. As the HH model is a continuous model, there are no explicit spike events. Thus, in order to preserve the accuracy of the integration method, a spike detection algorithm is developed that accurately determines spike times. This approach allows us to regenerate the outgoing connections at each event, thereby avoiding the storage of the connectivity. Consequently, memory consumption is significantly reduced while preserving execution time and accuracy of the simulations, especially the spike times of detailed point neuron models. The efficiency of the method, implemented in the SiReNe software, is demonstrated by the simulation of a striatum model which consists of more than 10⁶ neurons and 10⁸ synapses (each neuron has a fan-out of 504 post-synaptic neurons), under normal and Parkinson's conditions.
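Because HH-type models emit no explicit spike events, spike times must be recovered from the continuous voltage trace. A minimal sketch of threshold-crossing detection with linear interpolation between bracketing samples (our own illustration; SiReNe's actual algorithm is tied to its second-order integrator and may differ):

```python
import math

def spike_times(v_trace, dt, threshold=0.0):
    # Find upward threshold crossings in a time-stepped membrane-potential
    # trace, refining each spike time by linear interpolation between the
    # two samples that bracket the crossing.
    times = []
    for i in range(1, len(v_trace)):
        v0, v1 = v_trace[i - 1], v_trace[i]
        if v0 < threshold <= v1:
            frac = (threshold - v0) / (v1 - v0)  # fractional step position
            times.append((i - 1 + frac) * dt)
    return times

# Synthetic trace: a sine wave with period 20 time units, sampled at dt=0.1,
# crossing 0 upward once within the 40-unit window (near t = 20).
dt = 0.1
trace = [math.sin(2 * math.pi * 0.05 * k * dt) for k in range(400)]
st = spike_times(trace, dt)
```

Interpolating the crossing instant, rather than snapping it to the time grid, is what keeps spike times accurate without shrinking the integration step.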
Affiliation(s)
- Laure Buhry
- Université de Lorraine, CNRS, Inria, LORIA, Nancy, France
21
Riley SN, Davies J. A spiking neural network model of spatial and visual mental imagery. Cogn Neurodyn 2020; 14:239-251. [PMID: 32226565 PMCID: PMC7090122 DOI: 10.1007/s11571-019-09566-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 09/30/2019] [Accepted: 11/26/2019] [Indexed: 12/18/2022] Open
Abstract
Mental imagery has long been of interest to the cognitive and neurosciences, but how it manifests itself in the mind and brain still remains unresolved. In pursuit of this, we built a spiking neural model that can perform mental rotation and mental map scanning using strategies informed by the psychology and neuroscience literature. Results: When performing mental map scanning, reaction times (RTs) for our model closely match behavioural studies (approx. 50 ms/cm), and replicate the cognitive penetrability of the task. When performing mental rotation, our model's RTs once again closely match behavioural studies (model: 55-65°/s; studies: 60°/s), and it performed the task using the same task strategy (whole-unit rotation of simple and familiar objects through intermediary points). Overall, our model suggests: (1) vector-based approaches to neuro-cognitive modelling are well equipped to reproduce behavioural findings, and (2) the cognitive (in)penetrability of imagery tasks may depend on whether or not the task makes use of (non)symbolic processing.
Affiliation(s)
- Sean N. Riley
- Institute of Cognitive Science, Carleton University, 2201 Dunton Tower 1125 Colonel BY Drive, Ottawa, ON K1S 5B6 Canada
- Jim Davies
- Institute of Cognitive Science, Carleton University, 2201 Dunton Tower 1125 Colonel BY Drive, Ottawa, ON K1S 5B6 Canada
22
Liu Y, Qian K, Hu S, An K, Xu S, Zhan X, Wang JJ, Guo R, Wu Y, Chen TP, Yu Q, Liu Y. Application of Deep Compression Technique in Spiking Neural Network Chip. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2020; 14:274-282. [PMID: 31715570 DOI: 10.1109/tbcas.2019.2952714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By using a deep compression technique in the spiking neural network chip, the number of physical synapses can be reduced to 1/16 of that needed in the original network, while the accuracy is maintained. This compression technique can greatly reduce the number of SRAMs inside the chip as well as the power consumption of the chip. This design achieves throughput per unit area of 1.1 GSOP/([Formula: see text]) at 1.2 V, and energy consumed per SOP of 35 pJ. A 2-layer fully-connected spiking neural network is mapped to the chip, and thus the chip is able to realize handwritten digit recognition on MNIST with an accuracy of 91.2%.
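The 16× reduction in physical synapses is in the spirit of deep-compression pruning: keep only the largest-magnitude weights and store the rest implicitly as zeros. A generic magnitude-pruning sketch (illustrative only; the chip's actual compression pipeline is not detailed in the abstract, and the matrix shape here is our own choice):

```python
import numpy as np

def prune_to_ratio(weights, keep_ratio=1 / 16):
    # Magnitude-based pruning: keep only the largest |w| entries so the
    # stored synapse count drops to keep_ratio of the original.
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))
    cutoff = np.partition(flat, -k)[-k]   # k-th largest magnitude
    mask = np.abs(weights) >= cutoff      # True where a synapse is kept
    return weights * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(size=(192, 32))            # 6144 candidate synapses
pruned, mask = prune_to_ratio(w)
```

On hardware, the zeroed entries need no SRAM at all, which is where the memory and power savings come from.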
23
24
Wei H, Bu Y, Zhu Z. Robotic arm controlling based on a spiking neural circuit and synaptic plasticity. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101640] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
25
26
Chen S, He Z, Han X, He X, Li R, Zhu H, Zhao D, Dai C, Zhang Y, Lu Z, Chi X, Niu B. How Big Data and High-performance Computing Drive Brain Science. GENOMICS PROTEOMICS & BIOINFORMATICS 2019; 17:381-392. [PMID: 31805369 PMCID: PMC6943776 DOI: 10.1016/j.gpb.2019.09.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Revised: 09/12/2019] [Accepted: 09/29/2019] [Indexed: 12/17/2022]
Abstract
Brain science accelerates the study of intelligence and behavior, contributes fundamental insights into human cognition, and offers prospective treatments for brain disease. Faced with the challenges posed by imaging technologies and deep learning computational models, big data and high-performance computing (HPC) play essential roles in studying brain function, brain diseases, and large-scale brain models or connectomes. We review the driving forces behind big data and HPC methods applied to brain science, including deep learning, powerful data analysis capabilities, and computational performance solutions, each of which can be used to improve diagnostic accuracy and research output. This work reinforces predictions that big data and HPC will continue to improve brain science by making ultrahigh-performance analysis possible, by improving data standardization and sharing, and by providing new neuromorphic insights.
Affiliation(s)
- Shanyu Chen
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Zhipeng He
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Xinyin Han
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Xiaoyu He
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Ruilin Li
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Haidong Zhu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Dan Zhao
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Chuangchuang Dai
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Yu Zhang
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China
- Zhonghua Lu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
- Xuebin Chi
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China; Center of Scientific Computing Applications & Research, Chinese Academy of Sciences, Beijing 100190, China
- Beifang Niu
- Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100190, China; Guizhou University School of Medicine, Guiyang 550025, China.
27
Valadez-Godínez S, Sossa H, Santiago-Montero R. On the accuracy and computational cost of spiking neuron implementation. Neural Netw 2019; 122:196-217. [PMID: 31689679 DOI: 10.1016/j.neunet.2019.09.026] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2019] [Revised: 09/12/2019] [Accepted: 09/17/2019] [Indexed: 10/25/2022]
Abstract
Since more than a decade ago, three statements about spiking neuron (SN) implementations have been widely accepted: 1) Hodgkin and Huxley (HH) model is computationally prohibitive, 2) Izhikevich (IZH) artificial neuron is as efficient as Leaky Integrate-and-Fire (LIF) model, and 3) IZH model is more efficient than HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α's and β's rate functions directly (HH model) and by storing them into tables (HHT model) for computational cost reduction. Recently, it has been stated that: 1) HHT model (HH using tables) is not prohibitive, 2) IZH model is not efficient, and 3) both HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). That controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on the multiobjective optimization theory, describing the SN simulation capacities and ultimately choosing optimal simulation parameters. We have used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency. Normalized metrics allowed comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli. We found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and, in general, we found that 1) HH model (without using tables) is the most accurate, computationally inexpensive, and efficient, 2) IZH model is the most expensive and inefficient, 3) both LIF and HHT models are the most inaccurate, 4) HHT model is more expensive and inaccurate than HH model due to α's and β's table discretization, and 5) HHT model is not comparable in computational cost to IZH model. 
These results refute the theory formulated over a decade ago (Izhikevich, 2004) and go deeper into the statements formulated by Skocik and Long (2014). Our statements imply that the number of dimensions or FLOPS of an SN is a theoretical, but not a practical, indicator of the true computational cost. The metric we propose for the computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric to evaluate the simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity. Discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to the constant input current, the random input current increases the computational cost and inaccuracy. Finally, we found that the search for optimal simulation parameters is problem-specific. That is important because most of the previous works have sought a general and unique optimal simulation. Here, we show that such a solution cannot exist because this is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emergent Deep Spiking Neural Networks.
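The per-spike error attributed to discontinuous models can be seen directly in a forward-Euler Izhikevich simulation, where the state is reset discontinuously at every spike. A minimal sketch with the standard regular-spiking parameters (our own illustration, not the paper's benchmark code):

```python
def izhikevich(I=10.0, dt=0.1, t_max=200.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    # Forward-Euler integration of the Izhikevich model in regular-spiking
    # mode. The discontinuous reset after each spike (v -> c, u -> u + d)
    # is the source of the per-spike error discussed in the abstract.
    v, u = -65.0, b * -65.0
    spikes = []
    for k in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: record time, apply reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes

spike_times_ms = izhikevich()
```

Because the reset always lands on a grid point regardless of where the true spike occurred within the step, every spike injects an error on the order of the step size, consistent with the discontinuity argument above.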
Affiliation(s)
- Sergio Valadez-Godínez
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; División de Ingeniería Informática, Instituto Tecnológico Superior de Purísima del Rincón, Gto., México, 36413, Mexico; División de Ingenierías de Educación Superior, Universidad Virtual del Estado de Guanajuato, Gto., México, 36400, Mexico.
- Humberto Sossa
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; Tecnológico de Monterrey, Campus Guadalajara, Av. Gral. Ramón Corona 2514, Zapopan, Jal., México, 45138, Mexico.
- Raúl Santiago-Montero
- División de Estudios de Posgrado e Investigación, Instituto Tecnológico de León, Av. Tecnológico S/N, León, Gto., México, 37290, Mexico.
28
Abstract
The study of psychological mechanisms is an interdisciplinary endeavour, requiring insights from many different domains (from electrophysiology, to psychology, to theoretical neuroscience, to computer science). In this article, I argue that philosophy plays an essential role in this interdisciplinary project, and that effective scientific study of psychological mechanisms requires that working scientists be responsible metaphysicians. This means adopting deliberate metaphysical positions when studying mechanisms that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries. Such metaphysical commitments are necessary in order to set up experimental protocols, determine which variables to manipulate under experimental conditions, and which conclusions to draw from different scientific models and theories. It is important for scientists to be aware of the metaphysical commitments they adopt, since they can easily be led astray if invoked carelessly.
29
Yang S, Wang J, Deng B, Liu C, Li H, Fietkiewicz C, Loparo KA. Real-Time Neuromorphic System for Large-Scale Conductance-Based Spiking Neural Networks. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:2490-2503. [PMID: 29993922 DOI: 10.1109/tcyb.2018.2823730] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The investigation of human intelligence, cognitive systems, and the functional complexity of the human brain is significantly facilitated by high-performance computational platforms. In this paper, we present a real-time digital neuromorphic system for the simulation of large-scale conductance-based spiking neural networks (LaCSNN), which has the advantages of both high biological realism and large network scale. Using this system, a detailed large-scale cortico-basal ganglia-thalamocortical loop is simulated using a scalable 3-D network-on-chip (NoC) topology, with six Altera Stratix III field-programmable gate arrays simulating 1 million neurons. A novel router architecture is presented to deal with the communication of multiple data flows in the multinuclei neural network, which has not been solved in previous NoC studies. At the single-neuron level, cost-efficient conductance-based neuron models are proposed, using on average 95% less memory and no DSP resources thanks to a multiplier-less realization, which is the foundation of the large-scale implementation. An analysis of the modified models is conducted, including investigation of bifurcation behaviors and ionic dynamics, demonstrating the required range of dynamics at a reduced resource cost. The proposed LaCSNN system is shown to outperform alternative state-of-the-art approaches previously used to implement large-scale spiking neural networks, and enables a broad range of potential applications due to its real-time computational power.
30
Chemchem A, Alin F, Krajecki M. Improving the Cognitive Agent Intelligence by Deep Knowledge Classification. International Journal of Computational Intelligence and Applications 2019. [DOI: 10.1142/s1469026819500056]
Abstract
In this paper, a new idea is developed for improving agent intelligence: with the presented convolutional neural network (CNN) approach to knowledge classification, the agent is able to manage its own knowledge. This concept allows the agent to select only the actionable rule class rather than exhaustively inferring over its whole rule base. We also report a comparative study between the proposed CNN approach and classical classification approaches; as expected, the deep learning method outperforms the others in terms of classification accuracy.
Affiliation(s)
- Amine Chemchem
- CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
- François Alin
- CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
- Michael Krajecki
- CReSTIC Center, University of Reims Champagne-Ardenne, Campus Moulin de la Housse BP 1039, 51687 Reims Cedex 2, France
31
Abstract
Building machines that learn and think like humans is essential not only for cognitive science, but also for computational neuroscience, whose ultimate goal is to understand how cognition is implemented in biological brains. A new cognitive computational neuroscience should build cognitive-level and neural-level models, understand their relationships, and test both types of models with both brain and behavioral data.
32
Chatzikonstantis G, Sidiropoulos H, Strydis C, Negrello M, Smaragdos G, De Zeeuw C, Soudris D. Multinode implementation of an extended Hodgkin–Huxley simulator. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.062]
33
Abstract
Network theory provides an intuitively appealing framework for studying relationships among interconnected brain mechanisms and their relevance to behaviour. As the space of its applications grows, so does the diversity of meanings of the term network model. This diversity can cause confusion, complicate efforts to assess model validity and efficacy, and hamper interdisciplinary collaboration. In this Review, we examine the field of network neuroscience, focusing on organizing principles that can help overcome these challenges. First, we describe the fundamental goals in constructing network models. Second, we review the most common forms of network models, which can be described parsimoniously along the following three primary dimensions: from data representations to first-principles theory; from biophysical realism to functional phenomenology; and from elementary descriptions to coarse-grained approximations. Third, we draw on biology, philosophy and other disciplines to establish validation principles for these models. We close with a discussion of opportunities to bridge model types and point to exciting frontiers for future pursuits.
Affiliation(s)
- Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA.
- Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA.
- Perry Zurn
- Department of Philosophy, American University, Washington, DC, USA
- Joshua I Gold
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
34
Kriegeskorte N, Douglas PK. Cognitive computational neuroscience. Nat Neurosci 2018; 21:1148-1160. [PMID: 30127428] [DOI: 10.1038/s41593-018-0210-5]
Abstract
To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments. Cognitive science has developed computational models that decompose cognition into functional components. Computational neuroscience has modeled how interacting neurons can implement elementary components of cognition. It is time to assemble the pieces of the puzzle of brain computation and to better integrate these separate disciplines. Modern technologies enable us to measure and manipulate brain activity in unprecedentedly rich ways in animals and humans. However, experiments will yield theoretical insight only when employed to test brain-computational models. Here we review recent work in the intersection of cognitive science, computational neuroscience and artificial intelligence. Computational models that mimic brain information processing during perceptual, cognitive and control tasks are beginning to be developed and tested with brain and behavioral data.
Affiliation(s)
- Nikolaus Kriegeskorte
- Department of Psychology, Department of Neuroscience, Department of Electrical Engineering, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Pamela K Douglas
- Center for Cognitive Neuroscience, University of California, Los Angeles, Los Angeles, CA, USA
35
Plebe A. The search of "canonical" explanations for the cerebral cortex. History and Philosophy of the Life Sciences 2018; 40:40. [PMID: 29905901] [DOI: 10.1007/s40656-018-0205-2]
Abstract
This paper addresses a fundamental line of research in neuroscience: the identification of a putative neural processing core of the cerebral cortex, often claimed to be "canonical". This "canonical" core would be shared by the entire cortex and would explain why the cortex is so powerful and diversified in tasks and functions, yet so uniform in architecture. The purpose of this paper is to analyze the search for canonical explanations over the past 40 years, discussing the theoretical frameworks informing this research. It highlights a bias that, in my opinion, has limited the success of this research project: overlooking the dimension of cortical development. The earliest explanation of the cerebral cortex as canonical was attempted by David Marr, who derived putative cortical circuits from general mathematical laws, loosely following a deductive-nomological account. Although Marr's theory turned out to be incorrect, one of its merits was to put the issue of cortical circuit development at the top of the agenda. This aspect has been largely neglected in much of the subsequent research on canonical models. Models proposed in the 1980s were conceived as mechanistic: they identified a small number of components that interacted as a basic circuit, with each component defined as a function. More recent models have been presented as idealized canonical computations, distinct from mechanistic explanations, owing to the lack of identifiable cortical components. Currently, the entire enterprise of seeking a single canonical explanation has been criticized as misguided, and the premise of the uniformity of the cortex has been strongly challenged. This debate is analyzed here. The legacy of the canonical circuit concept is reflected both positively and negatively in recent large-scale brain projects, such as the Human Brain Project: these projects might achieve the aim of producing detailed simulations of cortical electrical activity, but it remains open whether they will find ways of simulating how circuits actually develop.
Affiliation(s)
- Alessio Plebe
- Department of Cognitive Science, Università degli Studi di Messina, v. Concezione 8, Messina, Italy.
36
Farisco M, Kotaleski JH, Evers K. Large-Scale Brain Simulation and Disorders of Consciousness: Mapping Technical and Conceptual Issues. Front Psychol 2018; 9:585. [PMID: 29740372] [PMCID: PMC5928391] [DOI: 10.3389/fpsyg.2018.00585]
Abstract
Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain's operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain, with the aim of overcoming the current fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., Coma, Vegetative State/Unresponsive Wakefulness Syndrome, and Minimally Conscious State. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.
Affiliation(s)
- Michele Farisco
- Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden
- Science and Society Unit, Biogem Genetic Research Centre, Ariano Irpino (AV), Italy
- Jeanette H. Kotaleski
- Science for Life Laboratory, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Neuroscience, Karolinska Institute, Solna, Sweden
- Kathinka Evers
- Centre for Research Ethics and Bioethics, Uppsala University, Uppsala, Sweden
37
Frégnac Y. Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? Science 2018; 358:470-477. [PMID: 29074766] [DOI: 10.1126/science.aan8866]
Abstract
New technologies in neuroscience generate reams of data at an exponentially increasing rate, spurring the design of very-large-scale data-mining initiatives. Several supranational ventures are contemplating the possibility of achieving, within the next decade(s), full simulation of the human brain.
Affiliation(s)
- Yves Frégnac
- Unité de Neuroscience, Information et Complexité (UNIC-CNRS), Gif-sur-Yvette, France.
38
DeWolf T, Stewart TC, Slotine JJ, Eliasmith C. A spiking neural model of adaptive arm control. Proc Biol Sci 2017; 283:rspb.2016.2134. [PMID: 27903878] [DOI: 10.1098/rspb.2016.2134]
Abstract
We present a spiking neuron model of the motor cortices and cerebellum of the motor control system. The model consists of anatomically organized spiking neurons encompassing premotor, primary motor, and cerebellar cortices. The model proposes novel neural computations within these areas to control a nonlinear three-link arm model that can adapt to unknown changes in arm dynamics and kinematic structure. We demonstrate the mathematical stability of both forms of adaptation, suggesting that this is a robust approach for common biological problems of changing body size (e.g. during growth), and unexpected dynamic perturbations (e.g. when moving through different media, such as water or mud). To demonstrate the plausibility of the proposed neural mechanisms, we show that the model accounts for data across 19 studies of the motor control system. These data include a mix of behavioural and neural spiking activity, across subjects performing adaptive and static tasks. Given this proposed characterization of the biological processes involved in motor control of the arm, we provide several experimentally testable predictions that distinguish our model from previous work.
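Adaptive control of the kind this model performs is commonly formalized as online estimation of unknown plant parameters during the movement. Below is a drastically simplified sketch: a 1-D point-mass "arm" with an unknown actuator gain, a PD controller, and a gradient-style gain update. Every constant here is illustrative, and this is not the spiking, Neural Engineering Framework implementation the paper describes:

```python
def adaptive_reach(target=1.0, true_gain=0.5, steps=300, dt=0.01,
                   kp=50.0, kd=10.0, lr=5.0):
    """PD controller on a 1-D point mass whose actuator gain is unknown;
    an online gain estimate is adapted from the observed acceleration."""
    x, v = 0.0, 0.0      # position and velocity
    gain_est = 1.0       # initial (wrong) internal model of the actuator
    for _ in range(steps):
        err = target - x
        u = (kp * err - kd * v) / gain_est   # command through estimated inverse
        a = true_gain * u                    # the plant applies its true gain
        # Gradient step on (a - gain_est * u)^2, normalized for stability:
        gain_est += lr * dt * (a - gain_est * u) * u / (1.0 + u * u)
        v += dt * a
        x += dt * v
    return x, gain_est

final_x, final_gain = adaptive_reach()
```

The same structure (a feedback controller plus a slowly adapted internal model) underlies adaptation to the altered dynamics the abstract mentions, such as growth or movement through a different medium.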
Affiliation(s)
- Travis DeWolf
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Applied Brain Research, Inc., Waterloo, Ontario, Canada N2L 3G1
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Applied Brain Research, Inc., Waterloo, Ontario, Canada N2L 3G1
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1; Applied Brain Research, Inc., Waterloo, Ontario, Canada N2L 3G1
39
Sen-Bhattacharya B, Serrano-Gotarredona T, Balassa L, Bhattacharya A, Stokes AB, Rowley A, Sugiarto I, Furber S. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine. Front Neurosci 2017; 11:454. [PMID: 28848380] [PMCID: PMC5552764] [DOI: 10.3389/fnins.2017.00454]
Abstract
We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, a state-of-the-art digital neuromorphic platform built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a "basic building block" for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. The synaptic layout of the model is consistent with biology. The model response is validated against existing literature reporting entrainment in steady-state visually evoked potentials (SSVEP), brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike train with inter-spike intervals in the range 10-50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light-emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The simulation resolution is 0.1 ms, to ensure solution accuracy for the differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time executes in 10 s of real time on SpiNNaker, because simulations on SpiNNaker run in real time only for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz), implying reduced information-transmission fidelity. These model predictions agree with recent lumped-parameter computational-model-based predictions obtained on conventional computers. Scalability of the framework is demonstrated by a multi-node architecture consisting of three "nodes," where each node is the "basic building block" LGN model. This 420-neuron model is tested with a synthetic periodic stimulus at 10 Hz to all nodes. The model output, the average of the outputs from all nodes, conforms to the above-mentioned predictions for each node. Power consumption for model simulation on SpiNNaker is ≪1 W.
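The Izhikevich model cited above is a two-variable system that the authors integrate at dt = 0.1 ms. A minimal plain-Python sketch of that integration follows; the regular-spiking parameters a, b, c, d are the standard published values, while the constant input current is an illustrative assumption rather than a stimulus from the paper:

```python
def izhikevich(i_ext=10.0, dt=0.1, t_max=200.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Forward-Euler integration of the Izhikevich model with regular-spiking
    parameters; returns the list of spike times in ms."""
    v, u = -65.0, b * -65.0          # membrane potential and recovery variable
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff and reset, per Izhikevich (2003)
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
```

At coarser steps the quadratic term can overshoot badly between updates, which is why the authors accept a 10x slowdown relative to SpiNNaker's native 1 ms real-time step to keep the solution accurate.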
Affiliation(s)
- Basabdatta Sen-Bhattacharya
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Alan B. Stokes
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Andrew Rowley
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Indar Sugiarto
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Steve Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
40
Abstract
Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed in neuroscience. I distinguish between two types of applications of the simulation methodology in neuroscientific research. Model-oriented applications aim to use the simulation outputs to derive new hypotheses about brain organization and functioning and thus to extend current theoretical knowledge and understanding in the field. Data-oriented applications of the simulation methodology target the collection and analysis of data relevant for neuroscientific research that is inaccessible via more traditional experimental methods. I argue for a two-stage evaluation schema which helps clarify the differences and similarities between three current large-scale simulation projects pursued in neuroscience.
41
Abstract
Describing the human brain in mathematical terms is an important ambition of neuroscience research, yet the challenges remain considerable. It was Alan Turing, writing in 1950, who first sought to demonstrate how time-consuming such an undertaking would be. Through analogy to the computer program, Turing argued that arriving at a complete mathematical description of the mind would take well over a thousand years. In this opinion piece, we argue that — despite seventy years of progress in the field — his arguments remain both prescient and persuasive.
42
Mechanisms underlying a thalamocortical transformation during active tactile sensation. PLoS Comput Biol 2017; 13:e1005576. [PMID: 28591219] [PMCID: PMC5479597] [DOI: 10.1371/journal.pcbi.1005576]
Abstract
During active somatosensation, neural signals expected from movement of the sensors are suppressed in the cortex, whereas information related to touch is enhanced. This tactile suppression underlies low-noise encoding of relevant tactile features and the brain's ability to make fine tactile discriminations. Layer (L) 4 excitatory neurons in the barrel cortex, the major target of the somatosensory thalamus (VPM), respond to touch, but have low spike rates and low sensitivity to the movement of whiskers. Most neurons in VPM respond to touch and also show an increase in spike rate with whisker movement. Therefore, signals related to self-movement are suppressed in L4. Fast-spiking (FS) interneurons in L4 show similar dynamics to VPM neurons. Stimulation of halorhodopsin in FS interneurons causes a reduction in FS neuron activity and an increase in L4 excitatory neuron activity. This decrease in the activity of L4 FS neurons contradicts the "paradoxical effect" predicted in networks stabilized by inhibition and in strongly coupled networks. To explain these observations, we constructed a model of the L4 circuit, with connectivity constrained by in vitro measurements. The model explores the synaptic conductance strengths for which L4 FS neurons actively suppress baseline and movement-related activity in L4 excitatory neurons. Feedforward inhibition, in concert with recurrent intracortical circuitry, produces tactile suppression. Synaptic delays in feedforward inhibition allow transmission of temporally brief volleys of activity associated with touch. Our model provides a mechanistic explanation of a behavior-related computation implemented by the thalamocortical circuit.
We study how information is transformed between connected brain areas: the thalamus, the gateway to the cortex, and layer 4 (L4) of the cortex, the first station to process sensory input from the thalamus. When mice perform an active object localization task with their whiskers, thalamic neurons and inhibitory fast-spiking (FS) interneurons in L4 encode whisker movement and touch, whereas L4 excitatory neurons respond almost exclusively to touch. To explain these observations, we constructed a computational model based on measured circuit parameters. The model reveals that without touch, when thalamic activity varies slowly, strong inhibition from FS neurons prevents activity in L4 excitatory neurons. Brief, strong touch-induced thalamic activity excites both excitatory and FS neurons in L4. FS neurons inhibit excitatory neurons with a delay of approximately 1 ms relative to ascending excitation, allowing L4 excitatory neurons to spike. Our results demonstrate that cortical circuits exploit synaptic delays for fast computations. Similar mechanisms likely also operate for rapid stimuli in the visual and auditory systems.
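The timing argument above (excitation leads feedforward inhibition by roughly 1 ms, so only brief, strong thalamic volleys drive L4 excitatory spikes) can be caricatured with a single leaky unit. This is a sketch under strong simplifications; every constant is illustrative, not a fitted parameter of the published circuit model:

```python
def l4_response(thalamic, dt=0.1, delay_ms=1.0, w_exc=1.0, w_inh=1.5,
                tau=5.0, threshold=1.0):
    """Leaky excitatory L4 unit driven by a thalamic input trace, with
    feedforward inhibition arriving `delay_ms` later; True if it spikes."""
    delay = int(delay_ms / dt)
    v = 0.0
    for t, x in enumerate(thalamic):
        inh = thalamic[t - delay] if t >= delay else 0.0  # delayed copy
        v += dt * (-v / tau + w_exc * x - w_inh * inh)
        if v >= threshold:
            return True
    return False

dt = 0.1
n = int(50.0 / dt)
# Brief, strong "touch" volley: 2 ms of high drive.
touch = [4.0 if t * dt < 2.0 else 0.0 for t in range(n)]
# Slow "whisking" modulation: weak drive spread over 40 ms.
whisk = [0.2 if t * dt < 40.0 else 0.0 for t in range(n)]

touch_spikes = l4_response(touch)   # excitation wins inside the 1 ms window
whisk_spikes = l4_response(whisk)   # delayed inhibition cancels the slow drive
```

Because inhibition outweighs excitation (w_inh > w_exc), any input slower than the delay is net-suppressive, while a volley faster than the delay escapes through the brief disinhibited window.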
43
A new neuroinformatics approach to personalized medicine in neurology: The Virtual Brain. Curr Opin Neurol 2016; 29:429-36. [PMID: 27224088] [DOI: 10.1097/wco.0000000000000344]
Abstract
PURPOSE OF REVIEW: An exciting advance in the field of neuroimaging is the acquisition and processing of very large data sets (so-called 'big data'), permitting large-scale inferences that foster a greater understanding of brain function in health and disease. Yet what we are clearly lacking are quantitative integrative tools to translate this understanding to the individual level to lay the basis for personalized medicine.
RECENT FINDINGS: Here we address this challenge through a review of how the relatively new field of neuroinformatics modeling has the capacity to track brain network function at different levels of inquiry, from microscopic to macroscopic and from the localized to the distributed. In this context, we introduce a new and unique multiscale approach, The Virtual Brain (TVB), that effectively models individualized brain activity, linking large-scale (macroscopic) brain dynamics with biophysical parameters at the microscopic level. We also show how TVB modeling provides unique, biologically interpretable data in epilepsy and stroke.
SUMMARY: These results establish the basis for a deliberate integration of computational biology and neuroscience into clinical approaches for elucidating cellular mechanisms of disease. In the future, this can provide the means to create a collection of disease-specific models that can be applied at the individual level to personalize therapeutic interventions.
44
Komer B, Eliasmith C. A unified theoretical approach for biological cognition and learning. Curr Opin Behav Sci 2016. [DOI: 10.1016/j.cobeha.2016.03.006]
45
Almog M, Korngreen A. Is realistic neuronal modeling realistic? J Neurophysiol 2016; 116:2180-2209. [PMID: 27535372] [DOI: 10.1152/jn.00360.2016]
Abstract
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, the 21st century has brought a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons do not merely sum their inputs and fire if the sum exceeds some threshold. Researchers have therefore asked what the computational abilities of single neurons are and have attempted to answer this question using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues, subjecting three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models that function poorly once stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single-neuron models.
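Compartmental models of the kind reviewed here discretize a neuron into coupled RC compartments. A minimal passive two-compartment sketch follows; the constants are illustrative and this is far simpler than the active layer-5 pyramidal models the paper tests:

```python
def two_compartment(i_soma=0.5, dt=0.01, t_max=50.0,
                    g_l=0.1, e_l=-65.0, g_c=0.2, c_m=1.0):
    """Passive soma-dendrite pair: current injected at the soma spreads to
    the dendrite through the coupling conductance g_c. Units are nominal
    (mV, ms, arbitrary conductance); returns final compartment voltages."""
    v_s = v_d = e_l
    for _ in range(int(t_max / dt)):
        i_axial = g_c * (v_d - v_s)                      # axial current into soma
        v_s += dt * (g_l * (e_l - v_s) + i_axial + i_soma) / c_m
        v_d += dt * (g_l * (e_l - v_d) - i_axial) / c_m
    return v_s, v_d

v_s, v_d = two_compartment()
```

Even this toy shows the attenuation that full models must capture: at steady state the soma sits about 3 mV above rest but the dendrite only about 2 mV, set entirely by the ratio of leak to coupling conductance. Realistic models add thousands of such compartments plus voltage-gated channels, which is where the fitting problems the authors describe begin.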
Affiliation(s)
- Mara Almog
- The Leslie and Susan Gonda Interdisciplinary Brain Research Centre, Bar-Ilan University, Ramat Gan, Israel; The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat Gan, Israel
- Alon Korngreen
- The Leslie and Susan Gonda Interdisciplinary Brain Research Centre, Bar-Ilan University, Ramat Gan, Israel; The Mina and Everard Goodman Faculty of Life Sciences, Bar-Ilan University, Ramat Gan, Israel
46
Gaiteri C, Mostafavi S, Honey CJ, De Jager PL, Bennett DA. Genetic variants in Alzheimer disease - molecular and brain network approaches. Nat Rev Neurol 2016; 12:413-27. [PMID: 27282653] [PMCID: PMC5017598] [DOI: 10.1038/nrneurol.2016.84]
Abstract
Genetic studies in late-onset Alzheimer disease (LOAD) are aimed at identifying core disease mechanisms and providing potential biomarkers and drug candidates to improve clinical care of AD. However, owing to the complexity of LOAD, including pathological heterogeneity and disease polygenicity, extraction of actionable guidance from LOAD genetics has been challenging. Past attempts to summarize the effects of LOAD-associated genetic variants have used pathway analysis and collections of small-scale experiments to hypothesize functional convergence across several variants. In this Review, we discuss how the study of molecular, cellular and brain networks provides additional information on the effects of LOAD-associated genetic variants. We then discuss emerging combinations of these omic data sets into multiscale models, which provide a more comprehensive representation of the effects of LOAD-associated genetic variants at multiple biophysical scales. Furthermore, we highlight the clinical potential of mechanistically coupling genetic variants and disease phenotypes with multiscale brain models.
Affiliation(s)
- Chris Gaiteri
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
- Sara Mostafavi
- Department of Statistics and Medical Genetics, Centre for Molecular Medicine and Therapeutics, University of British Columbia, 950 West 28th Avenue, Vancouver, British Columbia V5Z 4H4, Canada
- Christopher J Honey
- Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor Sidney Smith Hall, Toronto, Ontario M5S 3G3, Canada
- Philip L De Jager
- Program in Translational NeuroPsychiatric Genomics, Institute for the Neurosciences, Departments of Neurology and Psychiatry, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115, USA
- David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
47
Diaz-Pier S, Naveau M, Butz-Ostendorf M, Morrison A. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity. Front Neuroanat 2016; 10:57. [PMID: 27303272] [PMCID: PMC4880596] [DOI: 10.3389/fnana.2016.00057]
Abstract
With the emergence of new high-performance computation technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory, and healing. Moreover, for many neural circuits that could potentially be modeled, activity data are more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value for specifying network models where connectivity data are incomplete or carry large error margins. To address these issues, we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a postsynaptic element. Synapses are created and deleted during the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential use in the self-generation of connectivity for large-scale networks. Using the new framework, we show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit involving 8 populations and 4 layers.
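The homeostatic rule the framework implements (create synaptic elements while activity is below a target, delete them while above) can be illustrated with a toy scalar model. Everything here is illustrative; this is not NEST's actual structural-plasticity API, and the rate-vs-connectivity curve is an invented stand-in for network dynamics:

```python
def grow_to_target(target_rate=5.0, steps=200):
    """Toy homeostatic structural plasticity: the firing rate is a saturating
    function of the synapse count; elements are created while the rate is
    below target and deleted while above, so connectivity self-organizes."""
    synapses = 0
    history = []
    for _ in range(steps):
        rate = 10.0 * synapses / (synapses + 50.0)  # toy rate-vs-connectivity curve
        if rate < target_rate:
            synapses += 1       # create a synaptic element
        elif rate > target_rate and synapses > 0:
            synapses -= 1       # delete a synaptic element
        history.append(rate)
    return synapses, history

n_synapses, rate_history = grow_to_target()
```

The fixed point is wherever the rate curve crosses the target (here, 50 synapses), which is the sense in which an activity target alone can specify connectivity when connection data are missing.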
Collapse
Affiliation(s)
- Sandra Diaz-Pier
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center Jülich, Germany
- Mikaël Naveau
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Serine Proteases and Pathophysiology of the Neurovascular Unit, Institut National de la Santé et de la Recherche Médicale UMR-S U919, Caen Normandy University, Groupement d'Intérêt Public (GIP) CYCERON, Caen, France
- Markus Butz-Ostendorf
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany
- Abigail Morrison
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Center, Jülich, Germany; Institute of Neuroscience and Medicine (INM-6), Computational and Systems Neuroscience, Jülich Research Centre, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
|
48
|
Parallel Brain Simulator: A Multi-scale and Parallel Brain-Inspired Neural Network Modeling and Simulation Platform. Cognit Comput 2016. [DOI: 10.1007/s12559-016-9411-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
49
|
Colombo M. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing. J EXP THEOR ARTIF IN 2016. [DOI: 10.1080/0952813x.2016.1148076] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
50
|
Glaser JI, Kording KP. The Development and Analysis of Integrated Neuroscience Data. Front Comput Neurosci 2016; 10:11. [PMID: 26903852 PMCID: PMC4749710 DOI: 10.3389/fncom.2016.00011] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 01/28/2016] [Indexed: 12/12/2022] Open
Abstract
There is a strong emphasis on developing novel neuroscience technologies, in particular on recording from more neurons, and there has thus been increasing discussion about how to analyze the resulting big datasets. What has received less attention is that over the last 30 years, papers in neuroscience have progressively integrated more approaches, such as electrophysiology, anatomy, and genetics; there has been correspondingly little discussion of how to combine and analyze such multimodal data. Here, we describe the growth of multimodal approaches and discuss the analysis advances needed to make sense of these data.
Affiliation(s)
- Joshua I Glaser
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA
- Konrad P Kording
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA; Department of Applied Mathematics, Northwestern University, Chicago, IL, USA
|