151
Wei W, Wang XJ. Downstream Effect of Ramping Neuronal Activity through Synapses with Short-Term Plasticity. Neural Comput 2016; 28:652-66. [PMID: 26890350] [DOI: 10.1162/neco_a_00818]
Abstract
Ramping neuronal activity refers to spiking activity whose rate increases quasi-linearly over time. It has been observed in multiple cortical areas and is correlated with evidence accumulation or timing processes. In this work, we investigated the downstream effect of ramping neuronal activity through synapses that display short-term facilitation (STF) or depression (STD). We obtained an analytical result for a synapse exhibiting pure STF or STD driven by a deterministic linear ramping input, and numerically investigated the general case in which a synapse displays both STF and STD. We show that the analytical deterministic solution accurately describes the average synaptic activation of many inputs converging onto a postsynaptic neuron, even when fluctuations in the ramping input are strong. Activation of a synapse with STF shows an initial cubic increase with time, followed by linear ramping similar to a synapse without STF. Activation of a synapse with STD grows in time to a maximum before falling and reaching a plateau, and this steady state is independent of the slope of the ramping input. For a synapse displaying both STF and STD, an increase in the depression time constant from a value much smaller than the facilitation time constant τ_F to a value much larger than τ_F leads to a transition from facilitation dominance to depression dominance. Our work thus provides insights into the impact of ramping neuronal activity on downstream neurons through synapses that display short-term plasticity. In a perceptual decision-making process, ramping activity has been observed in the parietal and prefrontal cortices, with a slope that decreases with task difficulty. Our work predicts that neurons downstream from such a decision circuit could instead display a firing plateau independent of task difficulty, provided that the synaptic connection is endowed with short-term depression.
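The depression result — a plateau whose height does not depend on the ramp slope — can be reproduced with a minimal rate-based short-term depression sketch in the spirit of Tsodyks-Markram dynamics. The simplified equation dx/dt = (1 − x)/τ_D − U·x·r(t) with activation s = U·x·r and all parameter values here are illustrative, not the authors' exact formulation:

```python
def std_activation(slope, u=0.5, tau_d=0.2, t_end=10.0, dt=1e-3):
    """Euler-integrate a depression variable x driven by a linearly
    ramping presynaptic rate r(t) = slope * t; return the synaptic
    activation s = u * x * r at the end of the ramp."""
    x = 1.0
    for i in range(int(round(t_end / dt))):
        r = slope * (i * dt)
        x += dt * ((1.0 - x) / tau_d - u * x * r)
    return u * x * slope * t_end

s_shallow = std_activation(slope=20.0)  # ramp reaches 200 Hz
s_steep = std_activation(slope=40.0)    # ramp reaches 400 Hz
# both activations settle near 1/tau_d = 5.0, independently of the slope
```

At high rates the quasi-steady activation is U·r/(1 + U·r·τ_D) → 1/τ_D, which is why the plateau forgets the ramp slope.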
Affiliation(s)
- Wei Wei
- Center for Neural Science, New York University, New York, NY 10003, U.S.A.
- Xiao-Jing Wang
- Center for Neural Science, New York University, New York, NY 10003, U.S.A., and NYU-ECNU Joint Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China.
152
Kato H, Ikeguchi T. Oscillation, Conduction Delays, and Learning Cooperate to Establish Neural Competition in Recurrent Networks. PLoS One 2016; 11:e0146044. [PMID: 26840529] [PMCID: PMC4740405] [DOI: 10.1371/journal.pone.0146044]
Abstract
A specific memory might be stored in a subnetwork consisting of a small population of neurons. To select the neurons involved in memory formation, neural competition may be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike-timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism described here is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.
Affiliation(s)
- Hideyuki Kato
- School of Engineering, Tokyo University of Technology, Tokyo, Japan
- Tohru Ikeguchi
- Faculty of Engineering Division I, Tokyo University of Science, Tokyo, Japan
153
Frémaux N, Gerstner W. Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules. Front Neural Circuits 2016; 9:85. [PMID: 26834568] [PMCID: PMC4717313] [DOI: 10.3389/fncir.2015.00085]
Abstract
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide "when" to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
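A minimal neo-Hebbian three-factor rule of the kind reviewed here can be sketched as follows: a pre/post coincidence writes an eligibility trace that decays with time constant τ_e, and the weight changes only when a phasic neuromodulatory signal arrives while the trace is still nonzero. The trial structure and all parameter values are invented for illustration:

```python
def run_trial(reward_time, pair_time=0.1, tau_e=0.5, eta=1.0,
              t_end=2.0, dt=1e-3):
    """Return the weight after one trial: a pre/post pairing at pair_time
    tags the synapse; a neuromodulator pulse at reward_time (or never,
    if None) converts the decaying trace into a weight change."""
    w, e = 0.0, 0.0
    for i in range(int(round(t_end / dt))):
        t = i * dt
        if abs(t - pair_time) < dt / 2:
            e += 1.0                 # Hebbian coincidence sets the trace
        e -= dt * e / tau_e          # eligibility trace decays
        if reward_time is not None and abs(t - reward_time) < dt / 2:
            w += eta * e             # third factor gates the weight change
    return w

w_rewarded = run_trial(reward_time=0.4)    # reward 0.3 s after pairing
w_late = run_trial(reward_time=1.9)        # reward after the trace has faded
w_unrewarded = run_trial(reward_time=None) # pairing alone changes nothing
```

The trace is exactly the mechanism the review highlights for bridging the temporal gap between a pairing event and a delayed neuromodulatory signal.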
Affiliation(s)
- Nicolas Frémaux
- School of Computer Science and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- School of Computer Science and Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
154
Eyes Open on Sleep and Wake: In Vivo to In Silico Neural Networks. Neural Plast 2016; 2016:1478684. [PMID: 26885400] [PMCID: PMC4738930] [DOI: 10.1155/2016/1478684]
Abstract
Functional and effective connectivity of cortical areas are essential for normal brain function under different behavioural states. Appropriate cortical activity during sleep and wakefulness is ensured by the balanced activity of excitatory and inhibitory circuits. Ultimately, fast, millisecond-scale cortical rhythmic oscillations shape cortical function in time and space. On a much longer time scale, brain function also depends on prior sleep-wake history and circadian processes. However, much remains to be established about how the human brain operates at the neuronal level during sleep and wakefulness. A key limitation of human neuroscience is the difficulty of isolating neuronal excitation/inhibition drive in vivo. Computational models are therefore the noninvasive approach of choice for indirectly accessing hidden neuronal states. In this review, we present a physiologically grounded in silico approach, Dynamic Causal Modelling (DCM), as a means to comprehend brain function under different experimental paradigms. Importantly, DCM has allowed for an understanding of how brain dynamics underpin brain plasticity, cognition, and different states of consciousness. In a broader perspective, noninvasive computational approaches such as DCM may help to puzzle out the spatial and temporal dynamics of human brain function in different behavioural states.
155
156
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons. PLoS Comput Biol 2015; 11:e1004566. [PMID: 26633645] [PMCID: PMC4669146] [DOI: 10.1371/journal.pcbi.1004566]
Abstract
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules that lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely match those observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may take a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that the plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing-dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
In the brain areas responsible for sensory processing, neurons learn over time to respond to specific features in the external world. Here, we propose a new, biologically plausible model for how groups of neurons can learn which specific features to respond to. Our work connects theoretical arguments about the optimal forms of neuronal representations with experimental results showing how synaptic connections change in response to neuronal activity. Specifically, we show that biologically realistic neurons can implement an algorithm known as autoencoder learning, in which the neurons learn to form representations that can be used to reconstruct their inputs. Autoencoder networks can successfully model neuronal responses in early sensory areas, and they are also frequently used in machine learning for training deep neural networks. Despite their power and utility, autoencoder networks have not been previously implemented in a fully biological fashion. To perform the autoencoder algorithm, neurons must modify their incoming, feedforward synaptic connections as well as their outgoing, feedback synaptic connections—and the changes to both must depend on the errors the network makes when it tries to reconstruct its input. Here, we propose a model for activity in the network and show that the commonly used spike-timing-dependent plasticity paradigm will implement the desired changes to feedforward synaptic connection weights. Critically, we use recent experimental evidence to propose that feedback connections learn according to a temporally reversed plasticity rule. We show mathematically that the two rules combined can approximately implement autoencoder learning, and confirm our results using simulated networks of integrate-and-fire neurons. By showing that biological neurons can implement this powerful algorithm, our work opens the door for the modeling of many learning paradigms from both the fields of computational neuroscience and machine learning.
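The central symmetry claim — a standard feedforward STDP window plus its time-reversed counterpart at feedback synapses yields a symmetric combined rule — can be checked with simple exponential windows. The amplitudes and time constants below are arbitrary placeholders, not the paper's fitted values:

```python
import math

def stdp_ff(dt, a_plus=1.0, a_minus=0.6, tau=0.02):
    """Feedforward window: LTP when pre leads post (dt > 0), LTD otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def stdp_fb(dt):
    """Mirrored feedback window: the same rule with time reversed."""
    return stdp_ff(-dt)

def mstdp(dt):
    """Combined change seen by a reciprocally connected pair of weights."""
    return stdp_ff(dt) + stdp_fb(dt)

# the combined window is an even function of the spike-timing difference
lags = [0.001 * k for k in range(1, 60)]
is_symmetric = all(abs(mstdp(d) - mstdp(-d)) < 1e-12 for d in lags)
```

With these kernels the combined rule reduces to (a_plus − a_minus)·exp(−|Δt|/τ), which is the symmetric shape the mSTDP argument relies on.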
157
Alstott J, Pajevic S, Bullmore E, Plenz D. Opening bottlenecks on weighted networks by local adaptation to cascade failures. J Complex Netw 2015; 3:552-565. [PMID: 28890788] [PMCID: PMC5589341] [DOI: 10.1093/comnet/cnv002]
Abstract
Structure and dynamics of complex systems are often described using weighted networks in which the position, weight and direction of links quantify how activity propagates between system elements, or nodes. Nodes with only few outgoing links of low weight have low out-strength and thus form bottlenecks that hinder propagation. It is currently not well understood how systems can overcome limits imposed by such bottlenecks. Here, we simulate activity cascades on weighted networks and show that, for any cascade length, activity initially propagates towards high out-strength nodes before terminating in low out-strength bottlenecks. Increasing the weights of links that are active early in the cascade further enhances already strong pathways, but worsens the bottlenecks thereby limiting accessibility to other pathways in the network. In contrast, strengthening only links that propagated the activity just prior to cascade termination, i.e. links that point into bottlenecks, eventually removes these bottlenecks and increases the accessibility of all paths on the network. This local adaptation rule simply relies on the relative timing to a global failure signal and allows systems to overcome engrained structure to adapt to new challenges.
Affiliation(s)
- Jeff Alstott
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, MD, USA and Brain Mapping Unit, Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, UK
- Sinisa Pajevic
- Mathematical and Statistical Computing Laboratory, Division of Computational Bioscience, Center for Information Technology, National Institutes of Health, Bethesda, MD, USA
- Ed Bullmore
- Brain Mapping Unit, Behavioural and Clinical Neuroscience Institute, University of Cambridge, Cambridge, UK
- Dietmar Plenz
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, MD, USA
158
Grytskyy D, Diesmann M, Helias M. Functional consequences of non-equilibrium dynamics caused by antisymmetric and symmetric learning rules. BMC Neurosci 2015. [PMCID: PMC4697537] [DOI: 10.1186/1471-2202-16-s1-p96]
159
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks. Neural Netw 2015; 72:152-67. [DOI: 10.1016/j.neunet.2015.07.004]
160
Taherkhani A, Belatreche A, Li Y, Maguire LP. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons. IEEE Trans Neural Netw Learn Syst 2015; 26:3137-3149. [PMID: 25794401] [DOI: 10.1109/tnnls.2015.2404938]
Abstract
Recent research has shown the potential of spiking neural networks (SNNs) to model complex information processing in the brain. Biological evidence supports the use of precise spike timing for information coding; however, the exact mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of existing learning methods for SNNs are based on weight adjustment, yet there is also biological evidence that synaptic delays are not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed, which merges a delay-shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results show that DL-ReSuMe improves both learning accuracy and learning speed compared with ReSuMe.
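The delay-shift idea can be illustrated in isolation with a toy update that moves a synaptic delay toward the value making the delayed input spike coincide with the desired output spike time. This is a caricature of the concept, not the DL-ReSuMe update itself, and all values are hypothetical:

```python
def learn_delay(t_in, t_target, d0=1.0, eta=0.5, n_iter=50):
    """Shift the delay so the input spike at t_in arrives at t_target."""
    d = d0
    for _ in range(n_iter):
        arrival = t_in + d               # delayed arrival of the input spike
        d += eta * (t_target - arrival)  # move the delay toward the target
        d = max(d, 0.0)                  # conduction delays are non-negative
    return d

d = learn_delay(t_in=5.0, t_target=12.0)  # ideal delay is 7 ms
```

The timing error shrinks by a factor (1 − eta) per iteration, so the delay converges geometrically to the target offset.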
161
162
Jedlicka P, Benuskova L, Abraham WC. A Voltage-Based STDP Rule Combined with Fast BCM-Like Metaplasticity Accounts for LTP and Concurrent "Heterosynaptic" LTD in the Dentate Gyrus In Vivo. PLoS Comput Biol 2015; 11:e1004588. [PMID: 26544038] [PMCID: PMC4636250] [DOI: 10.1371/journal.pcbi.1004588]
Abstract
Long-term potentiation (LTP) and long-term depression (LTD) are widely accepted to be synaptic mechanisms involved in learning and memory. It remains uncertain, however, which particular activity rules hippocampal neurons use to induce LTP and LTD in behaving animals. Recent experiments in the dentate gyrus of freely moving rats revealed an unexpected pattern of LTP and LTD from high-frequency perforant path stimulation. While 400 Hz theta-burst stimulation (400-TBS) and 400 Hz delta-burst stimulation (400-DBS) elicited substantial LTP of the tetanized medial path input and, concurrently, LTD of the non-tetanized lateral path input, 100 Hz theta-burst stimulation (100-TBS, a normally efficient LTP protocol for in vitro preparations) produced only weak LTP and concurrent LTD. Here we show in a biophysically realistic compartmental granule cell model that this pattern of results can be accounted for by a voltage-based spike-timing-dependent plasticity (STDP) rule combined with a relatively fast Bienenstock-Cooper-Munro (BCM)-like homeostatic metaplasticity rule, all on a background of ongoing spontaneous activity in the input fibers. Our results suggest that, at least for dentate granule cells, the interplay of STDP-BCM plasticity rules and ongoing pre- and postsynaptic background activity determines not only the degree of input-specific LTP elicited by various plasticity-inducing protocols, but also the degree of associated LTD in neighboring non-tetanized inputs, as generated by the ongoing constitutive activity at these synapses.

The vast majority of computational studies that model synaptic plasticity neglect the fact that in vivo neurons exhibit ongoing spontaneous spiking, which affects the dynamics of synaptic changes. Here we study how key components of learning mechanisms in the brain, namely spike-timing-dependent plasticity and metaplasticity, interact with spontaneous activity in the input pathways of the neuron. Using biologically realistic simulations, we show that ongoing background activity is a key determinant of the degree of long-term potentiation and long-term depression of synaptic transmission between nerve cells in the hippocampus of freely moving animals. This work helps to better understand the computational rules that drive synaptic plasticity in vivo.
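The BCM-like metaplasticity component can be sketched on its own: the modification threshold θ slides toward the running average of squared postsynaptic activity, so sustained strong activity converts potentiation into depression. This rate-based caricature with invented parameters is far simpler than the voltage-based compartmental model, but it shows the sign flip:

```python
def bcm_drive(c=2.0, theta0=0.5, tau_theta=1.0, t_end=5.0, dt=1e-3):
    """Track the BCM plasticity drive phi = c * (c - theta) while the
    sliding threshold theta relaxes toward c**2 at constant activity c."""
    theta = theta0
    phi = []
    for _ in range(int(round(t_end / dt))):
        phi.append(c * (c - theta))
        theta += dt * (c * c - theta) / tau_theta
    return phi

phi = bcm_drive()
# potentiation while theta < c, depression once theta has slid past c
early, late = phi[0], phi[-1]
```

The same sliding-threshold logic is what lets a tetanized input potentiate while subsequent constitutive activity at other inputs, now facing a raised threshold, drifts toward depression.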
Affiliation(s)
- Peter Jedlicka
- Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University Frankfurt, Frankfurt, Germany
- Lubica Benuskova
- Department of Computer Science, University of Otago, Dunedin, New Zealand
- Brain Health Research Centre and Brain Research New Zealand, University of Otago, Dunedin, New Zealand
- Wickliffe C. Abraham
- Brain Health Research Centre and Brain Research New Zealand, University of Otago, Dunedin, New Zealand
- Department of Psychology, University of Otago, Dunedin, New Zealand
163
Robinson BS, Song D, Berger TW. Generalized Volterra kernel model identification of spike-timing-dependent plasticity from simulated spiking activity. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:6585-8. [PMID: 25571505] [DOI: 10.1109/embc.2014.6945137]
Abstract
This paper presents a methodology to estimate a learning rule that governs activity-dependent plasticity from behaviorally recorded spiking events. To demonstrate this framework, we simulate a probabilistic spiking neuron with spike-timing-dependent plasticity (STDP) and estimate all model parameters from the simulated spiking data. In the neuron model, output spiking activity is generated by the combination of noise, feedback from the output, and an input-feedforward component whose magnitude is modulated by synaptic weight. The synaptic weight is calculated with STDP with the following features: (1) weight change based on the relative timing of input-output spike pairs, (2) prolonged plasticity induction, and (3) considerations for system stability. Estimation of all model parameters is achieved iteratively by formulating the model as a generalized linear model with Volterra kernels and basis function expansion. Successful estimation of all model parameters in this study demonstrates the feasibility of this approach for in-vivo experimental studies. Furthermore, the consideration of system stability and prolonged plasticity induction enhances the ability to capture how STDP affects a neural population's signal transformation properties over a realistic time course. Plasticity characterization with this estimation method could yield insights into functional implications of STDP and be incorporated into a cortical prosthesis.
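The identification strategy — expand the unknown plasticity kernel in a fixed basis so the model becomes linear in its parameters — can be sketched for the pair-based LTP branch of an STDP rule. Everything below (the true kernel, the two-exponential basis, the noise level) is synthetic and only stands in for the paper's Volterra-kernel and basis-expansion setup:

```python
import math
import random

random.seed(0)

# synthetic ground truth: LTP kernel for pre-before-post lags (seconds)
def true_kernel(lag):
    return 0.8 * math.exp(-lag / 0.02)

lags = [0.001 * k for k in range(1, 100)]
dw = [true_kernel(lag) + random.gauss(0.0, 0.01) for lag in lags]

# assumed two-exponential basis; the fit is linear in the coefficients
b1 = lambda lag: math.exp(-lag / 0.01)
b2 = lambda lag: math.exp(-lag / 0.04)

# ordinary least squares via the 2x2 normal equations
s11 = sum(b1(l) ** 2 for l in lags)
s12 = sum(b1(l) * b2(l) for l in lags)
s22 = sum(b2(l) ** 2 for l in lags)
r1 = sum(b1(l) * y for l, y in zip(lags, dw))
r2 = sum(b2(l) * y for l, y in zip(lags, dw))
det = s11 * s22 - s12 * s12
c1 = (s22 * r1 - s12 * r2) / det
c2 = (s11 * r2 - s12 * r1) / det

def fitted_kernel(lag):
    return c1 * b1(lag) + c2 * b2(lag)
```

Because the model is linear in c1 and c2, estimation stays a convex least-squares problem even though the kernel itself is nonlinear in the lag, which is the same property the generalized-linear-model formulation exploits.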
164
Effenberger F, Jost J, Levina A. Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity. PLoS Comput Biol 2015; 11:e1004420. [PMID: 26335425] [PMCID: PMC4559467] [DOI: 10.1371/journal.pcbi.1004420]
Abstract
Structural inhomogeneities in synaptic efficacies have a strong impact on the population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts, and driver cells have dynamical properties similar to the leader neurons found experimentally. Our model allows us to observe the delicate interplay between the structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes, and able to explain a multitude of different experimental findings with one basic network.

It is widely believed that the structure of neuronal circuits plays a major role in brain functioning. Although the full synaptic connectivity of larger populations is not yet accessible even to current experimental techniques, available data show that neither synaptic strengths nor the number of synapses per neuron are homogeneously distributed. Several studies have found long-tailed distributions of synaptic weights, with many weak and a few exceptionally strong synaptic connections, as well as strongly connected cells and subnetworks that may play a decisive role in data processing in neural circuits. Little is known about how such inhomogeneities could arise in the developing brain, and we hypothesize that a self-organizing principle lies behind their appearance. In this study we show how structural inhomogeneities can emerge from an initially homogeneous network through simple synaptic plasticity mechanisms. We perform numerical simulations and show analytically how a small imbalance in the initial structure is amplified by the plasticity rules and their interplay. Our network can simultaneously explain several experimental observations that were previously not linked.
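One intuition behind the emergence of long-tailed weights can be shown with a drastically reduced sketch (not the paper's spiking network): multiplicative, fluctuating Hebbian updates combined with a synaptic-scaling step that conserves total input weight drive an initially homogeneous weight vector toward a heavy-tailed, lognormal-like distribution. All parameters are invented:

```python
import math
import random

random.seed(1)
n = 100
w = [1.0] * n  # homogeneous initial weights

for _ in range(200):
    # STDP caricature: multiplicative, fluctuating weight updates
    w = [wi * math.exp(random.gauss(0.0, 0.2)) for wi in w]
    # synaptic scaling: renormalize to conserve the summed input weight
    total = sum(w)
    w = [wi * n / total for wi in w]

w_sorted = sorted(w)
median_w, max_w = w_sorted[n // 2], w_sorted[-1]
# a few synapses end up far stronger than the typical (median) synapse
```

Scaling keeps the total drive fixed while the multiplicative fluctuations spread the log-weights, so a few "driver-like" strong synapses coexist with many weak ones.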
Affiliation(s)
- Felix Effenberger
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Anna Levina
- Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Germany
- Bernstein Center for Computational Neuroscience Göttingen, Göttingen, Germany
165
Baladron J, Hamker FH. A spiking neural network based on the basal ganglia functional anatomy. Neural Netw 2015; 67:1-13. [DOI: 10.1016/j.neunet.2015.03.002]
166
Abstract
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited to learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, the network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition does not prevent a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern-size-dependent parameter optimality and significantly reduces the probability of a neuron forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
Affiliation(s)
- Dalius Krunglevicius
- Faculty of Mathematics and Informatics, Vilnius University, Vilnius, LT-03225, Lithuania
167
Abstract
Synaptic plasticity, a key process for memory formation, manifests itself across different time scales ranging from a few seconds for plasticity induction up to hours or even years for consolidation and memory retention. We developed a three-layered model of synaptic consolidation that accounts for data across a large range of experimental conditions. Consolidation occurs in the model through the interaction of the synaptic efficacy with a scaffolding variable by a read-write process mediated by a tagging-related variable. Plasticity-inducing stimuli modify the efficacy, but the state of tag and scaffold can only change if a write protection mechanism is overcome. Our model makes a link from depotentiation protocols in vitro to behavioral results regarding the influence of novelty on inhibitory avoidance memory in rats.
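The write-protection logic can be caricatured with two of the model's three layers: an efficacy w that relaxes toward a scaffold-like stable value z, and a tag that is set only when induction drives w above a protection threshold. The threshold, time constant, and one-way tag below are invented for illustration; the actual model is a richer read-write cascade with a separate tagging variable:

```python
def consolidate(stim, threshold=0.5, tau_w=1.0, t_end=8.0, dt=1e-2):
    """Drive efficacy w with a 1 s induction stimulus; if w crosses the
    write-protection threshold, the stable value z is rewritten and w is
    retained; otherwise w decays back to baseline."""
    w, z = 0.0, 0.0
    for i in range(int(round(t_end / dt))):
        t = i * dt
        drive = stim if t < 1.0 else 0.0
        w += dt * (drive - (w - z) / tau_w)  # w relaxes toward the scaffold z
        if w > threshold:
            z = 1.0  # protection overcome: the stable state is rewritten
    return w

w_weak = consolidate(stim=0.3)   # sub-threshold induction fades away
w_strong = consolidate(stim=1.5) # supra-threshold induction is retained
```

Only the strong stimulus overcomes the write protection, so its efficacy change survives long after the induction ends, which is the essence of the tag-mediated consolidation described above.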
168
Vogginger B, Schüffny R, Lansner A, Cederström L, Partzsch J, Höppner S. Reducing the computational footprint for real-time BCPNN learning. Front Neurosci 2015; 9:2. [PMID: 25657618] [PMCID: PMC4302947] [DOI: 10.3389/fnins.2015.00002]
Abstract
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Built upon Bayesian statistics, and with clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model to halve the number of basic arithmetic operations per update and, second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables and assess the number of bits required to achieve the same or better accuracy than the conventional explicit Euler method. All of this will allow real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and thanks to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
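The core of the event-driven speedup is that each low-pass stage has the closed-form solution z(t + Δ) = z(t)·e^(−Δ/τ), so the state only needs updating when a spike arrives. A sketch for a single trace, checked against fixed-step Euler (the actual BCPNN rule chains three such filtering stages across eight state variables; spike times and τ here are arbitrary):

```python
import math

tau = 0.05                              # trace time constant (s)
spikes = [0.010, 0.012, 0.030, 0.085]   # presynaptic spike times (s)
t_read = 0.1                            # read out the trace at 100 ms

# event-driven update: apply the exact decay only between events
z, t_prev = 0.0, 0.0
for t in spikes:
    z *= math.exp(-(t - t_prev) / tau)  # closed-form decay since last event
    z += 1.0                            # each spike increments the trace
    t_prev = t
z_event = z * math.exp(-(t_read - t_prev) / tau)

# reference: explicit Euler with a fixed step, updated every time step
dt = 1e-5
spike_steps = {round(s / dt) for s in spikes}
z_euler = 0.0
for i in range(round(t_read / dt)):
    z_euler *= 1.0 - dt / tau
    if i in spike_steps:
        z_euler += 1.0
```

The event-driven path performs one multiply-add per spike instead of one per time step, which is where the order-of-magnitude saving for sparse spike trains comes from.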
Affiliation(s)
- Bernhard Vogginger
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- René Schüffny
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Love Cederström
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Johannes Partzsch
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Sebastian Höppner
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
169
Lourens MAJ, Schwab BC, Nirody JA, Meijer HGE, van Gils SA. Exploiting pallidal plasticity for stimulation in Parkinson's disease. J Neural Eng 2015; 12:026005. [PMID: 25650741] [DOI: 10.1088/1741-2560/12/2/026005]
Abstract
OBJECTIVE Continuous application of high-frequency deep brain stimulation (DBS) often effectively reduces motor symptoms of Parkinson's disease patients. While there is a growing need for more effective and less traumatic stimulation, the exact mechanism of DBS is still unknown. Here, we present a methodology to exploit the plasticity of GABAergic synapses inside the external globus pallidus (GPe) for the optimization of DBS. APPROACH Assuming the existence of spike-timing-dependent plasticity (STDP) at GABAergic GPe-GPe synapses, we simulate neural activity in a network model of the subthalamic nucleus and GPe. In particular, we test different DBS protocols in our model and quantify their influence on neural synchrony. MAIN RESULTS For an exemplary set of biologically plausible model parameters, we show that STDP in the GPe has a direct influence on neural activity and especially on the stability of firing patterns. STDP stabilizes both uncorrelated firing in the healthy state and correlated firing in the parkinsonian state. Alternative stimulation protocols such as coordinated reset stimulation can clearly profit from the stabilizing effect of STDP. These results are largely independent of the STDP learning rule. SIGNIFICANCE Once the model settings, e.g., connection architectures, have been determined experimentally, our model can be adjusted and directly applied in the development of novel stimulation protocols. More efficient stimulation leads both to minimization of side effects and to savings in battery power.
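Coordinated reset (CR) stimulation, mentioned above, delivers brief stimuli to several subpopulations in a staggered sequence so that their phases are reset at different times, desynchronizing the network. A hedged sketch of how such a pulse schedule could be generated (the site count, cycle length, and pulse width are illustrative assumptions, not values from the study):

```python
def cr_schedule(n_sites, cycle_ms, n_cycles, pulse_ms=1.0):
    """Return (site, onset, offset) triples for coordinated reset:
    within each cycle the sites are stimulated one after another,
    each shifted by cycle_ms / n_sites."""
    events = []
    slot = cycle_ms / n_sites
    for c in range(n_cycles):
        for site in range(n_sites):
            onset = c * cycle_ms + site * slot
            events.append((site, onset, onset + pulse_ms))
    return events

# Four stimulation sites, a 50 ms cycle, two cycles:
events = cr_schedule(n_sites=4, cycle_ms=50.0, n_cycles=2)
```

Each triple would then drive a brief stimulus to one subpopulation of the model; the staggered onsets are what distinguish CR from conventional continuous high-frequency DBS.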
Affiliation(s)
- Marcel A J Lourens
- MIRA: Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, 7500 AE, The Netherlands
|
170
|
Galluppi F, Lagorce X, Stromatias E, Pfeiffer M, Plana LA, Furber SB, Benosman RB. A framework for plasticity implementation on the SpiNNaker neural architecture. Front Neurosci 2015; 8:429. [PMID: 25653580 PMCID: PMC4299433 DOI: 10.3389/fnins.2014.00429] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2014] [Accepted: 12/07/2014] [Indexed: 11/21/2022] Open
Abstract
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing-dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large-scale simulations of plastic neural networks on special-purpose hardware platforms, because synaptic transmission and updates are poorly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to a single hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to processing synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard spike-timing-dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible, and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
Affiliation(s)
- Francesco Galluppi
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
- Xavier Lagorce
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
- Evangelos Stromatias
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland
- Luis A. Plana
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Steve B. Furber
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
- Ryad B. Benosman
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
|
171
|
Duarte RCF, Morrison A. Dynamic stability of sequential stimulus representations in adapting neuronal networks. Front Comput Neurosci 2014; 8:124. [PMID: 25374534 PMCID: PMC4205815 DOI: 10.3389/fncom.2014.00124] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Accepted: 09/16/2014] [Indexed: 12/16/2022] Open
Abstract
The ability to acquire and maintain appropriate representations of time-varying, sequential stimulus events is a fundamental feature of neocortical circuits and a necessary first step toward more specialized information processing. The dynamical properties of such representations depend on the current state of the circuit, which is determined primarily by the ongoing, internally generated activity, setting the ground state from which input-specific transformations emerge. Here, we begin by demonstrating that timing-dependent synaptic plasticity mechanisms play an important role in the active maintenance of ongoing dynamics characterized by asynchronous and irregular firing, closely resembling cortical activity in vivo. Incoming stimuli, acting as perturbations of the local balance of excitation and inhibition, require fast adaptive responses to prevent the development of unstable activity regimes, such as those characterized by a high degree of population-wide synchrony. We establish a link between such pathological network activity, which is circumvented by the action of plasticity, and a reduced computational capacity. Additionally, we demonstrate that the action of plasticity shapes and stabilizes the transient network states exhibited in the presence of sequentially presented stimulus events, allowing the development of adequate and discernible stimulus representations. The main feature responsible for the increased discriminability of stimulus-driven population responses in plastic networks is the decorrelating action of inhibitory plasticity and the consequent maintenance of the asynchronous irregular dynamic regime, for both ongoing activity and stimulus-driven responses, whereas excitatory plasticity is shown to play only a marginal role.
Affiliation(s)
- Renato C F Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; School of Informatics, Institute of Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich Research Center and JARA, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
|
172
|
Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. [PMID: 25303102 PMCID: PMC4193761 DOI: 10.1371/journal.pone.0108590] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 08/22/2014] [Indexed: 11/18/2022] Open
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for the computational resources required to simulate them. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures, by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require hardware configurability beyond what is needed to emulate the benchmark networks in the first place. We thereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
|
173
|
Kunkel S, Schmidt M, Eppler JM, Plesser HE, Masumoto G, Igarashi J, Ishii S, Fukai T, Morrison A, Diesmann M, Helias M. Spiking network simulation code for petascale computers. Front Neuroinform 2014; 8:78. [PMID: 25346682 PMCID: PMC4193238 DOI: 10.3389/fninf.2014.00078] [Citation(s) in RCA: 45] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2014] [Accepted: 08/27/2014] [Indexed: 11/13/2022] Open
Abstract
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses, and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Early parallel simulation codes already stored synapses in a distributed fashion, such that a synapse consumes memory only on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity of the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure that takes advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
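The distributed storage scheme described above can be illustrated in miniature: each compute node keeps only the synapses whose target neuron is local, indexed by the global id of the source neuron, so a spike needs to be looked up only on nodes that actually host targets. This is a simplified sketch of the principle, not the metaprogramming-based data structure of the paper (the round-robin neuron distribution is an illustrative assumption):

```python
class ComputeNode:
    """Holds only synapses whose *target* neuron lives on this node."""

    def __init__(self, node_id, n_nodes):
        self.node_id = node_id
        self.n_nodes = n_nodes
        self.targets = {}  # source neuron id -> list of (target id, weight)

    def owns(self, neuron_id):
        # Round-robin distribution of neurons over nodes (illustrative).
        return neuron_id % self.n_nodes == self.node_id

    def connect(self, source, target, weight):
        # Memory is consumed only on the node harboring the target.
        if self.owns(target):
            self.targets.setdefault(source, []).append((target, weight))

    def deliver(self, source):
        """Called on every node when `source` spikes anywhere."""
        return self.targets.get(source, [])

# Two nodes; neuron 3 projects to neurons 0 and 5.
nodes = [ComputeNode(i, 2) for i in range(2)]
for node in nodes:
    node.connect(source=3, target=0, weight=0.1)
    node.connect(source=3, target=5, weight=0.2)
```

The "double collapse" in the paper goes further: because a node typically holds at most one synapse per source and few synapse types, the per-source containers can be specialized at compile time rather than stored as general lists.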
Affiliation(s)
- Susanne Kunkel
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Centre, Jülich, Germany; Programming Environment Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan
- Maximilian Schmidt
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
- Jochen M Eppler
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
- Hans E Plesser
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany; Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Aas, Norway
- Gen Masumoto
- Advanced Center for Computing and Communication, RIKEN, Wako, Japan
- Jun Igarashi
- Neural Computation Unit, Okinawa Institute of Science and Technology, Okinawa, Japan; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Shin Ishii
- Integrated Systems Biology Laboratory, Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Tomoki Fukai
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan
- Abigail Morrison
- Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich Research Centre, Jülich, Germany; Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Japan; Medical Faculty, RWTH Aachen University, Aachen, Germany
- Moritz Helias
- Programming Environment Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan; Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Jülich Research Centre and JARA, Jülich, Germany
|
174
|
Graham BP, Saudargiene A, Cobb S. Spine Head Calcium as a Measure of Summed Postsynaptic Activity for Driving Synaptic Plasticity. Neural Comput 2014; 26:2194-222. [DOI: 10.1162/neco_a_00640] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We use a computational model of a hippocampal CA1 pyramidal cell to demonstrate that spine head calcium provides an instantaneous readout at each synapse of the postsynaptic weighted sum of all presynaptic activity impinging on the cell. The form of the readout is equivalent to the functions of weighted, summed inputs used in neural network learning rules. Within a dendritic layer, peak spine head calcium levels are either a linear or a sigmoidal function of the number of coactive synapses, with the nonlinearity depending on the ability of voltage spread in the dendrites to reach calcium spike threshold. This is strongly controlled by the A-type potassium current: calcium spikes, and the consequent sigmoidal increase in peak spine head calcium, are present only when the A-channel density is low. Other membrane characteristics influence the gain of the relationship between peak calcium and the number of active synapses. In particular, increasing spine neck resistance increases the gain, owing to the increased voltage responses to synaptic input in spine heads. Colocation of stimulated synapses on a single dendritic branch also increases the gain of the response. Input pathways cooperate: CA3 inputs to the proximal apical dendrites can strongly amplify the peak calcium levels evoked by weak EC input to the distal dendrites, but not so strongly vice versa. CA3 inputs to the basal dendrites can boost calcium levels in the proximal apical dendrites, but the relative electrical compactness of the basal dendrites makes the reverse effect less significant. These results give pointers as to how to better describe the contributions of pre- and postsynaptic activity in the learning “rules” that apply in these cells. The calcium signal is closer in form to the activity measures used in traditional neural network learning rules than to the spike times used in spike-timing-dependent plasticity.
Affiliation(s)
- Bruce P. Graham
- Computing Science and Mathematics, School of Natural Sciences, University of Stirling, Stirling, FK9 4LA, U.K.
- Ausra Saudargiene
- Department of Informatics, Vytautas Magnus University, Kaunas, LT-44404, Lithuania
- Stuart Cobb
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, U.K.
|
175
|
Pyka M, Klatt S, Cheng S. Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections. Front Neuroanat 2014; 8:91. [PMID: 25309338 PMCID: PMC4164034 DOI: 10.3389/fnana.2014.00091] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2014] [Accepted: 08/20/2014] [Indexed: 01/07/2023] Open
Abstract
Computational models of neural networks can be based on a variety of different parameters, including, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics, and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, approaches for modeling the anatomical layout of neurons and their projections are lacking. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and postsynaptic neuron layers. PAM is implemented as a Python tool and integrated into the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncovering the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections are heterogeneously distributed across the main anatomical axes; (ii) the distribution of connection lengths in CA3-CA1 differs qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional yet biologically plausible parameter space makes PAM suitable for analyzing allometric and evolutionary factors in networks and for modeling the complexity of real networks with comparatively little effort.
Affiliation(s)
- Martin Pyka
- Department of Psychology, Mercator Research Group "Structure of Memory," Ruhr-University Bochum, Bochum, Germany; Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
- Sebastian Klatt
- Department of Psychology, Mercator Research Group "Structure of Memory," Ruhr-University Bochum, Bochum, Germany; Faculty of Electrical Engineering and Information Technology, Ruhr-University Bochum, Bochum, Germany
- Sen Cheng
- Department of Psychology, Mercator Research Group "Structure of Memory," Ruhr-University Bochum, Bochum, Germany; Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
|
176
|
Zenke F, Gerstner W. Limits to high-speed simulations of spiking neural networks using general-purpose computers. Front Neuroinform 2014; 8:76. [PMID: 25309418 PMCID: PMC4160969 DOI: 10.3389/fninf.2014.00076] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2014] [Accepted: 08/25/2014] [Indexed: 11/13/2022] Open
Abstract
To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand, a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand, network simulations have to evolve over hours up to days to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient, rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST, and Neuron, as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited, and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code, we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communication and thus cannot be overcome by increased parallelism. Overall, these results show that adequate simulation tools for studying plasticity in medium-sized spiking neural networks are readily available and run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
Affiliation(s)
- Friedemann Zenke
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
|
177
|
Synaptic dynamics: Linear model and adaptation algorithm. Neural Netw 2014; 56:49-68. [DOI: 10.1016/j.neunet.2014.04.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Revised: 04/10/2014] [Accepted: 04/20/2014] [Indexed: 11/17/2022]
|
178
|
The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses. PLoS One 2014; 9:e101109. [PMID: 24999634 PMCID: PMC4085044 DOI: 10.1371/journal.pone.0101109] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2014] [Accepted: 06/02/2014] [Indexed: 11/25/2022] Open
Abstract
Spike-timing-dependent plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel, the class of “temporally asymmetric Hebbian” learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP kernel. We represent the STDP as a superposition of two processes, potentiation and depression, which allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate the STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of the STDP structure on the fixed points of the mean-field dynamics. Along the Hebbian-to-anti-Hebbian parameter we find a phase transition from a phase characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details: the dynamics is affected by the pre-post correlations only via a single number that quantifies their overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity: for every STDP rule for an inhibitory synapse there exists an STDP rule for an excitatory synapse such that their dynamics are identical.
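The superposition described above can be written down directly: the kernel is a weighted difference of a potentiation and a depression process, and sliding a single mixing parameter converts a Hebbian kernel into an anti-Hebbian one. A schematic version (the exponential form, time constants, and the parameter name `alpha` are illustrative, not the paper's exact parameterization):

```python
import math

def stdp_kernel(dt, alpha, tau_p=20.0, tau_d=20.0):
    """STDP weight change for the pre-post lag dt = t_post - t_pre.

    Potentiation and depression are each modeled as exponentials;
    alpha in [0, 1] mixes them: alpha = 1 gives a temporally
    asymmetric Hebbian kernel, alpha = 0 its anti-Hebbian mirror.
    """
    if dt >= 0:  # pre before post
        return alpha * math.exp(-dt / tau_p) - (1 - alpha) * math.exp(-dt / tau_d)
    else:        # post before pre
        return -alpha * math.exp(dt / tau_d) + (1 - alpha) * math.exp(dt / tau_p)

# Hebbian: pre-before-post potentiates, post-before-pre depresses.
assert stdp_kernel(10.0, alpha=1.0) > 0 > stdp_kernel(-10.0, alpha=1.0)
# Anti-Hebbian: the signs flip.
assert stdp_kernel(10.0, alpha=0.0) < 0 < stdp_kernel(-10.0, alpha=0.0)
```

With equal time constants, the anti-Hebbian kernel is exactly the sign-flipped Hebbian one, which is the kind of symmetry the abstract exploits between excitatory and inhibitory plasticity.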
|
179
|
Duarte R, Seriès P, Morrison A. Temporal sequence learning via adaptation in biologically plausible spiking neural networks. BMC Neurosci 2014. [PMCID: PMC4126573 DOI: 10.1186/1471-2202-15-s1-p85] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
180
|
Strack B, Jacobs KM, Cios KJ. Simulating vertical and horizontal inhibition with short-term dynamics in a multi-column multi-layer model of neocortex. Int J Neural Syst 2014; 24:1440002. [DOI: 10.1142/s0129065714400024] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The paper introduces a multi-layer, multi-column model of the cortex that uses four different neuron types and short-term plasticity dynamics. The model was designed using details of neuronal connectivity available in the literature and meets the following conditions: (1) biologically accurate laminar and columnar flows of activity, (2) normal function of low-threshold-spiking and fast-spiking neurons, and (3) the ability to generate different stages of epileptiform activity. With these characteristics, the model allows for simulating lesioned or malformed cortex, i.e., examining the properties of developmentally malformed cortex in which the balance between inhibitory neuron subtypes is disturbed.
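Short-term synaptic dynamics of the kind mentioned above are commonly captured by the Tsodyks-Markram formalism, in which a utilization variable u (facilitation) and a resource variable x (depression) are updated at each presynaptic spike and relax exponentially in between. A generic sketch of that standard formalism (parameter values are illustrative and are not taken from this model):

```python
import math

def tsodyks_markram(spike_times, U=0.2, tau_f=600.0, tau_d=100.0):
    """Return the effective release u*x at each spike of a train.

    Between spikes, u decays toward 0 (facilitation time constant
    tau_f) and x recovers toward 1 (depression time constant tau_d).
    """
    u, x = 0.0, 1.0
    t_last = None
    releases = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            u = u * math.exp(-dt / tau_f)
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)
        u = u + U * (1.0 - u)     # facilitation jump at the spike
        releases.append(u * x)    # effective synaptic efficacy
        x = x * (1.0 - u)         # resource depletion after release
        t_last = t
    return releases

# A regular 20 Hz train; with these constants facilitation builds up
# faster than the resources are depleted, so efficacy initially grows.
rel = tsodyks_markram([0.0, 50.0, 100.0, 150.0])
```

Depression-dominated synapses are obtained by shrinking tau_f or enlarging tau_d, which is the axis along which such models distinguish inhibitory subtypes.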
Affiliation(s)
- Beata Strack
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA
- Kimberle M. Jacobs
- Department of Anatomy and Neurobiology, Virginia Commonwealth University, Richmond, VA, USA
- Krzysztof J. Cios
- Department of Computer Science, Virginia Commonwealth University, Richmond, VA, USA
- IITiS Polish Academy of Sciences, Poland
|
181
|
Ke H, Tinsley MR, Steele A, Wang F, Showalter K. Link weight evolution in a network of coupled chemical oscillators. Phys Rev E Stat Nonlin Soft Matter Phys 2014; 89:052712. [PMID: 25353834 DOI: 10.1103/physreve.89.052712] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2013] [Indexed: 05/17/2023]
Abstract
Link weight evolution is studied in a network of coupled chemical oscillators. Oscillators are perturbed by adjustments in imposed light intensity based on excitatory or inhibitory links to other oscillators undergoing excitation. Experimental and modeling studies demonstrate that the network is capable of producing sustained coordinated activity. The individual nodes of the network exhibit incoherent firing events; however, a dominant frequency can be discerned within the collective signal by Fourier analysis. The introduction of spike-timing-dependent plasticity yields a network that evolves to a stable unimodal link weight distribution.
Affiliation(s)
- Hua Ke
- C. Eugene Bennett Department of Chemistry, West Virginia University, Morgantown, West Virginia 26506-6045, USA
- Mark R Tinsley
- C. Eugene Bennett Department of Chemistry, West Virginia University, Morgantown, West Virginia 26506-6045, USA
- Aaron Steele
- C. Eugene Bennett Department of Chemistry, West Virginia University, Morgantown, West Virginia 26506-6045, USA
- Fang Wang
- C. Eugene Bennett Department of Chemistry, West Virginia University, Morgantown, West Virginia 26506-6045, USA
- Kenneth Showalter
- C. Eugene Bennett Department of Chemistry, West Virginia University, Morgantown, West Virginia 26506-6045, USA
182
Three tools for the real-time simulation of embodied spiking neural networks using GPUs. Neuroinformatics 2014; 11:267-90. [PMID: 23274962] [DOI: 10.1007/s12021-012-9174-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Indexed: 10/27/2022]
Abstract
This paper presents a toolbox of solutions that enable the user to construct biologically-inspired spiking neural networks with tens of thousands of neurons and millions of connections that can be simulated in real time, visualized in 3D and connected to robots and other devices. NeMo is a high performance simulator that works with a variety of neural and oscillator models and performs parallel simulations on either GPUs or multi-core processors. SpikeStream is a visualization and analysis environment that works with NeMo and can construct networks, store them in a database and visualize their activity in 3D. The iSpike library provides biologically-inspired conversion between real data and spike representations to support work with robots, such as the iCub. Each of the tools described in this paper can be used independently with other software, and they also work well together.
183
Sinha DB, Ledbetter NM, Barbour DL. Spike-timing computation properties of a feed-forward neural network model. Front Comput Neurosci 2014; 8:5. [PMID: 24478688] [PMCID: PMC3904091] [DOI: 10.3389/fncom.2014.00005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Received: 06/04/2013] [Accepted: 01/09/2014] [Indexed: 11/13/2022]
Abstract
Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7–22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
Affiliation(s)
- Drew B Sinha
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Noah M Ledbetter
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- Dennis L Barbour
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
184
The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLoS Comput Biol 2014; 10:e1003428. [PMID: 24453955] [PMCID: PMC3894226] [DOI: 10.1371/journal.pcbi.1003428] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.7] [Received: 04/09/2013] [Accepted: 11/22/2013] [Indexed: 11/19/2022]
Abstract
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations. The co-occurrence of action potentials of pairs of neurons within short time intervals has been known for a long time. Such synchronous events can appear time-locked to the behavior of an animal, and theoretical considerations also argue for a functional role of synchrony. Early theoretical work tried to explain correlated activity by neurons transmitting common fluctuations due to shared inputs. This, however, overestimates correlations. Recently, the recurrent connectivity of cortical networks was shown to be responsible for the observed low baseline correlations. Two different explanations were given: one argues that excitatory and inhibitory population activities closely follow the external inputs to the network, so that their effects on a pair of cells mutually cancel. Another explanation relies on negative recurrent feedback to suppress fluctuations in the population activity, equivalent to small correlations. In a biological neuronal network one expects both external inputs and recurrence to affect correlated activity. The present work extends the theoretical framework of correlations to include both contributions and explains their qualitative differences. Moreover, the study shows that the arguments of fast tracking and recurrent feedback are not equivalent; only the latter correctly predicts the cell-type specific correlations.
185
Vasilaki E, Giugliano M. Emergence of connectivity motifs in networks of model neurons with short- and long-term plastic synapses. PLoS One 2014; 9:e84626. [PMID: 24454735] [PMCID: PMC3893143] [DOI: 10.1371/journal.pone.0084626] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Received: 06/18/2013] [Accepted: 11/16/2013] [Indexed: 11/29/2022]
Abstract
Recent experimental data from the rodent cerebral cortex and olfactory bulb indicate that specific connectivity motifs are correlated with short-term dynamics of excitatory synaptic transmission. It was observed that neurons with short-term facilitating synapses form predominantly reciprocal pairwise connections, while neurons with short-term depressing synapses form predominantly unidirectional pairwise connections. The cause of these structural differences in excitatory synaptic microcircuits is unknown. We show that these connectivity motifs emerge in networks of model neurons, from the interactions between short-term synaptic dynamics (SD) and long-term spike-timing dependent plasticity (STDP). While the impact of STDP on SD was shown in simultaneous neuronal pair recordings in vitro, the mutual interactions between STDP and SD in large networks are still the subject of intense research. Our approach combines an SD phenomenological model with an STDP model that faithfully captures long-term plasticity dependence on both spike times and frequency. As a proof of concept, we first simulate and analyze recurrent networks of spiking neurons with random initial connection efficacies and where synapses are either all short-term facilitating or all depressing. For identical external inputs to the network, and as a direct consequence of internally generated activity, we find that networks with depressing synapses evolve unidirectional connectivity motifs, while networks with facilitating synapses evolve reciprocal connectivity motifs. We then show that the same results hold for heterogeneous networks, including both facilitating and depressing synapses. This does not contradict a recent theory that proposes that motifs are shaped by external inputs, but rather complements it by examining the role of both the external inputs and the internally generated network activity. 
Our study highlights the conditions under which SD-STDP might explain the correlation between facilitation and reciprocal connectivity motifs, as well as between depression and unidirectional motifs.
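As background for readers, the short-term dynamics (SD) phenomenological models referred to above are commonly formulated along Tsodyks-Markram lines. The following minimal, event-driven sketch (not the authors' implementation; `U` and the time constants are illustrative placeholders) shows how a facilitation variable and a resource-depletion variable jointly shape the per-spike release:

```python
import math

def tm_release(spike_times, U=0.2, tau_f=0.6, tau_d=0.1):
    """Event-driven Tsodyks-Markram-style synapse sketch.

    u: facilitation variable (relaxes back to U with time constant tau_f)
    x: fraction of available resources (recovers to 1 with time constant tau_d)
    Returns the relative release amplitude u*x at each presynaptic spike.
    """
    u, x = U, 1.0
    last_t = None
    releases = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * math.exp(-dt / tau_f)      # facilitation decays
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)  # resources recover
        u = u + U * (1.0 - u)   # spike-triggered facilitation jump
        r = u * x               # released fraction (synaptic efficacy)
        x = x - r               # deplete resources
        releases.append(r)
        last_t = t
    return releases

# A regular 20 Hz train: with tau_f >> tau_d the amplitudes facilitate,
# with tau_d >> tau_f they depress.
train = [i * 0.05 for i in range(10)]
print(tm_release(train, tau_f=0.6, tau_d=0.05)[:3])
```

Whether a synapse in this scheme behaves as facilitating or depressing is controlled entirely by the ratio of the two time constants, which is the knob varied between the all-facilitating and all-depressing networks described above.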
Affiliation(s)
- Eleni Vasilaki
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Michele Giugliano
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Brain Mind Institute, Swiss Federal Institute of Technology of Lausanne, Lausanne, Switzerland
186
Millard DC, Wang Q, Gollnick CA, Stanley GB. System identification of the nonlinear dynamics in the thalamocortical circuit in response to patterned thalamic microstimulation in vivo. J Neural Eng 2013; 10:066011. [PMID: 24162186] [PMCID: PMC4064456] [DOI: 10.1088/1741-2560/10/6/066011] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Indexed: 12/16/2022]
Abstract
OBJECTIVE: Nonlinear system identification approaches were used to develop a dynamical model of the network-level response to patterns of microstimulation in vivo. APPROACH: The thalamocortical circuit of the rodent vibrissa pathway was the model system, with voltage-sensitive dye imaging capturing the cortical response to patterns of stimulation delivered from a single electrode in the ventral posteromedial thalamus. The results of simple paired-stimulus experiments formed the basis for the development of a phenomenological model explicitly containing nonlinear elements observed experimentally. The phenomenological model was fit using datasets obtained with impulse train inputs, Poisson-distributed in time and uniformly varying in amplitude. MAIN RESULTS: The phenomenological model explained 58% of the variance in the cortical response to out-of-sample patterns of thalamic microstimulation. Furthermore, while fit on trial-averaged data, the phenomenological model reproduced single-trial response properties when simulated with noise added into the system during stimulus presentation. The simulations indicate that the single-trial response properties were dependent on the relative sensitivity of the static nonlinearities in the two stages of the model, and ultimately suggest that electrical stimulation activates local circuitry through linear recruitment, but that this activity propagates in a highly nonlinear fashion to downstream targets. SIGNIFICANCE: The development of nonlinear dynamical models of neural circuitry will guide information delivery for sensory prosthesis applications and, more generally, reveal properties of population coding within neural circuits.
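The two-stage model structure described in the abstract (a linear stage followed by a static nonlinearity, twice in cascade) can be illustrated with a generic sketch; the filter kernels and sigmoid parameters below are arbitrary stand-ins, not the fitted model from the paper:

```python
import math

def sigmoid(x, gain, thresh):
    """Static saturating nonlinearity applied at each stage of the cascade."""
    return 1.0 / (1.0 + math.exp(-gain * (x - thresh)))

def cascade_response(stim, kernel1, kernel2, gain1=4.0, thresh1=0.5,
                     gain2=8.0, thresh2=0.5):
    """Two-stage linear-nonlinear cascade: filter -> sigmoid -> filter -> sigmoid.

    stim: list of stimulus amplitudes per time bin.
    Returns the response per time bin, bounded in (0, 1) by the output sigmoid.
    """
    def convolve(x, k):
        # Causal discrete convolution of signal x with kernel k.
        return [sum(k[j] * x[t - j] for j in range(len(k)) if t - j >= 0)
                for t in range(len(x))]
    stage1 = [sigmoid(v, gain1, thresh1) for v in convolve(stim, kernel1)]
    return [sigmoid(v, gain2, thresh2) for v in convolve(stage1, kernel2)]
```

With a steep second-stage sigmoid, a near-linear recruitment at the first stage already produces a strongly nonlinear dependence of the downstream response on stimulus amplitude, which is the qualitative behavior the abstract attributes to propagation from the stimulated site to downstream targets.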
Affiliation(s)
- Daniel C Millard
- Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA 30332, USA
- Qi Wang
- Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA 30332, USA
- Clare A Gollnick
- Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA 30332, USA
- Garrett B Stanley
- Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta, GA 30332, USA
187
Beyeler M, Dutt ND, Krichmar JL. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule. Neural Netw 2013; 48:109-24. [DOI: 10.1016/j.neunet.2013.07.012] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.0] [Received: 02/24/2013] [Revised: 07/28/2013] [Accepted: 07/31/2013] [Indexed: 11/26/2022]
188
Albers C, Schmiedt JT, Pawelzik KR. Theta-specific susceptibility in a model of adaptive synaptic plasticity. Front Comput Neurosci 2013; 7:170. [PMID: 24312047] [PMCID: PMC3835974] [DOI: 10.3389/fncom.2013.00170] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Received: 06/28/2013] [Accepted: 11/04/2013] [Indexed: 12/13/2022]
Abstract
Learning and memory formation are processes which are still not fully understood. It is widely believed that synaptic plasticity is the most important neural substrate for both. However, it has been observed that large-scale theta-band oscillations in the mammalian brain are beneficial for learning, and it is not clear if and how this is linked to synaptic plasticity. Also, the underlying dynamics of synaptic plasticity itself have not been completely uncovered yet, especially for non-linear interactions between multiple spikes. Here, we present a new and simple dynamical model of synaptic plasticity. It incorporates novel contributions to synaptic plasticity, including adaptation processes. We test its ability to reproduce non-linear effects on four different data sets of complex spike patterns, and show that the model can be tuned to reproduce the observed synaptic changes in great detail. When subjected to periodically varying firing rates, even linear pair-based spike-timing-dependent plasticity (STDP) predicts a specific susceptibility of synaptic plasticity to pre- and postsynaptic firing rate oscillations in the theta band. Our model retains this band-pass property, while for high firing rates in the non-linear regime it modifies the specific phase relation required for depression and potentiation. For realistic parameters, maximal synaptic potentiation occurs when postsynaptic activity slightly trails presynaptic activity; anti-phase oscillations tend to depress the synapse. Our results are well in line with experimental findings, providing a straightforward and mechanistic explanation for the importance of theta oscillations for learning.
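For reference, the linear pair-based STDP mentioned in the abstract is conventionally written as exponential potentiation and depression windows summed over spike pairs; the sketch below uses illustrative amplitudes and time constants, not values fitted in this study:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based STDP weight change for dt = t_post - t_pre (seconds).

    Pre-before-post (dt >= 0) potentiates; post-before-pre depresses,
    each with an exponentially decaying window.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

def total_dw(pre_times, post_times):
    """Sum the pair rule over all pre/post spike pairs (all-to-all pairing)."""
    return sum(stdp_dw(tp - tq) for tq in pre_times for tp in post_times)

# 8 Hz (theta-range) spike trains with the postsynaptic side trailing by 5 ms:
# the near-coincident pairs dominate and the net change is potentiating.
pre = [0.000, 0.125, 0.250]
post = [0.005, 0.130, 0.255]
print(total_dw(pre, post))
```

Under such a rule, the net drift of a synapse driven by two oscillating rates depends on their phase relation, which is the linear baseline against which the adaptive model above modifies the phase requirements at high rates.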
Affiliation(s)
- Christian Albers
- Department of Neurophysics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany
189
Strack B, Jacobs KM, Cios KJ. Simulating lesions in multi-layer, multi-columnar model of neocortex. Int IEEE EMBS Conf Neural Eng 2013; 2013:835-838. [PMID: 36818467] [PMCID: PMC9937446] [DOI: 10.1109/ner.2013.6696064] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 11/06/2022]
Abstract
The paper presents results of modeling global and focal loss of layers in a multi-columnar model of neocortex. Specifically, the spread of activity across columns under conditions of inhibitory blockade is compared. With very low inhibition, activity spreads through all layers; when inhibition is only moderately blocked, however, the deep layers are critical for the spread of activity.
Affiliation(s)
- Beata Strack
- Department of Computer Science, Virginia Commonwealth University School of Engineering, Richmond, VA
- Kimberle M Jacobs
- Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, VA
- Krzysztof J Cios
- Department of Computer Science, Virginia Commonwealth University School of Engineering, Richmond, VA, and IITiS Polish Academy of Sciences, Poland
190
Wang P, Knösche TR. A realistic neural mass model of the cortex with laminar-specific connections and synaptic plasticity - evaluation with auditory habituation. PLoS One 2013; 8:e77876. [PMID: 24205009] [PMCID: PMC3813749] [DOI: 10.1371/journal.pone.0077876] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Received: 03/28/2013] [Accepted: 09/05/2013] [Indexed: 11/18/2022]
Abstract
In this work we propose a biologically realistic local cortical circuit model (LCCM), based on neural masses, that incorporates important aspects of the functional organization of the brain that have not been covered by previous models: (1) activity-dependent plasticity of excitatory synaptic couplings via depletion and recycling of neurotransmitters and (2) realistic inter-laminar dynamics via laminar-specific distribution of, and connections between, neural populations. The potential of the LCCM was demonstrated by accounting for the process of auditory habituation. The model parameters were specified using Bayesian inference. It was found that: (1) besides the major serial excitatory information pathway (layer 4 to layer 2/3 to layer 5/6), there exists a parallel "short-cut" pathway (layer 4 to layer 5/6), (2) the excitatory signal flow from the pyramidal cells to the inhibitory interneurons seems to be mostly intra-laminar whereas, in contrast, the inhibitory signal flow from inhibitory interneurons to the pyramidal cells seems to be both intra- and inter-laminar, and (3) the habituation rates of the connections are asymmetric: forward connections (from layer 4 to layer 2/3) are more strongly habituated than backward connections (from layer 5/6 to layer 4). Our evaluation demonstrates that the novel features of the LCCM are of crucial importance for mechanistic explanations of brain function. The incorporation of these features into a mass model makes them applicable to modeling based on macroscopic data (like EEG or MEG), which are usually available in human experiments. Our LCCM is therefore a valuable building block for future realistic models of human cognitive function.
Affiliation(s)
- Peng Wang
- Max Planck Institute for Human Cognitive and Brain Sciences, MEG and Cortical Networks, Leipzig, Germany
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, MEG and Cortical Networks, Leipzig, Germany
191
Yger P, Harris KD. The Convallis rule for unsupervised learning in cortical networks. PLoS Comput Biol 2013; 9:e1003272. [PMID: 24204224] [PMCID: PMC3808450] [DOI: 10.1371/journal.pcbi.1003272] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Received: 05/01/2013] [Accepted: 08/28/2013] [Indexed: 01/26/2023]
Abstract
The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations remain unclear. Here we describe a framework for cortical synaptic plasticity termed the "Convallis rule", mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent, cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP; STDP alone, however, produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex.
Affiliation(s)
- Pierre Yger
- UCL Institute of Neurology and UCL Department of Neuroscience, Physiology, and Pharmacology, London, United Kingdom
- Kenneth D. Harris
- UCL Institute of Neurology and UCL Department of Neuroscience, Physiology, and Pharmacology, London, United Kingdom
192
Friedmann S, Frémaux N, Schemmel J, Gerstner W, Meier K. Reward-based learning under hardware constraints-using a RISC processor embedded in a neuromorphic substrate. Front Neurosci 2013; 7:160. [PMID: 24065877] [PMCID: PMC3778319] [DOI: 10.3389/fnins.2013.00160] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Received: 03/26/2013] [Accepted: 08/19/2013] [Indexed: 11/16/2022]
Abstract
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP as a special use case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and the embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, that a simple interface between analog synapses and processor is sufficient for learning, and that performance is insensitive to mismatch. Further, we consider the communication latency between the wafer and the conventional control computer that simulates the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, the delay can cause the updates to deviate from those of the undelayed case. We find that for highly accelerated systems, latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation for emulating the selected reward-modulated STDP learning rule; it is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project.
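The reward-modulated STDP setting discussed above, including the effect of reward delay, can be sketched with a standard eligibility-trace formulation; the rule and all parameter values below are a generic illustration, not the hardware implementation described in the paper:

```python
import math

def r_stdp_run(events, reward_times, eta=0.1, tau_e=0.5,
               a_plus=1.0, a_minus=1.0, tau_pair=0.02):
    """Reward-modulated STDP sketch for a single synapse.

    events: time-ordered list of (t_pre, t_post) spike pairings.
    reward_times: time-ordered list of (t, reward) deliveries.
    Pairings accumulate into an exponentially decaying eligibility trace
    e(t); the weight only changes when a reward arrives:
        dw = eta * reward * e(t_reward).
    Returns the final weight change.
    """
    def pair_dw(dt):
        # Plain pair-based STDP window, dt = t_post - t_pre.
        if dt >= 0:
            return a_plus * math.exp(-dt / tau_pair)
        return -a_minus * math.exp(dt / tau_pair)

    w, e, t_last = 0.0, 0.0, 0.0
    merged = sorted([(p[1], 'pair', p) for p in events] +
                    [(r[0], 'reward', r) for r in reward_times])
    for t, kind, item in merged:
        e *= math.exp(-(t - t_last) / tau_e)   # eligibility trace decays
        if kind == 'pair':
            t_pre, t_post = item
            e += pair_dw(t_post - t_pre)       # tag the synapse
        else:
            _, reward = item
            w += eta * reward * e              # reward converts tag to change
        t_last = t
    return w
```

Because the eligibility trace decays between pairing and reward, a delayed reward produces a smaller weight update than a prompt one, which is the qualitative reason the abstract stresses keeping wafer-to-host latency to a minimum in highly accelerated systems.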
Affiliation(s)
- Simon Friedmann
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
193
Abstract
Spike-timing-dependent construction (STDC) is the production of new spiking neurons and connections in a simulated neural network in response to neuron activity. Following the discovery of spike-timing-dependent plasticity (STDP), significant effort has gone into the modeling and simulation of adaptation in spiking neural networks (SNNs). Limitations in computational power imposed by network topology, however, constrain what can be learned through connection weight modification alone. Constructive algorithms produce new neurons and connections, allowing automatic structural responses for applications of unknown complexity and nonstationary solutions. A conceptual analogy is developed and extended into theoretical conditions for modeling synaptic plasticity as network construction. Generalizing past constructive algorithms, we propose a framework for the design of novel constructive SNNs and demonstrate its application in the development of simulations that validate the developed theory. Potential directions of future research and applications of STDC for biological modeling and machine learning are also discussed.
Affiliation(s)
- Toby Lightheart
- School of Mechanical Engineering, University of Adelaide, Adelaide, SA 5005, Australia
194
Vogt SM, Hofmann UG. A unifying perspective on neuromodulatory effects on signal transmission and plasticity in D1-dominant MSN neurons. BMC Neurosci 2013. [PMCID: PMC3704299] [DOI: 10.1186/1471-2202-14-s1-p172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/10/2022]
Affiliation(s)
- Simon M Vogt
- AG Neuroelectronic Systems, University Clinic Freiburg, 79108 Freiburg, Germany
- Institute for Signal Processing, University of Lübeck, 23562 Lübeck, Germany
- Ulrich G Hofmann
- AG Neuroelectronic Systems, University Clinic Freiburg, 79108 Freiburg, Germany
195
Zhdanov VP. A Neuron Model Including Gene Expression: Bistability, Long-Term Memory, etc. Neural Process Lett 2013. [DOI: 10.1007/s11063-013-9304-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Indexed: 12/12/2022]
196
Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition. Neural Netw 2013; 41:188-201. [DOI: 10.1016/j.neunet.2012.11.014] [Citation(s) in RCA: 235] [Impact Index Per Article: 21.4] [Received: 02/01/2012] [Revised: 11/20/2012] [Accepted: 11/25/2012] [Indexed: 11/21/2022]
197
Fung P, Robinson P. Neural field theory of calcium dependent plasticity with applications to transcranial magnetic stimulation. J Theor Biol 2013; 324:72-83. [DOI: 10.1016/j.jtbi.2013.01.013] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Received: 08/02/2012] [Revised: 01/17/2013] [Accepted: 01/20/2013] [Indexed: 10/27/2022]
198
Nessler B, Pfeiffer M, Buesing L, Maass W. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comput Biol 2013; 9:e1003037. [PMID: 23633941] [PMCID: PMC3636028] [DOI: 10.1371/journal.pcbi.1003037] [Citation(s) in RCA: 112] [Impact Index Per Article: 10.2] [Received: 05/19/2012] [Accepted: 03/04/2013] [Indexed: 11/24/2022]
Abstract
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
Affiliation(s)
- Bernhard Nessler
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria
199
Kerr RR, Burkitt AN, Thomas DA, Gilson M, Grayden DB. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs. PLoS Comput Biol 2013; 9:e1002897. [PMID: 23408878] [PMCID: PMC3567188] [DOI: 10.1371/journal.pcbi.1002897] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Received: 06/12/2012] [Accepted: 12/10/2012] [Indexed: 11/28/2022]
Abstract
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.

Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays.
Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
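A toy calculation in the spirit of the analytical model above shows why STDP can select connections by their axonal delay under oscillatory drive. The STDP window shape, rates, and time constants below are illustrative assumptions, not the paper's parameters: because pre- and postsynaptic rates share the oscillation, the spike correlation seen through a delayed connection is itself oscillatory, and the expected weight drift becomes a periodic function of the delay, so only delays near the resonant phase are potentiated:

```python
import numpy as np

# Illustrative STDP window: potentiation for pre-arrival-before-post (dt > 0)
A_p, tau_p = 1.0, 0.017    # potentiation amplitude, time constant (s)
A_m, tau_m = 0.85, 0.034   # depression amplitude, time constant (s)

def stdp(dt):
    return np.where(dt >= 0, A_p * np.exp(-dt / tau_p),
                    -A_m * np.exp(dt / tau_m))

f, r0, m = 10.0, 20.0, 0.5   # oscillation (Hz), baseline rate (Hz), modulation depth

dts = np.linspace(-0.2, 0.2, 4001)   # lag grid, dt = t_post - t_pre_arrival (s)
ddt = dts[1] - dts[0]

def drift(delay):
    # rate correlation between post spikes and pre spikes arriving after `delay`:
    # both trains share the oscillation, so the correlation is itself oscillatory
    corr = r0**2 * (1 + 0.5 * m**2 * np.cos(2 * np.pi * f * (dts + delay)))
    return np.sum(stdp(dts) * corr) * ddt   # expected weight drift per unit time

delays = np.linspace(0.0, 1.0 / f, 101)
w_drift = np.array([drift(d) for d in delays])
best_delay = delays[np.argmax(w_drift)]     # delay "selected" (most potentiated)
```

The drift curve repeats with period 1/f, which is the resonance the abstract describes: connections whose delays land at the favorable phase of the oscillation are strengthened relative to the rest.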
Affiliation(s)
- Robert R. Kerr
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Anthony N. Burkitt
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
- Doreen A. Thomas
- Department of Mechanical Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Matthieu Gilson
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute, Saitama, Japan
- David B. Grayden
- NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Centre for Neural Engineering, The University of Melbourne, Melbourne, Victoria, Australia
- Bionics Institute, Melbourne, Victoria, Australia
|
200
|
Yousefi A, Dibazar AA, Berger TW. Synaptic dynamics: Linear model and adaptation algorithm. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2012:1362-5. [PMID: 23366152 DOI: 10.1109/embc.2012.6346191] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
A linear model of synaptic temporal dynamics and a learning algorithm for synaptic adaptation in spiking neural networks are presented. The proposed linear model substantially simplifies the analysis and training of spiking neural networks while accurately modeling facilitation and depression dynamics in the synapse. The learning rule is biologically plausible and capable of simultaneously adjusting both the LTP and STP parameters of individual synapses in a network. To demonstrate the efficiency of the system, a small spiking neural network is trained to generate different spiking and bursting patterns of cortical neurons. The simulation results reveal that the linear model of synaptic dynamics, together with the proposed STDP-based learning algorithm, provides a practical tool for simulating and training very large-scale spiking neural circuits comprising significant numbers of synapses and neurons.
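The abstract does not reproduce the paper's specific linear model, so as a stand-in, here is a sketch of the standard Tsodyks–Markram short-term plasticity model that such linear models approximate (the parameter values `U`, `tau_f`, `tau_d` are illustrative, not taken from the paper). A single per-spike recursion yields either facilitation or depression depending on the parameters:

```python
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_f=0.6, tau_d=0.1):
    """Tsodyks-Markram short-term plasticity: returns the synaptic efficacy
    u*x at each presynaptic spike (u = facilitation, x = available resources)."""
    u, x, t_prev = 0.0, 1.0, None
    eff = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            u *= np.exp(-dt / tau_f)                   # facilitation decays
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resources recover
        u += U * (1.0 - u)    # each spike increments the release probability
        eff.append(u * x)     # released fraction = efficacy of this spike
        x -= u * x            # resources consumed by the release
        t_prev = t
    return np.array(eff)

train = np.arange(0, 0.5, 0.05)   # 20 Hz regular spike train
eff = tm_synapse(train)           # facilitating parameter set (default)
```

With a small `U` and slow `tau_f` the efficacy grows across the train (facilitation); with a large `U` and slow `tau_d` (e.g. `U=0.5, tau_f=0.05, tau_d=0.5`) it falls (depression). A linear reformulation of these dynamics is what makes gradient-based adjustment of the STP parameters tractable.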
Affiliation(s)
- Ali Yousefi
- Neural Dynamics Laboratory, University of Southern California, USA.
|