1
Lansner A, Fiebig F, Herman P. Fast Hebbian plasticity and working memory. Curr Opin Neurobiol 2023; 83:102809. PMID: 37980802. DOI: 10.1016/j.conb.2023.102809.
Abstract
Theories and models of working memory (WM) have, since at least the mid-1990s, been dominated by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty of explaining robust multi-item WM. As a consequence, alternative theories are now being explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs. Hebbian synaptic plasticity arises naturally in this context. In this review, we focus on fast Hebbian plasticity and trace the origins of WM theories and models built on this form of associative learning.
Affiliation(s)
- Anders Lansner
- Stockholm University, Department of Mathematics, SE-106 91 Stockholm, Sweden; KTH Royal Institute of Technology, Dept of Computational Science and Technology, 100 44 Stockholm, Sweden; SeRC (Swedish e-Science Research Center), Sweden.
- Florian Fiebig
- KTH Royal Institute of Technology, Dept of Computational Science and Technology, 100 44 Stockholm, Sweden.
- Pawel Herman
- KTH Royal Institute of Technology, Dept of Computational Science and Technology, 100 44 Stockholm, Sweden; Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden; SeRC (Swedish e-Science Research Center), Sweden.

2
Wang D, Xu J, Li F, Zhang L, Cao C, Stathis D, Lansner A, Hemani A, Zheng LR, Zou Z. A Memristor-Based Learning Engine for Synaptic Trace-Based Online Learning. IEEE Trans Biomed Circuits Syst 2023; 17:1153-1165. PMID: 37390002. DOI: 10.1109/tbcas.2023.3291021.
Abstract
The memristor has been extensively used to facilitate synaptic online learning in brain-inspired spiking neural networks (SNNs). However, current memristor-based work cannot support the widely used yet sophisticated trace-based learning rules, including the trace-based Spike-Timing-Dependent Plasticity (STDP) and Bayesian Confidence Propagation Neural Network (BCPNN) learning rules. This paper proposes a learning engine to implement trace-based online learning, consisting of memristor-based blocks and analog computing blocks. The memristor is used to mimic synaptic trace dynamics by exploiting the nonlinear physical properties of the device; the analog computing blocks perform the addition, multiplication, logarithmic, and integral operations. By organizing these building blocks, a reconfigurable learning engine is architected and realized to simulate the STDP and BCPNN online learning rules, using memristors and 180 nm analog CMOS technology. The results show that the proposed learning engine achieves an energy consumption of 10.61 pJ and 51.49 pJ per synaptic update for the STDP and BCPNN learning rules, respectively: a 147.03× and 93.61× reduction compared to the 180 nm ASIC counterparts, and a 9.39× and 5.63× reduction compared to the 40 nm ASIC counterparts. Compared with the state-of-the-art work on Loihi and eBrainII, the learning engine reduces the energy per synaptic update by 11.31× and 13.13× for the trace-based STDP and BCPNN learning rules, respectively.
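To make the "trace-based" notion concrete, here is a minimal Python sketch of pair-based trace STDP of the kind such engines target: each spike bumps an exponentially decaying trace, and each weight update reads the opposite side's trace. The helper names and the `a_plus`/`a_minus` values are illustrative, not the paper's circuit parameters.

```python
import math

def decay(trace, dt, tau):
    """Exponentially decay a synaptic trace over dt milliseconds."""
    return trace * math.exp(-dt / tau)

def stdp_step(w, x_pre, x_post, pre_spike, post_spike,
              a_plus=0.01, a_minus=0.012):
    """One event of pair-based trace STDP: a presynaptic spike depresses
    the weight in proportion to the postsynaptic trace; a postsynaptic
    spike potentiates it in proportion to the presynaptic trace."""
    if pre_spike:
        x_pre += 1.0
        w -= a_minus * x_post
    if post_spike:
        x_post += 1.0
        w += a_plus * x_pre
    return w, x_pre, x_post
```

Running a pre-then-post spike pair through these two helpers yields net potentiation, the canonical STDP signature.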
3
Chrysanthidis N, Fiebig F, Lansner A, Herman P. Traces of semantization - from episodic to semantic memory in a spiking cortical network model. eNeuro 2022; 9:ENEURO.0062-22.2022. PMID: 35803714. PMCID: PMC9347313. DOI: 10.1523/eneuro.0062-22.2022.
Abstract
Episodic memory is the recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information, or "semantization", which gradually decouples encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions of the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advances in explaining semantization at the behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule, called the Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing-dependent plasticity (STDP) learning rule on the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We further examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints, whilst also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.
Significance Statement
Remembering single episodes is a fundamental attribute of cognition. Difficulty recollecting contextual information is a key sign of episodic memory loss or semantization. Behavioral studies demonstrate that semantization of episodic memory can occur rapidly, yet the neural mechanisms underlying this effect are insufficiently investigated. In line with recent behavioral findings, we show that multiple stimulus exposures in different contexts may advance item-context decoupling. We propose a Bayesian-Hebbian synaptic plasticity hypothesis of memory semantization and further show that a transient modulation of plasticity during salient events may disrupt the decontextualization process by strengthening memory traces and thus enhancing preferential retention. The proposed cortical network-of-networks model thereby bridges micro- and mesoscale synaptic effects with network dynamics and behavior.
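The item-context decoupling intuition can be illustrated with a toy calculation: if an item's encoding probability mass is split evenly over several contexts, its Bayesian log-odds coupling to any single context shrinks toward (and past) zero. This is a hypothetical back-of-the-envelope sketch, not the paper's spiking model; the `item_context_weight` helper and the probability values are invented for illustration.

```python
import math

def item_context_weight(n_contexts, p_item=0.2, p_context=0.2):
    """Toy illustration of semantization: an item re-encoded evenly across
    n different contexts dilutes its joint probability with any one of
    them, so the log-odds item-context weight shrinks."""
    p_joint = p_item * (1.0 / n_contexts)  # item mass split across contexts
    return math.log(p_joint / (p_item * p_context))
```

With these numbers, one context gives a strongly positive coupling, five contexts bring it to chance level, and ten push it negative, mirroring the decontextualization described above.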
Affiliation(s)
- Nikolaos Chrysanthidis
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Florian Fiebig
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Department of Mathematics, Stockholm University, 10691 Stockholm, Sweden
- Pawel Herman
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 10044 Stockholm, Sweden
- Digital Futures, Stockholm, Sweden
- Swedish e-Science Research Centre, Stockholm, Sweden

4
Wang D, Xu J, Stathis D, Zhang L, Li F, Lansner A, Hemani A, Yang Y, Herman P, Zou Z. Mapping the BCPNN Learning Rule to a Memristor Model. Front Neurosci 2021; 15:750458. PMID: 34955716. PMCID: PMC8695980. DOI: 10.3389/fnins.2021.750458.
Abstract
The Bayesian Confidence Propagation Neural Network (BCPNN) has been implemented in a way that allows mapping to neural and synaptic processes in the human cortex, and it has been used extensively in detailed spiking models of cortical associative memory function and, more recently, also for machine learning applications. In conventional digital implementations of BCPNN, the von Neumann bottleneck is a major challenge, with synaptic storage and access to it as the dominant cost. The memristor is a non-volatile device ideal for artificial synapses: it fuses computation and storage and thus fundamentally overcomes the von Neumann bottleneck. While the implementation of other neural networks like Spiking Neural Networks (SNNs) and even Convolutional Neural Networks (CNNs) on memristors has been studied, the implementation of BCPNN has not. In this paper, the BCPNN learning rule is mapped to a memristor model and implemented with a memristor-based architecture. The implementation is a mixed-signal design, with the main computation and storage happening in the analog domain. In particular, the nonlinear dopant-drift phenomenon of the memristor is exploited to emulate the exponential decay of the synaptic state variables in the BCPNN learning rule. The consistency between the memristor-based solution and the BCPNN learning rule is simulated and verified in Matlab, with a correlation coefficient as high as 0.99. The analog circuit is designed and implemented in the SPICE simulation environment, demonstrating good emulation of the BCPNN learning rule with a correlation coefficient as high as 0.98. This work focuses on demonstrating the feasibility of mapping the BCPNN learning rule to in-circuit computation in memristors; that feasibility is evaluated and validated to pave the way for a more efficient BCPNN implementation, toward a real-time brain emulation engine.
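The quantity the circuit ultimately has to reproduce is, at its core, the BCPNN weight: a log-odds of observed co-activation against the chance level predicted by independence. A minimal sketch of that relationship (the `eps` regularizer and the probability values are illustrative, not the paper's trace estimates):

```python
import math

def bcpnn_weight(p_i, p_j, p_ij, eps=1e-8):
    """BCPNN weight: log of observed co-activation over chance level.
    Positive when units i and j co-activate more than independence
    predicts, negative when they co-activate less."""
    return math.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))

def bcpnn_bias(p_j, eps=1e-8):
    """BCPNN bias term for unit j: log of its activation probability."""
    return math.log(p_j + eps)
```

In the full rule, the probabilities are running estimates maintained by exponentially decaying synaptic traces, which is exactly the decay the memristor's dopant drift is used to emulate.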
Affiliation(s)
- Deyu Wang
- State Key Laboratory of ASIC and System, School of Information Science and Technology, Fudan University, Shanghai, China
- Jiawei Xu
- State Key Laboratory of ASIC and System, School of Information Science and Technology, Fudan University, Shanghai, China
- Dimitrios Stathis
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Lianhao Zhang
- Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Feng Li
- State Key Laboratory of ASIC and System, School of Information Science and Technology, Fudan University, Shanghai, China
- Anders Lansner
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden
- Ahmed Hemani
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Yu Yang
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Pawel Herman
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Zhuo Zou
- State Key Laboratory of ASIC and System, School of Information Science and Technology, Fudan University, Shanghai, China

5
Abstract
Simulation of large-scale biologically plausible spiking neural networks, e.g., the Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost all of these computers are based on the von Neumann architecture, which separates storage and computation, and in all of these solutions memory access is the dominant cost, even for highly customized computation and memory architectures such as ASICs. In this paper, we propose an optimization technique that makes the BCPNN simulation memory-access friendly by avoiding a dual-access pattern. The BCPNN synaptic traces and weights are organized as matrices accessed both row-wise and column-wise, and accessing data stored in DRAM with such a dual-access pattern is extremely expensive. A post-synaptic history buffer and an approximation function are therefore introduced to eliminate the troublesome column update. An error analysis combining theory and experiments suggests that the probability of introducing intolerable errors through this optimization can be bounded to a very small number, making it almost negligible; the derivation and validation of this bound is the core contribution of the paper. Experiments on a GPU platform show that, compared to the previously reported baseline simulation strategy, the proposed optimization reduces the storage requirement by 33%, the global memory access demand by more than 27%, and the DRAM access rate by more than 5%, while the latency of updating synaptic traces decreases by roughly 50%. Compared with a similar optimization technique reported in the literature, our method shows considerably better results. Although BCPNN is the targeted neural network model, the proposed optimization can be applied to other artificial neural network models based on a Hebbian learning rule.
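The row-wise-only access idea can be sketched as lazy evaluation: decay a whole row in closed form only when its presynaptic unit is visited, and fold in buffered postsynaptic spikes at that moment instead of doing column updates as they occur. This is a simplified sketch of the access pattern under that assumption, not the paper's approximation function; all names are hypothetical.

```python
import numpy as np

def lazy_row_update(traces, last_update, row, t_now, post_history, tau):
    """Bring one row of the trace matrix up to t_now in a single pass.
    traces: 2D array, one row per presynaptic unit.
    post_history: buffered (spike_time, post_index) events."""
    dt = t_now - last_update[row]
    traces[row] *= np.exp(-dt / tau)  # decay the whole row at once
    # Fold in buffered postsynaptic spikes since this row's last visit,
    # each decayed from its own spike time (linear dynamics permit this).
    for t_spk, post_idx in post_history:
        if last_update[row] < t_spk <= t_now:
            traces[row, post_idx] += np.exp(-(t_now - t_spk) / tau)
    last_update[row] = t_now
    return traces
```

Because the trace dynamics are linear, deferring the postsynaptic contributions this way is exact; the paper's history buffer additionally bounds the buffer length with an approximation.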
Affiliation(s)
- Yu Yang
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Correspondence: Yu Yang
- Dimitrios Stathis
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Rodolfo Jordão
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Ahmed Hemani
- Division of Electronics and Embedded Systems, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Department of Mathematics, Stockholm University, Stockholm, Sweden

6
Martinez RH, Lansner A, Herman P. Probabilistic associative learning suffices for learning the temporal structure of multiple sequences. PLoS One 2019; 14:e0220161. PMID: 31369571. PMCID: PMC6675053. DOI: 10.1371/journal.pone.0220161.
Abstract
From memorizing a musical tune to navigating a well-known route, many of our behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterisation of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences, even in a noisy regime.
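Why asymmetric traces encode temporal order can be shown numerically: if unit A fires before unit B, and the presynaptic side is filtered with a slower time constant than the postsynaptic side, the integrated coincidence A→B exceeds B→A, yielding an asymmetric weight matrix that favors forward recall. A sketch under that assumption (the time constants and spike times are illustrative, not the paper's parameters):

```python
import numpy as np

def trace(spike_times, t_grid, tau):
    """Exponential synaptic trace left by a spike train, on a time grid."""
    tr = np.zeros_like(t_grid)
    for ts in spike_times:
        tr += np.where(t_grid >= ts, np.exp(-(t_grid - ts) / tau), 0.0)
    return tr

# Unit A fires at t=0, unit B at t=10 (ms). Presynaptic traces decay
# slowly (tau=50), postsynaptic traces quickly (tau=10).
t = np.linspace(0.0, 200.0, 4001)
dt = t[1] - t[0]
pre_A, post_A = trace([0.0], t, 50.0), trace([0.0], t, 10.0)
pre_B, post_B = trace([10.0], t, 50.0), trace([10.0], t, 10.0)

w_AB = np.sum(pre_A * post_B) * dt  # coincidence with A as pre, B as post
w_BA = np.sum(pre_B * post_A) * dt  # coincidence with B as pre, A as post
```

Here w_AB > w_BA > 0: A's slow presynaptic trace is still large when B fires, but not vice versa, so recall dynamics driven by these weights flow in the order of encoding.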
Affiliation(s)
- Ramon H. Martinez
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Mathematics Department, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden

7
Iatropoulos G, Herman P, Lansner A, Karlgren J, Larsson M, Olofsson JK. The language of smell: Connecting linguistic and psychophysical properties of odor descriptors. Cognition 2018; 178:37-49. DOI: 10.1016/j.cognition.2018.05.007.
8
Berthet P, Lindahl M, Tully PJ, Hellgren-Kotaleski J, Lansner A. Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward Dependent Plasticity. Front Neural Circuits 2016; 10:53. PMID: 27493625. PMCID: PMC4954853. DOI: 10.3389/fncir.2016.00053.
Abstract
The brain enables animals to behaviorally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviors are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and post-synaptic activity, receptor type, and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.
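The sign-dependent routing of plasticity described above can be caricatured in a few lines: a positive reward prediction error updates direct-pathway (D1) weights, a negative one updates indirect-pathway (D2) weights. This is a toy abstraction of the routing only, not the paper's four-factor Bayesian rule; the function name and learning rate are invented for illustration.

```python
def rpe_gated_update(w_d1, w_d2, rpe, pre, post, lr=0.1):
    """Route plasticity by the sign of the reward prediction error (RPE):
    positive RPE strengthens the direct (D1) pathway for the selected
    action, negative RPE strengthens the indirect (D2) pathway."""
    coactivity = pre * post  # pre- and post-synaptic activity factor
    if rpe > 0:
        w_d1 += lr * rpe * coactivity
    elif rpe < 0:
        w_d2 += lr * (-rpe) * coactivity
    return w_d1, w_d2
```

The full model additionally conditions the update on receptor type and dopamine level, and uses an efference copy to restrict updates to the selected action.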
Affiliation(s)
- Pierre Berthet
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Mikael Lindahl
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Philip J. Tully
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Jeanette Hellgren-Kotaleski
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden

9
Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat 2016; 10:37. PMID: 27092061. PMCID: PMC4823276. DOI: 10.3389/fnana.2016.00037.
Abstract
SpiNNaker is a digital neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since the rule requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network, which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
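The event-based idea rests on the closed-form solution of the trace dynamics between spikes: rather than stepping an Euler integration every time-step, a trace is stored with its last-update time and decayed analytically on demand. A minimal sketch, assuming a single exponentially decaying trace (the full BCPNN rule chains several such filtering stages):

```python
import math

class EventDrivenTrace:
    """Synaptic trace updated only at spike events, using the closed-form
    solution z(t) = z(t0) * exp(-(t - t0)/tau) between events, instead of
    stepping a fixed-step-size Euler integration every time-step."""
    def __init__(self, tau):
        self.tau = tau
        self.z = 0.0       # trace value at the last event
        self.t_last = 0.0  # time of the last event

    def value(self, t):
        """Analytically decayed trace value at time t >= t_last."""
        return self.z * math.exp(-(t - self.t_last) / self.tau)

    def spike(self, t, increment=1.0):
        """Register a spike: decay up to t, then add the spike increment."""
        self.z = self.value(t) + increment
        self.t_last = t
```

Storage and computation thus scale with the number of spikes rather than the number of time-steps, which is what makes the scheme fit SpiNNaker's resource budget.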
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Bernhard A Kaplan
- Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK

10
Mazzoni A, Lindén H, Cuntz H, Lansner A, Panzeri S, Einevoll GT. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models. PLoS Comput Biol 2015; 11:e1004584. PMID: 26657024. PMCID: PMC4682791. DOI: 10.1371/journal.pcbi.1004584.
Abstract
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks change with stimuli, tasks, or dynamic network states. However, neurophysiological studies in vivo often instead measure the mass activity of neuronal microcircuits via the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP from the standard output of point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphologies and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network at all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates LFP generation, thereby facilitating quantitative comparison between computational models and experimental LFP recordings in vivo.
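The general shape of such a proxy, a fixed linear combination of the point-network's synaptic currents with a relative delay between the excitatory and inhibitory contributions, can be sketched as follows. The coefficient `alpha` and `delay_steps` are illustrative placeholders, not the fitted values reported in the paper.

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, alpha=1.65, delay_steps=6):
    """Sketch of a weighted-sum LFP proxy for a point-neuron network:
    the delayed excitatory (AMPA) current minus a scaled inhibitory
    (GABA) current. alpha and delay_steps are illustrative placeholders,
    to be fitted against ground-truth LFP."""
    # shift the AMPA current by delay_steps samples (zero-padded)
    i_ampa_delayed = np.concatenate(
        [np.zeros(delay_steps), i_ampa[:len(i_ampa) - delay_steps]])
    return i_ampa_delayed - alpha * i_gaba
```

Given population-summed synaptic current time series from a LIF simulation, this returns an LFP-like time course at the simulation's sampling rate.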
Affiliation(s)
- Alberto Mazzoni
- The Biorobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Pisa, Italy
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Henrik Lindén
- Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology–KTH, Stockholm, Sweden
- Hermann Cuntz
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt/Main, Germany
- Institute of Clinical Neuroanatomy, Goethe University Frankfurt, Frankfurt/Main, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt/Main, Germany
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology–KTH, Stockholm, Sweden
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
- Gaute T. Einevoll
- Department of Mathematical Sciences and Technology, Norwegian University of Life Sciences, Ås, Norway
- Department of Physics, University of Oslo, Oslo, Norway

11
Krishnamurthy P, Silberberg G, Lansner A. Long-range recruitment of Martinotti cells causes surround suppression and promotes saliency in an attractor network model. Front Neural Circuits 2015; 9:60. PMID: 26528143. PMCID: PMC4604243. DOI: 10.3389/fncir.2015.00060.
Abstract
Although the importance of long-range connections for cortical information processing has long been acknowledged, most studies have focused on long-range interactions between excitatory cortical neurons. Inhibitory interneurons play an important role in cortical computation and have thus far been studied mainly with respect to their local synaptic interactions within the cortical microcircuitry. A recent study showed that long-range excitatory connections onto Martinotti cells (MC) mediate surround suppression. Here we have extended our previously reported attractor network of pyramidal cells (PC) and MC by introducing long-range connections targeting MC. We demonstrate how the network with Martinotti-cell-mediated long-range inhibition gives rise to surround suppression and also promotes the saliency of locations at which simple non-uniformities in the stimulus field are introduced. Furthermore, our analysis suggests that the presynaptic dynamics of MC is only ancillary to its orientation-tuning property in endowing the network with saliency detection. Lastly, we have also implemented a disinhibitory pathway mediated by another interneuron type (VIP interneurons), which inhibits MC and abolishes surround suppression.
Affiliation(s)
- Pradeep Krishnamurthy
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden; Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden
- Gilad Silberberg
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Anders Lansner
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden; Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden

12
Abstract
A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information.
Affiliation(s)
- Johan Eriksson
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden; Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Edward K Vogel
- Department of Psychology, Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA
- Anders Lansner
- Department of Computational Biology, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, 106 91 Stockholm, Sweden
- Fredrik Bergström
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden; Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Lars Nyberg
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden; Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden; Department of Radiation Sciences, Umeå University, 901 87 Umeå, Sweden
|
13
|
Vogginger B, Schüffny R, Lansner A, Cederström L, Partzsch J, Höppner S. Reducing the computational footprint for real-time BCPNN learning. Front Neurosci 2015; 9:2. PMID: 25657618; PMCID: PMC4302947; DOI: 10.3389/fnins.2015.00002.
Abstract
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the pre-synaptic, post-synaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model to halve the number of basic arithmetic operations per update, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high-performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
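The efficiency argument can be illustrated with a minimal sketch of a single exponentially decaying trace (variable names are illustrative; the actual spike-based BCPNN rule chains three such low-pass stages over eight state variables). The analytic solution lets the state jump directly across an inter-event interval, instead of being stepped through every small time step:

```python
import math

def euler_decay(z, tau, dt, n_steps):
    """Fixed-step explicit Euler integration of dz/dt = -z / tau."""
    for _ in range(n_steps):
        z += dt * (-z / tau)
    return z

def event_driven_decay(z, tau, elapsed):
    """Analytic solution z(t + elapsed) = z(t) * exp(-elapsed / tau):
    a single update per inter-event interval instead of one per time step."""
    return z * math.exp(-elapsed / tau)

# Decay a trace for 100 ms with a 20 ms time constant.
z0, tau = 1.0, 20.0
approx = euler_decay(z0, tau, dt=0.1, n_steps=1000)  # 1000 small steps
exact = event_driven_decay(z0, tau, elapsed=100.0)   # one jump
```

The two agree closely, but the event-driven form costs one exponential per spike event, which is what makes look-up tables for the decay factor attractive.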
Affiliation(s)
- Bernhard Vogginger
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- René Schüffny
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Love Cederström
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Johannes Partzsch
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
- Sebastian Höppner
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany
|
14
|
Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. PMID: 25303102; PMCID: PMC4193761; DOI: 10.1371/journal.pone.0108590.
Abstract
Advancing the size and complexity of neural network models leads to an ever increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
|
15
|
Fiebig F, Lansner A. Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics. Front Comput Neurosci 2014; 8:64. PMID: 25071536; PMCID: PMC4077014; DOI: 10.3389/fncom.2014.00064.
Abstract
Declarative long-term memories are not created in an instant. The gradual stabilization and temporally shifting dependence of acquired declarative memories on different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place cell reactivations during slow-wave sleep, are thought to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the stochastic reactivation dynamics necessary for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces notable biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.
Affiliation(s)
- Florian Fiebig
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh University, Edinburgh, Scotland
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
|
16
|
Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci 2014; 6:8. PMID: 24782758; PMCID: PMC3986567; DOI: 10.3389/fnsyn.2014.00008.
Abstract
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert.
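The Bayesian flavor of such Hebbian-Bayesian (BCPNN-style) rules can be sketched in a few lines: the weight between two units is the log ratio of their estimated joint activation probability to the product of their marginals, and the intrinsic (bias) term is the log prior of the postsynaptic unit. The probability values and the `eps` regularizer below are illustrative; in the model these probabilities are estimated on-line from the spike-driven traces:

```python
import math

def bcpnn_weight(p_i, p_j, p_ij, eps=1e-8):
    """Log ratio of joint activation probability to the product of the
    marginals: zero for independent units, positive for correlated ones."""
    return math.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))

def bcpnn_bias(p_j, eps=1e-8):
    """Intrinsic (bias) term: log prior activation of the postsynaptic unit."""
    return math.log(p_j + eps)

# Units co-active more often than chance get a positive weight,
# units co-active less often than chance a negative one.
w_correlated = bcpnn_weight(p_i=0.1, p_j=0.1, p_ij=0.05)
w_anticorrelated = bcpnn_weight(p_i=0.5, p_j=0.5, p_ij=0.05)
```

Summing such weights and biases over active inputs then amounts to accumulating log-probability evidence, which is what links the rule to probabilistic inference.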
Affiliation(s)
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
|
17
|
Berthet P, Lansner A. Optogenetic stimulation in a computational model of the basal ganglia biases action selection and reward prediction error. PLoS One 2014; 9:e90578. PMID: 24614169; PMCID: PMC3948624; DOI: 10.1371/journal.pone.0090578.
Abstract
Optogenetic stimulation of specific types of medium spiny neurons (MSNs) in the striatum has been shown to bias action selection in mice in a two-choice task. This shift depends on the location and intensity of the stimulation, but also on the recent reward history. We have implemented a way to simulate this increased activity produced by the optical flash in our computational model of the basal ganglia (BG). This abstract model features the direct and indirect pathways commonly described in biology, and a reward prediction pathway (RP). The framework is similar to Actor-Critic methods and to the ventral/dorsal distinction in the striatum. We thus investigated the impact on selection caused by an added stimulation in each of the three pathways. We were able to reproduce in our model the bias in action selection observed in mice. Our results also showed that biasing the reward prediction is sufficient to create a modification in action selection. However, we had to increase the percentage of trials with stimulation relative to that in experiments in order to affect the selection. We found that increasing only the reward prediction had a different effect depending on whether the stimulation in RP was action-dependent (only for a specific action) or not. We further looked at the evolution of the change in the weights depending on the stage of learning within a block. A bias in RP impacts the plasticity differently depending on that stage, but also on the outcome. It remains to experimentally test how the dopaminergic neurons are affected by specific stimulations of neurons in the striatum and to relate the data to predictions of our model.
Affiliation(s)
- Pierre Berthet
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
|
18
|
Kaplan BA, Lansner A. A spiking neural network model of self-organized pattern recognition in the early mammalian olfactory system. Front Neural Circuits 2014; 8:5. PMID: 24570657; PMCID: PMC3916767; DOI: 10.3389/fncir.2014.00005.
Abstract
Olfactory sensory information passes through several processing stages before an odor percept emerges. The question of how the olfactory system learns to create odor representations linking those different levels, and how it learns to connect and discriminate between them, is largely unresolved. We present a large-scale network model with single- and multi-compartmental Hodgkin-Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration, as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, implemented through dendro-dendritic inhibition leading to winner-take-all-like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian-Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors, yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian-Bayesian learning. We demonstrate the functionality of the model in a one-sniff learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration-invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.
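The mutual-information step for self-organizing the bulb-to-cortex projection can be sketched as a plug-in estimate over paired discrete responses. The response vectors below are toy data, not the model's, and the estimator is a generic one rather than the paper's exact measure:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete observations."""
    n = len(x)
    cx, cy, cxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in cxy.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
        mi += (c / n) * math.log2(c * n / (cx[a] * cy[b]))
    return mi

# Toy responses over 8 odor presentations (0 = silent, 1 = active).
mitral = [0, 1, 0, 1, 0, 1, 0, 1]
cortical_matched = [0, 1, 0, 1, 0, 1, 0, 1]    # perfectly predictable
cortical_unrelated = [0, 0, 1, 1, 0, 0, 1, 1]  # statistically independent
mi_matched = mutual_information(mitral, cortical_matched)
mi_unrelated = mutual_information(mitral, cortical_unrelated)
```

High-MI cell pairs are the natural candidates for strong feed-forward connections, which is the intuition behind using such a measure to seed the projection.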
Affiliation(s)
- Bernhard A Kaplan
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
|
19
|
Kaplan BA, Lansner A, Masson GS, Perrinet LU. Anisotropic connectivity implements motion-based prediction in a spiking neural network. Front Comput Neurosci 2013; 7:112. PMID: 24062680; PMCID: PMC3775506; DOI: 10.3389/fncom.2013.00112.
Abstract
Predictive coding hypothesizes that the brain explicitly infers upcoming sensory input to establish a coherent representation of the world. Although it is becoming generally accepted, it is not clear on which level spiking neural networks may implement predictive coding and what function their connectivity may have. We present a network model of conductance-based integrate-and-fire neurons inspired by the architecture of retinotopic cortical areas that assumes predictive coding is implemented through network connectivity, namely in the connection delays and in selectiveness for the tuning properties of source and target cells. We show that the applied connection pattern leads to motion-based prediction in an experiment tracking a moving dot. In contrast to our proposed model, a network with random or isotropic connectivity fails to predict the path when the moving dot disappears. Furthermore, we show that a simple linear decoding approach is sufficient to transform neuronal spiking activity into a probabilistic estimate for reading out the target trajectory.
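A linear readout of the kind mentioned can be sketched as a population-vector style estimate. The tuning centers and spike counts below are made up for illustration; the paper's decoder maps spiking activity to a probabilistic estimate of the target trajectory rather than this single-shot position readout:

```python
import numpy as np

def linear_readout(spike_counts, preferred_pos):
    """Population-vector style readout: normalize spike counts into weights,
    take the weighted mean of the cells' preferred positions, and use the
    weighted dispersion as a crude uncertainty proxy."""
    w = spike_counts / spike_counts.sum()
    estimate = float(w @ preferred_pos)
    spread = float(np.sqrt(w @ (preferred_pos - estimate) ** 2))
    return estimate, spread

prefs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # tuning centers along a track
counts = np.array([0.0, 2.0, 10.0, 3.0, 0.0])  # activity peaked near 0.5
est, spread = linear_readout(counts, prefs)
```

A sharply peaked activity profile yields a small spread, i.e. a confident estimate, while a flat profile (as when the dot disappears in a randomly connected network) yields a broad one.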
Affiliation(s)
- Bernhard A. Kaplan
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Guillaume S. Masson
- Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone, UMR7289, Centre National de la Recherche Scientifique & Aix-Marseille Université, Marseille, France
|
20
|
Lansner A, Marklund P, Sikström S, Nilsson LG. Reactivation in working memory: an attractor network model of free recall. PLoS One 2013; 8:e73776. PMID: 24023690; PMCID: PMC3758294; DOI: 10.1371/journal.pone.0073776.
Abstract
The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view.
Affiliation(s)
- Anders Lansner
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, KTH (Royal Institute of Technology), Stockholm, Sweden
- Stockholm Brain Institute, Stockholm, Sweden
- Petter Marklund
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Sverker Sikström
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Lars-Göran Nilsson
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Stockholm University, Stockholm, Sweden
|
21
|
Herman PA, Lundqvist M, Lansner A. Nested theta to gamma oscillations and precise spatiotemporal firing during memory retrieval in a simulated attractor network. Brain Res 2013; 1536:68-87. PMID: 23939226; DOI: 10.1016/j.brainres.2013.08.002.
Abstract
Nested oscillations, where the phase of the underlying slow rhythm modulates the power of faster oscillations, have recently attracted considerable research attention as the increased phase-coupling of cross-frequency oscillations has been shown to relate to memory processes. Here we investigate the hypothesis that reactivations of memory patterns, induced by either external stimuli or internal dynamics, are manifested as distributed cell assemblies oscillating at gamma-like frequencies with life-times on a theta scale. For this purpose, we study the spatiotemporal oscillatory dynamics of a previously developed meso-scale attractor network model as a correlate of its memory function. The focus is on a hierarchical nested organization of neural oscillations in delta/theta (2-5 Hz) and gamma (25-35 Hz) frequency bands, and in some conditions even in the lower alpha band (8-12 Hz), which emerge in the synthesized field potentials during attractor memory retrieval. We also examine spiking behavior of the network in close relation to oscillations. Despite highly irregular firing during memory retrieval and random connectivity within each cell assembly, we observe precise spatiotemporal firing patterns that repeat across memory activations at a rate higher than expected from random firing. In contrast to earlier studies aimed at modeling neural oscillations, our attractor memory network allows us to elaborate on the functional context of emerging rhythms and discuss their relevance. We provide support for the hypothesis that the dynamics of coherent delta/theta oscillations constitute an important aspect of the formation and replay of neuronal assemblies. This article is part of a Special Issue entitled Neural Coding 2012.
Affiliation(s)
- Pawel Andrzej Herman
- KTH Royal Institute of Technology and Stockholm University, Department of Computational Biology, Sweden
|
22
|
Lundqvist M, Herman P, Lansner A. Effect of prestimulus alpha power, phase, and synchronization on stimulus detection rates in a biophysical attractor network model. J Neurosci 2013; 33:11817-24. PMID: 23864671; PMCID: PMC3722510; DOI: 10.1523/jneurosci.5155-12.2013.
Abstract
Spontaneous oscillations measured by local field potentials, electroencephalograms and magnetoencephalograms exhibit a pronounced peak in the alpha band (8-12 Hz) in humans and primates. Both instantaneous power and phase of these ongoing oscillations have commonly been observed to correlate with psychophysical performance in stimulus detection tasks. We use a novel model-based approach to study the effect of prestimulus oscillations on detection rate. A previously developed biophysically detailed attractor network exhibits spontaneous oscillations in the alpha range before a stimulus is presented and transiently switches to gamma-like oscillations on successful detection. We demonstrate that both phase and power of the ongoing alpha oscillations modulate the probability of such state transitions. The power can either positively or negatively correlate with the detection rate, in agreement with experimental findings, depending on the underlying neural mechanism modulating the oscillatory power. Furthermore, the spatially distributed alpha oscillators of the network can be synchronized by global nonspecific weak excitatory signals. These synchronization events lead to transient increases in alpha-band power and render the network sensitive to the exact timing of target stimuli, making the alpha cycle function as a temporal mask in line with recent experimental observations. Our results are relevant to several studies that attribute a modulatory role to prestimulus alpha dynamics.
Affiliation(s)
- Mikael Lundqvist
- Department of Computational Biology, Royal Institute of Technology-KTH and Stockholm University, 11421 Stockholm, Sweden
|
23
|
Abstract
An important research topic in neuroscience is the study of mechanisms underlying memory and the estimation of the information capacity of the biological system. In this report we investigate the performance of a modular attractor network with recurrent connections, similar to the cortical long-range connections extending in the horizontal direction. We considered a single learning rule, the BCPNN, which implements a kind of Hebbian learning, and we trained the network with sparse random patterns. The storage capacity was measured experimentally for networks of between 500 and 46,000 units with a constant activity level, gradually diluting the connectivity. We show that the storage capacity of the modular network with patchy connectivity is comparable to the theoretical values estimated for simple associative memories, and furthermore we introduce a new technique for pruning the connectivity, which enhances the storage capacity up to the asymptotic value.
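The capacity-measurement procedure can be illustrated with a much simpler stand-in: a classical Hopfield-style associative memory with a Hebbian outer-product rule and randomly diluted connectivity. This is only a sketch of the experimental protocol; the actual study uses the BCPNN rule, sparse patterns, and modular patchy (not uniformly random) connectivity:

```python
import numpy as np

rng = np.random.default_rng(0)

def count_recalled(n_units, n_patterns, connectivity=1.0):
    """Store random +/-1 patterns with a Hebbian outer-product rule, randomly
    dilute the weight matrix, and count patterns that are one-step fixed points."""
    pats = rng.choice([-1.0, 1.0], size=(n_patterns, n_units))
    w = pats.T @ pats / n_units
    np.fill_diagonal(w, 0.0)
    w *= rng.random((n_units, n_units)) < connectivity  # diluted connectivity
    recalled = 0
    for p in pats:
        out = np.sign(w @ p)
        out[out == 0] = 1.0
        if np.array_equal(out, p):
            recalled += 1
    return recalled

# Well below the classical ~0.14 * N capacity limit, recall is near perfect.
ok = count_recalled(n_units=200, n_patterns=10)
```

Sweeping `n_patterns` until recall collapses, at increasing dilution, is the basic shape of the capacity experiment the abstract describes.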
Affiliation(s)
- Cristina Meli
- Department of Computational Biology (CB), School of Computer Science and Communication (CSC), Royal Institute of Technology (KTH), Stockholm, Sweden
|
24
|
Benjaminsson S, Lansner A. Nexa: a scalable neural simulator with integrated analysis. Network 2012; 23:254-271. PMID: 23116128; DOI: 10.3109/0954898x.2012.737087.
Abstract
Large-scale neural simulations encompass challenges in simulator design, data handling and understanding of simulation output. As the computational power of supercomputers and the size of network models increase, these challenges become even more pronounced. Here we introduce the experimental scalable neural simulator Nexa, for parallel simulation of large-scale neural network models at a high level of biological abstraction and for exploration of the simulation methods involved. It includes firing-rate models and capabilities to build networks using machine learning-inspired methods, e.g., for self-organization of network architecture and for structural plasticity. We show scalability up to the size of the largest machines currently available for a number of model scenarios. We further demonstrate simulator integration with online analysis and real-time visualization as scalable solutions to the data handling challenges.
Affiliation(s)
- Simon Benjaminsson
- Department of Computational Biology, Royal Institute of Technology, 114 21 Stockholm, Sweden.
25
Berthet P, Hellgren-Kotaleski J, Lansner A. Action selection performance of a reconfigurable basal ganglia inspired model with Hebbian-Bayesian Go-NoGo connectivity. Front Behav Neurosci 2012; 6:65. PMID: 23060764. PMCID: PMC3462417. DOI: 10.3389/fnbeh.2012.00065.
Abstract
Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine dependent learning. The dopaminergic signal to striatum, the input stage of the BG, has been commonly described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of the synaptic plasticity in cortico-striatal synapses in the direct and indirect pathway. We developed an abstract computational model of the BG, with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior to biological data as well as to other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three factor Hebbian–Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and the indirect (NoGo) pathway, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model system when different configurations of the Go, NoGo, and RP system were utilized, e.g., using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results show, however, that there is no unique best way to configure this BG model to handle all the tested learning paradigms well. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and on how much time is available.
Affiliation(s)
- Pierre Berthet
- Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden; Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden; Stockholm Brain Institute, Stockholm, Sweden
26
Krishnamurthy P, Silberberg G, Lansner A. A cortical attractor network with Martinotti cells driven by facilitating synapses. PLoS One 2012; 7:e30752. PMID: 22523533. PMCID: PMC3327695. DOI: 10.1371/journal.pone.0030752.
Abstract
The population of pyramidal cells significantly outnumbers the inhibitory interneurons in the neocortex, while at the same time the diversity of interneuron types is much more pronounced. One acknowledged key role of inhibition is to control the rate and patterning of pyramidal cell firing via negative feedback, but most likely the diversity of inhibitory pathways is matched by a corresponding diversity of functional roles. An important distinguishing feature of cortical interneurons is the variability of the short-term plasticity properties of synapses received from pyramidal cells. The Martinotti cell type has recently come under scrutiny due to the distinctly facilitating nature of the synapses they receive from pyramidal cells. This distinguishes these neurons from basket cells and other inhibitory interneurons typically targeted by depressing synapses. A key aspect of the work reported here has been to pinpoint the role of this variability. We first set out to quantitatively reproduce, based on in vitro data, the di-synaptic inhibitory microcircuit connecting two pyramidal cells via one or a few Martinotti cells. In a second step, we embedded this microcircuit in a previously developed attractor memory network model of neocortical layers 2/3. This model network demonstrated that basket cells with their characteristic depressing synapses are the first to discharge when the network enters an attractor state and that Martinotti cells respond with a delay, thereby shifting the excitation-inhibition balance and acting to terminate the attractor state. A parameter sensitivity analysis suggested that Martinotti cells might, in fact, play a dominant role in setting the attractor dwell time and thus cortical speed of processing, with cellular adaptation and synaptic depression having a less prominent role than previously thought.
Affiliation(s)
- Pradeep Krishnamurthy
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Gilad Silberberg
- Nobel Institute of Neurophysiology, Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
27
Berthet P, Lansner A. An abstract model of the basal ganglia, reward learning and action selection. BMC Neurosci 2011. PMCID: PMC3240288. DOI: 10.1186/1471-2202-12-s1-p189.
28
29
Benjaminsson S, Herman P, Lansner A. Odor segmentation and identification in an abstract large-scale model of the mammalian olfactory system. BMC Neurosci 2011. PMCID: PMC3240287. DOI: 10.1186/1471-2202-12-s1-p188.
30
Krishnamurthy P, Silberberg G, Lansner A. A cortical attractor network with dynamic synapses. BMC Neurosci 2011. PMCID: PMC3240286. DOI: 10.1186/1471-2202-12-s1-p187.
31
32
Lansner A. Perceptual and memory functions in a cortex-inspired attractor network model. BMC Neurosci 2011. PMCID: PMC3240167. DOI: 10.1186/1471-2202-12-s1-k2.
33
Affiliation(s)
- Anders Holst
- SANS — Studies of Artificial Neural Systems, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44 Stockholm, Sweden
- Anders Lansner
- SANS — Studies of Artificial Neural Systems, Department of Numerical Analysis and Computing Science, Royal Institute of Technology, S-100 44 Stockholm, Sweden
34
Abstract
A probabilistic artificial neural network is presented. It is of a one-layer, feedback-coupled type with graded units. The learning rule is derived from Bayes's rule. Learning is regarded as collecting statistics and recall as a statistical inference process. Units correspond to events, and connections come out as compatibility coefficients in a logarithmic combination rule. The input to a unit via connections from other active units affects the a posteriori belief in the event in question. The new model is compared to an earlier binary model with respect to storage capacity, noise tolerance, etc., in a content-addressable memory (CAM) task. The new model is a real-time network, and some results on the reaction time for associative recall are given. The scaling of learning and relaxation operations is considered, together with issues related to the representation of information in one-layer artificial neural networks. An extension with complex units is discussed.
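The scheme in this abstract — learning as statistics collection, connections as logarithmic compatibility coefficients, recall as inference over posterior beliefs — is the core of what later became known as BCPNN. The following is a minimal illustrative sketch, not the paper's exact formulation: it assumes binary training patterns, the standard log-odds weight w_ij = log(P(i,j)/(P(i)P(j))), and a simple thresholded read-out of the posterior support.

```python
import numpy as np

def train_bcpnn(patterns, eps=1e-8):
    """Learning as collecting statistics: unit and pairwise activation
    probabilities are estimated from the binary training patterns."""
    X = np.asarray(patterns, dtype=float)
    p_i = X.mean(axis=0) + eps              # P(unit i active)
    p_ij = (X.T @ X) / len(X) + eps         # P(units i and j co-active)
    bias = np.log(p_i)                      # a priori log-belief in each event
    W = np.log(p_ij / np.outer(p_i, p_i))   # compatibility coefficients
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W, bias

def recall(W, bias, cue, steps=5):
    """Recall as statistical inference: input from active units updates
    each unit's a posteriori log-belief; iterate to a fixed point."""
    x = np.asarray(cue, dtype=float)
    for _ in range(steps):
        support = bias + W @ x              # log of posterior support
        x = (support > 0.0).astype(float)   # thresholded read-out of belief
    return x

# Content-addressable memory demo: three disjoint sparse patterns.
patterns = np.zeros((3, 12))
for k in range(3):
    patterns[k, 4 * k: 4 * k + 4] = 1.0
W, bias = train_bcpnn(patterns)
noisy = patterns[0].copy()
noisy[3] = 0.0                              # degrade the cue
restored = recall(W, bias, noisy)           # pattern completion
```

With the disjoint patterns above, the degraded cue is completed back to the stored pattern: co-active units reinforce each other via positive log-odds weights, while units from other patterns receive strongly negative support.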
Affiliation(s)
- Anders Lansner
- Dept. of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden
- Örjan Ekeberg
- Dept. of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden
35
Auffarth B, Kaplan B, Lansner A. Map formation in the olfactory bulb by axon guidance of olfactory neurons. Front Syst Neurosci 2011; 5:84. PMID: 22013417. PMCID: PMC3190187. DOI: 10.3389/fnsys.2011.00084.
Abstract
The organization of representations in the brain has been observed to locally reflect subspaces of inputs that are relevant to behavioral or perceptual feature combinations, such as in areas receptive to lower and higher-order features in the visual system. The early olfactory system developed highly plastic mechanisms, and convergent evidence indicates that projections from primary neurons converge onto the glomerular level of the olfactory bulb (OB) to form a code composed of continuous spatial zones that are differentially active for particular physico-chemical feature combinations, some of which are known to trigger behavioral responses. In a model study of the early human olfactory system, we derive a glomerular organization based on a set of real-world, biologically relevant stimuli, a distribution of receptors that each respond to a set of odorants of similar ranges of molecular properties, and a mechanism of axon guidance based on activity. Apart from demonstrating activity-dependent glomeruli formation and reproducing the relationship of glomerular recruitment with concentration, we show that glomerular responses reflect similarities between human odor category perceptions and, further, that a spatial code provides a better correlation with these perceptions than a distributed population code. These results are consistent with evidence of functional compartmentalization in the OB and could suggest a function for the bulb in the encoding of perceptual dimensions.
Affiliation(s)
- Benjamin Auffarth
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
36
Kaplan B, Benjaminsson S, Lansner A. A large-scale model of the three first stages of the mammalian olfactory system implemented with spiking neurons. BMC Neurosci 2011. PMCID: PMC3240284. DOI: 10.1186/1471-2202-12-s1-p185.
37
Abstract
This study proposes a computational model for attentional blink or “blink of the mind,” a phenomenon where a human subject misses perception of a later expected visual pattern as two expected visual patterns are presented less than 500 ms apart. A neocortical patch modeled as an attractor network is stimulated with a sequence of 14 patterns 100 ms apart, two of which are expected targets. Patterns that become active attractors are considered recognized. A neocortical patch is represented as a square matrix of hypercolumns, each containing a set of minicolumns with synaptic connections within and across both minicolumns and hypercolumns. Each minicolumn consists of locally connected layer 2/3 pyramidal cells with interacting basket cells and layer 4 pyramidal cells for input stimulation. All neurons are implemented using the Hodgkin–Huxley multi-compartmental cell formalism and include calcium dynamics, and they interact via saturating and depressing AMPA/NMDA and GABAA synapses. Stored patterns are encoded with global connectivity of minicolumns across hypercolumns, and active patterns compete as the result of lateral inhibition in the network. Stored patterns were stimulated over time intervals to create attractor interference measurable with synthetic spike traces. This setup corresponds to item presentations in human visual attentional blink studies. Stored target patterns were depolarized while distractor patterns were hyperpolarized to represent expectation of items in working memory. Simulations replicated the basic attentional blink phenomena and showed a reduced blink when targets were more salient. Studies on the inhibitory effect of benzodiazepines on attentional blink in human subjects were compared with neocortical simulations where the GABAA receptor conductance and decay time were increased. Simulations showed increases in the attentional blink duration, agreeing with observations in human studies. In addition, sensitivity analysis was performed on key parameters of the model, including Ca2+-gated K+ channel conductance, synaptic depression, GABAA channel conductance and the NMDA/AMPA ratio of charge entry.
Affiliation(s)
- David N Silverstein
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden
38
Brüderle D, Petrovici MA, Vogginger B, Ehrlich M, Pfeil T, Millner S, Grübl A, Wendt K, Müller E, Schwartz MO, de Oliveira DH, Jeltsch S, Fieres J, Schilling M, Müller P, Breitwieser O, Petkov V, Muller L, Davison AP, Krishnamurthy P, Kremkow J, Lundqvist M, Muller E, Partzsch J, Scholze S, Zühl L, Mayr C, Destexhe A, Diesmann M, Potjans TC, Lansner A, Schüffny R, Schemmel J, Meier K. A comprehensive workflow for general-purpose neural modeling with highly configurable neuromorphic hardware systems. Biol Cybern 2011; 104:263-296. PMID: 21618053. DOI: 10.1007/s00422-011-0435-9.
Abstract
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
Affiliation(s)
- Daniel Brüderle
- Kirchhoff Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany.
39
Lundqvist M, Herman P, Lansner A. Theta and gamma power increases and alpha/beta power decreases with memory load in an attractor network model. J Cogn Neurosci 2011; 23:3008-20. PMID: 21452933. DOI: 10.1162/jocn_a_00029.
Abstract
Changes in oscillatory brain activity are strongly correlated with performance in cognitive tasks, and modulations in specific frequency bands are associated with working memory tasks. Mesoscale network models allow the study of oscillations as an emergent feature of neuronal activity. Here we extend a previously developed attractor network model, shown to faithfully reproduce single-cell activity during retention and memory recall, with synaptic augmentation. This enables the network to function as a multi-item working memory by cyclic reactivation of up to six items. The reactivation happens at theta frequency, consistent with recent experimental findings, with increasing theta power for each additional item loaded in the network's memory. Furthermore, each memory reactivation is associated with gamma oscillations. Thus, single-cell spike trains as well as gamma oscillations in local groups are nested in the theta cycle. The network also exhibits an idling rhythm in the alpha/beta band associated with a noncoding global attractor. Taken together, the resulting effect is increasing theta and gamma power and decreasing alpha/beta power with growing working memory load, making the network mechanisms involved a plausible explanation for this often-reported behavior.
Affiliation(s)
- Mikael Lundqvist
- Royal Institute of Technology (KTH), Sweden, and Stockholm University, Sweden.
40
Fonollosa J, Gutierrez-Galvez A, Lansner A, Martinez D, Rospars J, Beccherelli R, Perera A, Pearce T, Vershure P, Persaud K, Marco S. Biologically Inspired Computation for Chemical Sensing. Procedia Comput Sci 2011. DOI: 10.1016/j.procs.2011.09.066.
41
Benjaminsson S, Fransson P, Lansner A. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state FMRI. Front Syst Neurosci 2010; 4. PMID: 20721313. PMCID: PMC2922939. DOI: 10.3389/fnsys.2010.00034.
Abstract
Non-parametric, data-driven analysis techniques can be used to study datasets with few assumptions about the data and the underlying experiment. Variants of independent component analysis (ICA) have been the methods most commonly used on fMRI data, e.g., in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis at the voxel level. It has good scalability properties, and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens the way for rapid analysis and visualization of the data at different spatial levels, as well as for automatically finding a suitable number of decomposition components.
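The pipeline this abstract describes — pairwise mutual information between voxel time series, a derived distance matrix, and a lower-dimensional embedding by multidimensional scaling in which clustering is then performed — can be sketched as below. The histogram-based MI estimator, the MI-to-distance conversion, and the toy signals are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """MI between two time series, estimated from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mi_distance_matrix(signals, bins=8):
    """Distance matrix over all signals: high MI -> small distance."""
    n = len(signals)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi = mutual_information(signals[i], signals[j], bins)
            D[i, j] = D[j, i] = 1.0 / (1.0 + mi)
    return D

def classical_mds(D, dims=2):
    """Embed points in a low-dimensional space preserving distances."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]        # top eigen-directions
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy "voxels": two groups of three signals, each driven by one latent source.
rng = np.random.default_rng(0)
t = rng.standard_normal((2, 2000))
signals = np.vstack([t[0] + 0.1 * rng.standard_normal((3, 2000)),
                     t[1] + 0.1 * rng.standard_normal((3, 2000))])
D = mi_distance_matrix(signals)
embedding = classical_mds(D)                     # clustering happens here
```

In the embedding, signals sharing a latent source fall close together, so any standard clustering algorithm applied to `embedding` recovers the two groups, mirroring how resting-state networks emerge in the paper's analysis.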
42
Benjaminsson S, Lansner A. Adaptive sensor drift counteraction by a modular neural network. Neurosci Res 2010. DOI: 10.1016/j.neures.2010.07.2508.
43
Çürüklü B, Lansner A. Configuration-specific facilitation phenomena explained by layer 2/3 summation pools in V1. BMC Neurosci 2009. DOI: 10.1186/1471-2202-10-s1-p177.
44
Lansner A. Associative memory models: from the cell-assembly theory to biophysically detailed cortex simulations. Trends Neurosci 2009; 32:178-86. PMID: 19187979. DOI: 10.1016/j.tins.2008.12.002.
Abstract
The second half of the past century saw the emergence of a theory of cortical associative memory function originating in Donald Hebb's hypotheses on activity-dependent synaptic plasticity and cell-assembly formation and dynamics. This conceptual framework has today developed into a theory of attractor memory that brings together many experimental observations from different sources and levels of investigation into computational models displaying information-processing capabilities such as efficient associative memory and holistic perception. Here, we outline a development that might eventually lead to a neurobiologically grounded theory of cortical associative memory.
Affiliation(s)
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, 114 21 Stockholm, Sweden.
45
Benjaminsson S, Fransson P, Lansner A. A novel model-free fMRI data analysis technique based on clustering in a mutual information space. Front Neuroinform 2009. DOI: 10.3389/conf.neuro.11.2009.08.028.
46
47
Sandström M, Proschinger T, Lansner A. Fuzzy interval representation of olfactory stimulus concentration in an olfactory glomerulus model. BMC Neurosci 2008. DOI: 10.1186/1471-2202-9-s1-p123.
48
Abstract
Is there any hope of achieving a thorough understanding of higher functions such as perception, memory, thought and emotion, or is the stunning complexity of the brain a barrier that will limit such efforts for the foreseeable future? In this perspective, we discuss methods for handling complexity and approaches to model building, and we point to detailed large-scale models as a new contribution to the toolbox of the computational neuroscientist. We elucidate some aspects that distinguish large-scale models and some of the technological challenges they entail.
49
Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower JM, Diesmann M, Morrison A, Goodman PH, Harris FC, Zirpe M, Natschläger T, Pecevski D, Ermentrout B, Djurfeldt M, Lansner A, Rochel O, Vieville T, Muller E, Davison AP, El Boustani S, Destexhe A. Simulation of networks of spiking neurons: a review of tools and strategies. J Comput Neurosci 2007; 23:349-98. PMID: 17629781. PMCID: PMC2638500. DOI: 10.1007/s10827-007-0038-6.
Abstract
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
50
Abstract
In this letter, we study an abstract model of neocortex based on its modularization into mini- and hypercolumns. We discuss a full-scale instance of this model and connect its network properties to the underlying biological properties of neurons in cortex. In particular, we discuss how the biological constraints put on the network determine the network's performance in terms of storage capacity. We show that a network instantiating the model scales well given the biologically constrained parameters on activity and connectivity, which makes this network interesting also as an engineered system. In this model, the minicolumns are grouped into hypercolumns that can be active or quiescent, and the model predicts that only a few percent of the hypercolumns should be active at any one time. With this model, we show that at least 20 to 30 pyramidal neurons should be aggregated into a minicolumn and at least 50 to 60 minicolumns should be grouped into a hypercolumn in order to achieve high storage capacity.
Affiliation(s)
- Christopher Johansson
- School of Computer Science and Communication, Royal Institute of Technology, SE-100 44 Stockholm, Sweden.