1. Vieth M, Rahimi A, Gorgan Mohammadi A, Triesch J, Ganjtabesh M. Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch. Front Neuroinform 2024; 18:1331220. PMID: 38444756. PMCID: PMC10913591. DOI: 10.3389/fninf.2024.1331220.
Abstract
Spiking neural network simulations are a central tool in Computational Neuroscience, Artificial Intelligence, and Neuromorphic Engineering research. A broad range of simulators and software frameworks for such simulations exist with different target application areas. Among these, PymoNNto is a recent Python-based toolbox for spiking neural network simulations that emphasizes the embedding of custom code in a modular and flexible way. While PymoNNto already supports GPU implementations, its backend relies on NumPy operations. Here we introduce PymoNNtorch, which is natively implemented with PyTorch while retaining PymoNNto's modular design. Furthermore, we demonstrate how changes to the implementations of common network operations in combination with PymoNNtorch's native GPU support can offer speed-up over conventional simulators like NEST, ANNarchy, and Brian 2 in certain situations. Overall, we show how PymoNNto's modular and flexible design in combination with PymoNNtorch's GPU acceleration and optimized indexing operations facilitate research and development of spiking neural networks in the Python programming language.
Affiliation(s)
- Marius Vieth: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Ali Rahimi: Department of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
- Ashena Gorgan Mohammadi: Department of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Mohammad Ganjtabesh: Department of Mathematics, Statistics, and Computer Science, College of Science, University of Tehran, Tehran, Iran
2. Steiner P, Jalalvand A, Birkholz P. Cluster-Based Input Weight Initialization for Echo State Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:7648-7659. PMID: 35120012. DOI: 10.1109/tnnls.2022.3145565.
Abstract
Echo state networks (ESNs) are a special type of recurrent neural network (RNN) in which the input and recurrent connections are traditionally generated randomly, and only the output weights are trained. Despite the recent success of ESNs in various tasks of audio, image, and radar recognition, we postulate that a purely random initialization is not the ideal way of initializing ESNs. The aim of this work is to propose an unsupervised initialization of the input connections using the K-means algorithm on the training data. We show that for a large variety of datasets, this initialization performs equivalently to or better than a randomly initialized ESN while needing significantly fewer reservoir neurons. Furthermore, we discuss how this approach provides the opportunity to estimate a suitable size of the reservoir based on prior knowledge about the data.
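The K-means initialization described here can be sketched in a few lines. The following is an illustrative sketch, not the authors' code: it assumes scikit-learn's KMeans, and the row normalization is an added assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_input_weights(X, n_reservoir, random_state=0):
    """Initialize ESN input weights from K-means centroids of the training
    data instead of drawing them randomly (sketch of the idea in Steiner
    et al.; implementation details here are illustrative)."""
    km = KMeans(n_clusters=n_reservoir, n_init=10,
                random_state=random_state).fit(X)
    W_in = km.cluster_centers_            # one input-weight row per reservoir neuron
    # Normalize each row so every neuron receives comparably scaled drive.
    W_in /= np.linalg.norm(W_in, axis=1, keepdims=True) + 1e-12
    return W_in

# Usage: X has shape (n_samples, n_features); the reservoir input is W_in @ x.
X = np.random.rand(200, 8)
W_in = kmeans_input_weights(X, n_reservoir=16)
```

Because each reservoir neuron's input weights match a cluster centroid, neurons respond selectively to data regions, which is also what allows the cluster count (reservoir size) to be chosen from the data.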
3. Yuan Y, Zhu Y, Wang J, Li R, Xu X, Fang T, Huo H, Wan L, Li Q, Liu N, Yang S. Incorporating structural plasticity into self-organization recurrent networks for sequence learning. Front Neurosci 2023; 17:1224752. PMID: 37592946. PMCID: PMC10427342. DOI: 10.3389/fnins.2023.1224752.
Abstract
Introduction: Spiking neural networks (SNNs), inspired by biological neural networks, have received a surge of interest due to their temporal encoding. Biological neural networks are driven by multiple plasticities, including spike-timing-dependent plasticity (STDP), structural plasticity, and homeostatic plasticity, causing network connection patterns and weights to change continuously over the lifecycle. However, it is unclear how these plasticities interact to shape neural networks and affect neural signal processing.
Method: Here, we propose a reward-modulated self-organization recurrent network with structural plasticity (RSRN-SP) to investigate this issue. Specifically, the RSRN-SP uses spikes to encode information and incorporates multiple plasticities, including reward-modulated spike-timing-dependent plasticity (R-STDP), homeostatic plasticity, and structural plasticity. On the one hand, R-STDP, combined with homeostatic plasticity, guides the updating of synaptic weights. On the other hand, structural plasticity simulates the growth and pruning of synaptic connections.
Results and discussion: Extensive experiments on sequential learning tasks, including a counting task, motion prediction, and motion generation, demonstrate the representational ability of the RSRN-SP. Furthermore, the simulations indicate that the characteristics arising from the RSRN-SP are consistent with biological observations.
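Reward modulation of STDP, as used above, is commonly implemented with an eligibility trace: pair-based STDP writes candidate changes into a decaying trace, and a delayed scalar reward converts the trace into an actual weight change. The sketch below is a generic single-synapse R-STDP rule, not the RSRN-SP implementation; the trace formulation and all parameters are illustrative assumptions.

```python
import numpy as np

# Generic single-synapse R-STDP sketch (illustrative parameters).
tau_pre, tau_post, tau_e = 20.0, 20.0, 200.0   # time constants (ms)
A_plus, A_minus, lr = 0.01, 0.012, 1.0

w = 0.5                      # synaptic weight
x_pre = x_post = elig = 0.0  # spike traces and eligibility trace

def step(dt, pre_spike, post_spike, reward):
    global w, x_pre, x_post, elig
    # exponential decay of the spike traces and the eligibility trace
    x_pre *= np.exp(-dt / tau_pre)
    x_post *= np.exp(-dt / tau_post)
    elig *= np.exp(-dt / tau_e)
    if pre_spike:
        elig -= A_minus * x_post   # pre-after-post pairing: depression candidate
        x_pre += 1.0
    if post_spike:
        elig += A_plus * x_pre     # post-after-pre pairing: potentiation candidate
        x_post += 1.0
    # only reward turns the eligibility trace into a weight change
    w = float(np.clip(w + lr * reward * elig * dt, 0.0, 1.0))

step(1.0, True, False, 0.0)            # presynaptic spike at t = 0 ms
for _ in range(4):
    step(1.0, False, False, 0.0)
step(1.0, False, True, 0.0)            # postsynaptic spike at t = 5 ms
for _ in range(44):
    step(1.0, False, False, 0.0)
step(1.0, False, False, 1.0)           # reward at t = 50 ms consolidates the change
```

A pre-before-post pairing followed by positive reward slightly potentiates the synapse; with no reward, the trace simply decays and the weight stays put.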
Affiliation(s)
- Ye Yuan: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Yongtong Zhu: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Jiaqi Wang: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Ruoshi Li: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Xin Xu: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Tao Fang: Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Hong Huo: Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Lihong Wan: Origin Dynamics Intelligent Robot Co., Ltd., Zhengzhou, China
- Qingdu Li: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Na Liu: School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Shiyan Yang: Eco-Environmental Protection Institution, Shanghai Academy of Agricultural Sciences, Shanghai, China
4. Yiling Y, Shapcott K, Peter A, Klon-Lipok J, Xuhui H, Lazar A, Singer W. Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex. Nat Commun 2023; 14:3021. PMID: 37231014. DOI: 10.1038/s41467-023-38587-2.
Abstract
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and is maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors, but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system, it would allow for ultrafast processing of visual scenes.
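Decoding from rank order, as the sequence-order decoders in this study do, can be illustrated with a toy sketch: response latencies are converted to ranks, and a test trial is assigned to the stimulus template whose rank vector correlates best (Spearman correlation, i.e., Pearson correlation of ranks). The data, dimensions, and nearest-template scheme below are invented for illustration and are not the authors' decoder.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stimuli = 30, 4

# Hypothetical latency templates: one characteristic response latency (ms)
# per neuron and stimulus.
templates = rng.uniform(20, 120, size=(n_stimuli, n_neurons))

def ranks(x):
    """Rank positions of the entries of x (0 = earliest latency)."""
    return np.argsort(np.argsort(x))

def decode(trial_latencies, templates):
    """Assign a trial to the template with the highest rank correlation."""
    r = ranks(trial_latencies)
    corrs = [np.corrcoef(r, ranks(t))[0, 1] for t in templates]
    return int(np.argmax(corrs))

# A noisy trial of stimulus 2: jitter shifts absolute latencies but largely
# preserves their rank order, so rank-based decoding still succeeds.
trial = templates[2] + rng.normal(0.0, 5.0, n_neurons)
decoded = decode(trial, templates)
```

The robustness claimed in the abstract corresponds to the fact that rank order is invariant to any monotonic shift or stretch of the absolute latencies.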
Affiliation(s)
- Yang Yiling: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; International Max Planck Research School (IMPRS) for Neural Circuits, 60438 Frankfurt am Main, Germany; Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438 Frankfurt am Main, Germany
- Katharine Shapcott: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany
- Alina Peter: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; International Max Planck Research School (IMPRS) for Neural Circuits, 60438 Frankfurt am Main, Germany; Faculty of Biological Sciences, Goethe-University Frankfurt am Main, 60438 Frankfurt am Main, Germany
- Johanna Klon-Lipok: Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
- Huang Xuhui: Intelligent Science and Technology Academy, China Aerospace Science and Industry Corporation (CASIC), 100144 Beijing, China; Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
- Andreea Lazar: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
- Wolf Singer: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Max Planck Institute for Brain Research, 60438 Frankfurt am Main, Germany
5. Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. PMID: 37293353. PMCID: PMC10244748. DOI: 10.3389/fncom.2023.1136010.
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare such compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, which we show to be equivalent to the error-based losses used in machine learning: the inference latency and the amount of top-down feedback necessary. Moreover, we show that the use of differential Hebbian updates works similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
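The differential Hebbian update described above amounts to a weight change proportional to the presynaptic rate times the temporal derivative of the postsynaptic rate, so a feedback-driven increase in postsynaptic firing potentiates the synapses that were active. A minimal sketch, with dimensions, rates, and learning rate chosen purely for illustration:

```python
import numpy as np

def differential_hebbian(w, r_pre, r_post_before, r_post_after, dt=1.0, eta=0.1):
    """Rate-based differential Hebbian update: dw_ij ∝ (d r_post_i / dt) * r_pre_j."""
    dr_post = (r_post_after - r_post_before) / dt   # discrete derivative of r_post
    return w + eta * np.outer(dr_post, r_pre)

w = np.zeros((2, 3))                    # 2 postsynaptic, 3 presynaptic neurons
r_pre = np.array([1.0, 0.0, 0.5])
r_post_before = np.array([0.2, 0.2])
# apical feedback raises neuron 0's rate and lowers neuron 1's rate
r_post_after = np.array([0.6, 0.1])
w = differential_hebbian(w, r_pre, r_post_before, r_post_after)
```

Synapses onto the neuron whose rate the feedback increased are strengthened in proportion to presynaptic activity, while synapses onto the neuron whose rate decreased are weakened; silent presynaptic inputs are untouched.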
Affiliation(s)
- Pau Vilimelis Aceituno: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland; ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland; ETH AI Center, ETH Zurich, Zurich, Switzerland
6. Stöber TM, Batulin D, Triesch J, Narayanan R, Jedlicka P. Degeneracy in epilepsy: multiple routes to hyperexcitable brain circuits and their repair. Commun Biol 2023; 6:479. PMID: 37137938. PMCID: PMC10156698. DOI: 10.1038/s42003-023-04823-0.
Abstract
Due to its complex and multifaceted nature, developing effective treatments for epilepsy is still a major challenge. To deal with this complexity we introduce the concept of degeneracy to the field of epilepsy research: the ability of disparate elements to cause an analogous function or malfunction. Here, we review examples of epilepsy-related degeneracy at multiple levels of brain organisation, ranging from the cellular to the network and systems level. Based on these insights, we outline new multiscale and population modelling approaches to disentangle the complex web of interactions underlying epilepsy and to design personalised multitarget therapies.
Affiliation(s)
- Tristan Manfred Stöber: Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; Institute for Neural Computation, Faculty of Computer Science, Ruhr University Bochum, 44801 Bochum, Germany; Epilepsy Center Frankfurt Rhine-Main, Department of Neurology, Goethe University, 60590 Frankfurt, Germany
- Danylo Batulin: Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany; CePTER - Center for Personalized Translational Epilepsy Research, Goethe University, 60590 Frankfurt, Germany; Faculty of Computer Science and Mathematics, Goethe University, 60486 Frankfurt, Germany
- Jochen Triesch: Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany
- Rishikesh Narayanan: Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India
- Peter Jedlicka: ICAR3R - Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University Giessen, 35390 Giessen, Germany; Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University, 60590 Frankfurt am Main, Germany
7. Grosu GF, Hopp AV, Moca VV, Bârzan H, Ciuparu A, Ercsey-Ravasz M, Winkel M, Linde H, Mureșan RC. The fractal brain: scale-invariance in structure and dynamics. Cereb Cortex 2023; 33:4574-4605. PMID: 36156074. PMCID: PMC10110456. DOI: 10.1093/cercor/bhac363.
Abstract
The past 40 years have witnessed extensive research on fractal structure and scale-free dynamics in the brain. Although considerable progress has been made, a comprehensive picture has yet to emerge, and needs further linking to a mechanistic account of brain function. Here, we review these concepts, connecting observations across different levels of organization, from both a structural and functional perspective. We argue that, paradoxically, the level of cortical circuits is the least understood from a structural point of view and perhaps the best studied from a dynamical one. We further link observations about scale-freeness and fractality with evidence that the environment provides constraints that may explain the usefulness of fractal structure and scale-free dynamics in the brain. Moreover, we discuss evidence that behavior exhibits scale-free properties, likely emerging from similarly organized brain dynamics, enabling an organism to thrive in an environment that shares the same organizational principles. Finally, we review the sparse evidence for and try to speculate on the functional consequences of fractality and scale-freeness for brain computation. These properties may endow the brain with computational capabilities that transcend current models of neural computation and could hold the key to unraveling how the brain constructs percepts and generates behavior.
Affiliation(s)
- George F Grosu: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Vasile V Moca: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Harald Bârzan: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Andrei Ciuparu: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Maria Ercsey-Ravasz: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Physics, Babes-Bolyai University, Str. Mihail Kogalniceanu 1, 400084 Cluj-Napoca, Romania
- Mathias Winkel: Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Helmut Linde: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Raul C Mureșan: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
8. Payvand M, Moro F, Nomura K, Dalgaty T, Vianello E, Nishi Y, Indiveri G. Self-organization of an inhomogeneous memristive hardware for sequence learning. Nat Commun 2022; 13:5793. PMID: 36184665. PMCID: PMC9527242. DOI: 10.1038/s41467-022-33476-6.
Abstract
Learning is a fundamental component of creating intelligent machines. Biological intelligence orchestrates synaptic and neuronal learning at multiple time scales to self-organize populations of neurons for solving complex tasks. Inspired by this, we design and experimentally demonstrate an adaptive hardware architecture, the Memristive Self-organizing Spiking Recurrent Neural Network (MEMSORN). MEMSORN incorporates resistive memory (RRAM) in its synapses and neurons, which configure their state based on Hebbian and homeostatic plasticity, respectively. For the first time, we derive these plasticity rules directly from statistical measurements of our fabricated RRAM-based neurons and synapses. These "technologically plausible" learning rules exploit the intrinsic variability of the devices and improve the accuracy of the network on a sequence learning task by 30%. Finally, we compare the performance of MEMSORN to a fully randomly set-up spiking recurrent network on the same task, showing that self-organization improves the accuracy by more than 15%. This work demonstrates the importance of the device-circuit-algorithm co-design approach for implementing brain-inspired computing hardware.
One gap between neuro-inspired computing and its applications lies in the intrinsic variability of the devices. Here, Payvand et al. suggest a technologically plausible co-design of the hardware architecture that takes into account and exploits the physics of memristors.
Affiliation(s)
- Melika Payvand: Institute for Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Filippo Moro: Institute for Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland; Université Grenoble Alpes, CEA, Leti, F-38000 Grenoble, France
- Kumiko Nomura: Corporate Research & Development Center, Toshiba Corporation, Kawasaki, Japan
- Thomas Dalgaty: Université Grenoble Alpes, CEA, Leti, F-38000 Grenoble, France
- Elisa Vianello: Université Grenoble Alpes, CEA, Leti, F-38000 Grenoble, France
- Yoshifumi Nishi: Corporate Research & Development Center, Toshiba Corporation, Kawasaki, Japan
- Giacomo Indiveri: Institute for Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
9. Leibold C. Neural kernels for recursive support vector regression as a model for episodic memory. Biol Cybern 2022; 116:377-386. PMID: 35348879. PMCID: PMC9170657. DOI: 10.1007/s00422-022-00926-9.
Abstract
Retrieval of episodic memories requires intrinsic reactivation of neuronal activity patterns, and the content of the memories is assumed to be stored in synaptic connections. This paper proposes a theory in which these synaptic connections are specifically those that convey the temporal order information contained in the sequences of a neuronal reservoir to the sensory-motor cortical areas that give rise to the subjective impression of retrieving sensory-motor events. The theory is based on a novel recursive version of support vector regression that allows for efficient continuous learning, limited only by the representational capacity of the reservoir. The paper argues that hippocampal theta sequences are a potential neural substrate underlying this reservoir. The theory is consistent with confabulations and post hoc alterations of existing memories.
Affiliation(s)
- Christian Leibold: Fakultät für Biologie & Bernstein Center Freiburg, Albert-Ludwigs-Universität Freiburg, Hansastr. 9a, 79104 Freiburg, Germany
10. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 2022; 18:e1010233. PMID: 35727857. PMCID: PMC9273101. DOI: 10.1371/journal.pcbi.1010233.
Abstract
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
Essentially all data processed by mammals and many other living organisms is sequential. This holds true for all types of sensory input data as well as motor output activity. Being able to form memories of such sequential data, to predict future sequence elements, and to replay learned sequences is a necessary prerequisite for survival. It has been hypothesized that sequence learning, prediction and replay constitute the fundamental computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) constitutes an abstract powerful algorithm implementing this form of computation and has been proposed to serve as a model of neocortical processing. In this study, we are reformulating this algorithm in terms of known biological ingredients and mechanisms to foster the verifiability of the HTM hypothesis based on electrophysiological and behavioral data. The proposed model learns continuously in an unsupervised manner by biologically plausible, local plasticity mechanisms, and successfully predicts and replays complex sequences. Apart from establishing contact to biology, the study sheds light on the mechanisms determining at what speed we can process sequences and provides an explanation of fast sequence replay observed in the hippocampus and in the neocortex.
Affiliation(s)
- Younes Bouhadjar: Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany; RWTH Aachen University, Aachen, Germany
- Dirk J. Wouters: Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, & Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff: Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
11. Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:2023832118. PMID: 34772802. DOI: 10.1073/pnas.2023832118.
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
12. Hu X, Zeng Z. Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding. Neural Comput 2021; 34:104-137. PMID: 34758484. DOI: 10.1162/neco_a_01453.
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
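The sparse coding principle invoked here can be illustrated independently of the authors' spiking E/I network, for example with the classic ISTA (iterative shrinkage-thresholding) iteration on a fixed dictionary. This is a generic sketch of sparse coding, not the paper's model; the random dictionary, sparsity penalty, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 128          # overcomplete code: 128 neurons for 64 pixels

D = rng.standard_normal((n_pixels, n_neurons))
D /= np.linalg.norm(D, axis=0)         # unit-norm dictionary atoms

def sparse_code(x, D, lam=0.1, n_iter=200):
    """ISTA: minimize 0.5*||x - D a||^2 + lam*||a||_1 over the code a."""
    L = np.linalg.norm(D, ord=2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

# A signal dominated by a single dictionary atom plus a little noise:
x = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(n_pixels)
a = sparse_code(x, D)
```

The L1 penalty drives most of the 128 coefficients to exactly zero, so only a handful of "neurons" respond to the input, which is the energy-efficiency argument the abstract refers to.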
Affiliation(s)
- Xiaolin Hu
- Department of Computer Science and Technology, State Key Laboratory of Intelligent Technology and Systems, BNRist, Tsinghua Laboratory of Brain and Intelligence, and IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Zhigang Zeng
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and Key Laboratory of Image Processing and Intelligent Control, Education Ministry of China, Wuhan 430074, China
13. Vieth M, Stöber TM, Triesch J. PymoNNto: A Flexible Modular Toolbox for Designing Brain-Inspired Neural Networks. Front Neuroinform 2021; 15:715131. [PMID: 34790108] [PMCID: PMC8591031] [DOI: 10.3389/fninf.2021.715131]
Abstract
The Python Modular Neural Network Toolbox (PymoNNto) provides a versatile and adaptable Python-based framework to develop and investigate brain-inspired neural networks. In contrast to other commonly used simulators such as Brian 2 and NEST, PymoNNto imposes only minimal restrictions on implementation and execution. The basic structure of PymoNNto consists of one network class with several neuron- and synapse-groups. The behaviour of each group can be flexibly defined by exchangeable modules. The implementation of these modules is up to the user and only limited by Python itself. Behaviours can be implemented in Python, NumPy, TensorFlow, and other libraries to perform computations on CPUs and GPUs. PymoNNto comes with convenient high-level behaviour modules, allowing differential equation-based implementations similar to Brian 2, and an adaptable modular Graphical User Interface for real-time observation and modification of the simulated network and its parameters.
Affiliation(s)
- Marius Vieth
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
14. Visual exposure enhances stimulus encoding and persistence in primary cortex. Proc Natl Acad Sci U S A 2021; 118:e2105276118. [PMID: 34663727] [PMCID: PMC8639370] [DOI: 10.1073/pnas.2105276118]
Abstract
Experience shapes sensory responses, already at the earliest stages of cortical processing. We provide evidence that, in the primary visual cortex of anesthetized cats, brief repetitive exposure to a set of simple, abstract stimuli expands the range and decreases the variability of neuronal responses that persist after stimulus offset. These refinements increase the stimulus-specific clustering of neuronal population responses and result in a more efficient encoding of both stimulus identity and stimulus structure, thus potentially benefiting simple readouts in higher cortical areas. Similar results can be achieved via local plasticity mechanisms in recurrent networks, through self-organized refinements of internal dynamics that do not require changes in firing amplitudes.

The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
15. Nishi Y, Nomura K, Marukame T, Mizushima K. Stochastic binary synapses having sigmoidal cumulative distribution functions for unsupervised learning with spike timing-dependent plasticity. Sci Rep 2021; 11:18282. [PMID: 34521895] [PMCID: PMC8440757] [DOI: 10.1038/s41598-021-97583-y]
Abstract
Spike timing-dependent plasticity (STDP), which is widely studied as a fundamental synaptic update rule for neuromorphic hardware, requires precise control of continuous weights. From the viewpoint of hardware implementation, a simplified update rule is desirable. Although simplified STDP with stochastic binary synapses was proposed previously, we find that it leads to degradation of memory maintenance during learning, which is unfavourable for unsupervised online learning. In this work, we propose a stochastic binary synaptic model where the cumulative probability of the weight change evolves in a sigmoidal fashion with potentiation or depression trials, which can be implemented using a pair of switching devices consisting of serially connected multiple binary memristors. As a benchmark test we perform simulations of unsupervised learning of MNIST images with a two-layer network and show that simplified STDP in combination with this model can outperform conventional rules with continuous weights not only in memory maintenance but also in recognition accuracy. Our method achieves 97.3% in recognition accuracy, which is higher than that reported with standard STDP in the same framework. We also show that the high performance of our learning rule is robust against device-to-device variability of the memristor's probabilistic behaviour.
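To illustrate why serially connected binary devices give a sigmoidal cumulative distribution, consider a toy model (our own sketch under simplifying assumptions, not the paper's exact device physics): suppose a weight changes only once all of the binary memristors in a chain have switched, each switching independently with some probability per trial.

```python
def cumulative_switch_prob(n_trials: int, n_devices: int, p: float) -> float:
    """Toy model (our assumption, not the paper's exact device model):
    the weight flips only after every one of n_devices serially connected
    binary memristors has switched, each switching independently with
    probability p on each potentiation (or depression) trial."""
    per_device = 1.0 - (1.0 - p) ** n_trials  # a given device has switched by trial n
    return per_device ** n_devices            # all devices have switched

# For n_devices > 1 the curve is S-shaped in the trial number: nearly flat
# at first, then steep, then saturating. A single device instead gives a
# saturating exponential that rises steeply from the first trial.
probs = [cumulative_switch_prob(n, n_devices=5, p=0.1) for n in (1, 10, 30, 60)]
```

The flat initial portion of the sigmoid is what protects already-stored memories from being overwritten by a few stray pulses, which is the memory-maintenance property the abstract highlights.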
Affiliation(s)
- Yoshifumi Nishi
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan.
- Kumiko Nomura
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Takao Marukame
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
- Koichi Mizushima
- Frontier Research Laboratory, Corporate R&D Center, Toshiba Corporation, 1, Komukai-Toshiba-Cho, Saiwai-ku, Kawasaki, 212-8582, Japan
16. Singer W. Recurrent dynamics in the cerebral cortex: Integration of sensory evidence with stored knowledge. Proc Natl Acad Sci U S A 2021; 118:e2101043118. [PMID: 34362837] [PMCID: PMC8379985] [DOI: 10.1073/pnas.2101043118]
Abstract
Current concepts of sensory processing in the cerebral cortex emphasize serial extraction and recombination of features in hierarchically structured feed-forward networks in order to capture the relations among the components of perceptual objects. These concepts are implemented in convolutional deep learning networks and have been validated by the astounding similarities between the functional properties of artificial systems and their natural counterparts. However, cortical architectures also display an abundance of recurrent coupling within and between the layers of the processing hierarchy. This massive recurrence gives rise to highly complex dynamics whose putative function is poorly understood. Here a concept is proposed that assigns specific functions to the dynamics of cortical networks and combines, in a unifying approach, the respective advantages of recurrent and feed-forward processing. It is proposed that the priors about regularities of the world are stored in the weight distributions of feed-forward and recurrent connections and that the high-dimensional, dynamic space provided by recurrent interactions is exploited for computations. These comprise the ultrafast matching of sensory evidence with the priors covertly represented in the correlation structure of spontaneous activity and the context-dependent grouping of feature constellations characterizing natural objects. The concept posits that information is encoded not only in the discharge frequency of neurons but also in the precise timing relations among the discharges. Results of experiments designed to test the predictions derived from this concept support the hypothesis that cerebral cortex exploits the high-dimensional recurrent dynamics for computations serving predictive coding.
Affiliation(s)
- Wolf Singer
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main 60438, Germany
- Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- Frankfurt Institute for Advanced Studies, Frankfurt am Main 60438, Germany
17. Bird AD, Jedlicka P, Cuntz H. Dendritic normalisation improves learning in sparsely connected artificial neural networks. PLoS Comput Biol 2021; 17:e1009202. [PMID: 34370727] [PMCID: PMC8407571] [DOI: 10.1371/journal.pcbi.1009202]
Abstract
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron's afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
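The normalisation described above is mathematically simple; a minimal NumPy sketch (the function name and the binary-mask representation of sparse connectivity are our own assumptions) of dividing each unit's afferent weights by the number of its contacts:

```python
import numpy as np

def dendritic_normalisation(weights: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Divide each unit's afferent weights by its number of incoming
    contacts, per the rule described in the abstract (sketch; names ours).

    weights: (n_out, n_in) weight matrix
    mask:    (n_out, n_in) boolean connectivity mask (True = contact exists)
    """
    n_afferent = mask.sum(axis=1, keepdims=True)      # contact count per unit
    n_afferent = np.maximum(n_afferent, 1)            # guard units with no input
    return np.where(mask, weights / n_afferent, 0.0)  # zero where no contact

# A unit with more afferent contacts has each contact scaled down accordingly.
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
M = W > 0
W_norm = dendritic_normalisation(W, M)  # rows scaled by 1/2 and 1/3
```

Because the divisor is the contact count rather than any weight magnitude, the rule depends only on the connectivity pattern and leaves the relative strengths within a unit's arbour untouched.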
Affiliation(s)
- Alex D. Bird
- Ernst Strüngmann Institute for Neuroscience (ESI) in co-operation with Max Planck Society, Frankfurt, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Peter Jedlicka
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
- ICAR3R-Interdisciplinary Centre for 3Rs in Animal Research, Faculty of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Hermann Cuntz
- Ernst Strüngmann Institute for Neuroscience (ESI) in co-operation with Max Planck Society, Frankfurt, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
18. Miner D, Wörgötter F, Tetzlaff C, Fauth M. Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission. Biology (Basel) 2021; 10:577. [PMID: 34202473] [PMCID: PMC8301101] [DOI: 10.3390/biology10070577]
Abstract
Simple Summary: Information processing in the brain takes place at multiple stages, each of which is a local network of neurons. The long-range connections between these network stages are sparse and do not change over time. Thus, within each stage information arrives at a sparse subset of input neurons and must be routed to a sparse subset of output neurons. In this theoretical work, we investigate how networks achieve this routing in a self-organized manner without losing information. We show that biologically inspired self-organization entails that input information is distributed to all neurons in the network by strengthening many synapses in the local networks. Thus, after successful self-organization, input information can be read out and decoded from a small number of outputs. We also show that this way of self-organization can still be more energy efficient than creating more long-range in- and output connections.

Abstract: Our brains process information using a layered hierarchical network architecture, with abundant connections within each layer and sparse long-range connections between layers. As these long-range connections are mostly unchanged after development, each layer has to locally self-organize in response to new inputs to enable information routing between the sparse in- and output connections. Here we demonstrate that this can be achieved by a well-established model of cortical self-organization based on a well-orchestrated interplay between several plasticity processes. After this self-organization, stimuli conveyed by sparse inputs can be rapidly read out from a layer using only very few long-range connections. To achieve this information routing, the neurons that are stimulated form feed-forward projections into the unstimulated parts of the same layer, recruiting more neurons to represent the stimulus. Hereby, the plasticity processes ensure that each neuron receives projections from and responds to only one stimulus, such that the network is partitioned into parts with different preferred stimuli. Along this line, we show that the relation between the network activity and connectivity self-organizes into a biologically plausible regime. Finally, we argue how the emerging connectivity may minimize the metabolic cost for maintaining a network structure that rapidly transmits stimulus information despite sparse input and output connectivity.
19. Gu L, Wu R. Robust cortical criticality and diverse dynamics resulting from functional specification. Phys Rev E 2021; 103:042407. [PMID: 34005915] [DOI: 10.1103/physreve.103.042407]
Abstract
Despite the recognition of the layered structure and evident criticality of the cortex, how the specification of input, output, and computational layers affects self-organized criticality has not been much explored. By constructing heterogeneous structures with a well-accepted model of leaky neurons, we find that this specification can lead to robust criticality that is rather insensitive to the strength of external stimuli. This naturally unifies the adaptation to strong inputs without extra synaptic plasticity mechanisms. A low degree of recurrence constitutes an alternative explanation for subcriticality, besides high-frequency inputs. Unlike fully recurrent networks, where external stimuli always render subcriticality, the dynamics of networks with sufficient feedforward connections can be driven to criticality and supercriticality. These findings indicate that functional and structural specification, and their interplay with external stimuli, are of crucial importance for the network dynamics. The robust criticality puts forward networks of leaky neurons as promising platforms for realizing artificial neural networks that work in the vicinity of critical points.
Affiliation(s)
- Lei Gu
- Department of Physics and Astronomy, University of California, Irvine, California 92697, USA
- Ruqian Wu
- Department of Physics and Astronomy, University of California, Irvine, California 92697, USA
20. Cellular connectomes as arbiters of local circuit models in the cerebral cortex. Nat Commun 2021; 12:2785. [PMID: 33986261] [PMCID: PMC8119988] [DOI: 10.1038/s41467-021-22856-z]
Abstract
With cellular-resolution connectivity maps (connectomes) of the mammalian nervous system becoming available, the question arises how informative such massive connectomic data can be for distinguishing local circuit models in the mammalian cerebral cortex. Here, we investigated whether cellular-resolution connectomic data can in principle allow model discrimination for local circuit modules in layer 4 of mouse primary somatosensory cortex. We used approximate Bayesian model selection based on a set of simple connectome statistics to compute the posterior probability over proposed models given a to-be-measured connectome. We find that the distinction of the investigated local cortical models is faithfully possible based on purely structural connectomic data with an accuracy of more than 90%, and that this distinction is stable against substantial errors in the connectome measurement. Furthermore, mapping a fraction of only 10% of the local connectome is sufficient for connectome-based model distinction under realistic experimental constraints. Together, these results show for a concrete local circuit example that connectomic data allow model selection in the cerebral cortex and define the experimental strategy for obtaining such connectomic data.
21. Schubert F, Gros C. Local Homeostatic Regulation of the Spectral Radius of Echo-State Networks. Front Comput Neurosci 2021; 15:587721. [PMID: 33732127] [PMCID: PMC7958921] [DOI: 10.3389/fncom.2021.587721]
Abstract
Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius toward the desired value. For both mechanisms the spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols. Moreover, we evaluated the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is, however, negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control did, however, not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biologically plausible alternative to conventional methods using set-point homeostatic feedback controls of neural firing.
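For context, the "manual adjustment" that classical reservoir computing requires, and which the proposed local mechanisms aim to make unnecessary, is a global rescaling of the recurrent weight matrix to a target spectral radius. A standard sketch of that conventional step (our illustration, not the paper's flow- or variance-control rule):

```python
import numpy as np

def scale_to_spectral_radius(W: np.ndarray, rho_target: float) -> np.ndarray:
    """Globally rescale a recurrent weight matrix so that its spectral
    radius (largest absolute eigenvalue) equals rho_target. Since scaling W
    scales all eigenvalues by the same factor, the result is exact."""
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (rho_target / rho)

# Typical echo-state initialisation: random reservoir, then rescale.
rng = np.random.default_rng(0)
W = rng.standard_normal((200, 200)) / np.sqrt(200)
W_scaled = scale_to_spectral_radius(W, rho_target=0.9)
```

This step needs the full eigenvalue spectrum of W, which is exactly the kind of global information a biological network cannot access; the abstract's point is that local homeostatic scaling rules such as flow control can drive the spectral radius toward the same set point implicitly.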
Affiliation(s)
- Fabian Schubert
- Institute for Theoretical Physics, Goethe University Frankfurt am Main, Frankfurt am Main, Germany
22. Perez-Catalan NA, Doe CQ, Ackerman SD. The role of astrocyte-mediated plasticity in neural circuit development and function. Neural Dev 2021; 16:1. [PMID: 33413602] [PMCID: PMC7789420] [DOI: 10.1186/s13064-020-00151-9]
Abstract
Neuronal networks are capable of undergoing rapid structural and functional changes called plasticity, which are essential for shaping circuit function during nervous system development. These changes range from short-term modifications on the order of milliseconds, to long-term rearrangement of neural architecture that could last for the lifetime of the organism. Neural plasticity is most prominent during development, yet also plays a critical role during memory formation, behavior, and disease. Therefore, it is essential to define and characterize the mechanisms underlying the onset, duration, and form of plasticity. Astrocytes, the most numerous glial cell type in the human nervous system, are integral elements of synapses and are components of a glial network that can coordinate neural activity at a circuit-wide level. Moreover, their arrival to the CNS during late embryogenesis correlates to the onset of sensory-evoked activity, making them an interesting target for circuit plasticity studies. Technological advancements in the last decade have uncovered astrocytes as prominent regulators of circuit assembly and function. Here, we provide a brief historical perspective on our understanding of astrocytes in the nervous system, and review the latest advances on the role of astroglia in regulating circuit plasticity and function during nervous system development and homeostasis.
Affiliation(s)
- Nelson A Perez-Catalan
- Institute of Neuroscience, Howard Hughes Medical Institute, University of Oregon, Eugene, OR, USA
- Kennedy Center, Department of Pediatrics, The University of Chicago, Chicago, IL, USA
- Chris Q Doe
- Institute of Neuroscience, Howard Hughes Medical Institute, University of Oregon, Eugene, OR, USA
- Sarah D Ackerman
- Institute of Neuroscience, Howard Hughes Medical Institute, University of Oregon, Eugene, OR, USA.
23. Motanis H, Buonomano D. Decreased reproducibility and abnormal experience-dependent plasticity of network dynamics in Fragile X circuits. Sci Rep 2020; 10:14535. [PMID: 32884028] [PMCID: PMC7471942] [DOI: 10.1038/s41598-020-71333-y]
Abstract
Fragile X syndrome is a neurodevelopmental disorder associated with a broad range of neural phenotypes. Interpreting these findings has proven challenging because some phenotypes may reflect compensatory mechanisms or normal forms of plasticity differentially engaged by experiential differences. To help minimize compensatory and experiential influences, we used an ex vivo approach to study network dynamics and plasticity of cortical microcircuits. In Fmr1-/y circuits, the spatiotemporal structure of Up-states was less reproducible, suggesting alterations in the plasticity mechanisms governing network activity. Chronic optical stimulation revealed normal homeostatic plasticity of Up-states; however, Fmr1-/y circuits exhibited abnormal experience-dependent plasticity, as they did not adapt to chronically presented temporal patterns in an interval-specific manner. These results suggest that while homeostatic plasticity is normal, Fmr1-/y circuits exhibit deficits in the ability to orchestrate multiple forms of synaptic plasticity and to adapt to sensory patterns in an experience-dependent manner, which is likely to contribute to learning deficits.
Affiliation(s)
- Helen Motanis
- Departments of Neurobiology and Psychology, and Integrative Center for Learning and Memory, University of California, 630 Charles E Young Dr S, Center for Health Sciences Building, Los Angeles, CA, 90095, USA
- Dean Buonomano
- Departments of Neurobiology and Psychology, and Integrative Center for Learning and Memory, University of California, 630 Charles E Young Dr S, Center for Health Sciences Building, Los Angeles, CA, 90095, USA.
24. Hey, look over there: Distraction effects on rapid sequence recall. PLoS One 2020; 15:e0223743. [PMID: 32275703] [PMCID: PMC7147745] [DOI: 10.1371/journal.pone.0223743]
Abstract
In the course of everyday life, the brain must store and recall a huge variety of representations of stimuli that are presented in an ordered or sequential way. The processes by which the ordering of these various things is stored and recalled are moderately well understood. We use here a computational model of a cortex-like recurrent neural network adapted by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. Then, we examine the influence of different types of distractors on the network dynamics during recall of the encoded sequence. We broadly identify two distinct categories of distractor effects, arrive at a basic understanding of why this is so, and predict which distractors will fall into each category.
25. Krüppel S, Tetzlaff C. The self-organized learning of noisy environmental stimuli requires distinct phases of plasticity. Netw Neurosci 2020; 4:174-199. [PMID: 32166207] [PMCID: PMC7055647] [DOI: 10.1162/netn_a_00118]
Abstract
Along sensory pathways, representations of environmental stimuli become increasingly sparse and expanded. If additionally the feed-forward synaptic weights are structured according to the inherent organization of stimuli, the increase in sparseness and expansion leads to a reduction of sensory noise. However, it is unknown how the synapses in the brain form the required structure, especially given the omnipresent noise of environmental stimuli. Here, we employ a combination of synaptic plasticity and intrinsic plasticity (adapting the excitability of each neuron individually) and present stimuli with an inherent organization to a feed-forward network. We observe that intrinsic plasticity maintains the sparseness of the neural code and thereby allows synaptic plasticity to learn the organization of stimuli in low-noise environments. Nevertheless, even high levels of noise can be handled after a subsequent phase of readaptation of the neuronal excitabilities by intrinsic plasticity. Interestingly, during this phase the synaptic structure has to be maintained. These results demonstrate that learning and recalling in the presence of noise requires the coordinated interplay between plasticity mechanisms adapting different properties of the neuronal circuit.

Everyday life requires living beings to continuously recognize and categorize perceived stimuli from the environment. To master this task, the representations of these stimuli become increasingly sparse and expanded along the sensory pathways of the brain. In addition, the underlying neuronal network has to be structured according to the inherent organization of the environmental stimuli. However, how the neuronal network learns the required structure even in the presence of noise remains unknown. In this theoretical study, we show that the interplay between synaptic plasticity (controlling the synaptic efficacies) and intrinsic plasticity (adapting the neuronal excitabilities) enables the network to encode the organization of environmental stimuli. It thereby structures the network to correctly categorize stimuli even in the presence of noise. After having encoded the stimuli's organization, consolidating the synaptic structure while keeping the neuronal excitabilities dynamic enables the neuronal system to readapt to arbitrary levels of noise, resulting in a near-optimal classification performance for all noise levels. These results provide new insights into the interplay between different plasticity mechanisms and how this interplay enables sensory systems to reliably learn and categorize stimuli from the surrounding environment.
Affiliation(s)
- Steffen Krüppel
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
26. Herpich J, Tetzlaff C. Principles underlying the input-dependent formation and organization of memories. Netw Neurosci 2019; 3:606-634. [PMID: 31157312] [PMCID: PMC6542621] [DOI: 10.1162/netn_a_00086]
Abstract
The neuronal system exhibits the remarkable ability to dynamically store and organize incoming information into a web of memory representations (items), which is essential for the generation of complex behaviors. Central to memory function is that such memory items must be (1) discriminated from each other, (2) associated with each other, or (3) brought into a sequential order. However, how these three basic mechanisms are robustly implemented in an input-dependent manner by the underlying complex neuronal and synaptic dynamics is still unknown. Here, we develop a mathematical framework that provides a direct link between the different synaptic mechanisms determining the neuronal and synaptic dynamics of the network, and use it to create a network that emulates the above mechanisms. Combining correlation-based synaptic plasticity and homeostatic synaptic scaling, we demonstrate that these mechanisms enable the reliable formation of sequences and associations between two memory items, but still lack the capability for discrimination. We show that this shortcoming can be removed by additionally considering inhibitory synaptic plasticity. Thus, the framework presented here provides a new, functionally motivated link between different known synaptic mechanisms, leading to the self-organization of fundamental memory mechanisms.
Collapse
Affiliation(s)
- Juliane Herpich
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
| | - Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics - Biophysics, Georg-August-University, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Georg-August-University, Göttingen, Germany
| |
Collapse
|
27
|
Demin V, Nekhaev D. Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity. Front Neuroinform 2018; 12:79. [PMID: 30498439 PMCID: PMC6250118 DOI: 10.3389/fninf.2018.00079] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Accepted: 10/18/2018] [Indexed: 12/21/2022] Open
Abstract
Spiking neural networks (SNNs) are believed to be highly computationally and energy efficient for specific neurochip hardware real-time solutions. However, there is a lack of learning algorithms for complex SNNs with recurrent connections, comparable in efficiency with back-propagation techniques and capable of unsupervised training. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and we make this principle the basis of a new SNN learning algorithm. In such a way, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. It has been demonstrated that this SNN can be trained without a teacher, after a short supervised initialization of weights by the same algorithm. Also, it has been shown that neurons are grouped into families of hierarchical structures, corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. Comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to the construction of much more complicated and diverse task-solving SNNs. We refer to this new approach as "Family-Engaged Execution and Learning of Induced Neuron Groups," or FEELING.
Collapse
Affiliation(s)
- Vyacheslav Demin
- National Research Center "Kurchatov Institute", Moscow, Russia.,Moscow Institute of Physics and Technology, Dolgoprudny, Russia
| | - Dmitry Nekhaev
- National Research Center "Kurchatov Institute", Moscow, Russia
| |
Collapse
|
28
|
Wilting J, Dehning J, Pinheiro Neto J, Rudelt L, Wibral M, Zierenberg J, Priesemann V. Operating in a Reverberating Regime Enables Rapid Tuning of Network States to Task Requirements. Front Syst Neurosci 2018; 12:55. [PMID: 30459567 PMCID: PMC6232511 DOI: 10.3389/fnsys.2018.00055] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Accepted: 10/09/2018] [Indexed: 01/02/2023] Open
Abstract
Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch, or even combine the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a “reverberating regime” because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input depending on task. We argue that this task-dependent tuning, which we call “dynamic adaptive computation,” presents a central organization principle of cortical networks and discuss first experimental evidence.
Collapse
Affiliation(s)
- Jens Wilting
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Jonas Dehning
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Joao Pinheiro Neto
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Lucas Rudelt
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany
| | - Michael Wibral
- Magnetoencephalography Unit, Brain Imaging Center, Johann-Wolfgang-Goethe University, Frankfurt, Germany
| | - Johannes Zierenberg
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.,Bernstein-Center for Computational Neuroscience, Göttingen, Germany
| | - Viola Priesemann
- Max-Planck-Institute for Dynamics and Self-Organization, Göttingen, Germany.,Bernstein-Center for Computational Neuroscience, Göttingen, Germany
| |
Collapse
|
29
|
Goodhill GJ. Theoretical Models of Neural Development. iScience 2018; 8:183-199. [PMID: 30321813 PMCID: PMC6197653 DOI: 10.1016/j.isci.2018.09.017] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Revised: 08/06/2018] [Accepted: 09/19/2018] [Indexed: 12/22/2022] Open
Abstract
Constructing a functioning nervous system requires the precise orchestration of a vast array of mechanical, molecular, and neural-activity-dependent cues. Theoretical models can play a vital role in helping to frame quantitative issues, reveal mathematical commonalities between apparently diverse systems, identify what is and what is not possible in principle, and test the abilities of specific mechanisms to explain the data. This review focuses on the progress that has been made over the last decade in our theoretical understanding of neural development.
Collapse
Affiliation(s)
- Geoffrey J Goodhill
- Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4072, Australia.
| |
Collapse
|
30
|
Bayati M, Neher T, Melchior J, Diba K, Wiskott L, Cheng S. Storage fidelity for sequence memory in the hippocampal circuit. PLoS One 2018; 13:e0204685. [PMID: 30286147 PMCID: PMC6171846 DOI: 10.1371/journal.pone.0204685] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Accepted: 09/11/2018] [Indexed: 12/24/2022] Open
Abstract
Episodic memories have been suggested to be represented by neuronal sequences, which are stored and retrieved from the hippocampal circuit. A special difficulty is that realistic neuronal sequences are strongly correlated with each other since computational memory models generally perform poorly when correlated patterns are stored. Here, we study in a computational model under which conditions the hippocampal circuit can perform this function robustly. During memory encoding, CA3 sequences in our model are driven by intrinsic dynamics, entorhinal inputs, or a combination of both. These CA3 sequences are hetero-associated with the input sequences, so that the network can retrieve entire sequences based on a single cue pattern. We find that overall memory performance depends on two factors: the robustness of sequence retrieval from CA3 and the circuit's ability to perform pattern completion through the feedforward connectivity, including CA3, CA1 and EC. The two factors, in turn, depend on the relative contribution of the external inputs and recurrent drive on CA3 activity. In conclusion, memory performance in our network model critically depends on the network architecture and dynamics in CA3.
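The hetero-association of sequence patterns described above can be illustrated with a toy one-shot associative memory, in which each stored pattern is wired to its successor so that a single cue replays the whole sequence. This is a generic sketch under assumed sizes and coding levels, not the paper's CA3/CA1/EC model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, k = 200, 6, 20               # neurons, sequence length, active units per pattern

# Sparse binary patterns forming one sequence x_0 ... x_{T-1}
X = np.zeros((T, N))
for t in range(T):
    X[t, rng.choice(N, k, replace=False)] = 1.0

# Hetero-associative weights: each pattern is wired to its successor
W = np.zeros((N, N))
for t in range(T - 1):
    W += np.outer(X[t + 1], X[t])

def step(x):
    """One retrieval step: drive through W, keep the k most active units."""
    h = W @ x
    out = np.zeros_like(x)
    out[np.argsort(h)[-k:]] = 1.0
    return out

# Cue with the first pattern and replay the whole sequence
x = X[0].copy()
recalled = [x]
for _ in range(T - 1):
    x = step(x)
    recalled.append(x)

# Fraction of each true pattern recovered at its step in the replay
overlaps = [float(r @ x_true) / k for r, x_true in zip(recalled, X)]
```

With these sizes the crosstalk between random sparse patterns is far below the signal, so the replayed overlaps stay near one; correlated patterns, the hard case studied in the paper, would degrade this.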
Collapse
Affiliation(s)
- Mehdi Bayati
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
| | - Torsten Neher
- Mental Health Research and Treatment Center, Department of Clinical Child and Adolescent Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| | - Jan Melchior
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
| | - Kamran Diba
- Department of Anesthesiology, University of Michigan, Ann Arbor, United States of America
| | - Laurenz Wiskott
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
| | - Sen Cheng
- Institut für Neuroinformatik, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
31
|
Triesch J, Vo AD, Hafner AS. Competition for synaptic building blocks shapes synaptic plasticity. eLife 2018; 7:37836. [PMID: 30222108 PMCID: PMC6181566 DOI: 10.7554/elife.37836] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2018] [Accepted: 09/14/2018] [Indexed: 12/31/2022] Open
Abstract
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
Collapse
Affiliation(s)
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.,Goethe University, Frankfurt am Main, Germany
| | - Anh Duong Vo
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany.,Goethe University, Frankfurt am Main, Germany
| | | |
Collapse
|
32
|
Grossberger L, Battaglia FP, Vinck M. Unsupervised clustering of temporal patterns in high-dimensional neuronal ensembles using a novel dissimilarity measure. PLoS Comput Biol 2018; 14:e1006283. [PMID: 29979681 PMCID: PMC6051652 DOI: 10.1371/journal.pcbi.1006283] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2018] [Revised: 07/18/2018] [Accepted: 06/08/2018] [Indexed: 11/18/2022] Open
Abstract
Temporally ordered multi-neuron patterns likely encode information in the brain. We introduce an unsupervised method, SPOTDisClust (Spike Pattern Optimal Transport Dissimilarity Clustering), for their detection from high-dimensional neural ensembles. SPOTDisClust measures similarity between two ensemble spike patterns by determining the minimum transport cost of transforming their corresponding normalized cross-correlation matrices into each other (SPOTDis). Then, it performs density-based clustering based on the resulting inter-pattern dissimilarity matrix. SPOTDisClust does not require binning and can detect complex patterns (beyond sequential activation) even when high levels of out-of-pattern "noise" spiking are present. Our method efficiently handles the additional information from increasingly large neuronal ensembles and can detect a number of patterns that far exceeds the number of recorded neurons. In an application to neural ensemble data from macaque monkey V1 cortex, SPOTDisClust can identify different moving stimulus directions on the sole basis of temporal spiking patterns.
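The core of the SPOTDis idea, an optimal-transport cost between normalized cross-correlation histograms, reduces in one dimension to the area between the two cumulative distributions. The following is a minimal sketch of that 1-D core only (the full method pools this cost over all neuron pairs and then clusters; the example histograms are illustrative):

```python
import numpy as np

def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms on the same bins.

    In one dimension, the optimal transport cost reduces to the area
    between the two cumulative distribution functions.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()                      # normalize to unit mass
    q = q / q.sum()
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

# Cross-correlation histograms of one neuron pair in two hypothetical
# patterns: the same bump in place vs. shifted by two bins
h = np.zeros(11)
h[3:6] = [1.0, 2.0, 1.0]
h_shifted = np.roll(h, 2)

d_same = emd_1d(h, h)           # identical histograms: zero cost
d_shift = emd_1d(h, h_shifted)  # cost grows linearly with the shift
```

Unlike a bin-wise distance, this cost keeps growing with the temporal offset between otherwise identical histograms, which is what makes it sensitive to pattern timing.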
Collapse
Affiliation(s)
- Lukas Grossberger
- Donders Institute for Brain, Cognition and Behaviour, Radboud Universiteit, Nijmegen, the Netherlands
- Ernst Strüngmann Institute for Neuroscience in cooperation with Max Planck Society, Frankfurt am Main, Germany
| | - Francesco P. Battaglia
- Donders Institute for Brain, Cognition and Behaviour, Radboud Universiteit, Nijmegen, the Netherlands
| | - Martin Vinck
- Ernst Strüngmann Institute for Neuroscience in cooperation with Max Planck Society, Frankfurt am Main, Germany
| |
Collapse
|
33
|
Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLoS Comput Biol 2018; 14:e1006187. [PMID: 29870532 PMCID: PMC6003695 DOI: 10.1371/journal.pcbi.1006187] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2017] [Revised: 06/15/2018] [Accepted: 05/09/2018] [Indexed: 11/29/2022] Open
Abstract
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction. A central goal of Neuroscience is to understand the relationship between the structure and function of brain networks. Of particular interest are the circuits of the neocortex, the seat of our highest cognitive abilities. Here we provide a new link between the structure and function of neocortical circuits in the context of sequence learning. We study a spiking neural network model that self-organizes its connectivity and activity via a combination of different plasticity mechanisms known to operate in cortical circuits. We use this model to explain various findings from a recent experimental study on sequence learning and prediction in rat visual cortex. Our model reproduces the changes in activity patterns as the animal learns the sequential pattern of visual stimulation. In addition, the model predicts what stimulation-induced structural changes underlie this sequence learning ability. 
Finally, the model also predicts how the adapted network structure influences spontaneous network activity when there is no visual stimulation. Hence, our model provides new insights about the relationship between structure and function of cortical circuits.
Collapse
|
34
|
Rost T, Deger M, Nawrot MP. Winnerless competition in clustered balanced networks: inhibitory assemblies do the trick. Biol Cybern 2018; 112:81-98. [PMID: 29075845 PMCID: PMC5908874 DOI: 10.1007/s00422-017-0737-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2017] [Accepted: 10/11/2017] [Indexed: 06/07/2023]
Abstract
Balanced networks are a frequently employed basic model for neuronal networks in the mammalian neocortex. Large numbers of excitatory and inhibitory neurons are recurrently connected so that the numerous positive and negative inputs that each neuron receives cancel out on average. Neuronal firing is therefore driven by fluctuations in the input and resembles the irregular and asynchronous activity observed in cortical in vivo data. Recently, the balanced network model has been extended to accommodate clusters of strongly interconnected excitatory neurons in order to explain persistent activity in working memory-related tasks. This clustered topology introduces multistability and winnerless competition between attractors and can capture the high trial-to-trial variability and its reduction during stimulation that has been found experimentally. In this prospect article, we review the mean field description of balanced networks of binary neurons and apply the theory to clustered networks. We show that the stable fixed points of networks with clustered excitatory connectivity tend quickly towards firing rate saturation, which is generally inconsistent with experimental data. To remedy this shortcoming, we then present a novel perspective on networks with locally balanced clusters of both excitatory and inhibitory neuron populations. This approach allows for true multistability and moderate firing rates in activated clusters over a wide range of parameters. Our findings are supported by mean field theory and numerical network simulations. Finally, we discuss possible applications of the concept of joint excitatory and inhibitory clustering in future cortical network modelling studies.
Collapse
Affiliation(s)
- Thomas Rost
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Moritz Deger
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany
| | - Martin P Nawrot
- Computational Systems Neuroscience, Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, Cologne, Germany.
| |
Collapse
|
35
|
Panda P, Roy K. Learning to Generate Sequences with Combination of Hebbian and Non-hebbian Plasticity in Recurrent Spiking Neural Networks. Front Neurosci 2017; 11:693. [PMID: 29311774 PMCID: PMC5733011 DOI: 10.3389/fnins.2017.00693] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2017] [Accepted: 11/23/2017] [Indexed: 11/13/2022] Open
Abstract
Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike timing correlation based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
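The combination described, pair-based Hebbian STDP plus an activity-independent weight decay, can be caricatured for a single synapse as follows. This is a minimal illustration with assumed parameter values and a trace-based STDP formulation, not the paper's implementation:

```python
import numpy as np

def stdp_with_decay(w, pre_trace, post_trace, pre_spike, post_spike,
                    a_plus=0.01, a_minus=0.012, decay=1e-4, w_max=1.0):
    """One update of a synapse combining pair-based STDP with a
    non-Hebbian exponential weight decay (illustrative parameters)."""
    # Hebbian part: potentiate on a post spike (pre-before-post),
    # depress on a pre spike (post-before-pre)
    dw = a_plus * pre_trace * post_spike - a_minus * post_trace * pre_spike
    # Non-Hebbian part: slow, activity-independent decay toward zero
    dw -= decay * w
    return float(np.clip(w + dw, 0.0, w_max))

# A pre spike shortly before a post spike potentiates the synapse;
# with no spikes at all, the weight only decays
w = stdp_with_decay(0.5, pre_trace=1.0, post_trace=0.0,
                    pre_spike=0, post_spike=1)
w_idle = stdp_with_decay(0.5, pre_trace=0.0, post_trace=0.0,
                         pre_spike=0, post_spike=0)
```

The decay term is what the abstract credits with eroding unused strong attractors: weights that are not regularly reinforced by correlated activity drift back toward zero.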
Collapse
Affiliation(s)
- Priyadarshini Panda
- Nanoelectronics Research Laboratory, Purdue University, School of Electrical and Computer Engineering, West Lafayette, IN, United States
| | - Kaushik Roy
- Nanoelectronics Research Laboratory, Purdue University, School of Electrical and Computer Engineering, West Lafayette, IN, United States
| |
Collapse
|
36
|
Abstract
Implicit expectations induced by predictable stimulus sequences affect neuronal responses to upcoming stimuli at both the single-cell and neural population levels. Temporally regular sensory streams also phase-entrain ongoing low-frequency brain oscillations, but how and why this happens is unknown. Here we investigate how random recurrent neural networks without plasticity respond to stimulus streams containing oddballs. We found that the neuronal correlates of sensory stream adaptation emerge if networks generate chaotic oscillations which can be phase-entrained by stimulus streams. The resultant activity patterns are close to critical and support history-dependent responses on long timescales. Because critical network entrainment is a slow process, stimulus responses adapt gradually over multiple repetitions. Repeated stimuli generate suppressed responses, but oddball responses are large and distinct. Oscillatory mismatch responses persist in population activity for long periods after stimulus offset, while individual cell mismatch responses are strongly phasic. These effects are weakened in temporally irregular sensory streams. Thus we show that network phase entrainment provides a biologically plausible mechanism for neural oddball detection. Our results do not depend on specific network characteristics, are consistent with experimental studies, and may be relevant for multiple pathologies demonstrating altered mismatch processing, such as schizophrenia and depression.
Collapse
Affiliation(s)
- Adam Ponzi
- IBM T.J. Watson Research Center, Yorktown Heights, NY, USA.
- Okinawa Institute of Science and Technology Graduate University (OIST), Okinawa, Japan.
| |
Collapse
|
37
|
Zenke F, Gerstner W. Hebbian plasticity requires compensatory processes on multiple timescales. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0259. [PMID: 28093557 PMCID: PMC5247595 DOI: 10.1098/rstb.2016.0259] [Citation(s) in RCA: 94] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2016] [Indexed: 01/19/2023] Open
Abstract
We review a body of theoretical and experimental research on Hebbian and homeostatic plasticity, starting from a puzzling observation: while homeostasis of synapses found in experiments is a slow compensatory process, most mathematical models of synaptic plasticity use rapid compensatory processes (RCPs). Even worse, with the slow homeostatic plasticity reported in experiments, simulations of existing plasticity models cannot maintain network stability unless further control mechanisms are implemented. To solve this paradox, we suggest that in addition to slow forms of homeostatic plasticity there are RCPs which stabilize synaptic plasticity on short timescales. These rapid processes may include heterosynaptic depression triggered by episodes of high postsynaptic firing rate. While slower forms of homeostatic plasticity are not sufficient to stabilize Hebbian plasticity, they are important for fine-tuning neural circuits. Taken together we suggest that learning and memory rely on an intricate interplay of diverse plasticity mechanisms on different timescales which jointly ensure stability and plasticity of neural circuits. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'.
Collapse
Affiliation(s)
- Friedemann Zenke
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
| | - Wulfram Gerstner
- Brain Mind Institute, School of Life Sciences and School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
| |
Collapse
|
38
|
Richter LMA, Gjorgjieva J. Understanding neural circuit development through theory and models. Curr Opin Neurobiol 2017; 46:39-47. [DOI: 10.1016/j.conb.2017.07.004] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2017] [Revised: 07/07/2017] [Accepted: 07/10/2017] [Indexed: 11/25/2022]
|
39
|
Wang Q, Rothkopf CA, Triesch J. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity. PLoS Comput Biol 2017; 13:e1005632. [PMID: 28767646 PMCID: PMC5555713 DOI: 10.1371/journal.pcbi.1005632] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2016] [Revised: 08/14/2017] [Accepted: 06/16/2017] [Indexed: 01/01/2023] Open
Abstract
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies including recent experiments investigating motor sequence learning in adult human subjects has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network’s changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network’s sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects.
This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences. From dialing a phone number to driving home after work, much of human behavior is inherently sequential. But how do we learn such sequential behaviors and what neural plasticity mechanisms support this learning? Recent experiments on sequence learning in human adults have produced a range of confusing findings, especially when subjects have to learn multiple sequences at the same time. For example, the success of training can strongly depend on subjects’ training schedules, i.e., whether they practice one task until they are proficient before switching to the next or whether they interleave training of the different tasks. Here we show that a self-organizing neural network model readily explains many findings on human sequence learning. The model is formulated as a recurrent network of simplified spiking neurons and incorporates multiple biologically plausible plasticity mechanisms of neurons and synapses. Therefore, it offers a theoretical bridge between basic mechanisms of synaptic and neuronal plasticity and the behavior of human subjects in sequence learning tasks.
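The three interacting SORN mechanisms named in the abstract can be caricatured in a few lines. The following is a toy sketch, not the published model: binary threshold units with a simplified STDP rule, synaptic normalization (SN), and intrinsic plasticity (IP), with all parameter values chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
# Sparse random excitatory recurrent weights, no self-connections
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
np.fill_diagonal(W, 0.0)
theta = np.full(N, 0.5)                     # per-unit firing thresholds
x = (rng.random(N) < 0.1).astype(float)     # binary activity vector

eta_stdp, eta_ip, h_ip = 0.004, 0.01, 0.1   # illustrative rates and target rate

for _ in range(100):
    x_new = (W @ x - theta > 0.0).astype(float)
    # Simplified binary-unit STDP: strengthen pre(t) -> post(t+1),
    # weaken the reverse temporal order
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    np.fill_diagonal(W, 0.0)
    # Synaptic normalization (SN): incoming weights of each unit sum to one
    s = W.sum(axis=1, keepdims=True)
    W = W / np.where(s > 0.0, s, 1.0)
    # Intrinsic plasticity (IP): move thresholds toward the target rate h_ip
    theta += eta_ip * (x_new - h_ip)
    x = x_new
```

The division of labor is the point: STDP builds sequence structure into W, while SN and IP act homeostatically so that runaway potentiation and silent or saturated units are both avoided.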
Collapse
Affiliation(s)
- Quan Wang
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
| | - Constantin A. Rothkopf
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
- Centre for Cognitive Science & Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
| | - Jochen Triesch
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
| |
Collapse
|
40
|
Born J, Galeazzi JM, Stringer SM. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system. PLoS One 2017; 12:e0178304. [PMID: 28562618 PMCID: PMC5451055 DOI: 10.1371/journal.pone.0178304] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 05/10/2017] [Indexed: 12/05/2022] Open
Abstract
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
Collapse
Affiliation(s)
- Jannis Born
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Juan M. Galeazzi
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
| | - Simon M. Stringer
- Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxfordshire, United Kingdom
| |
Collapse
|
41
|
Del Papa B, Priesemann V, Triesch J. Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network. PLoS One 2017; 12:e0178683. [PMID: 28552964 PMCID: PMC5446191 DOI: 10.1371/journal.pone.0178683] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Accepted: 05/17/2017] [Indexed: 11/23/2022] Open
Abstract
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamic range, and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because the model was not designed to exhibit criticality. Instead, the SORN has been shown to learn spatio-temporal patterns through a combination of neural plasticity mechanisms, and it reproduces a number of biological findings on neural variability and on the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, the onset of external input transiently changes the slope of the avalanche distributions, matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model's performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN's spatio-temporal learning abilities can give rise to criticality signatures in its activity when the network is driven by random input, but these signatures break down under the structured input of short repeating sequences.
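The avalanche analysis behind such criticality signatures can be sketched generically (this is a standard recipe, not the paper's exact pipeline; the silence-separated avalanche definition, the continuous-approximation estimator, and the synthetic sampler are assumptions for the demo): slice a summed-activity trace into avalanches separated by silent bins, then estimate the power-law exponent of the size distribution by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def avalanche_sizes(activity):
    """Split a summed-activity trace into avalanches: contiguous runs of
    nonzero bins separated by silent bins; return each run's total activity."""
    sizes, current = [], 0.0
    for v in activity:
        if v > 0:
            current += v
        elif current:
            sizes.append(current)
            current = 0.0
    if current:
        sizes.append(current)
    return np.asarray(sizes)

def powerlaw_mle(sizes, s_min=1.0):
    """Continuous maximum-likelihood estimate (Clauset-style) of the
    exponent alpha in p(s) ~ s**(-alpha) for s >= s_min."""
    s = sizes[sizes >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# Sanity check on synthetic data: draw sizes from p(s) ~ s**(-1.5), the
# exponent of a critical branching process, via inverse-CDF sampling,
# then recover the exponent.
samples = (1.0 - rng.random(20000)) ** (-1.0 / 0.5)
alpha_hat = powerlaw_mle(samples)
```

On real simulation data one would additionally compare the fitted power law against alternative distributions before claiming a criticality signature.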
Collapse
Affiliation(s)
- Bruno Del Papa
- Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany
- International Max Planck Research School for Neural Circuits, Max Planck Institute for Brain Research, Frankfurt am Main, Germany
| | - Viola Priesemann
- Department of Non-linear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
| | - Jochen Triesch
- Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe University, Frankfurt am Main, Germany
| |
Collapse
|
42
|
Duarte R, Seeholzer A, Zilles K, Morrison A. Synaptic patterning and the timescales of cortical dynamics. Curr Opin Neurobiol 2017; 43:156-165. [PMID: 28407562 DOI: 10.1016/j.conb.2017.02.007] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 11/22/2016] [Accepted: 02/08/2017] [Indexed: 11/19/2022]
Abstract
Neocortical circuits, as large heterogeneous recurrent networks, can potentially operate and process signals at multiple timescales, but appear to be differentially tuned to operate within certain temporal receptive windows. The modular and hierarchical organization of this selectivity mirrors anatomical and physiological relations throughout the cortex and is likely determined by the regional electrochemical composition. Being consistently patterned and actively regulated, the expression of molecules involved in synaptic transmission constitutes the most significant source of laminar and regional variability. Due to their complex kinetics and adaptability, synapses form a natural primary candidate underlying this regional temporal selectivity. The ability of cortical networks to reflect the temporal structure of the sensory environment can thus be regulated by evolutionary and experience-dependent processes.
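The review's central point, that synaptic kinetics set a circuit's temporal integration window, can be illustrated with two exponentially filtered synapses driven by the same spike. The time constants below are typical textbook values (fast AMPA-like vs. slow NMDA-like kinetics), not figures taken from the paper.

```python
import numpy as np

dt = 1.0                        # time step in ms
T = 500
spikes = np.zeros(T)
spikes[50] = 1.0                # a single presynaptic spike at t = 50 ms

def filter_spikes(spikes, tau):
    """Exponentially filtered synaptic conductance with time constant tau (ms):
    each spike adds a unit increment that decays as exp(-t/tau)."""
    g = np.zeros_like(spikes)
    for t in range(1, len(spikes)):
        g[t] = g[t - 1] * np.exp(-dt / tau) + spikes[t]
    return g

g_fast = filter_spikes(spikes, tau=5.0)    # AMPA-like: narrow integration window
g_slow = filter_spikes(spikes, tau=100.0)  # NMDA-like: broad integration window
```

One hundred milliseconds after the spike, the fast conductance has vanished while the slow one still retains over a third of its peak, so identical circuits differing only in their synaptic composition respond to very different temporal receptive windows.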
Collapse
Affiliation(s)
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Germany; Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany; Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, UK.
| | - Alexander Seeholzer
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Karl Zilles
- Institute of Neuroscience and Medicine (INM-1), Jülich Research Centre, Jülich, Germany; JARA-BRAIN, Aachen, Germany
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany; Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Germany; Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
43
|
Zenke F, Gerstner W, Ganguli S. The temporal paradox of Hebbian learning and homeostatic plasticity. Curr Opin Neurobiol 2017; 43:166-176. [DOI: 10.1016/j.conb.2017.03.015] [Citation(s) in RCA: 104] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 03/07/2017] [Accepted: 03/22/2017] [Indexed: 11/16/2022]
|
44
|
Memory replay in balanced recurrent networks. PLoS Comput Biol 2017; 13:e1005359. [PMID: 28135266 PMCID: PMC5305273 DOI: 10.1371/journal.pcbi.1005359] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2016] [Revised: 02/13/2017] [Accepted: 01/09/2017] [Indexed: 11/19/2022] Open
Abstract
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that the sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global, potentially neuromodulatory, alterations of neuronal excitability can switch between network states that favor retrieval and consolidation.
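The core idea, weak feedforward links between assemblies amplified by strong recurrence within them, can be caricatured in a rate-based model (this is emphatically not the paper's balanced spiking network; the threshold-linear dynamics, global inhibition, and every parameter below are invented for illustration). Cueing the first assembly ignites the rest in sequence even though the inter-assembly weights alone are too weak to drive any assembly above threshold.

```python
import numpy as np

n_assemblies, size = 4, 10
N = n_assemblies * size
W = np.zeros((N, N))

# Strong recurrent connectivity within each Hebbian assembly...
for a in range(n_assemblies):
    s = slice(a * size, (a + 1) * size)
    W[s, s] = 0.15
# ...and weak feedforward links to the next assembly in the sequence.
for a in range(n_assemblies - 1):
    src = slice(a * size, (a + 1) * size)
    dst = slice((a + 1) * size, (a + 2) * size)
    W[dst, src] = 0.06
np.fill_diagonal(W, 0.0)

def run(cue, steps=40):
    """Threshold-linear rate dynamics with global inhibition; recurrence
    inside each assembly amplifies the weak inter-assembly input."""
    r = cue.astype(float)
    history = [r.copy()]
    for _ in range(steps):
        r = np.clip(W @ r - 0.015 * r.sum(), 0.0, 1.0)
        history.append(r.copy())
    return np.array(history)

cue = np.zeros(N)
cue[:size] = 1.0                  # cue only the first assembly
H = run(cue)

# Ignition time: first step at which an assembly's mean rate exceeds 0.25.
ignition = [int(np.argmax(H[:, a * size:(a + 1) * size].mean(axis=1) > 0.25))
            for a in range(n_assemblies)]
```

The assemblies ignite in their wired order, which is the rate-level analogue of cue-evoked sequence replay in the abstract.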
Collapse
|
45
|
Computational analysis of memory capacity in echo state networks. Neural Netw 2016; 83:109-120. [PMID: 27599031 DOI: 10.1016/j.neunet.2016.07.012] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2015] [Revised: 07/25/2016] [Accepted: 07/28/2016] [Indexed: 11/23/2022]
|
46
|
Kovac AD, Koall M, Pipa G, Toutounji H. Persistent Memory in Single Node Delay-Coupled Reservoir Computing. PLoS One 2016; 11:e0165170. [PMID: 27783690 PMCID: PMC5081200 DOI: 10.1371/journal.pone.0165170] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Accepted: 10/07/2016] [Indexed: 11/19/2022] Open
Abstract
Delays are ubiquitous in biological systems, ranging from genetic regulatory networks and synaptic conductances to predator/prey population interactions. Evidence is mounting not only for the presence of delays as physical constraints on signal propagation speed, but also for their functional role in providing dynamical diversity to the systems that contain them. The latter observation in biological systems inspired the recent development of a computational architecture that harnesses this dynamical diversity by delay-coupling a single nonlinear element to itself. This architecture is a particular realization of Reservoir Computing, where stimuli are injected into the system in time rather than in space, as is the case with classical recurrent neural network realizations. The architecture also exhibits an internal memory which fades in time, an important prerequisite for the functioning of any reservoir computing device. However, fading memory is also a limitation for any computation that requires persistent storage. To overcome this limitation, the current work introduces an extended version of the single-node Delay-Coupled Reservoir that is based on trained linear feedback. We show by numerical simulations that adding task-specific linear feedback to the single-node Delay-Coupled Reservoir extends the class of solvable tasks to those that require non-fading memory. We demonstrate, through several case studies, the ability of the extended system to carry out complex nonlinear computations that depend on past information, whereas the computational power of the system with fading memory alone quickly deteriorates. Our findings provide the theoretical basis for future physical realizations of a biologically inspired, ultrafast computing device with extended functionality.
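A minimal time-multiplexed sketch of the single-node Delay-Coupled Reservoir may help (without the paper's trained-feedback extension; the node count, coupling constants, input mask, and ridge readout are illustrative assumptions, not the authors' setup): a scalar input is spread across virtual nodes along the delay line by a random mask, each virtual node mixes the freshly computed state of its predecessor with its own state one delay ago, and a linear readout is trained to recall the input one step back, a fading-memory task.

```python
import numpy as np

rng = np.random.default_rng(2)

N_v, T = 50, 2000                    # virtual nodes per delay cycle, cycles
mask = rng.uniform(-1, 1, N_v)       # fixed random input mask (time multiplexing)
u = rng.uniform(-0.5, 0.5, T)        # scalar input stream

# One delay cycle: node i sees the freshly computed state of node i-1
# (the physical node's inertia) plus its own state one delay ago.
X = np.zeros((T, N_v))
x = np.zeros(N_v)
for t in range(T):
    new = np.empty(N_v)
    prev = x[-1]                     # wrap-around from the previous cycle
    for i in range(N_v):
        prev = np.tanh(0.7 * prev + 0.2 * x[i] + mask[i] * u[t])
        new[i] = prev
    x = new
    X[t] = x

# Ridge-regression readout trained to recall the previous input u(t-1):
# possible only because the delayed self-coupling provides fading memory.
target = u[:-1]
A = X[1:]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N_v), A.T @ target)
pred = A @ w
corr = np.corrcoef(pred, target)[0, 1]
```

Tasks needing persistent rather than fading memory are exactly where this sketch fails, which is the gap the paper's trained linear feedback is designed to close.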
Collapse
Affiliation(s)
- André David Kovac
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Lower Saxony, Germany
| | - Maximilian Koall
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Lower Saxony, Germany
| | - Gordon Pipa
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Lower Saxony, Germany
| | - Hazem Toutounji
- Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim of Heidelberg University, Mannheim, Baden-Württemberg, Germany
| |
Collapse
|
47
|
Singer W, Lazar A. Does the Cerebral Cortex Exploit High-Dimensional, Non-linear Dynamics for Information Processing? Front Comput Neurosci 2016; 10:99. [PMID: 27713697 PMCID: PMC5031693 DOI: 10.3389/fncom.2016.00099] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2016] [Accepted: 09/02/2016] [Indexed: 12/04/2022] Open
Abstract
The discovery of stimulus-induced synchronization in the visual cortex suggested the possibility that the relations among low-level stimulus features are encoded by the temporal relationships between neuronal discharges. In this framework, temporal coherence is considered a signature of perceptual grouping. This insight triggered a large number of experimental studies that sought to investigate the relationship between temporal coordination and cognitive functions. While some core predictions derived from the initial hypothesis were confirmed, these studies also revealed a rich dynamical landscape beyond simple coherence whose role in signal processing is still poorly understood. In this paper, a framework is presented which establishes links between the various manifestations of cortical dynamics by assigning specific coding functions to low-dimensional dynamic features such as synchronized oscillations and phase shifts on the one hand, and to high-dimensional non-linear, non-stationary dynamics on the other. The data serving as the basis for this synthetic approach have been obtained with chronic multisite recordings from the visual cortex of anesthetized cats and from monkeys trained to solve cognitive tasks. It is proposed that the low-dimensional dynamics characterized by synchronized oscillations and large-scale correlations are substates that represent the results of computations performed in the high-dimensional state-space provided by recurrently coupled networks.
Collapse
Affiliation(s)
- Wolf Singer
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Max Planck Institute for Brain Research, Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| | - Andreea Lazar
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Max Planck Institute for Brain Research, Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
| |
Collapse
|
48
|
Simbrain 3.0: A flexible, visually-oriented neural network simulator. Neural Netw 2016; 83:1-10. [PMID: 27541049 DOI: 10.1016/j.neunet.2016.07.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Revised: 07/04/2016] [Accepted: 07/13/2016] [Indexed: 11/22/2022]
Abstract
Simbrain 3.0 is a software package for neural network design and analysis, which emphasizes flexibility (arbitrarily complex networks can be built using a suite of basic components) and a visually rich, intuitive interface. These features support both students and professionals. Students can study all of the major classes of neural networks in a familiar graphical setting, and can easily modify simulations, experimenting with networks and immediately seeing the results of their interventions. With the 3.0 release, Simbrain supports models on the order of thousands of neurons and a million synapses. This allows the same features that support education to support research professionals, who can now use the tool to quickly design, run, and analyze the behavior of large, highly customizable simulations.
Collapse
|
49
|
Goel A, Buonomano DV. Temporal Interval Learning in Cortical Cultures Is Encoded in Intrinsic Network Dynamics. Neuron 2016; 91:320-7. [PMID: 27346530 DOI: 10.1016/j.neuron.2016.05.042] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2015] [Revised: 03/22/2016] [Accepted: 05/24/2016] [Indexed: 10/21/2022]
Abstract
Telling time and anticipating when external events will happen are among the most important tasks the brain performs. Yet the neural mechanisms underlying timing remain elusive. One theory proposes that timing is a general and intrinsic computation of cortical circuits. We tested this hypothesis using electrical and optogenetic stimulation to determine whether brain slices could "learn" temporal intervals. Presentation of intervals between 100 and 500 ms altered the temporal profile of evoked network activity in an interval- and pathway-specific manner, suggesting that the network learned to anticipate an expected stimulus. Recordings performed during training revealed a progressive increase in evoked network activity, followed by subsequent refinement of temporal dynamics, which was related to a time-window-specific increase in the excitatory-inhibitory balance. These results support the hypothesis that subsecond timing is an intrinsic computation and that timing emerges from network-wide, yet pathway-specific, changes in evoked neural dynamics.
Collapse
Affiliation(s)
- Anubhuti Goel
- Department of Neurology, University of California, Los Angeles, Reed Neurological Research Ctr-A-145, 710 Westwood Plaza, Los Angeles, CA 90095, USA
| | - Dean V Buonomano
- Departments of Neurobiology and Psychology, Integrative Center for Learning and Memory, University of California, Los Angeles, 695 Young Drive, Los Angeles, CA 90095, USA.
| |
Collapse
|
50
|
Thalmeier D, Uhlmann M, Kappen HJ, Memmesheimer RM. Learning Universal Computations with Spikes. PLoS Comput Biol 2016; 12:e1004895. [PMID: 27309381 PMCID: PMC4911146 DOI: 10.1371/journal.pcbi.1004895] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2015] [Accepted: 04/01/2016] [Indexed: 11/19/2022] Open
Abstract
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require the prior building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves as substrates for powerful general-purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows the networks to learn even difficult benchmark tasks, such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them.
Collapse
Affiliation(s)
- Dominik Thalmeier
- Donders Institute, Department of Biophysics, Radboud University, Nijmegen, Netherlands
| | - Marvin Uhlmann
- Max Planck Institute for Psycholinguistics, Department for Neurobiology of Language, Nijmegen, Netherlands
- Donders Institute, Department for Neuroinformatics, Radboud University, Nijmegen, Netherlands
| | - Hilbert J. Kappen
- Donders Institute, Department of Biophysics, Radboud University, Nijmegen, Netherlands
| | - Raoul-Martin Memmesheimer
- Donders Institute, Department for Neuroinformatics, Radboud University, Nijmegen, Netherlands
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
| |
Collapse
|