1
van Albada SJ, Morales-Gregorio A, Dickscheid T, Goulas A, Bakker R, Bludau S, Palm G, Hilgetag CC, Diesmann M. Bringing anatomical information into neuronal network models. Adv Exp Med Biol 2022; 1359:201-234. DOI: 10.1007/978-3-030-89439-9_9
2
Short term memory properties of sensory neural architectures. J Comput Neurosci 2019; 46:321-332. PMID: 31104206. DOI: 10.1007/s10827-019-00720-w
Abstract
A functional role of the cerebral cortex is to form and hold representations of the sensory world for behavioral purposes. This is achieved by a sheet of neurons, organized in modules called cortical columns, that receives inputs in a peculiar manner: only a few neurons are driven by sensory inputs through thalamic projections, while the vast majority receive mainly cortical inputs. How should cortical modules be organized with respect to sensory inputs in order for the cortex to efficiently hold sensory representations in memory? To address this question we investigate the memory performance of trees of recurrent networks (TRNs), composed of recurrent networks, each modeling a cortical column, connected to each other through a tree-shaped feed-forward backbone of connections, with sensory stimuli injected at the root of the tree. On these sensory architectures, two types of short-term memory (STM) mechanisms can be implemented: STM via transient dynamics on the feed-forward tree, and STM via reverberating activity on the recurrent connectivity inside modules. We derive equations describing the dynamics of such networks, which allow us to thoroughly explore the space of possible architectures and quantify their memory performance. By varying the divergence ratio of the tree, we show that serial architectures, where sensory inputs are successively processed in different modules, are better suited to implement STM via transient dynamics, while parallel architectures, where sensory inputs are simultaneously processed by all modules, are better suited to implement STM via reverberating dynamics.
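To make the serial-versus-parallel comparison above concrete, here is a minimal Python sketch, not the paper's model: rate-neuron modules with random recurrent weights are arranged on a chain and on a star, a single input pulse is injected at the root, and we track how long activity persists. Module sizes, gains, and the persistence threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, T = 40, 4, 60      # neurons per module, number of modules, time steps

def module_weights(g):
    """Random recurrent weights rescaled to spectral radius g (< 1: stable)."""
    W = rng.normal(0.0, 1.0, (n, n))
    return g * W / np.max(np.abs(np.linalg.eigvals(W)))

def simulate(parents, g_rec=0.7):
    """parents[i] is the module feeding module i; -1 marks the sensory root."""
    W = [module_weights(g_rec) for _ in range(m)]
    F = [rng.normal(0.0, 1.0 / np.sqrt(n), (n, n)) for _ in range(m)]
    x = np.zeros((m, n))
    trace = []
    for t in range(T):
        stim = rng.normal(0.0, 1.0, n) if t == 0 else np.zeros(n)  # one pulse
        new = np.empty_like(x)
        for i, p in enumerate(parents):
            ff = stim if p == -1 else x[p]       # feed-forward input
            new[i] = np.tanh(W[i] @ x[i] + F[i] @ ff)
        x = new
        trace.append(np.linalg.norm(x, axis=1).max())
    return np.array(trace)

serial = simulate([-1, 0, 1, 2])     # chain: stimulus -> 0 -> 1 -> 2 -> 3
parallel = simulate([-1, 0, 0, 0])   # star: module 0 feeds all others at once
print("steps above threshold, serial:  ", int((serial > 0.1).sum()))
print("steps above threshold, parallel:", int((parallel > 0.1).sum()))
```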
3
Storing structured sparse memories in a multi-modular cortical network model. J Comput Neurosci 2016; 40:157-175. PMID: 26852335. DOI: 10.1007/s10827-016-0590-z
Abstract
We study the memory performance of a class of modular attractor neural networks, where modules are potentially fully connected networks connected to each other via diluted long-range connections. On this anatomical architecture we store memory patterns of activity using a Willshaw-type learning rule. The P patterns are split into categories, such that patterns of the same category activate the same set of modules. We first compute the maximal storage capacity of these networks. We then investigate their error-correction properties through an exhaustive exploration of parameter space, and identify regions where the networks behave as an associative memory device. The crucial parameters that control the retrieval abilities of the network are (1) the ratio between the numbers of synaptic contacts of long- and short-range origin, (2) the number of categories in which a module is activated, and (3) the amount of local inhibition. We discuss the relationship between our model and the networks of cortical patches that have been observed in different cortical areas.
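A minimal sketch of the Willshaw-type storage that the model above builds on, assuming a single non-modular network: binary patterns are stored by clipped Hebbian learning and recalled from a partial cue, with a k-winners-take-all step standing in for local inhibition. The sizes n, k, and P are illustrative, and the paper's modular architecture and category structure are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, P = 400, 12, 40        # neurons, active units per pattern, patterns

patterns = np.zeros((P, n), dtype=bool)
for p in range(P):
    patterns[p, rng.choice(n, size=k, replace=False)] = True

# Willshaw learning: a binary synapse is switched on if its two neurons were
# coactive in at least one stored pattern (clipped Hebbian rule).
W = np.zeros((n, n), dtype=bool)
for xi in patterns:
    W |= np.outer(xi, xi)
np.fill_diagonal(W, False)

def retrieve(cue):
    """One step: keep the k units with the largest dendritic sums
    (k-winners-take-all stands in for the paper's local inhibition)."""
    h = W.astype(np.int32) @ cue.astype(np.int32)
    out = np.zeros(n, dtype=bool)
    out[np.argsort(h)[-k:]] = True
    return out

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[: k // 2]] = False    # erase half of the pattern
out = retrieve(cue)
print("recovered", int((out & patterns[0]).sum()), "of", k, "active units")
```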
4
Abstract
Neural associative networks are a promising computational paradigm both for modeling neural circuits of the brain and for implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about [Formula: see text] memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most [Formula: see text] bits per synapse. Willshaw networks can store a much smaller number of about [Formula: see text] memories but make do with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor [Formula: see text] close to one, the same number of memories as optimized Hopfield-type learning: for example, [Formula: see text] for binary synapses, [Formula: see text] for 2-bit (4-state) synapses, [Formula: see text] for 3-bit (8-state) synapses, and [Formula: see text] for 4-bit (16-state) synapses. The model also provides the theoretical framework for determining optimal discretization parameters for computer implementations or brain-like parallel hardware, including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store [Formula: see text] bit per computer bit and up to [Formula: see text] bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
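The following sketch illustrates the general idea of discrete-weight learning, under assumptions of mine rather than the paper's optimal discretization: sparse patterns are stored with a covariance (Hopfield-type) rule, the resulting weights are quantized to 2^bits evenly spaced levels, and recall from corrupted cues is compared across precisions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, P, f = 300, 30, 0.1            # neurons, patterns, activity level

patterns = (rng.random((P, n)) < f).astype(float)
W = (patterns - f).T @ (patterns - f) / n     # covariance (Hopfield-type) rule
np.fill_diagonal(W, 0.0)

def discretize(W, bits):
    """Quantize to 2**bits evenly spaced levels (the paper instead derives
    optimal level placement; even spacing is an assumption here)."""
    levels = 2 ** bits
    lo, hi = W.min(), W.max()
    step = (hi - lo) / (levels - 1)
    return lo + step * np.round((W - lo) / step)

def recall_fraction(W, xi, flips=10):
    """Corrupt a pattern, run a few k-winners-take-all updates, score recall."""
    cue = xi.copy()
    idx = rng.choice(n, size=flips, replace=False)
    cue[idx] = 1.0 - cue[idx]
    for _ in range(5):
        h = W @ cue
        theta = np.sort(h)[-int(f * n)]       # threshold keeping f*n winners
        cue = (h >= theta).astype(float)
    return (cue == xi).mean()

for bits in (1, 2, 4):
    Wq = discretize(W, bits)
    q = np.mean([recall_fraction(Wq, xi) for xi in patterns])
    print(f"{bits}-bit synapses: mean fraction of correct units = {q:.3f}")
```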
5
Cheng Z, Deng Z, Hu X, Zhang B, Yang T. Efficient reinforcement learning of a reservoir network model of parametric working memory achieved with a cluster population winner-take-all readout mechanism. J Neurophysiol 2015; 114:3296-3305. PMID: 26445865. DOI: 10.1152/jn.00378.2015
Abstract
The brain often has to make decisions based on information stored in working memory, but the neural circuitry underlying working memory is not fully understood. Many theoretical efforts have focused on modeling the persistent delay-period activity in the prefrontal areas that is believed to represent working memory. Recent experiments reveal that the delay-period activity in the prefrontal cortex is neither static nor homogeneous, as previously assumed. Models based on reservoir networks have been proposed to capture such dynamical activity patterns. The connections between neurons within a reservoir are random and do not require explicit tuning, and information storage does not depend on the stable states of the network. However, it is not clear how the encoded information can be retrieved for decision making with a biologically realistic algorithm. We therefore built a reservoir-based neural network to model the neuronal responses of the prefrontal cortex in a somatosensory delayed discrimination task. We first show that the neurons in the reservoir exhibit the heterogeneous and dynamical delay-period activity observed in previous experiments. We then show that a cluster population circuit decodes the information from the reservoir with a winner-take-all mechanism and contributes to the decision making. Finally, we show that the model rapidly achieves good performance when only the readout is shaped by reinforcement learning. Our model reproduces important features of previous behavioral and neurophysiological data, and illustrates for the first time how task-specific information stored in a reservoir network can be retrieved with a biologically plausible reinforcement learning scheme.
Affiliation(s)
- Zhenbo Cheng
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China; Department of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
- Zhidong Deng
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Bo Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Tianming Yang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
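A minimal sketch of the reservoir scheme described in the entry above, with illustrative parameters rather than the paper's: a fixed echo-state reservoir holds the first stimulus of a two-interval comparison across a delay, a two-population winner-take-all readout makes the choice, and only the readout weights are updated with a reward-modulated Hebbian rule.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T_delay, lr = 200, 10, 0.05

W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
w_in = rng.normal(0.0, 1.0, N)
w_out = np.zeros((2, N))          # one readout population per choice

def trial(f1, f2, learn=True):
    """Present f1, wait T_delay steps, present f2, then decide f2 > f1?"""
    x = np.zeros(N)
    for t in range(T_delay + 2):
        u = f1 if t == 0 else (f2 if t == T_delay + 1 else 0.0)
        x = np.tanh(W @ x + w_in * u)
    votes = w_out @ x
    choice = int(votes[1] > votes[0])          # winner-take-all readout
    reward = 1.0 if choice == int(f2 > f1) else -1.0
    if learn:
        w_out[choice] += lr * reward * x       # reward-modulated Hebbian step
    return reward > 0

results = [trial(*rng.uniform(0.0, 1.0, 2)) for _ in range(2000)]
print("accuracy over the last 200 trials:", np.mean(results[-200:]))
```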
6
Fiebig F, Lansner A. Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics. Front Comput Neurosci 2014; 8:64. PMID: 25071536. PMCID: PMC4077014. DOI: 10.3389/fncom.2014.00064
Abstract
Declarative long-term memories are not created in an instant. The gradual stabilization and temporally shifting dependence of acquired declarative memories on different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place-cell reactivations during slow-wave sleep, are thought to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system exhibits classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces peculiar biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.
Affiliation(s)
- Florian Fiebig
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh University, Edinburgh, Scotland
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
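A toy sketch of the replay mechanism, far simpler than the paper's three-stage model (no PFC stage, no spiking or BCPNN learning): a fast, decaying "hippocampal" Hopfield store learns one pattern per day, spontaneous reactivations replay its attractors into a slow "cortical" store each night, and cortical recall is tested after a simulated hippocampal lesion. All rates and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, days = 200, 8
patterns = rng.choice([-1.0, 1.0], (days, n))
W_hpc = np.zeros((n, n))     # fast, decaying store
W_ctx = np.zeros((n, n))     # slow, stable store

def settle(W, x, steps=3):
    """A few Hopfield-style updates (the 1e-12 breaks sign ties at zero)."""
    for _ in range(steps):
        x = np.sign(W @ x + 1e-12)
    return x

for day in range(days):
    # Daytime: one-shot Hebbian learning in the fast store, which also decays.
    W_hpc = 0.85 * W_hpc + np.outer(patterns[day], patterns[day]) / n
    # Night: spontaneous reactivations replay stored attractors into cortex.
    for _ in range(30):
        replay = settle(W_hpc, rng.choice([-1.0, 1.0], n))
        W_ctx += 0.01 * np.outer(replay, replay) / n

W_hpc[:] = 0.0               # simulated hippocampal lesion
for day in range(days):
    m = float(settle(W_ctx, patterns[day]) @ patterns[day]) / n
    print(f"day-{day} memory, cortical recall after lesion: {m:+.2f}")
```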
7
Krishnamurthy P, Silberberg G, Lansner A. A cortical attractor network with Martinotti cells driven by facilitating synapses. PLoS One 2012; 7:e30752. PMID: 22523533. PMCID: PMC3327695. DOI: 10.1371/journal.pone.0030752
Abstract
The population of pyramidal cells significantly outnumbers the inhibitory interneurons in the neocortex, while at the same time the diversity of interneuron types is much more pronounced. One acknowledged key role of inhibition is to control the rate and patterning of pyramidal cell firing via negative feedback, but most likely the diversity of inhibitory pathways is matched by a corresponding diversity of functional roles. An important distinguishing feature of cortical interneurons is the variability of the short-term plasticity properties of the synapses they receive from pyramidal cells. The Martinotti cell type has recently come under scrutiny due to the distinctly facilitating nature of the synapses it receives from pyramidal cells, which distinguishes these neurons from basket cells and other inhibitory interneurons that are typically targeted by depressing synapses. A key aspect of the work reported here has been to pinpoint the role of this variability. We first set out to reproduce quantitatively, based on in vitro data, the disynaptic inhibitory microcircuit connecting two pyramidal cells via one or a few Martinotti cells. In a second step, we embedded this microcircuit in a previously developed attractor memory network model of neocortical layers 2/3. This model network demonstrated that basket cells, with their characteristic depressing synapses, are the first to discharge when the network enters an attractor state, and that Martinotti cells respond with a delay, thereby shifting the excitation-inhibition balance and acting to terminate the attractor state. A parameter sensitivity analysis suggested that Martinotti cells might in fact play a dominant role in setting the attractor dwell time, and thus the cortical speed of processing, with cellular adaptation and synaptic depression having a less prominent role than previously thought.
Affiliation(s)
- Pradeep Krishnamurthy
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Gilad Silberberg
- Nobel Institute of Neurophysiology, Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
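The facilitating-versus-depressing distinction above is commonly captured by the Tsodyks-Markram short-term plasticity model; the sketch below uses that standard model with illustrative parameter values (not the paper's fits) to show how PSP amplitudes grow across a spike train for Martinotti-like facilitating synapses and shrink for basket-like depressing ones.

```python
import numpy as np

def psp_amplitudes(spike_times, U, tau_rec, tau_facil):
    """Relative PSP amplitude (u * x) for each presynaptic spike."""
    u, x, last = 0.0, 1.0, None
    amps = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            u *= np.exp(-dt / tau_facil)                  # facilitation decays
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
        u += U * (1.0 - u)            # spike-triggered increase in release prob.
        amps.append(u * x)
        x -= u * x                    # synaptic resources are consumed
        last = t
    return amps

train = [i * 0.05 for i in range(8)]                       # 20 Hz train (s)
fac = psp_amplitudes(train, U=0.1, tau_rec=0.1, tau_facil=0.6)   # facilitating
dep = psp_amplitudes(train, U=0.6, tau_rec=0.8, tau_facil=0.02)  # depressing
print("facilitating:", [round(a, 3) for a in fac])
print("depressing:  ", [round(a, 3) for a in dep])
```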
8
Learning sequences of sparse correlated patterns using small-world attractor neural networks: an application to traffic videos. Neurocomputing 2011. DOI: 10.1016/j.neucom.2011.03.014
9
Abstract
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning, typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning, for example linear Hopfield-type rules, the Willshaw model employing binary synapses, and the BCPNN rule of Lansner and Ekeberg. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level, linking approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
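One simple Bayesian-flavored construction in the spirit of the entry above (my assumed formulation, not the paper's exact rule): weights are log-odds of pairwise coactivation relative to the product of unit priors, estimated from the stored patterns with Laplace smoothing, and retrieval uses a k-winners-take-all step.

```python
import numpy as np

rng = np.random.default_rng(7)
n, P, f = 200, 50, 0.05
X = (rng.random((P, n)) < f).astype(float)     # sparse binary patterns

# Activation frequencies with add-one (Laplace) smoothing to avoid log(0).
p1 = (X.sum(axis=0) + 1.0) / (P + 2.0)         # P(unit active)
p11 = (X.T @ X + 1.0) / (P + 4.0)              # P(pair coactive)

W = np.log(p11 / np.outer(p1, p1))             # log-odds association weights
np.fill_diagonal(W, 0.0)

def retrieve(cue, k):
    h = W @ cue
    out = np.zeros(n)
    out[np.argsort(h)[-k:]] = 1.0              # k-winners-take-all
    return out

xi = X[0]
k = int(xi.sum())
cue = xi.copy()
cue[np.flatnonzero(cue)[: k // 2]] = 0.0       # keep only half of the cue
out = retrieve(cue, k)
print("recovered", int((out * xi).sum()), "of", k, "active units")
```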
10
Voges N, Guijarro C, Aertsen A, Rotter S. Models of cortical networks with long-range patchy projections. J Comput Neurosci 2009; 28:137-154. PMID: 19866352. DOI: 10.1007/s10827-009-0193-z
Abstract
The cortex exhibits an intricate vertical and horizontal architecture, the latter often featuring spatially clustered projection patterns, so-called patches. Many network studies of cortical dynamics ignore such spatial structures and assume purely random wiring. Here, we focus on the non-random network structure provided by long-range horizontal (patchy) connections that remain inside the gray matter. We investigate how the spatial arrangement of patchy projections influences global network topology and predict its impact on the activity dynamics of the network. Since neuroanatomical data on horizontal projections are rather sparse, we suggest and compare four candidate scenarios of how patchy connections may be established. To identify a set of characteristic network properties that enables us to pin down the differences between the resulting network models, we employ the framework of stochastic graph theory. We find that patchy projections provide an exceptionally efficient way of wiring, as the resulting networks tend to exhibit small-world properties with significantly reduced wiring costs. Furthermore, the eigenvalue spectra, as well as the structure of common inputs and outputs of the networks, suggest that different spatial connectivity patterns support distinct types of activity propagation.
Affiliation(s)
- Nicole Voges
- Bernstein Center for Computational Neuroscience Freiburg, Albert-Ludwig University, Freiburg, Germany.
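A minimal sketch of patchy wiring and its small-world signature, with assumed radii and patch counts: neurons on a 2D sheet receive dense local links plus links into one remote patch each, and clustering and path length are compared against an edge-matched random graph (using networkx).

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
n = 300
pos = rng.random((n, 2))                           # neurons on a unit sheet
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

A = (d < 0.08) & (d > 0)                           # dense local neighborhood
for i in range(n):                                 # one remote patch per neuron
    center = rng.random(2)
    patch = np.flatnonzero(np.linalg.norm(pos - center, axis=1) < 0.06)
    A[i, patch] = True
    A[patch, i] = True                             # keep the graph undirected
np.fill_diagonal(A, False)

G = nx.from_numpy_array(A.astype(int))
R = nx.gnm_random_graph(n, G.number_of_edges(), seed=0)  # edge-matched control
for name, g in (("patchy", G), ("random", R)):
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name}: clustering = {nx.average_clustering(g):.3f}, "
          f"path length = {nx.average_shortest_path_length(giant):.2f}")
```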
11
Dominguez D, González M, Serrano E, Rodríguez FB. Structured information in small-world neural networks. Phys Rev E Stat Nonlin Soft Matter Phys 2009; 79:021909. PMID: 19391780. DOI: 10.1103/PhysRevE.79.021909
Abstract
The retrieval abilities of spatially uniform attractor networks can be measured by the global overlap between patterns and neural states. However, we found that nonuniform networks, for instance, small-world networks, can retrieve fragments of patterns (blocks) without performing global retrieval. We propose a way to measure the local retrieval using a parameter that is related to the fluctuation of the block overlaps. Simulation of neural dynamics shows a competition between local and global retrieval. The phase diagram shows a transition from local retrieval to global retrieval when the storage ratio increases and the topology becomes more random. A theoretical approach confirms the simulation results and predicts that the stability of blocks can be improved by dilution.
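The local-versus-global retrieval measure described above can be illustrated directly; in the sketch below (block size and the example state are assumptions), a state that retrieves only two blocks yields a small global overlap but a large dispersion of per-block overlaps.

```python
import numpy as np

rng = np.random.default_rng(9)
n, B = 1000, 10                      # neurons, number of blocks
block = n // B
xi = rng.choice([-1.0, 1.0], n)      # stored pattern

# Example state: two blocks retrieved perfectly, the rest at chance.
s = rng.choice([-1.0, 1.0], n)
s[:2 * block] = xi[:2 * block]

m_global = float(xi @ s) / n
m_blocks = np.array([
    float(xi[b * block:(b + 1) * block] @ s[b * block:(b + 1) * block]) / block
    for b in range(B)
])
sigma = m_blocks.std()               # large sigma with small m: local retrieval

print(f"global overlap m = {m_global:.2f}")
print("block overlaps:", np.round(m_blocks, 2))
print(f"block-overlap dispersion = {sigma:.2f}")
```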
12
Johansson C, Lansner A. Implementing plastic weights in neural networks using low precision arithmetic. Neurocomputing 2009. DOI: 10.1016/j.neucom.2008.04.007