1

Liang Q, Zeng Y, Xu B. Temporal-Sequential Learning With a Brain-Inspired Spiking Neural Network and Its Application to Musical Memory. Front Comput Neurosci 2020; 14:51. [PMID: 32714173 PMCID: PMC7343962 DOI: 10.3389/fncom.2020.00051] [Received: 11/22/2019; Accepted: 05/11/2020]
Abstract
Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and memorized are not dealt with satisfactorily by existing models. To overcome this deficiency, this paper introduces a spiking neural network based on psychological and neurobiological findings at multiple scales. Compared with existing methods, our model has four novel features: (1) It contains several collaborative subnetworks similar to those in brain regions with different cognitive functions. The individual building blocks of the simulated areas are neural functional minicolumns composed of biologically plausible neurons. Both excitatory and inhibitory connections between neurons are modulated dynamically using a spike-timing-dependent plasticity learning rule. (2) Inspired by the mechanisms of the brain's cortical-striatal loop, a dependent timing module is constructed to encode temporal information, which is essential in sequence learning but has not been processed well by traditional algorithms. (3) Goal-based and episodic retrievals can be achieved at different time scales. (4) Musical memory is used as an application to validate the model. Experiments show that the model can store a huge amount of data on melodies and recall them with high accuracy. In addition, it can remember the entirety of a melody given only an episode or the melody played at different paces.
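The synaptic modulation described above relies on a spike-timing-dependent plasticity (STDP) rule. The paper's exact rule and parameter values are not reproduced here; the sketch below is a generic pair-based STDP window with illustrative time constants and amplitudes (all values are assumptions, not the authors'):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: weight change for a spike pair separated by
    dt_ms = t_post - t_pre (milliseconds)."""
    if dt_ms >= 0:
        # presynaptic spike precedes postsynaptic spike: potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    # postsynaptic spike precedes presynaptic spike: depression
    return -a_minus * math.exp(dt_ms / tau_minus)

# Causal pairings strengthen a synapse, anti-causal ones weaken it
assert stdp_dw(10.0) > 0
assert stdp_dw(-10.0) < 0
```

With a_minus slightly larger than a_plus, uncorrelated firing produces net depression, a common choice for keeping weights bounded.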
Affiliation(s)
- Qian Liang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bo Xu
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
2

Martinez RH, Lansner A, Herman P. Probabilistic associative learning suffices for learning the temporal structure of multiple sequences. PLoS One 2019; 14:e0220161. [PMID: 31369571 PMCID: PMC6675053 DOI: 10.1371/journal.pone.0220161] [Received: 05/22/2019; Accepted: 07/08/2019]
Abstract
From memorizing a musical tune to navigating a well-known route, many of our behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterization of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences even in a noisy regime.
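The BCPNN rule referenced above builds weights from the log-odds of co-activation probabilities estimated through exponentially filtered spike traces; making the presynaptic trace slower than the postsynaptic one yields the temporally asymmetric weights that encode sequence order. A minimal discrete-time sketch (trace time constants, rates, and the regularization eps are illustrative assumptions, not the paper's values):

```python
import numpy as np

def bcpnn_weights(spikes, dt=0.001, tau_pre=0.050, tau_post=0.005,
                  tau_p=1.0, eps=1e-4):
    """Hebbian-Bayesian (BCPNN-style) weights from binary spike trains
    (array of shape time x units). The slow presynaptic trace bridges the
    delay to a later postsynaptic spike, so ordered pairs 'i then j'
    accumulate more joint probability p_ij than the reverse order."""
    T, N = spikes.shape
    z_pre = np.zeros(N)
    z_post = np.zeros(N)
    p_i = np.full(N, eps)
    p_j = np.full(N, eps)
    p_ij = np.full((N, N), eps ** 2)
    for t in range(T):
        z_pre += dt * (spikes[t] - z_pre) / tau_pre
        z_post += dt * (spikes[t] - z_post) / tau_post
        p_i += dt * (z_pre - p_i) / tau_p
        p_j += dt * (z_post - p_j) / tau_p
        p_ij += dt * (np.outer(z_pre, z_post) - p_ij) / tau_p
    return np.log(p_ij / np.outer(p_i, p_j))

# Unit 0 always fires 5 ms before unit 1: the forward weight dominates.
s = np.zeros((300, 2))
s[0::20, 0] = 1.0
s[5::20, 1] = 1.0
w = bcpnn_weights(s)
assert w[0, 1] > w[1, 0]
```

During recall, the stronger forward weights bias attractor transitions in the trained direction, which is the basis of the sequence replay studied in the paper.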
Affiliation(s)
- Ramon H. Martinez
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Mathematics Department, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
3

Chenani A, Sabariego M, Schlesiger MI, Leutgeb JK, Leutgeb S, Leibold C. Hippocampal CA1 replay becomes less prominent but more rigid without inputs from medial entorhinal cortex. Nat Commun 2019; 10:1341. [PMID: 30902981 PMCID: PMC6430812 DOI: 10.1038/s41467-019-09280-0] [Received: 11/23/2017; Accepted: 03/03/2019]
Abstract
The hippocampus is an essential brain area for learning and memory. However, the network mechanisms underlying memory storage, consolidation and retrieval remain incompletely understood. Place cell sequences during theta oscillations are thought to be replayed during non-theta states to support consolidation and route planning. In animals with medial entorhinal cortex (MEC) lesions, the temporal organization of theta-related hippocampal activity is disrupted, which allows us to test whether replay is also compromised. Two different analyses—comparison of co-activation patterns between running and rest epochs and analysis of the recurrence of place cell sequences—reveal that the enhancement of replay by behavior is reduced in MEC-lesioned versus control rats. In contrast, the degree of intrinsic network structure prior to and subsequent to behavior remains unaffected by MEC lesions. The MEC-dependent temporal coordination during theta states therefore appears to facilitate behavior-related plasticity, but does not disrupt pre-existing functional connectivity.

The medial entorhinal cortex (MEC) is involved in memory processes that entail the replay of sequential firing of hippocampal place cells during rest periods and during behaviour. Here, the authors show that MEC-lesioned animals show intact replay after an epoch of running on a linear track, while replay during the behavioral epoch is reduced.
Affiliation(s)
- Alireza Chenani
- Department Biology II, Ludwig-Maximilians-Universität München, Martinsried, 82152, Germany; Max-Planck Institute for Psychiatry, 80804, Munich, Germany
- Marta Sabariego
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, 92093, CA, USA
- Magdalene I Schlesiger
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, 92093, CA, USA; Department of Clinical Neurobiology, Medical Faculty of Heidelberg University and German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Jill K Leutgeb
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, 92093, CA, USA
- Stefan Leutgeb
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, 92093, CA, USA; Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, 92093, CA, USA
- Christian Leibold
- Department Biology II, Ludwig-Maximilians-Universität München, Martinsried, 82152, Germany; Bernstein Center for Computational Neuroscience Munich, Martinsried, 82152, Germany
4

Bridging structure and function: A model of sequence learning and prediction in primary visual cortex. PLoS Comput Biol 2018; 14:e1006187. [PMID: 29870532 PMCID: PMC6003695 DOI: 10.1371/journal.pcbi.1006187] [Received: 08/16/2017; Revised: 06/15/2018; Accepted: 05/09/2018]
Abstract
Recent experiments have demonstrated that visual cortex engages in spatio-temporal sequence learning and prediction. The cellular basis of this learning remains unclear, however. Here we present a spiking neural network model that explains a recent study on sequence learning in the primary visual cortex of rats. The model posits that the sequence learning and prediction abilities of cortical circuits result from the interaction of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. It also reproduces changes in stimulus-evoked multi-unit activity during learning. Furthermore, it makes precise predictions regarding how training shapes network connectivity to establish its prediction ability. Finally, it predicts that the adapted connectivity gives rise to systematic changes in spontaneous network activity. Taken together, our model establishes a new conceptual bridge between the structure and function of cortical circuits in the context of sequence learning and prediction.

A central goal of neuroscience is to understand the relationship between the structure and function of brain networks. Of particular interest are the circuits of the neocortex, the seat of our highest cognitive abilities. Here we provide a new link between the structure and function of neocortical circuits in the context of sequence learning. We study a spiking neural network model that self-organizes its connectivity and activity via a combination of different plasticity mechanisms known to operate in cortical circuits. We use this model to explain various findings from a recent experimental study on sequence learning and prediction in rat visual cortex. Our model reproduces the changes in activity patterns as the animal learns the sequential pattern of visual stimulation. In addition, the model predicts what stimulation-induced structural changes underlie this sequence learning ability. Finally, the model also predicts how the adapted network structure influences spontaneous network activity when there is no visual stimulation. Hence, our model provides new insights into the relationship between structure and function of cortical circuits.
5

Leibold C, Monsalve-Mercado MM. Traveling Theta Waves and the Hippocampal Phase Code. Sci Rep 2017; 7:7678. [PMID: 28794419 PMCID: PMC5550484 DOI: 10.1038/s41598-017-08053-3] [Received: 01/03/2017; Accepted: 07/06/2017]
Abstract
Hippocampal place fields form a neuronal map of the spatial environment. In addition, the distance between two place field centers is proportional to the firing phase difference of two place cells with respect to the local theta rhythm. This consistency between spatial distance and theta phase is generally assumed to result from hippocampal phase precession: the firing phase of a place cell decreases with distance traveled in the place field. The rate of phase precession depends on place field width such that the phase range covered in a traversal of a place field is independent of field width. Width-dependent precession rates, however, generally disrupt the consistency between distance and phase differences. In this paper, we provide a mathematical theory suggesting that this consistency can only be secured for different place field widths if phase precession starts at a width-dependent phase offset. These offsets are in accordance with the experimentally observed theta wave traveling from the dorsal to the ventral pole of the hippocampus. Furthermore, the theory predicts that sequences of place cells with different widths should be ordered according to the end of the place field. The results also hold for considerably nonlinear phase precession profiles.
Affiliation(s)
- Christian Leibold
- Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Bernstein Center for Computational Neuroscience Munich, Munich, Germany
- Mauro M Monsalve-Mercado
- Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Bernstein Center for Computational Neuroscience Munich, Munich, Germany
6

Wang Q, Rothkopf CA, Triesch J. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity. PLoS Comput Biol 2017; 13:e1005632. [PMID: 28767646 PMCID: PMC5555713 DOI: 10.1371/journal.pcbi.1005632] [Received: 09/10/2016; Revised: 08/14/2017; Accepted: 06/16/2017]
Abstract
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network’s changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network’s sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.

From dialing a phone number to driving home after work, much of human behavior is inherently sequential. But how do we learn such sequential behaviors, and what neural plasticity mechanisms support this learning? Recent experiments on sequence learning in human adults have produced a range of confusing findings, especially when subjects have to learn multiple sequences at the same time. For example, the success of training can strongly depend on subjects’ training schedules, i.e., whether they practice one task until they are proficient before switching to the next or whether they interleave training of the different tasks. Here we show that a self-organizing neural network model readily explains many findings on human sequence learning. The model is formulated as a recurrent network of simplified spiking neurons and incorporates multiple biologically plausible plasticity mechanisms of neurons and synapses. Therefore, it offers a theoretical bridge between basic mechanisms of synaptic and neuronal plasticity and the behavior of human subjects in sequence learning tasks.
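The SORN described above combines three plasticity mechanisms on a recurrent network of binary threshold units. The sketch below shows only the update skeleton (network size, learning rates, and initialization are illustrative assumptions; the published model also includes inhibitory units and input drive, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 50, 500
eta_stdp, eta_ip, target_rate = 0.001, 0.001, 0.1

# Sparse random excitatory recurrent weights (rows = incoming weights)
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True) + 1e-12      # synaptic normalization (SN)
theta = rng.random(N) * 0.5                    # per-unit firing thresholds

x_prev = (rng.random(N) < target_rate).astype(float)
for _ in range(steps):
    x = (W @ x_prev - theta > 0.0).astype(float)
    # Discrete-time STDP: strengthen pre(t-1) -> post(t), weaken the reverse
    W += eta_stdp * (np.outer(x, x_prev) - np.outer(x_prev, x))
    W = np.clip(W, 0.0, None)
    W /= W.sum(axis=1, keepdims=True) + 1e-12  # SN: rescale incoming weights
    theta += eta_ip * (x - target_rate)        # intrinsic plasticity (IP)
    x_prev = x

# SN keeps each unit's total incoming weight bounded despite ongoing STDP
assert W.min() >= 0.0 and np.all(W.sum(axis=1) <= 1.0 + 1e-9)
```

The interplay is the point: STDP alone would let weights run away, while SN and IP keep activity in a regime where sequence structure can be absorbed.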
Affiliation(s)
- Quan Wang
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
- Constantin A. Rothkopf
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
- Centre for Cognitive Science & Institute of Psychology, Technical University Darmstadt, Darmstadt, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Ruth-Moufang Str. 1, 60438 Frankfurt, Germany
7

Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016; 12:e1004954. [PMID: 27213810 PMCID: PMC4877102 DOI: 10.1371/journal.pcbi.1004954] [Received: 10/29/2015; Accepted: 04/28/2016]
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
8

Yu Q, Yan R, Tang H, Tan KC, Li H. A Spiking Neural Network System for Robust Sequence Recognition. IEEE Trans Neural Netw Learn Syst 2016; 27:621-635. [PMID: 25879976 DOI: 10.1109/tnnls.2015.2416771]
Abstract
This paper proposes a biologically plausible network architecture with spiking neurons for sequence recognition. The architecture is a unified and consistent system with functional parts for sensory encoding, learning, and decoding, and it is the first systematic model attempting to reveal the neural mechanisms by considering the upstream and downstream neurons together. The whole system operates in a consistent temporal framework in which the precise timing of spikes is used for information processing and cognitive computing. Experimental results show that the system is competent at sequence recognition: it is robust to noisy sensory inputs and invariant to changes in the intervals between input stimuli within a certain range. The classification ability of the temporal learning rule used in the system is investigated on two benchmark tasks, where it outperforms two other widely used learning rules. The results also demonstrate the computational power of spiking neurons over perceptrons for processing spatiotemporal patterns. In summary, the system provides a general way, with spiking neurons, to encode external stimuli into spatiotemporal spike patterns, to learn the encoded patterns with temporal learning rules, and to decode the sequence order with downstream neurons. This system structure would benefit developments in both hardware and software.
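Encoding front ends of this kind commonly turn each stimulus into a spatiotemporal spike pattern via a latency code, in which stronger features fire earlier. The sketch below is a generic latency encoder, not the paper's specific scheme (the linear intensity-to-time mapping and window length are assumptions):

```python
def latency_encode(intensities, t_max_ms=100.0):
    """Map normalized intensities in (0, 1] to single spike times:
    stronger features fire earlier within a t_max_ms window."""
    return [t_max_ms * (1.0 - x) for x in intensities]

# Three features of decreasing strength: spike order mirrors intensity order
times = latency_encode([1.0, 0.5, 0.25])
assert times[0] < times[1] < times[2]
```

A downstream neuron trained with a temporal learning rule can then respond selectively to the relative timing of these spikes rather than to firing rates.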
9

Schlesiger MI, Cannova CC, Boublil BL, Hales JB, Mankin EA, Brandon MP, Leutgeb JK, Leibold C, Leutgeb S. The medial entorhinal cortex is necessary for temporal organization of hippocampal neuronal activity. Nat Neurosci 2015; 18:1123-32. [PMID: 26120964 PMCID: PMC4711275 DOI: 10.1038/nn.4056] [Received: 01/19/2015; Accepted: 06/04/2015]
Abstract
The superficial layers of the medial entorhinal cortex (MEC) are the major input to the hippocampus. The high proportion of spatially modulated cells, including grid cells and border cells, in these layers suggests that the MEC inputs to the hippocampus are critical for the representation of space in the hippocampus. However, selective manipulations of the MEC do not completely abolish hippocampal spatial firing. To therefore determine whether other hippocampal firing characteristics depend more critically on MEC inputs, we recorded from hippocampal CA1 cells in rats with MEC lesions. Strikingly, theta phase precession was substantially disrupted, even during periods of stable spatial firing. Our findings indicate that MEC inputs to the hippocampus are required for the temporal organization of hippocampal firing patterns and suggest that cognitive functions that depend on precise neuronal sequences within the hippocampal theta cycle are particularly dependent on the MEC.
Affiliation(s)
- Magdalene I Schlesiger
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg, Germany
- Christopher C Cannova
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA
- Brittney L Boublil
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA
- Jena B Hales
- Department of Psychiatry, School of Medicine, University of California, San Diego, La Jolla, California, USA
- Emily A Mankin
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA
- Mark P Brandon
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA
- Jill K Leutgeb
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA
- Christian Leibold
- Department Biology II, Ludwig-Maximilians-Universität München, Planegg, Germany
- Stefan Leutgeb
- Neurobiology Section and Center for Neural Circuits and Behavior, University of California, San Diego, La Jolla, California, USA; Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, California, USA
10

Goudar V, Buonomano DV. A model of order-selectivity based on dynamic changes in the balance of excitation and inhibition produced by short-term synaptic plasticity. J Neurophysiol 2014; 113:509-23. [PMID: 25339707 DOI: 10.1152/jn.00568.2014]
Abstract
Determining the order of sensory events separated by a few hundred milliseconds is critical to many forms of sensory processing, including vocalization and speech discrimination. Although many experimental studies have recorded from auditory order-sensitive and order-selective neurons, the underlying mechanisms are poorly understood. Here we demonstrate that universal properties of cortical synapses (short-term synaptic plasticity of excitatory and inhibitory synapses) are well suited for the generation of order-selective neural responses. Using computational models of canonical disynaptic circuits, we show that the dynamic changes in the balance of excitation and inhibition imposed by short-term plasticity lead to the generation of order-selective responses. Parametric analyses predict that among the forms of short-term plasticity expressed at excitatory-to-excitatory, excitatory-to-inhibitory, and inhibitory-to-excitatory synapses, the single most important contributor to order-selectivity is the paired-pulse depression of inhibitory postsynaptic potentials (IPSPs). A topographic model of the auditory cortex that incorporates short-term plasticity accounts for both context-dependent suppression and enhancement in response to paired tones. Together these results provide a framework to account for an important computational problem based on ubiquitous synaptic properties that did not yet have a clearly established computational function. Additionally, these studies suggest that disynaptic circuits represent a fundamental computational unit that is capable of processing both spatial and temporal information.
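The paired-pulse depression mechanism highlighted above can be illustrated with a resource-depletion (Tsodyks-Markram-style) model: if the inhibitory synapse has already been activated by a first tone, the IPSP evoked by a second tone is depressed, so an excitatory cell responds more to a tone that follows another than to the same tone alone. All parameter values below are illustrative assumptions, not fits to the paper:

```python
import math

def depressed_fraction(n_prior, isi_ms, tau_rec=300.0, U=0.5):
    """Relative PSP amplitude after n_prior earlier activations at
    interval isi_ms: each release depletes a fraction U of synaptic
    resources, which recover exponentially with time constant tau_rec."""
    R = 1.0                                   # available resources (rested)
    for _ in range(n_prior):
        R_after = R * (1.0 - U)               # release depletes resources
        R = 1.0 - (1.0 - R_after) * math.exp(-isi_ms / tau_rec)
    return R

# Disynaptic sketch: net drive = fixed excitation - depressing inhibition
exc, inh, isi = 1.0, 0.8, 100.0
resp_B_alone = exc - inh * depressed_fraction(0, isi)
resp_B_after_A = exc - inh * depressed_fraction(1, isi)
assert resp_B_after_A > resp_B_alone          # order-selective response
```

Because the depression recovers over a few hundred milliseconds, the selectivity is confined to the interval range the abstract describes.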
Affiliation(s)
- Vishwa Goudar
- Integrative Center for Learning and Memory, Departments of Neurobiology and Psychology, UCLA, Los Angeles, California
- Dean V Buonomano
- Integrative Center for Learning and Memory, Departments of Neurobiology and Psychology, UCLA, Los Angeles, California
11

Xia M, Wong WK, Wang Z. Sequence memory based on coherent spin-interaction neural networks. Neural Comput 2014; 26:2944-61. [PMID: 25149698 DOI: 10.1162/neco_a_00663]
Abstract
Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in existing sequence memory models using heteroassociations, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period based on coherent spin interaction is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing a new coherent spin-interaction sequence memory model, the steady-state period can be controlled by dimension parameters and the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with existing sequence memory models. Furthermore, the sequence storage capacity grows exponentially with the dimension of the neural network.
Affiliation(s)
- Min Xia
- College of Information and Control Science, Nanjing University of Information Science and Technology, Nanjing, 210044, China; Institute of Textiles and Clothing, Hong Kong Polytechnic University, 999077, Hong Kong
12

Cortical gamma oscillations: the functional key is activation, not cognition. Neurosci Biobehav Rev 2013; 37:401-17. [PMID: 23333264 DOI: 10.1016/j.neubiorev.2013.01.013] [Received: 07/29/2012; Revised: 12/28/2012; Accepted: 01/07/2013]
Abstract
Cortical oscillatory synchrony in the gamma range has been attracting increasing attention in cognitive neuroscience ever since being proposed as a solution to the so-called binding problem. This growing literature is critically reviewed in both its basic neuroscience and cognitive aspects. A physiological "default assumption" regarding these oscillations is introduced, according to which they signal a state of physiological activation of cortical tissue, and the associated need to balance excitation with inhibition in particular. As such these oscillations would belong among a variety of generic neural control operations that enable neural tissue to perform its systems level functions, without implementing those functions themselves. Regional control of cerebral blood flow provides an analogy in this regard, and gamma oscillations are tightly correlated with this even more elementary control operation. As correlates of neural activation they will also covary with cognitive activity, and this typically suffices to account for the covariation between gamma activity and cognitive task variables. A number of specific cases of gamma synchrony are examined in this light, including the original impetus for attributing cognitive significance to gamma activity, namely the experiments interpreted as evidence for "binding by synchrony". This examination finds no compelling reasons to assign functional roles to oscillatory synchrony in the gamma range beyond its generic functions at the level of infrastructural neural control.
13

Spectral analysis of input spike trains by spike-timing-dependent plasticity. PLoS Comput Biol 2012; 8:e1002584. [PMID: 22792056 PMCID: PMC3390410 DOI: 10.1371/journal.pcbi.1002584] [Received: 12/01/2011; Accepted: 05/14/2012]
Abstract
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas, such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered, since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds for typical STDP parameters.

Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explain the conditions under which it switches between them. Here, information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
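The central quantity in analyses of this kind is the mean weight drift: the integral of the STDP learning window W(s) against the pre-post cross-correlogram C(s). A numerical sketch with a generic exponential window (window parameters and the Gaussian correlograms are illustrative assumptions):

```python
import numpy as np

def mean_drift(correlogram, lags_ms, a_plus=0.01, a_minus=0.012,
               tau_plus=20.0, tau_minus=20.0):
    """Average weight change per pairing: integral of the STDP learning
    window over the cross-correlogram C(s), with s = t_post - t_pre."""
    window = np.where(lags_ms >= 0,
                      a_plus * np.exp(-lags_ms / tau_plus),
                      -a_minus * np.exp(lags_ms / tau_minus))
    ds = lags_ms[1] - lags_ms[0]
    return float(np.sum(window * correlogram) * ds)

lags = np.arange(-100.0, 100.0, 1.0)
causal = np.exp(-0.5 * ((lags - 5.0) / 3.0) ** 2)      # pre tends to lead post
anticausal = np.exp(-0.5 * ((lags + 5.0) / 3.0) ** 2)  # post tends to lead pre
assert mean_drift(causal, lags) > 0 > mean_drift(anticausal, lags)
```

Synapses whose correlograms put mass where the window is positive are potentiated and the rest are depressed, which is the sense in which the window acts as a kernel over the input correlation structure.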