1
Shen J, Zhao Y, Liu JK, Wang Y. HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2023; 34:5841-5855. [PMID: 34890341] [DOI: 10.1109/tnnls.2021.3131356]
Abstract
Spiking neural networks (SNNs), inspired by the neuronal networks of the brain, provide biologically plausible, low-power models for information processing. Existing studies either mimic the learning mechanisms of brain networks as closely as possible, for example the temporally local spike-timing-dependent plasticity (STDP) rule, or apply gradient descent to optimize a multilayer SNN with a fixed structure. The former relies on purely local learning rules, and how the real brain performs global-scale credit assignment remains unclear; as a result, such shallow SNNs are robust, but deep SNNs are difficult to train globally and perform poorly. For the latter, the nondifferentiability of discrete spike trains makes gradient computation inexact and effective deep SNNs hard to build. A hybrid solution is therefore attractive: combine shallow SNNs with a suitable machine learning (ML) technique that does not require gradients, gaining both energy efficiency and high performance. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization builds powerful classifiers while avoiding the differentiability problem of gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back into the pool for subsequent weak-SNN training and selection. HybridSNN therefore represents not only a linear combination of simple SNNs, as the standard AdaBoost algorithm produces, but also encodes neuron connection information, closely resembling the neural networks of a brain. HybridSNN combines the low power consumption of its weak units with the strength of overall data-driven optimization.
The network structure of HybridSNN is learned from training samples, making it more flexible and effective than existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the nervous system of the brain, where pyramidal neurons receive thousands of synaptic inputs through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive with state-of-the-art SNNs.
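The boosting-with-feedback scheme described in this abstract can be sketched as follows. This is an illustration under assumptions, not the authors' implementation: a single-feature threshold unit stands in for each weak SNN, and the names `train_weak` and `hybrid_boost` are invented here. Each round selects the best weak unit under the current sample weights and feeds its outputs back into the feature pool.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_weak(X, y, sample_w, n_cands=50):
    """Stand-in for a weak SNN: the best of several random single-feature
    threshold units under the current sample weights."""
    best = None
    for _ in range(n_cands):
        j = int(rng.integers(X.shape[1]))
        thr = float(rng.choice(X[:, j]))
        for sign in (1, -1):
            pred = np.where(X[:, j] > thr, sign, -sign)
            err = float(np.sum(sample_w * (pred != y)))
            if best is None or err < best[0]:
                best = (err, pred)
    return best

def hybrid_boost(X, y, rounds=60):
    """AdaBoost-style greedy selection; each selected unit's outputs are
    fed back into the pool as a new feature for later rounds."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    score = np.zeros(n)
    for _ in range(rounds):
        err, pred = train_weak(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # unit's vote weight
        score += alpha * pred
        w *= np.exp(-alpha * y * pred)          # upweight misclassified samples
        w /= w.sum()
        X = np.column_stack([X, pred])          # feedback: output becomes a feature
    return np.sign(score)
```

Because selected outputs become candidate inputs for later rounds, the final classifier carries connection information between weak units rather than only a flat weighted vote.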
2
Tsukada H, Tsukada M. Comparison of Pattern Discrimination Mechanisms of Hebbian and Spatiotemporal Learning Rules in Self-Organization. Front Syst Neurosci 2021; 15:624353. [PMID: 33854419] [PMCID: PMC8039312] [DOI: 10.3389/fnsys.2021.624353]
Abstract
The spatiotemporal learning rule (STLR), proposed on the basis of hippocampal neurophysiological experiments, differs essentially from the Hebbian learning rule (HEBLR) in its self-organization mechanism: information from the external world is self-organized through the firing (HEBLR) or non-firing (STLR) of output neurons. Here, we describe the differences between the two learning rules by simulating neural network models trained on relatively similar spatiotemporal context information. Comparing the weight distributions after training, the HEBLR produces a unimodal distribution near the training vector, whereas the STLR produces a multimodal distribution. Analyzing the shape of the weight distribution under temporal changes in contextual information, we found that the HEBLR does not change the shape of its weight distribution for time-varying spatiotemporal contexts, whereas the STLR is sensitive to slight differences in spatiotemporal context and produces a multimodal distribution. These results indicate a critical difference in how synaptic weight distributions change dynamically between the HEBLR and STLR during contextual learning. They also capture the characteristic pattern completion of the HEBLR and pattern discrimination of the STLR, which adequately explain the self-organization of contextual information learning.
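The contrast between the two rules can be sketched in a toy simulation. The update equations below are simplified stand-ins for the paper's HEBLR and STLR, chosen only to illustrate the qualitative point: an output-firing-gated Hebbian update is insensitive to the order of context patterns, while a presynaptic-coincidence update distinguishes them.

```python
import numpy as np

def hebbian_step(w, x, eta=0.05, theta=0.3):
    """HEBLR sketch: potentiate inputs only when the output neuron fires."""
    y = float(w @ x > theta)            # postsynaptic firing (0/1)
    return w + eta * y * x

def stlr_step(w, x, x_prev, eta=0.05):
    """STLR sketch: plasticity driven by coincidence among successive
    presynaptic inputs, independent of postsynaptic firing."""
    return w + eta * x * x_prev         # per-synapse input coincidence

def run(seq):
    wh = np.full(4, 0.2)                # Hebbian-trained weights
    ws = np.full(4, 0.2)                # STLR-trained weights
    prev = np.zeros(4)
    for x in seq:
        wh = hebbian_step(wh, x)
        ws = stlr_step(ws, x, prev)
        prev = x
    return wh, ws

# two contexts: the same three patterns presented in different orders
A = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
B = A[[0, 2, 1]]                        # second and third patterns swapped

wh_A, ws_A = run(A)
wh_B, ws_B = run(B)
print("HEBB identical across contexts:", np.allclose(wh_A, wh_B))
print("STLR distinguishes contexts:", not np.allclose(ws_A, ws_B))
```

Since the Hebbian update sums the same fired patterns regardless of order, its final weights coincide for both contexts, while the coincidence term `x * x_prev` depends on which patterns are adjacent in time.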
Affiliation(s)
- Hiromichi Tsukada
- College of Engineering, Chubu University, Kasugai, Japan; Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Minoru Tsukada
- Brain Science Institute, Tamagawa University, Tokyo, Japan
3
Jang EK, Park Y, Lee JS. Reversible uptake and release of sodium ions in layered SnS2-reduced graphene oxide composites for neuromorphic devices. Nanoscale 2019; 11:15382-15388. [PMID: 31389935] [DOI: 10.1039/c9nr03073e]
Abstract
With the advent of brain-inspired computing for complex data processing, emerging nonvolatile memories have been widely studied for neuromorphic devices aimed at pattern recognition and deep learning. However, such devices still suffer from limitations such as nonlinearity and large write noise because they rely on stochastic switching. Here, we present a biomimetic three-terminal electrochemical artificial synapse operated by a conductance change in response to the intercalation of sodium (Na+) ions into a layered SnS2-reduced graphene oxide (RGO) composite channel. SnS2-RGO can reversibly uptake and release Na+ ions, so the channel conductance of the artificial synapse can be controlled effectively, enabling it to emulate essential synaptic functions including short-term plasticity, spatiotemporal signal processing, and the transition from short-term to long-term plasticity. The artificial synapse also shows linear and symmetric potentiation/depression with low cycle-to-cycle variation; these responses could improve write linearity and reduce write noise. This study demonstrates the feasibility of next-generation neuromorphic memory based on ion-driven electrochemical devices that mimic biological synapses through the migration of Na+ ions.
Affiliation(s)
- Eun-Kyeong Jang
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Korea.
4
Hu W, Jiang J, Xie D, Wang S, Bi K, Duan H, Yang J, He J. Transient security transistors self-supported on biodegradable natural-polymer membranes for brain-inspired neuromorphic applications. Nanoscale 2018; 10:14893-14901. [PMID: 30043794] [DOI: 10.1039/c8nr04136a]
Abstract
Transient electronics, a new generation of electronics that can physically or functionally vanish on demand, are promising for future "green", secure, biocompatible electronics. At the same time, hardware implementations of biological synapses are highly desirable for emerging brain-like neuromorphic computing systems that look beyond the conventional von Neumann architecture. Here, a hardware-secure, physically transient, bidirectional artificial synapse network based on a dual in-plane-gate Al-Zn-O neuromorphic transistor was fabricated on free-standing, laterally coupled biopolymer electrolyte membranes (sodium alginate). The excitatory postsynaptic current, paired-pulse facilitation, and temporal filtering characteristics from high-pass to low-pass transition were successfully mimicked. More importantly, bidirectional dynamic spatiotemporal learning rules and neuronal arithmetic were experimentally demonstrated using the two lateral in-plane gates as presynaptic inputs. Notably, excellent physically transient behavior was achieved, with the device dissolving in water in only ∼120 seconds. This work represents a significant step toward future hardware-secure, transient, biocompatible intelligent electronic systems.
Affiliation(s)
- Wennan Hu
- Hunan Key Laboratory of Super Microstructure and Ultrafast Process, School of Physics and Electronics, Central South University, Changsha, Hunan 410083, China.
5
Bhalla US. Dendrites, deep learning, and sequences in the hippocampus. Hippocampus 2017; 29:239-251. [PMID: 29024221] [DOI: 10.1002/hipo.22806]
Abstract
The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This holds for sensory representations, for memory, and for behavioral context. How does the hippocampus accommodate such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, motor control, action planning, and memory. Against this strong evidence for the phenomenon, there are currently more models than definitive experiments on how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been studied extensively at the behavioral, systems, and modeling levels, but again the physiological mechanisms are less well characterized. It is against this backdrop that I discuss two recent developments in neural sequence computation that at face value share little beyond the label "neural": dendritic sequence discrimination, and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory - apparently extreme ends of the spectrum of neural circuit detail. I suggest that each topic holds deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation.
Affiliation(s)
- Upinder S Bhalla
- Neurobiology, National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bellary Road, Bangalore 560065, Karnataka, India
6
Yu Q, Tang H, Tan KC, Yu H. A brain-inspired spiking neural network model with temporal encoding and learning. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2013.06.052]
7
Yu Q, Tang H, Tan KC, Li H. Rapid feedforward computation by temporal encoding and learning with spiking neurons. IEEE Trans Neural Netw Learn Syst 2013; 24:1539-1552. [PMID: 24808592] [DOI: 10.1109/tnnls.2013.2245677]
Abstract
Primates perform remarkably well in cognitive tasks such as pattern recognition. Motivated by recent findings in biological systems, we build a unified, consistent feedforward network with a proper encoding scheme and supervised temporal rules for the pattern recognition task. Temporal rules for processing precise spiking patterns have recently emerged as ways of emulating the brain's computation from its anatomy and physiology, and most can recognize different spatiotemporal patterns. However, the question arises whether these temporal rules can recognize real-world stimuli such as images; moreover, how information is represented in the brain remains unclear. To tackle these problems, we propose a suitable encoding method and a unified computational model with a consistent and efficient learning rule. Through encoding, external stimuli are converted into sparse representations that also have invariance properties. These temporal patterns are then learned by biologically derived algorithms in the learning layer, and the final decision is presented through the readout layer. The performance of the model on images of digits from the MNIST database shows that it recognizes images correctly, with performance comparable to that of current benchmark algorithms. The results also offer a plausibility proof for a class of feedforward models of rapid and robust recognition in the brain.
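A common way to convert an image into the precise spike times such models consume is intensity-to-latency encoding. The sketch below is an assumption for illustration, not the paper's exact encoding scheme: stronger pixels fire earlier, and silent pixels never fire.

```python
import numpy as np

def latency_encode(image, t_max=10.0):
    """Intensity-to-latency conversion (illustrative): the brightest pixel
    fires at t=0, weaker pixels fire later, zero pixels never fire.
    Assumes the image has at least one nonzero pixel."""
    x = image.astype(float) / image.max()              # normalize to [0, 1]
    return np.where(x > 0, t_max * (1.0 - x), np.inf)  # first-spike time (ms)

img = np.array([[0, 128, 255],
                [64, 0, 192]], dtype=np.uint8)
print(latency_encode(img))
```

Encoding intensity as first-spike latency makes the representation sparse (one spike per active pixel) and preserves rank order of intensities, which is what rank- and timing-based learning rules operate on.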
8
Yan C, Wang R, Pan X. A model of hippocampal memory based on an adaptive learning rule of synapses. J Biol Syst 2013. [DOI: 10.1142/s0218339013500162]
Abstract
We constructed a neural network model of the hippocampus and proposed an adaptive learning rule for synapses to simulate, via a resonance mechanism, the storing and retrieving of memory in the hippocampus. The network consists of CA1, CA3, and DG; in particular, CA1 serves as the memory store, receiving inputs from both EC through the perforant path (PP) and CA3 through the Schaffer collaterals (SC). The simulation results showed that a memory trace could not be encoded in CA1 when only a single subthreshold signal from EC or CA3 was input, likely because the two signals failed to resonate. We calculated the signal-to-noise ratio (SNR) of the network and found that it peaked at an appropriate SC connection strength, indicating a typical stochastic resonance phenomenon in PP signal detection. Inputs from EC and CA3 together enhanced the memory representation in CA1, although it remained incomplete. We then applied a learning rule that modifies synaptic weights so that the network can learn an external pattern; after sufficient evolution, the network tends to a stable state. Some CA1 neurons exhibit synchronized firing that represents the memory and is clearer than the memory traces observed before learning. The model and results provide useful guidance for understanding the mechanism of hippocampal memory.
Affiliation(s)
- Chuankui Yan
- Department of Mathematics, School of Science, Hang Zhou Normal University, Xuelin Street 16, Xiasha Higher Education Zone, Hangzhou, 310036, P. R. China
- Institute for Cognitive Neurodynamics, School of Information Science and Engineering, Department of Mathematics, School of Science, East China University of Science and Technology, Meilong 130, Shanghai 200237, P. R. China
- Rubin Wang
- Institute for Cognitive Neurodynamics, School of Information Science and Engineering, Department of Mathematics, School of Science, East China University of Science and Technology, Meilong 130, Shanghai 200237, P. R. China
- Xiaochuan Pan
- Institute for Cognitive Neurodynamics, School of Information Science and Engineering, Department of Mathematics, School of Science, East China University of Science and Technology, Meilong 130, Shanghai 200237, P. R. China
9
Yoneyama M, Fukushima Y, Tsukada M, Aihara T. Spatiotemporal characteristics of synaptic EPSP summation on the dendritic trees of hippocampal CA1 pyramidal neurons as revealed by laser uncaging stimulation. Cogn Neurodyn 2011; 5:333-342. [PMID: 23115591] [DOI: 10.1007/s11571-011-9158-9]
Abstract
Synaptic strength is modified by the temporal coincidence of synaptic inputs without back-propagating action potentials (BPAPs) in CA1 pyramidal neurons. To clarify the interactive mechanisms of associative long-term potentiation (LTP) without BPAPs, local paired stimuli were applied to the dendrites using high-speed laser uncaging equipment. When the spatial distance between the paired stimuli was <10 μm, nonlinear amplification of excitatory postsynaptic potential summation was observed. In the time window from -20 to 20 ms, supralinear amplification was observed, and it was modulated by antagonists of voltage-gated Na(+)/Ca(2+) channels and NMDA-type glutamate receptors. These results are closely related to the spatiotemporal characteristics of associative LTP without BPAPs. This study proposes an essential aspect of dendritic information processing.
Affiliation(s)
- Makoto Yoneyama
- Brain Science Institute, Tamagawa University, 6-1-1 Tamagawagakuen, Machida, Tokyo, 194-8610 Japan
10
Tsukada M, Fukushima Y. A context-sensitive mechanism in hippocampal CA1 networks. Bull Math Biol 2010; 73:417-435. [PMID: 20844974] [DOI: 10.1007/s11538-010-9566-8]
Abstract
This paper presents a possible context-sensitive mechanism at the network and single-neuron levels, based on experiments in hippocampal CA1 and their theoretical models. First, the spatiotemporal learning rule (STLR, non-Hebbian) and the Hebbian rule (HEBB) are shown experimentally to coexist in dendrite-soma interactions in single hippocampal CA1 pyramidal cells. Second, the functional differences between STLR and HEBB are shown theoretically in pattern separation and pattern completion. Third, the interaction between STLR and HEBB at the neural level is proposed to play an important role in forming a selective context determined by value information, which is related to expected reward and behavioral estimation.
Affiliation(s)
- Minoru Tsukada
- Brain Science Institute, Tamagawa University, 6-1-1, Tamagawagakuen, Machida, Tokyo, 194-0041, Japan.
11
Neural cytoskeleton capabilities for learning and memory. J Biol Phys 2010; 36:3-21. [PMID: 19669423] [PMCID: PMC2791806] [DOI: 10.1007/s10867-009-9153-0]
Abstract
This paper proposes a physical model involving the key structures within the neural cytoskeleton as major players in molecular-level processing of information required for learning and memory storage. In particular, actin filaments and microtubules are macromolecules having highly charged surfaces that enable them to conduct electric signals. The biophysical properties of these filaments relevant to the conduction of ionic current include a condensation of counterions on the filament surface and a nonlinear complex physical structure conducive to the generation of modulated waves. Cytoskeletal filaments are often directly connected with both ionotropic and metabotropic types of membrane-embedded receptors, thereby linking synaptic inputs to intracellular functions. Possible roles for cable-like, conductive filaments in neurons include intracellular information processing, regulating developmental plasticity, and mediating transport. The cytoskeletal proteins form a complex network capable of emergent information processing, and they stand to intervene between inputs to and outputs from neurons. In this manner, the cytoskeletal matrix is proposed to work with neuronal membrane and its intrinsic components (e.g., ion channels, scaffolding proteins, and adaptor proteins), especially at sites of synaptic contacts and spines. An information processing model based on cytoskeletal networks is proposed that may underlie certain types of learning and memory.
12
Iterated function systems in the hippocampal CA1. Cogn Neurodyn 2009; 3:205-222. [PMID: 19554477] [DOI: 10.1007/s11571-009-9086-0]
Abstract
How does the information of spatiotemporal sequences originating in the hippocampal CA3 area affect the postsynaptic membrane potentials of hippocampal CA1 neurons? In a recent study, we observed hierarchical clusters in the distribution of membrane potentials of CA1 neurons, arranged according to the history of input sequences (Fukushima et al Cogn Neurodyn 1(4):305-316, 2007). In the present paper, we address the dynamical mechanism generating such a hierarchical distribution. The recording data were investigated using return-map analysis, and collective behavior at the population level was examined using a reconstructed multi-cell recording data set. At both the individual-cell and population levels, the return map of the response sequence of CA1 pyramidal cells was well approximated by a set of contractive affine transformations, where the transformations represent self-organized rules by which the input pattern sequences are encoded. These findings provide direct evidence that the information of temporal sequences generated in CA3 can be represented self-similarly in the membrane potentials of CA1 pyramidal cells.
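The contractive-affine-map encoding can be illustrated with a toy iterated function system (illustrative parameters, not fitted to the recordings): each input symbol contracts the "membrane potential" toward a symbol-specific region, so the current value hierarchically encodes the recent input history.

```python
import numpy as np

# Each input symbol applies a contractive affine map (ratio 0.4 < 1)
# to the scalar "membrane potential" v.
maps = {0: lambda v: 0.4 * v,
        1: lambda v: 0.4 * v + 0.6}

rng = np.random.default_rng(0)
seq = rng.integers(0, 2, size=2000)    # random CA3-like input sequence

v, trace = 0.0, []
for s in seq:
    v = maps[int(s)](v)
    trace.append(v)
trace = np.asarray(trace)

# Responses cluster by the most recent symbol, and sub-cluster by the
# symbol before that: a self-similar, Cantor-like partition of v.
low = trace[seq == 0]                  # all values land in [0, 0.4)
high = trace[seq == 1]                 # all values land in [0.6, 1.0)
print(low.max(), high.min())
```

Because both maps are contractions, the intervals reached after each symbol are disjoint, and within each interval the sub-intervals for the previous symbol are again disjoint, which is exactly the hierarchical clustering described above.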
13
Dual synaptic plasticity in the hippocampus: Hebbian and spatiotemporal learning dynamics. Cogn Neurodyn 2008; 3:153-163. [PMID: 19034691] [DOI: 10.1007/s11571-008-9071-z]
Abstract
We assume that Hebbian learning dynamics (HLD) and spatiotemporal learning dynamics (SLD) are both involved in the mechanism of synaptic plasticity in hippocampal neurons. While HLD is driven by pre- and postsynaptic spike timings through the back-propagating action potential, SLD is evoked by presynaptic spike timings alone. Since the backpropagation attenuates as it nears the distal dendrites, we consider an extreme case as a neuron model in which HLD exists only at the proximal dendrites and SLD only at the distal dendrites. In computer simulations, we examined how the synaptic weights change in response to three types of synaptic input. First, in response to a Poisson train with constant mean frequency, the synaptic weights in HLD and SLD behave qualitatively similarly. Second, although both respond to synchronous input patterns, SLD responds more rapidly than HLD. Third, HLD responds more rapidly to more frequent inputs, whereas SLD shows fluctuating synaptic weights. These results suggest an encoding hypothesis: transient synchronous structure in spatiotemporal input patterns is encoded into the distal dendrites through SLD, while persistent synchrony or firing-rate information is encoded into the proximal dendrites through HLD.
14
Wysoski SG, Benuskova L, Kasabov N. Fast and adaptive network of spiking neurons for multi-view visual pattern recognition. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.12.038]
15
Wright JJ, Bourke PD. An outline of functional self-organization in V1: synchrony, STLR and Hebb rules. Cogn Neurodyn 2008; 2:147-157. [PMID: 19003481] [DOI: 10.1007/s11571-008-9048-y]
Abstract
A model of self-organization of synapses in the striate cortex is described, and its functional implications discussed. Principal assumptions are: (a) covariance of cell firing declines with distance in cortex, (b) covariance of stimulus characteristics declines with distance in the visual field, and (c) metabolic rates are approximately uniform in all small axonal segments. Under these constraints, Hebbian learning implies a maximally stable synaptic configuration corresponding to anatomically and physiologically realistic "local maps", each of macro-columnar size, and each organized as Möbius projections of a "global map" of retinotopic form. Convergence to the maximally stable configuration is facilitated by the spatio-temporal learning rule. A tiling of V1, constructed of approximately mirror-image reflections of each local map by its neighbors, is formed. The model supplements standard concepts of feed-forward visual processing by introducing a new basis for contextual modulation and neural network identification of visual signals, as perturbation of the synaptic configuration by rapid stimulus transients. On a long time-scale, synaptic development could overwrite the Möbius configuration, while LTP and LTD could mediate synaptic gain on intermediate time-scales.
Affiliation(s)
- J J Wright
- Department of Psychological Medicine, University of Auckland, Auckland, New Zealand.
16
Spatial clustering property and its self-similarity in membrane potentials of hippocampal CA1 pyramidal neurons for a spatio-temporal input sequence. Cogn Neurodyn 2007; 1:305-316. [PMID: 19003501] [DOI: 10.1007/s11571-007-9026-9]
Abstract
To clarify how the spatiotemporal sequence information of the hippocampal CA3 affects the postsynaptic membrane potentials of single pyramidal cells in the hippocampal CA1, spatiotemporal stimuli were delivered to the Schaffer collaterals of CA3 through a pair of electrodes and the postsynaptic membrane potentials were recorded by the patch-clamp method. The input-output relations were analyzed sequentially using two measures: "spatial clustering" and its "self-similarity" index. The membrane potentials were hierarchically clustered in a manner self-similar to the input sequences. This property was significant at two and three time-history steps, and it was maintained under two different stimulus conditions, weak and strong current stimulation. The experimental results are discussed in relation to theoretical results on Cantor coding reported by Tsuda (Behav Brain Sci 24(5):793-847, 2001) and Tsuda and Kuroda (Jpn J Indust Appl Math 18:249-258, 2001; Cortical dynamics, pp 129-139, Springer-Verlag, 2004).
17
Tsukada M, Yamazaki Y, Kojima H. Interaction between the spatiotemporal learning rule (STLR) and Hebb type (HEBB) in single pyramidal cells in the hippocampal CA1 area. Cogn Neurodyn 2007; 1:157-167. [PMID: 19003509] [DOI: 10.1007/s11571-006-9014-5]
Abstract
The spatiotemporal learning rule (STLR), proposed as a non-Hebbian rule by Tsukada et al. (Neural Networks 9 (1996) 1357) and Tsukada and Pan (Biol Cybern 92 (2005) 139), consists of two distinctive factors: "cooperative plasticity without a cell spike" and "its temporal summation". Hebb (The organization of behavior. John Wiley, New York, 1949), by contrast, proposed that synaptic modification is strengthened only if the pre- and postsynaptic cells are activated simultaneously (HEBB). We have shown experimentally that STLR and HEBB coexist in single pyramidal cells of the hippocampal CA1 area. The functional differences between STLR and HEBB in dendrite (local)-soma (global) interactions in single CA1 pyramidal cells, and the possibilities of pattern separation, pattern completion, and reinforcement learning, are discussed.
Affiliation(s)
- Minoru Tsukada
- Brain Science Center, Tamagawa University, 6-1-1, Tamagawagakuen, Machida, Tokyo, 194-8610, Japan.
18
Ohta H, Gunji YP. Recurrent neural network architecture with pre-synaptic inhibition for incremental learning. Neural Netw 2006; 19:1106-1119. [PMID: 16989983] [DOI: 10.1016/j.neunet.2006.06.005]
Abstract
We propose a recurrent neural network architecture capable of incremental learning and test its performance. In incremental learning, the consistency between the existing internal representation and a new sequence is unknown, so it is not appropriate to overwrite the existing internal representation with each new sequence. In the proposed model, the parallel pathways from input to output are preserved as far as possible, and a pathway that has emitted a wrong output is inhibited by the previously fired pathway; the network accordingly begins to try other pathways ad hoc. This modeling approach is based on the concept of parallel pathways from input to output, rather than the view of the brain as an integration of state spaces. We discuss extending this approach to modeling higher functions such as decision making.
Affiliation(s)
- Hiroyuki Ohta
- Graduate School of Science and Technology, Kobe University, Rokkodai, Nada, Kobe, Japan.
19
Pan X, Tsukada M. A model of the hippocampal-cortical memory system. Biol Cybern 2006; 95:159-167. [PMID: 16699781] [DOI: 10.1007/s00422-006-0074-8]
Abstract
Based on physiological evidence, we propose a theoretical model of the hippocampal-cortical memory system. The model consists of the following components: the sensory system, the hippocampus (short-term memory), and the association cortex (long-term memory). A series of key codes (local information) is supplied from the sensory system, while context (global information) is input from the hippocampus. The two inputs interact dynamically in the association cortex, where the interactive neurons work as coincidence detectors. The cortical network learns the memory information through the coincidence window and finally stores it in the form of attractors. This local-global information acts as an address designating where the memory is stored in the association cortex, accelerating the storing and retrieving of memory information.
Affiliation(s)
- Xiaochuan Pan
- Department of Information-Communication Engineering/Brain Science Research Center, Research Institute, Tamagawa University, 6-1-1, Tamagawagakuen, Machida, Tokyo 194, Japan