1
Correction to: Modeling Axonal Plasticity in Artificial Neural Networks. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10526-6
2
3
He H, Shang Y, Yang X, Di Y, Lin J, Zhu Y, Zheng W, Zhao J, Ji M, Dong L, Deng N, Lei Y, Chai Z. Constructing an Associative Memory System Using Spiking Neural Network. Front Neurosci 2019; 13:650. PMID: 31333397. PMCID: PMC6615473. DOI: 10.3389/fnins.2019.00650
Abstract
The development of computer science has led to a blossoming of artificial intelligence (AI), and neural networks are at the core of AI research. Although mainstream neural networks perform well in image processing and speech recognition, they do poorly in models aimed at understanding contextual information. In our view, the reason is that building a neural network through parameter training amounts to fitting the data to a statistical regularity. Since a network built this way has no memory ability, it cannot capture causal relationships between data. Biological memory differs fundamentally from mainstream digital memory in how information is stored. Information in digital memory is converted to binary code and written into separate storage units; this physical isolation destroys the correlations within the information. Digital memory therefore lacks the recall and association functions of biological memory, which can represent causality. In this paper, we present the results of a preliminary effort at constructing an associative memory system based on a spiking neural network. We split the network-building process into two phases: a Structure Formation Phase and a Parameter Training Phase. The Structure Formation Phase applies a learning method based on Hebb's rule to prompt neurons in the memory layer to grow new synapses to neighboring neurons in response to specific input spike sequences fed to the network; its aim is to train the network to memorize those sequences. During the Parameter Training Phase, STDP and reinforcement learning are employed to optimize synaptic weights so that the network can recall the memorized input spike sequences. The results show that our memory neural network could memorize different targets and recall the images it had memorized.
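To make the two-phase scheme concrete, the sketch below (our illustration, not the authors' released code) grows synapses between frequently co-firing neurons and then tunes the resulting weights with trace-based STDP. The spike trains, the co-firing threshold, and all constants are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 50                       # timesteps, memory-layer neurons
spikes = rng.random((T, N)) < 0.05   # stand-in for the input spike sequences

# Phase 1: structure formation -- Hebbian growth of synapses between
# neurons that fire together often (the threshold of 3 is illustrative).
coact = spikes.T.astype(float) @ spikes.astype(float)
np.fill_diagonal(coact, 0.0)
adj = coact >= 3.0                   # grown synapses (boolean adjacency)
w = np.where(adj, 0.5, 0.0)          # new synapses start at a nominal weight

# Phase 2: parameter training -- pair-based STDP with exponential traces:
# pre-before-post potentiates, post-before-pre depresses.
A_plus, A_minus, tau = 0.01, 0.012, 20.0
pre_tr = np.zeros(N)
post_tr = np.zeros(N)
for t in range(T):
    s = spikes[t].astype(float)
    pre_tr = pre_tr * np.exp(-1.0 / tau) + s
    post_tr = post_tr * np.exp(-1.0 / tau) + s
    w += adj * (A_plus * np.outer(pre_tr, s) - A_minus * np.outer(s, post_tr))
    np.clip(w, 0.0, 1.0, out=w)
```

The paper additionally uses reinforcement learning to guide recall; a reward-modulated variant would scale the same STDP update by a reward signal.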
Affiliation(s)
- Hu He, Institute of Microelectronics, Tsinghua University, Beijing, China
- Yingjie Shang, Institute of Microelectronics, Tsinghua University, Beijing, China
- Xu Yang, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Yingze Di, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jiajun Lin, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Yimeng Zhu, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Wenhao Zheng, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jinfeng Zhao, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Mengyao Ji, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Liya Dong, Institute of Microelectronics, Tsinghua University, Beijing, China
- Ning Deng, Institute of Microelectronics, Tsinghua University, Beijing, China
- Yunlin Lei, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Zenghao Chai, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
4
Exploration of a mechanism to form bionic, self-growing and self-organizing neural network. Artif Intell Rev 2019. DOI: 10.1007/s10462-018-9626-2
5
Alahakoon D, Halgamuge SK, Srinivasan B. Dynamic self-organizing maps with controlled growth for knowledge discovery. IEEE Trans Neural Netw 2000; 11:601-14. PMID: 18249788. DOI: 10.1109/72.846732
Abstract
The growing self-organizing map (GSOM) has been presented as an extended version of the self-organizing map (SOM), which has significant advantages for knowledge discovery applications. In this paper, the GSOM algorithm is presented in detail and the effect of a spread factor, which can be used to measure and control the spread of the GSOM, is investigated. The spread factor is independent of the dimensionality of the data and can therefore be used as a controlling measure for generating maps with different dimensionality, which can then be compared and analyzed with better accuracy. The spread factor is also presented as a method for achieving hierarchical clustering of a data set with the GSOM. Such hierarchical clustering allows the data analyst to identify significant and interesting clusters at a higher level of the hierarchy and then continue with finer clustering of only the interesting clusters. Initially, only a small map is created with a low spread factor, which can be generated even for a very large data set. Further analysis is then conducted on selected, smaller sections of the data. This method therefore facilitates the analysis of even very large data sets.
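The growth rule can be made concrete with a short sketch. The growth threshold GT = -D × ln(SF) follows the paper, where D is the data dimensionality and SF in (0, 1) is the spread factor; the map handling, error bookkeeping, and initialization of new nodes below are simplified assumptions of ours.

```python
import numpy as np

def growth_threshold(dim: int, spread_factor: float) -> float:
    # GT = -D * ln(SF): a low SF gives a high threshold and a coarser map
    return -dim * np.log(spread_factor)

rng = np.random.default_rng(0)
D, SF = 10, 0.3
GT = growth_threshold(D, SF)
nodes = {(x, y): rng.random(D) for x in (0, 1) for y in (0, 1)}  # 2x2 start
err = {p: 0.0 for p in nodes}

def present(sample, lr=0.1):
    # winner = node whose weight vector is nearest the sample
    win = min(nodes, key=lambda p: np.linalg.norm(nodes[p] - sample))
    nodes[win] += lr * (sample - nodes[win])
    err[win] += float(np.linalg.norm(sample - nodes[win]))
    if err[win] > GT:                     # accumulated error triggers growth
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            p = (win[0] + dx, win[1] + dy)
            if p not in nodes:            # grow into free neighbor positions
                nodes[p] = nodes[win].copy()
                err[p] = 0.0
        err[win] = 0.0

for x in rng.random((500, D)):
    present(x)
print(len(nodes), "nodes grown with SF =", SF)
```

A lower SF raises GT and yields a smaller, coarser map, which is what makes the hierarchical drill-down described above practical.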
Affiliation(s)
- D Alahakoon, School of Computer Science and Software Engineering, Monash University, Caulfield East, Vic. 3145, Australia
6
Wei H. A Neural Dynamic Model Based on Activation Diffusion and a Micro-Explanation for Cognitive Operations. Int J Cogn Inform Nat Intell 2012. DOI: 10.4018/jcini.2012040101
Abstract
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence. In this paper, a computational model is proposed to simulate the network of neurons in the brain and how it processes information. The model draws on morphological and electrophysiological characteristics of neural information processing and is based on the assumption that neurons encode information in their firing sequences. The network structure, the functions for neural encoding at different stages, the representation of stimuli in memory, and an algorithm for forming a memory are presented. The stability and recall rate of learning and the capacity of the memory are also analyzed. Because successive neural dynamic processes provide a coherent, neuron-level form in which information is represented and processed, the model may facilitate the examination of various branches of artificial intelligence (AI), such as inference, problem solving, pattern recognition, natural language processing, and learning. The cognitive manipulations occurring in intelligent behavior receive a consistent representation, with all of them modeled from the perspective of computational neuroscience. The dynamics of neurons thus make it possible to explain the inner mechanisms of different intelligent behaviors with a unified model of cognitive architecture at a micro-level.
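As a rough illustration of activation diffusion (our sketch, not the paper's encoding model), the snippet below injects activation at cue neurons and lets it spread along weighted connections with decay; the graph, decay rate, and readout are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 30
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)  # sparse weighted graph
np.fill_diagonal(W, 0.0)

def diffuse(cue, steps=10, decay=0.8):
    a = np.zeros(N)
    a[cue] = 1.0                       # inject activation at the cue neurons
    for _ in range(steps):
        a = decay * (a + W.T @ a)      # spread along connections, then decay
        a = np.minimum(a, 1.0)         # saturate activation
    return a

activation = diffuse(cue=[0, 3])
recalled = np.argsort(activation)[-5:]  # read out the most active neurons
print(recalled)
```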
7
Précis of neuroconstructivism: how the brain constructs cognition. Behav Brain Sci 2008; 31:321-31; discussion 331-56. PMID: 18578929. DOI: 10.1017/s0140525x0800407x
Abstract
Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which together allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation, we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to a proactively specified physical and social environment.
8
Quinlan PT, van der Maas HLJ, Jansen BRJ, Booij O, Rendell M. Re-thinking stages of cognitive development: An appraisal of connectionist models of the balance scale task. Cognition 2007; 103:413-59. PMID: 16574091. DOI: 10.1016/j.cognition.2006.02.004
Abstract
The present paper re-appraises connectionist attempts to explain how human cognitive development appears to progress through a series of sequential stages. Models of performance on the Piagetian balance scale task are the focus of attention. Limitations of these models are discussed, and replications and extensions of the work are provided via the Cascade-Correlation algorithm. An application of multi-group latent class analysis for examining the performance of the networks is described, and the results reveal fundamental functional characteristics of the networks. Evidence is provided that strongly suggests the networks are unable to acquire a mastery of torque and, although they do recover certain rules of operation that humans exhibit, they also show a propensity to acquire rules never previously observed.
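For reference, the torque rule that the networks reportedly fail to master is simple to state: the scale tips toward the side with the larger product of weight and distance from the fulcrum. A minimal worked example:

```python
def balance_scale(wl: int, dl: int, wr: int, dr: int) -> str:
    # torque = weight x distance; larger torque wins, equal torques balance
    torque_left, torque_right = wl * dl, wr * dr
    if torque_left > torque_right:
        return "left"
    if torque_right > torque_left:
        return "right"
    return "balance"

# 3 weights at distance 2 vs. 2 weights at distance 3: 6 = 6, so it balances
print(balance_scale(3, 2, 2, 3))  # -> "balance"
```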
Affiliation(s)
- Philip T Quinlan, Department of Psychology, University of York, Heslington, York, UK
9
Westermann G, Sirois S, Shultz TR, Mareschal D. Modeling developmental cognitive neuroscience. Trends Cogn Sci 2006; 10:227-32. PMID: 16603407. DOI: 10.1016/j.tics.2006.03.009
Abstract
In the past few years, connectionist models have contributed greatly to formulating theories of cognitive development. Some of these models follow the approach of developmental cognitive neuroscience in exploring interactions between brain development and cognitive development by integrating structural change into learning. We describe two classes of these models. The first focuses on experience-dependent structural elaboration within a brain region, adding or deleting units and connections during learning. The second models the gradual integration of different brain areas based on combinations of experience-dependent and maturational factors. These models provide new theories of the mechanisms of cognitive change in various domains, and they offer an integrated framework for studying normal and abnormal development, as well as normal and impaired adult processing.
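A minimal sketch of the first model class, under our own simplifying assumptions (a toy regression task, an illustrative plateau threshold), adds a hidden unit when the loss stops improving and prunes near-zero connections afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 4))
y = (X.sum(axis=1) > 2).astype(float)
W = rng.normal(0, 0.1, (4, 2))           # input -> hidden, start with 2 units
v = rng.normal(0, 0.1, 2)                # hidden -> output

def loss():
    return float(np.mean((np.tanh(X @ W) @ v - y) ** 2))

prev = np.inf
for epoch in range(200):
    h = np.tanh(X @ W)
    err = h @ v - y                       # gradient descent on both layers
    v -= 0.1 * (h.T @ err) / len(X)
    W -= 0.1 * (X.T @ ((err[:, None] * v) * (1 - h ** 2))) / len(X)
    cur = loss()
    if prev - cur < 1e-5:                 # plateau: grow one hidden unit
        W = np.hstack([W, rng.normal(0, 0.1, (4, 1))])
        v = np.append(v, 0.0)
    prev = cur

W[np.abs(W) < 1e-3] = 0.0                 # prune near-zero connections
print(W.shape[1], "hidden units, final loss", round(prev, 4))
```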
Affiliation(s)
- Gert Westermann, Department of Psychology, Oxford Brookes University, Gipsy Lane, Oxford OX3 0BP, UK
10
An application of pruning in the design of neural networks for real time flood forecasting. Neural Comput Appl 2005. DOI: 10.1007/s00521-004-0450-z
11
Abstract
Neural networks are applied to a theoretical subject in developmental psychology: the modeling of developmental transitions. Two issues involved in such transitions are discussed: discontinuities and the acquisition of qualitatively new knowledge. We argue that, through the appearance of a bifurcation, a neural network can show discontinuities and may acquire qualitatively new knowledge. First, it is shown that biological principles of neurite outgrowth result in self-organization in a neural network that is strongly dependent on a bifurcation in the activity dynamics. Second, the effect of a bifurcation due to morphological change is investigated in an Adaptive Resonance Theory (ART) network. Exact ART networks with quantitative differences in network structure at the category level show qualitatively different dynamical regimes, which are separated by bifurcations. These qualitative differences in dynamics affect the cognitive function of Exact ART: representations of learned categories are either local or distributed.
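The role of a bifurcation can be illustrated with a one-neuron rate model (our example, not the paper's Exact ART network). For dx/dt = -x + tanh(w·x), x = 0 is the only fixed point while w ≤ 1; once w exceeds 1, two new stable states appear, a qualitatively different regime:

```python
import numpy as np

def settle(w, x0=0.1, dt=0.01, steps=20000):
    # integrate dx/dt = -x + tanh(w * x) until the activity settles
    x = x0
    for _ in range(steps):
        x += dt * (-x + np.tanh(w * x))
    return x

for w in (0.8, 1.0, 1.5, 3.0):
    print(f"w={w}: settles at x = {settle(w):+.3f}")
# Below w = 1 activity decays to zero; above it the same network sustains
# a nonzero state, so a small structural change flips the dynamical regime.
```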
12
Chalup SK. Incremental learning in biological and machine learning systems. Int J Neural Syst 2002; 12:447-65. PMID: 12528196. DOI: 10.1142/s0129065702001308
Abstract
Incremental learning concepts are reviewed in machine learning and neurobiology and identified in evolution, neurodevelopment, and learning. A timeline of qualitative axon, neuron, and synapse development summarizes the review of neurodevelopment. A discussion of experimental results on data-incremental learning with recurrent artificial neural networks reveals that incremental learning often seems to be more efficient or powerful than standard learning, but can produce unexpected side effects. A characterization of incremental learning is proposed that takes the elaborated biological and machine learning concepts into account.
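A minimal sketch of data-incremental learning as reviewed here: the training set grows in stages from easy (large-margin) examples to the full set, rather than being presented all at once. The task, model, stage sizes, and use of the true weights as an easiness measure are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X = rng.normal(size=(300, 5))
y = (X @ true_w > 0).astype(float)
order = np.argsort(-np.abs(X @ true_w))   # largest margin = easiest first

w = np.zeros(5)
for stage in (100, 200, 300):             # incrementally enlarge the data set
    idx = order[:stage]
    for _ in range(50):                   # logistic-regression gradient steps
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
        w -= 0.1 * X[idx].T @ (p - y[idx]) / stage
    acc = float(np.mean((X @ w > 0) == y))
    print(f"after {stage} examples: accuracy {acc:.2f}")
```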
Affiliation(s)
- Stephan K Chalup, School of Electrical Engineering and Computer Science, The University of Newcastle, Australia
13
Abstract
How do the representations underlying cognitive skills emerge? It is becoming increasingly apparent that answering this question requires integration of neural, cognitive and computational perspectives. Results from this integrative approach resonate with Piaget's central constructivist themes, thus converging on a 'neural constructivist' approach to development, which itself rests on two major research developments. First, accumulating neural evidence for developmental plasticity makes nativist proposals increasingly untenable. Instead, the evidence suggests that cortical development involves the progressive elaboration of neural circuits in which experience-dependent neural growth mechanisms act alongside intrinsic developmental processes to construct the representations underlying mature skills. Second, new research involving constructivist neural networks is elucidating the dynamic interaction between environmentally derived neural activity and developmental mechanisms. Recent neurodevelopmental studies further accord with Piaget's themes, supporting the view of human cortical development as a protracted period of hierarchical-representation construction. Combining constructive growth algorithms with the hierarchical construction of cortical regions suggests that cortical development involves a cascade of increasingly complex representations. Thus, protracted cortical development, while occurring at the expense of increased vulnerability and parental investment, appears to be a powerful and flexible strategy for constructing the representations underlying cognition.