151
|
Townley S, Ilchmann A, Weiss M, McClements W, Ruiz A, Owens D, Prätzel-Wolters D. Existence and learning of oscillations in recurrent neural networks. IEEE Trans Neural Netw 2000; 11:205-14. [DOI: 10.1109/72.822523]
|
152
|
Arai K, Das S, Keller EL, Aiyoshi E. A distributed model of the saccade system: simulations of temporally perturbed saccades using position and velocity feedback. Neural Netw 1999; 12:1359-75. [PMID: 12662620 DOI: 10.1016/s0893-6080(99)00077-5]
Abstract
Interrupted saccades, movements that are perturbed in mid-flight by pulsatile electrical stimulation in the omnipause neuron region, are known to achieve final eye displacements with accuracies that are similar to normal saccades even in the absence of visual input following the perturbation. In an attempt to explain the neurophysiological basis for this phenomenon, the present paper describes a model of the saccadic system that represents the superior colliculus as a dynamic two-dimensional, topographically arranged array of laterally interconnected units. A distributed feedback pathway to the colliculus from downstream elements, providing both eye position and velocity signals, is incorporated in the model. With the help of a training procedure based on a genetic algorithm and gradient descent, the model is optimized to produce both normal and slow saccades with similar accuracy. The slow movements are included in the training set to mimic the accurate saccades that occur despite alterations in alertness, as well as following various degenerative oculomotor diseases. Although interrupted saccades were not included in the training set, the model is able to produce accurate movements of this type as an emergent property for a wide range of perturbed eye velocity trajectories. Our model demonstrates, for the first time, that by means of an appropriate feedback mechanism, a single-layered dynamic network can be made to retain a distributed memory of the remaining ocular displacement error even for interrupted and slow saccades. These results support the hypothesis that saccades are controlled by error feedback of signals that code efference copies of eye motion and, further, suggest a possible answer to a long-standing question about the kind of feedback signal, if any, that is received by the superior colliculus during saccadic eye movements.
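The local feedback principle in this abstract, driving eye velocity from the remaining displacement error carried by an efference copy, can be sketched in a few lines. This is an illustrative toy, not the paper's collicular network; the gain, time step, and interruption window are invented for the example:

```python
import math

def simulate_saccade(target=10.0, dt=0.001, gain=60.0,
                     interrupt=(0.02, 0.04)):
    """Drive eye velocity from the remaining motor error
    (target minus an efference copy of eye displacement).
    During the interruption window the drive is gated off,
    mimicking stimulation of the omnipause neuron region."""
    eye, t = 0.0, 0.0
    while t < 0.3:
        error = target - eye          # remaining displacement error
        drive = gain * error          # velocity command
        if interrupt[0] <= t < interrupt[1]:
            drive = 0.0               # movement halted mid-flight
        eye += drive * dt             # integrate eye position
        t += dt
    return eye

final = simulate_saccade()
```

Because the drive depends only on the remaining error, gating it off mid-flight delays the movement but does not change where it lands, which is the behavior the interrupted-saccade experiments report.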
Affiliation(s)
- K Arai
- Research Center, Mitsubishi Chemical Corporation, Japan
|
153
|
Voegtlin T, Verschure PF. What can robots tell us about brains? A synthetic approach towards the study of learning and problem solving. Rev Neurosci 1999; 10:291-310. [PMID: 10526893 DOI: 10.1515/revneuro.1999.10.3-4.291]
Abstract
This paper argues for the development of synthetic approaches towards the study of brain and behavior as a complement to the more traditional empirical mode of research. As an example we present our own work on learning and problem solving, which relates to the behavioral paradigms of classical and operant conditioning. We define the concept of learning in the context of behavior and lay out the basic methodological requirements a model needs to satisfy, which include evaluations using robots. In addition, we define a number of design principles neuronal models should obey to be considered relevant. We present in detail the construction of a neural model of short- and long-term memory which can be applied to an artificial behaving system. The presented model (DAC4) provides a novel self-consistent implementation of these processes, which satisfies our principles. This model is then interpreted in light of the present understanding of the neuronal substrate of memory.
Affiliation(s)
- T Voegtlin
- Institute of Neuroinformatics, Zurich, Switzerland
|
154
|
Galicki M, Leistritz L, Witte H. Learning continuous trajectories in recurrent neural networks with time-dependent weights. IEEE Trans Neural Netw 1999; 10:741-56. [DOI: 10.1109/72.774210]
|
155
|
Campolucci P, Uncini A, Piazza F, Rao B. On-line learning algorithms for locally recurrent neural networks. IEEE Trans Neural Netw 1999; 10:253-71. [DOI: 10.1109/72.750549]
|
156
|
Mak M, Ku K, Lu Y. On the improvement of the real time recurrent learning algorithm for recurrent neural networks. Neurocomputing 1999. [DOI: 10.1016/s0925-2312(98)00089-7]
|
157
|
Sivakumar S, Robertson W, Phillips W. Online stabilization of block-diagonal recurrent neural networks. IEEE Trans Neural Netw 1999; 10:167-75. [DOI: 10.1109/72.737503]
|
158
|
Ijspeert AJ, Kodjabachian J. Evolution and development of a central pattern generator for the swimming of a lamprey. Artif Life 1999; 5:247-69. [PMID: 10648954 DOI: 10.1162/106454699568773]
Abstract
This article describes the design of neural control architectures for locomotion using an evolutionary approach. Inspired by the central pattern generators found in animals, we develop neural controllers that can produce the patterns of oscillations necessary for the swimming of a simulated lamprey. This work is inspired by Ekeberg's neuronal and mechanical model of a lamprey [11] and follows experiments in which swimming controllers were evolved using a simple encoding scheme [25, 26]. Here, controllers are developed using an evolutionary algorithm based on the SGOCE encoding [31, 32], in which a genetic programming approach is used to evolve developmental programs that encode the growing of a dynamical neural network. The developmental programs determine how neurons located on a two-dimensional substrate produce new cells through cellular division and how they form efferent or afferent interconnections. Swimming controllers are generated when the growing networks eventually create connections to the muscles located on both sides of the rectangular substrate. These muscles are part of a two-dimensional mechanical simulation of the body of the lamprey in interaction with water. The motivation of this article is to develop a method for the design of control mechanisms for animal-like locomotion. Such locomotion is characterized by a large number of actuators, a rhythmic activity, and the fact that efficient motion is only obtained when the actuators are well coordinated. The task of the control mechanism is therefore to transform commands concerning the speed and direction of motion into the signals sent to the multiple actuators. We define a fitness function, based on several simulations of the controller with different command settings, that rewards the capacity of modulating the speed and the direction of swimming in response to simple, varying input signals. Central pattern generators capable of producing the relatively complex patterns of oscillations necessary for swimming are thus evolved. The best solutions generate traveling waves of neural activity and, similarly to the swimming of a real lamprey, propagate undulations of the body from head to tail, propelling the lamprey forward through water. By simply varying the amplitude of two input signals, the speed and the direction of swimming can be modulated.
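The traveling-wave behavior the evolved controllers exhibit can be illustrated with a generic chain of coupled phase oscillators. This is a textbook CPG sketch under assumed coupling constants, not the SGOCE-evolved network of the paper; the two command inputs here only scale the left/right motor output:

```python
import math

def cpg_chain(n=20, steps=4000, dt=0.005, freq=2.0,
              left_cmd=1.0, right_cmd=1.0):
    """Chain of phase oscillators with nearest-neighbour coupling.
    Each segment is pulled toward a constant phase lag behind its
    rostral neighbour, producing a head-to-tail traveling wave;
    the command inputs scale burst amplitude on each side."""
    phase = [0.0] * n
    lag = 2 * math.pi / n            # desired segment-to-segment lag
    for _ in range(steps):
        new = []
        for i in range(n):
            dphi = 2 * math.pi * freq
            if i > 0:                # couple to rostral neighbour
                dphi += 4.0 * math.sin(phase[i - 1] - phase[i] - lag)
            new.append(phase[i] + dphi * dt)
        phase = new                  # synchronous (Jacobi) update
    # motor output per segment: alternating left/right muscle drive
    left = [left_cmd * max(0.0, math.sin(p)) for p in phase]
    right = [right_cmd * max(0.0, -math.sin(p)) for p in phase]
    return phase, left, right
```

At steady state the phase differences along the chain settle at the imposed lag, so activity peaks sweep from head to tail, which is the undulation pattern the abstract describes.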
Affiliation(s)
- A J Ijspeert
- Department of Artificial Intelligence, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, U.K.
|
159
|
Ruiz A, Owens D, Townley S. Existence, learning, and replication of periodic motions in recurrent neural networks. IEEE Trans Neural Netw 1998; 9:651-61. [DOI: 10.1109/72.701178]
|
160
|
Sundareshan M, Condarcure T. Recurrent neural-network training by a learning automaton approach for trajectory learning and control system design. IEEE Trans Neural Netw 1998; 9:354-68. [DOI: 10.1109/72.668879]
|
161
|
Meert K, Rijckaert M. Intelligent modelling in the chemical process industry with neural networks: a case study. Comput Chem Eng 1998. [DOI: 10.1016/s0098-1354(98)00104-5]
|
162
|
Dominey PF. A shared system for learning serial and temporal structure of sensori-motor sequences? Evidence from simulation and human experiments. Brain Res Cogn Brain Res 1998; 6:163-72. [PMID: 9479067 DOI: 10.1016/s0926-6410(97)00029-3]
Abstract
This research investigates the influences of temporal structure on the representation of serial order. Experiments are performed in a neural network model of sequence learning and in human subjects. In the sequence learning model, a recurrent network of leaky integrator neurons encodes a succession of internal states that become associated, by reinforcement learning, with the correct sequential responses. First, the model is shown to learn a simple temporal discrimination task. The model is then exposed to two novel serial reaction time (SRT) experiments. In the standard SRT task (M.J. Nissen, P. Bullemer, Attentional requirements of learning: evidence from performance measures, Cogn. Psychol. 19 (1987) 1-32 [16]), reaction times for stimuli presented in a repeating sequence are reduced with respect to those for random stimuli, providing a measure of sequence learning. The novelty of the current experiments is that embedded in the serial order of the sequences, there is a temporal structure of delays. The model is sensitive to both the serial structure and the temporal structure of the sequences. This observation is then confirmed in human subjects. These results demonstrate how a novel recurrent architecture encodes the interaction of temporal and serial structure and provide insight into related aspects of human sensori-motor sequence learning.
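The leaky integrator neurons at the core of such models can be sketched as a simple Euler-discretized update. The weight layout, time constant, and nonlinearity below are illustrative assumptions, not the paper's parameters:

```python
import math

def step(state, w_rec, w_in, x, tau=5.0, dt=1.0):
    """One Euler step of a leaky-integrator recurrent layer:
    tau * ds/dt = -s + W_rec f(s) + W_in x, with f = tanh.
    The decay constant tau lets past inputs persist, so the
    internal state reflects both serial and temporal structure."""
    n = len(state)
    f = [math.tanh(s) for s in state]
    new = []
    for i in range(n):
        drive = sum(w_rec[i][j] * f[j] for j in range(n))
        drive += sum(w_in[i][k] * x[k] for k in range(len(x)))
        new.append(state[i] + dt / tau * (-state[i] + drive))
    return new
```

With zero input the state decays exponentially toward rest, so the timing of past stimuli (the delay structure) is implicitly encoded in how far each unit has decayed when the next stimulus arrives.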
Affiliation(s)
- P F Dominey
- Institut des Sciences Cognitives, UPR 9075, CNRS, 69008 Lyon, France.
|
163
|
Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997; 9:1735-80. [PMID: 9377276 DOI: 10.1162/neco.1997.9.8.1735]
Abstract
Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade-correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
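The constant error carousel and multiplicative gates can be sketched for a single cell as below. This follows the original architecture, which had no forget gate (that was a later extension); the scalar weight layout is an illustrative simplification of the full matrix form:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One step of a single-unit LSTM cell. The cell state c has an
    identity self-connection (the constant error carousel), so error
    flows back through it unattenuated; the input and output gates
    control access to it. W maps gate name -> (w_x, w_h, bias)."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])   # input gate
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])   # output gate
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2]) # candidate
    c = c + i * g                  # carousel: additive, weight-1 recurrence
    h = o * math.tanh(c)           # gated output
    return h, c
```

When the input gate is shut, the cell state passes through the step unchanged, which is exactly the constant-error property that lets gradients bridge long time lags.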
Affiliation(s)
- S Hochreiter
- Fakultät für Informatik, Technische Universität München, Germany
|
164
|
|
165
|
Dominey PF, Boussaoud D. Encoding behavioral context in recurrent networks of the fronto-striatal system: a simulation study. Brain Res Cogn Brain Res 1997; 6:53-65. [PMID: 9395849 DOI: 10.1016/s0926-6410(97)00015-3]
Abstract
This research addresses the hypothesis that behavioral context is encoded in recurrent networks of the fronto-striatal system. Behavioral context influences the processing of subsequent brain events, including responses to sensory inputs, thus providing a basis for context-dependent behavior. We define context-dependent behavior as the adaptive ability to produce the appropriate response to a given stimulus, dependent upon the context in which it appears. Behavioral context can change with a time-scale on the order of seconds to tens of seconds or more. This suggests a flexible mechanism that encodes context via an ensemble of neural activation that will appropriately influence the processing of subsequent sensory stimuli. We present a functional model of context encoding in recurrent connections of the fronto-striatal system with simulation results that correspond closely to empirical data. Neuronal activity in monkeys that perform a context-dependent task indicates that the prefrontal cortex and striatum participate differentially in this kind of context encoding. Likewise, simulated neurons in our model of the fronto-striatal system, which performs the context-dependent task, display task-related activity remarkably similar to that found in monkey frontal cortex and striatum, supporting our hypothesis.
|
166
|
Development of a recurrent Sigma-Pi neural network rainfall forecasting system in Hong Kong. Neural Comput Appl 1997. [DOI: 10.1007/bf01501172]
|
167
|
A Novel Neural-Based-Rainfall Newcasting System in Hong Kong. J Intell Syst 1997. [DOI: 10.1515/jisys.1997.7.3-4.245]
|