1. Time as the fourth dimension in the hippocampus. Prog Neurobiol 2020; 199:101920. [PMID: 33053416] [DOI: 10.1016/j.pneurobio.2020.101920]
Abstract
Experiences of animals and humans are structured by the continuity of space and time coupled with the uni-directionality of time. In addition to its pivotal position in spatial processing and navigation, the hippocampal system also plays a central, multiform role in several types of temporal processing. These include timing and sequence learning, at scales ranging from meso-scales of seconds to macro-scales of minutes, hours, days and beyond, encompassing the classical functions of short term memory, working memory, long term memory, and episodic memories (comprising information about when, what, and where). This review article highlights the principal findings and behavioral contexts of experiments in rats showing: 1) timing: tracking time during delays by hippocampal 'time cells' and during free behavior by hippocampal-afferent lateral entorhinal cortex ramping cells; 2) 'online' sequence processing: activity coding sequences of events during active behavior; 3) 'off-line' sequence replay: during quiescence or sleep, orderly reactivation of neuronal assemblies coding awake sequences. Studies in humans show neurophysiological correlates of episodic memory comparable to awake replay. Neural mechanisms are discussed, including ion channel properties, plateau and ramping potentials, oscillations of excitation and inhibition of population activity, bursts of high amplitude discharges (sharp wave ripples), as well as short and long term synaptic modifications among and within cell assemblies. Specifically conceived neural network models suggest processes supporting the emergence of scalar properties (Weber's law), and include different classes of feedforward and recurrent network models, with intrinsic hippocampal coding for 'transitions' (sequencing of events or places).
2. Effect of an EMG-FES Interface on Ankle Joint Training Combined with Real-Time Feedback on Balance and Gait in Patients with Stroke Hemiparesis. Healthcare (Basel) 2020; 8:292. [PMID: 32846971] [PMCID: PMC7551751] [DOI: 10.3390/healthcare8030292]
Abstract
This study evaluated the effects of an electromyography-functional electrical stimulation interface (EMG-FES interface) combined with real-time balance and gait feedback on ankle joint training in patients with stroke hemiplegia. Twenty-six stroke patients participated in this study. All subjects were randomly assigned to either the EMG-FES interface combined with real-time feedback on ankle joint training (RFEF) group (n = 13) or the EMG-FES interface on ankle joint training (EF) group (n = 13). Subjects in both groups were trained for 20 min a day, 5 times a week, for 4 weeks. In addition, all participants underwent standard rehabilitation physical therapy for 60 min a day, 5 times a week, for 4 weeks. The RFEF group showed significant improvements in the weight-bearing lunge test (WBLT), Tardieu Scale (TS), Timed Up and Go Test (TUG), Berg Balance Scale (BBS), velocity, cadence, step length, stride length, stance per, and swing per (p < 0.05). Likewise, the EF group showed significant improvements in WBLT, TUG, BBS, velocity, and cadence (p < 0.05). Moreover, the RFEF group showed significantly greater improvements than the EF group in terms of WBLT, Tardieu Scale, TUG, BBS, velocity, step length, stride length, stance per, and swing per (p < 0.05). Ankle joint training using an EMG-FES interface combined with real-time feedback improved ankle range of motion (ROM), muscle tone, balance, and gait in stroke patients. These results suggest that an EMG-FES interface combined with real-time feedback is feasible and suitable for ankle joint training in individuals with stroke.
3. Pitti A, Quoy M, Lavandier C, Boucenna S. Gated spiking neural network using Iterative Free-Energy Optimization and rank-order coding for structure learning in memory sequences (INFERNO GATE). Neural Netw 2019; 121:242-258. [PMID: 31581065] [DOI: 10.1016/j.neunet.2019.09.023]
Abstract
We present a framework based on iterative free-energy optimization with spiking neural networks for modeling the fronto-striatal system (PFC-BG) for the generation and recall of audio memory sequences. In line with neuroimaging studies carried out in the PFC, we propose a novel coding strategy that uses the gain-modulation mechanism to represent abstract sequences based solely on the rank and location of items within them. Based on this mechanism, we show that we can construct a repertoire of neurons sensitive to the temporal structure in sequences, from which we can represent any novel sequence. Free-energy optimization is then used to explore and retrieve the missing indices of the items in the correct order for executive control and compositionality. We show that the gain-modulation mechanism permits the network to be robust to variability and to capture long-term dependencies, as it implements a gated recurrent neural network. This model, called Inferno Gate, extends the neural architecture Inferno, standing for Iterative Free-Energy Optimization of Recurrent Neural Networks, with Gating or Gain-modulation. In experiments performed with an audio database of ten thousand MFCC vectors, Inferno Gate is capable of efficiently encoding and retrieving chunks of fifty items. We then discuss the potential of our network to model the features of working memory in the PFC-BG loop for structural learning, goal-direction and hierarchical reinforcement learning.
Affiliation(s)
- Alexandre Pitti: Laboratoire ETIS UMR 8051, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS, France.
- Mathias Quoy: Laboratoire ETIS UMR 8051, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS, France.
- Catherine Lavandier: Laboratoire ETIS UMR 8051, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS, France.
- Sofiane Boucenna: Laboratoire ETIS UMR 8051, Université Paris-Seine, Université de Cergy-Pontoise, ENSEA, CNRS, France.
4. Park JH. Effects of mental imagery training combined electromyogram-triggered neuromuscular electrical stimulation on upper limb function and activities of daily living in patients with chronic stroke: a randomized controlled trial. Disabil Rehabil 2019; 42:2876-2881. [DOI: 10.1080/09638288.2019.1577502]
Affiliation(s)
- Jin-Hyuck Park: Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
5. Bae S, Kim KY. Dual-afferent sensory input training for voluntary movement after stroke: A pilot randomized controlled study. NeuroRehabilitation 2017; 40:293-300. [DOI: 10.3233/nre-161417]
6. Eser J, Zheng P, Triesch J. Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning. PLoS One 2014; 9:e86962. [PMID: 24466301] [PMCID: PMC3900684] [DOI: 10.1371/journal.pone.0086962]
Abstract
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that, due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
Affiliation(s)
- Jürgen Eser: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Pengsheng Zheng: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch: Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
7. Govan G, Xenos A, Frisco P. A critical study of network models for neural networks and their dynamics. J Theor Biol 2013; 336:1-10. [PMID: 23871957] [DOI: 10.1016/j.jtbi.2013.07.005]
Abstract
We use three network models, Erdős-Rényi, Watts-Strogatz and structured nodes, to generate networks sharing several topological features with the neural network of C. elegans (our target network). A new topological measurement, incoming and outgoing edges heat maps, is introduced and used to compare the considered networks. We run these networks as random recurrent neural networks and study their dynamics. We find that none of the considered network models generates networks similar to the target one in both their topological features and their dynamics. Moreover, we find that the dynamics of the target network are very robust to the rewiring of its edges.
Affiliation(s)
- G Govan: School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
8. Leleu T, Aihara K. Spontaneous slow oscillations and sequential patterns due to short-term plasticity in a model of the cortex. Neural Comput 2013; 25:3131-82. [PMID: 24001341] [DOI: 10.1162/neco_a_00513]
Abstract
We study a realistic model of a cortical column taking into account short-term plasticity between pyramidal cells and interneurons. The simulation of leaky integrate-and-fire neurons shows that low-frequency oscillations emerge spontaneously as a result of intrinsic network properties. These oscillations are composed of prolonged phases of high and low activity reminiscent of cortical up and down states, respectively. We simplify the description of the network activity by using a mean field approximation and reduce the system to two slow variables exhibiting relaxation oscillations. We identify two types of slow oscillations. When the combination of dynamic synapses between pyramidal cells and those between interneurons accounts for the generation of these slow oscillations, the end of the up phase is characterized by asynchronous fluctuations of the membrane potentials. When the slow oscillations are mainly driven by the dynamic synapses between interneurons, the network exhibits fluctuations of membrane potentials, which are more synchronous at the end than at the beginning of the up phase. Additionally, finite-size effects and slow synaptic currents can modify, respectively, the irregularity and frequency of these oscillations. Finally, we consider possible roles of a slow oscillatory input modeling long-range interactions in the brain. Spontaneous slow oscillations of local networks are modulated by the oscillatory input, which induces, notably, synchronization, subharmonic synchronization, and chaotic relaxation oscillations in the mean field approximation. In the case of forced oscillations, the slow population-averaged activity of leaky integrate-and-fire neurons can have both deterministic and stochastic temporal features. We discuss the possibility that long-range connectivity controls the emergence of slow sequential patterns in local populations due to the tendency of a cortical column to oscillate at low frequency.
Affiliation(s)
- Timothée Leleu: Graduate School of Engineering, University of Tokyo, Bunkyo-ku, Tokyo 113-8505, Japan
9. Leleu T, Aihara K. Combined effects of LTP/LTD and synaptic scaling in formation of discrete and line attractors with persistent activity from non-trivial baseline. Cogn Neurodyn 2012; 6:499-524. [PMID: 24294335] [PMCID: PMC3495077] [DOI: 10.1007/s11571-012-9211-3]
Abstract
In this article, we analyze the combined effects of LTP/LTD and synaptic scaling and study the creation of persistent activity from a periodic or chaotic baseline attractor. Using a mean field approximation, we detail the bifurcations leading to the creation of new attractors. Attractors encoding persistent activity can notably appear via generalized period-doubling bifurcations, tangent bifurcations of the second iterates or boundary crises, after which the basins of attraction become irregular. Synaptic scaling is shown to maintain the coexistence of a state of persistent activity and the baseline. Depending on the rate of change of the external inputs, different types of attractors can be formed: line attractors for rapidly changing external inputs and discrete attractors for constant external inputs.
Affiliation(s)
- Timothée Leleu: Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Kazuyuki Aihara: Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan; Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan
10. Marković D, Gros C. Intrinsic Adaptation in Autonomous Recurrent Neural Networks. Neural Comput 2012; 24:523-40. [DOI: 10.1162/neco_a_00232]
Abstract
A massively recurrent neural network responds, on the one hand, to input stimuli and, on the other, is autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
Affiliation(s)
- Dimitrije Marković: Institute for Theoretical Physics, J. W. Goethe University, 60438 Frankfurt am Main, Hessen, Germany
- Claudius Gros: Institute for Theoretical Physics, J. W. Goethe University, 60438 Frankfurt am Main, Hessen, Germany
11. Riano L, McGinnity TM. Quantifying the role of complexity in a system’s performance. Evolving Systems 2011. [DOI: 10.1007/s12530-011-9031-4]
12. Negrello M. Neural Communication: Messages Between Modules. Invariants of Behavior 2011:213-238. [DOI: 10.1007/978-1-4419-8804-1_10]
13. De Loor P, Manac’h K, Tisseau J. Enaction-Based Artificial Intelligence: Toward Co-evolution with Humans in the Loop. Minds Mach (Dordr) 2009. [DOI: 10.1007/s11023-009-9165-3]
14. Siri B, Berry H, Cessac B, Delord B, Quoy M. A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks. Neural Comput 2009; 20:2937-66. [PMID: 18624656] [DOI: 10.1162/neco.2008.05-07-530]
Abstract
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
Affiliation(s)
- Benoît Siri: Team Alchemy, INRIA, Parc Club Orsay Université, Orsay Cedex, France.
15. Siri B, Quoy M, Delord B, Cessac B, Berry H. Effects of Hebbian learning on the dynamics and structure of random networks with inhibitory and excitatory neurons. J Physiol Paris 2007; 101:136-48. [PMID: 18042357] [DOI: 10.1016/j.jphysparis.2007.10.003]
Abstract
The aim of the present paper is to study the effects of Hebbian learning in random recurrent neural networks with biological connectivity, i.e. sparse connections and separate populations of excitatory and inhibitory neurons. We furthermore consider that the neuron dynamics may occur at a (shorter) time scale than synaptic plasticity and consider the possibility of learning rules with passive forgetting. We show that the application of such Hebbian learning leads to drastic changes in the network dynamics and structure. In particular, the learning rule contracts the norm of the weight matrix and yields a rapid decay of the dynamics complexity and entropy. In other words, the network is rewired by Hebbian learning into a new synaptic structure that emerges with learning on the basis of the correlations that progressively build up between neurons. We also observe that, within this emerging structure, the strongest synapses organize as a small-world network. The second effect of the decay of the weight matrix spectral radius consists in a rapid contraction of the spectral radius of the Jacobian matrix. This drives the system through the "edge of chaos" where sensitivity to the input pattern is maximal. Taken together, this scenario is remarkably predicted by theoretical arguments derived from dynamical systems and graph theory.
Affiliation(s)
- Benoît Siri: INRIA, Futurs Research Centre, Project-Team Alchemy, 4 rue J Monod, 91893 Orsay Cedex, France
16. Cessac B. A discrete time neural network model with spiking neurons. Rigorous results on the spontaneous dynamics. J Math Biol 2007; 56:311-45. [PMID: 17874106] [DOI: 10.1007/s00285-007-0117-3]
Abstract
We derive rigorous results describing the asymptotic dynamics of a discrete time model of spiking neurons introduced in Soula et al. (Neural Comput. 18, 1, 2006). Using symbolic dynamics techniques, we show how the dynamics of the membrane potentials is in one-to-one correspondence with sequences of spike patterns ("raster plots"). Moreover, though the dynamics is generically periodic, it has a weak form of sensitivity to initial conditions due to the presence of a sharp threshold in the model definition. As a consequence, the model exhibits a dynamical regime indistinguishable from chaos in numerical experiments.
Affiliation(s)
- B Cessac: INRIA, 2004 Route des Lucioles, 06902 Sophia-Antipolis, France.
17. Molter C, Salihoglu U, Bersini H. The Road to Chaos by Time-Asymmetric Hebbian Learning in Recurrent Neural Networks. Neural Comput 2007; 19:80-110. [PMID: 17134318] [DOI: 10.1162/neco.2007.19.1.80]
Abstract
This letter aims at studying the impact of iterative Hebbian learning algorithms on the recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides “on its own” the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storing capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more “respectful” of the network's intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe the traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as “frustrated chaos.”
Affiliation(s)
- Colin Molter: Laboratory for Dynamics of Emergent Intelligence, RIKEN Brain Science Institute, Wako, Saitama, 351-0198, Japan.
18. Perrinet L, Samuelides M, Thorpe S. Coding Static Natural Images Using Spiking Event Times: Do Neurons Cooperate? IEEE Trans Neural Netw 2004; 15:1164-75. [PMID: 15484892] [DOI: 10.1109/tnn.2004.833303]
Abstract
To understand possible strategies of temporal spike coding in the central nervous system, we study functional neuromimetic models of visual processing for static images. We first present the retinal model introduced by Van Rullen and Thorpe, which represents the multiscale contrast values of the image using an orthonormal wavelet transform. These analog values activate a set of spiking neurons which each fire once to produce an asynchronous wave of spikes. According to this model, the image may be progressively reconstructed from this spike wave thanks to regularities in the statistics of the coefficients determined with natural images. Here, we study mathematically how the quality of information transmission carried by this temporal representation varies over time. In particular, we study how these regularities can be used to optimize information transmission by using a form of temporal cooperation of neurons to code analog values. The original model used wavelet transforms that are close to orthogonal. However, the selectivities of realistic neurons overlap, and we propose an extension of the previous model by adding a spatial cooperation between filters. This model extends the previous scheme to arbitrary (and possibly nonorthogonal) representations of features in the images. In particular, we compare the performance of increasingly over-complete representations in the retina. Results show that this algorithm provides an efficient spike coding strategy for low-level visual processing which may adapt to the complexity of the visual input.
19. Bertschinger N, Natschläger T. Real-Time Computation at the Edge of Chaos in Recurrent Neural Networks. Neural Comput 2004; 16:1413-36. [PMID: 15165396] [DOI: 10.1162/089976604323057443]
Abstract
Depending on the connectivity, recurrent networks of simple computational units can show very different types of dynamics, ranging from totally ordered to chaotic. We analyze how the type of dynamics (ordered or chaotic) exhibited by randomly connected networks of threshold gates driven by a time-varying input signal depends on the parameters describing the distribution of the connectivity matrix. In particular, we calculate the critical boundary in parameter space where the transition from ordered to chaotic dynamics takes place. Employing a recently developed framework for analyzing real-time computations, we show that only near the critical boundary can such networks perform complex computations on time series. Hence, this result strongly supports conjectures that dynamical systems that are capable of doing complex computational tasks should operate near the edge of chaos, that is, the transition from ordered to chaotic dynamics.
Affiliation(s)
- Nils Bertschinger: Institute for Theoretical Computer Science, Technische Universitaet Graz, A-8010 Graz, Austria.
20. Quoy M, Moga S, Gaussier P. Dynamical neural networks for planning and low-level robot control. IEEE Trans Syst Man Cybern A Syst Hum 2003. [DOI: 10.1109/tsmca.2003.809224]
21. Bersini H, Sener P. The connections between the frustrated chaos and the intermittency chaos in small Hopfield networks. Neural Netw 2002; 15:1197-204. [PMID: 12425438] [DOI: 10.1016/s0893-6080(02)00096-5]
Abstract
In a previous paper we introduced the notion of frustrated chaos occurring in Hopfield networks [Neural Networks 11 (1998) 1017]. It is a dynamical regime which appears in a network when the global structure is such that local connectivity patterns responsible for stable oscillatory behaviors are intertwined, leading to mutually competing attractors and unpredictable itinerancy among brief appearances of these attractors. Frustration destabilizes the network and provokes an erratic 'wavering' among the orbits that characterize the same network when it is connected in a non-frustrated way. In this paper, through a detailed study of the bifurcation diagram given for some connection weights, we show that this frustrated chaos belongs to the family of intermittency-type chaos, first described by Bergé et al. [Order within chaos (1984)] and Pomeau and Manneville [Commun. Math. Phys. 74 (1980) 189]. Indeed, the transition to chaos is a critical one, and all along the bifurcation diagram, in any chaotic window, the duration of the intermittent cycles between two chaotic bursts grows in inverse ratio to the connection weight. Specific to this regime are the intermittent cycles, easily identifiable as the non-frustrated regimes obtained by altering the values of these same connection weights. We show more specifically that anywhere in the bifurcation diagram a chaotic window always lies between two oscillatory regimes, and that the resulting chaos is a merging of, among others, the cycles at both ends. The strength (i.e. the duration of its oscillatory phase before the chaotic burst) of the first cycle decreases while the regime tends to stabilize into the second cycle (with the strength of this second cycle increasing), which finally takes control. Since, in our study, the bifurcation diagram concerns the same connection weights responsible for the learning mechanism of the Hopfield network, we discuss the relations existing between bifurcation, learning and control of chaos. We show that, in some cases, the addition of a slower Hebbian learning mechanism on top of the Hopfield network makes the resulting global dynamics drive the network into a stable oscillatory regime, through a succession of intermittent and quasiperiodic regimes. Finally, we present a series of possible logical steps to manually construct a frustrated network.
Affiliation(s)
- Hugues Bersini: IRIDIA-CP 19416, Université Libre de Bruxelles, Belgium.