1
Elliott T. Stability against fluctuations: a two-dimensional study of scaling, bifurcations and spontaneous symmetry breaking in stochastic models of synaptic plasticity. Biological Cybernetics 2024; 118:39-81. [PMID: 38583095] [DOI: 10.1007/s00422-024-00985-0]
Abstract
Stochastic models of synaptic plasticity must confront the corrosive influence of fluctuations in synaptic strength on patterns of synaptic connectivity. To solve this problem, we have proposed that synapses act as filters, integrating plasticity induction signals and expressing changes in synaptic strength only upon reaching filter threshold. Our earlier analytical study calculated the lifetimes of quasi-stable patterns of synaptic connectivity with synaptic filtering. We showed that the plasticity step size in a stochastic model of spike-timing-dependent plasticity (STDP) acts as a temperature-like parameter, exhibiting a critical value below which neuronal structure formation occurs. The filter threshold scales this temperature-like parameter downwards, cooling the dynamics and enhancing stability. A key step in this calculation was a resetting approximation, essentially reducing the dynamics to one-dimensional processes. Here, we revisit our earlier study to examine this resetting approximation, with the aim of understanding in detail why it works so well by comparing it, and a simpler approximation, to the system's full dynamics consisting of various embedded two-dimensional processes without resetting. Comparing the full system to the simpler approximation, to our original resetting approximation, and to a one-afferent system, we show that their equilibrium distributions of synaptic strengths and critical plasticity step sizes are all qualitatively similar, and increasingly quantitatively similar as the filter threshold increases. This increasing similarity is due to the decorrelation in changes in synaptic strength between different afferents caused by our STDP model, and the amplification of this decorrelation with larger synaptic filters.
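The integrate-and-express mechanism summarized above lends itself to a minimal sketch (all parameter names and values here are illustrative assumptions, not the paper's implementation): a synapse accumulates ±1 plasticity induction signals in a counter and expresses a strength change only when the counter reaches the filter threshold, at which point the counter resets.

```python
import random

def filtered_synapse(n_steps, theta, step_size, p_pot=0.5, seed=0):
    """Minimal integrate-and-express synapse: accumulate +/-1 plasticity
    induction signals and express a strength change of +/-step_size only
    when the filter counter reaches +/-theta, then reset the counter."""
    rng = random.Random(seed)
    strength, counter, n_changes = 1.0, 0, 0
    for _ in range(n_steps):
        counter += 1 if rng.random() < p_pot else -1
        if abs(counter) >= theta:
            strength += step_size * (1 if counter > 0 else -1)
            counter = 0
            n_changes += 1
    return strength, n_changes

# A larger filter threshold expresses far fewer strength changes for the
# same induction stream, suppressing fluctuations in synaptic strength.
_, changes_small = filtered_synapse(10_000, theta=1, step_size=0.01)
_, changes_large = filtered_synapse(10_000, theta=8, step_size=0.01)
```

With `theta=1` every induction signal is expressed immediately (the unfiltered case); raising `theta` cools the dynamics in the sense described in the abstract, since expressed changes become rarer.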
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK.
2
Elliott T. Mean First Passage Memory Lifetimes by Reducing Complex Synapses to Simple Synapses. Neural Comput 2017; 29:1468-1527. [PMID: 28333590] [DOI: 10.1162/neco_a_00956]
Abstract
Memory models that store new memories by forgetting old ones have memory lifetimes that are rather short and grow only logarithmically in the number of synapses. Attempts to overcome these deficits include "complex" models of synaptic plasticity in which synapses possess internal states governing the expression of synaptic plasticity. Integrate-and-express, filter-based models of synaptic plasticity propose that synapses act as low-pass filters, integrating plasticity induction signals before expressing synaptic plasticity. Such mechanisms enhance memory lifetimes, leading to an initial rise in the memory signal that is in radical contrast to other related, but nonintegrative, memory models. Because of the complexity of models with internal synaptic states, however, their dynamics can be more difficult to extract compared to "simple" models that lack internal states. Here, we show that by focusing only on processes that lead to changes in synaptic strength, we can integrate out internal synaptic states and effectively reduce complex synapses to simple synapses. For binary-strength synapses, these simplified dynamics then allow us to work directly with the transitions in perceptron activation induced by memory storage rather than with the underlying transitions in synaptic configurations. This permits us to write down master and Fokker-Planck equations that may be simplified under certain, well-defined approximations. These methods allow us to see that memory based on synaptic filters can be viewed as an initial transient that leads to memory signal rise, followed by the emergence of Ornstein-Uhlenbeck-like dynamics that return the system to equilibrium. We may use this approach to compute mean first passage time-defined memory lifetimes for complex models of memory storage.
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
3
Elliott T. Variations on the Theme of Synaptic Filtering: A Comparison of Integrate-and-Express Models of Synaptic Plasticity for Memory Lifetimes. Neural Comput 2016; 28:2393-2460. [PMID: 27626970] [DOI: 10.1162/neco_a_00889]
Abstract
Integrate-and-express models of synaptic plasticity propose that synapses integrate plasticity induction signals before expressing synaptic plasticity. By discerning trends in their induction signals, synapses can control destabilizing fluctuations in synaptic strength. In a feedforward perceptron framework with binary-strength synapses for associative memory storage, we have previously shown that such a filter-based model outperforms other, nonintegrative, "cascade"-type models of memory storage in most regions of biologically relevant parameter space. Here, we consider some natural extensions of our earlier filter model, including one specifically tailored to binary-strength synapses and one that demands a fixed, consecutive number of same-type induction signals rather than merely an excess before expressing synaptic plasticity. With these extensions, we show that filter-based models outperform nonintegrative models in all regions of biologically relevant parameter space except for a small sliver in which all models encode memories only weakly. In this sliver, which model is superior depends on the metric used to gauge memory lifetimes (whether a signal-to-noise ratio or a mean first passage time). After comparing and contrasting these various filter models, we discuss the multiple mechanisms and timescales that underlie both synaptic plasticity and memory phenomena and suggest that multiple, different filtering mechanisms may operate at single synapses.
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
4
Elliott T. The Enhanced Rise and Delayed Fall of Memory in a Model of Synaptic Integration: Extension to Discrete State Synapses. Neural Comput 2016; 28:1927-1984. [PMID: 27391686] [DOI: 10.1162/neco_a_00867]
Abstract
Integrate-and-express models of synaptic plasticity propose that synapses may act as low-pass filters, integrating synaptic plasticity induction signals in order to discern trends before expressing synaptic plasticity. We have previously shown that synaptic filtering strongly controls destabilizing fluctuations in developmental models. When applied to palimpsest memory systems that learn new memories by forgetting old ones, we have also shown that with binary-strength synapses, integrative synapses lead to an initial memory signal rise before its fall back to equilibrium. Such an initial rise is in dramatic contrast to nonintegrative synapses, in which the memory signal falls monotonically. We now extend our earlier analysis of palimpsest memories with synaptic filters to consider the more general case of discrete state, multilevel synapses. We derive exact results for the memory signal dynamics and then consider various simplifying approximations. We show that multilevel synapses enhance the initial rise in the memory signal and then delay its subsequent fall by inducing a plateau-like region in the memory signal. Such dynamics significantly increase memory lifetimes, defined by a signal-to-noise ratio (SNR). We derive expressions for optimal choices of synaptic parameters (filter size, number of strength states, number of synapses) that maximize SNR memory lifetimes. However, we find that with memory lifetimes defined via mean-first-passage times, such optimality conditions do not exist, suggesting that optimality may be an artifact of SNRs.
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
5
Elliott T. Memory Nearly on a Spring: A Mean First Passage Time Approach to Memory Lifetimes. Neural Comput 2014; 26:1873-1923. [PMID: 24877738] [DOI: 10.1162/neco_a_00622]
Abstract
We study memory lifetimes in a perceptron-based framework with binary synapses, using the mean first passage time for the perceptron's total input to fall below firing threshold to define memory lifetimes. Working with the simplest memory-related model of synaptic plasticity, we may obtain exact results for memory lifetimes or, working in the continuum limit, good analytical approximations that afford either much qualitative insight or extremely good quantitative agreement. In one particular limit, we find that memory dynamics reduce to the well-understood Ornstein-Uhlenbeck process. We show that asymptotically, the lifetimes of memories grow logarithmically in the number of synapses when the perceptron's firing threshold is zero, reproducing standard results from signal-to-noise ratio analyses. However, this is only an asymptotically valid result, and we show that extending its application outside the range of its validity leads to a massive overestimate of the minimum number of synapses required for successful memory encoding. In the case that the perceptron's firing threshold is positive, we find the remarkable result that memory lifetimes are strictly bounded from above. Asymptotically, the dependence of memory lifetimes on the number of synapses drops out entirely, and this asymptotic result provides a strict upper bound on memory lifetimes away from this asymptotic regime. The classic logarithmic growth of memory lifetimes in the simplest, palimpsest memories is therefore untypical and nongeneric: memory lifetimes are typically strictly bounded from above.
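The mean-first-passage-time definition of memory lifetime used above can be illustrated by a toy Monte Carlo estimate (a sketch under assumed parameters `lam` and `sigma`, not the paper's model): a memory signal relaxes toward equilibrium under Ornstein-Uhlenbeck-like dynamics, and its lifetime is the first step at which it falls below the perceptron's firing threshold.

```python
import random

def mfpt_lifetime(h0, threshold, lam=0.01, sigma=0.1,
                  n_trials=200, max_steps=100_000, seed=1):
    """Monte Carlo mean first passage time for an Ornstein-Uhlenbeck-like
    memory signal h relaxing toward zero equilibrium under ongoing storage
    noise. The 'lifetime' of a trial is the first step at which h drops
    below the firing threshold. Parameters are illustrative only."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        h, t = h0, 0
        while h >= threshold and t < max_steps:
            h += -lam * h + sigma * rng.gauss(0.0, 1.0)
            t += 1
        total += t
    return total / n_trials

# A positive firing threshold is crossed sooner than a zero threshold,
# consistent with lifetimes being bounded above when the threshold is positive.
life_zero = mfpt_lifetime(h0=1.0, threshold=0.0)
life_pos = mfpt_lifetime(h0=1.0, threshold=0.5)
```

Since a decaying signal starting above both thresholds must pass the positive threshold before it can pass zero, every trial gives a shorter lifetime for the positive threshold, mirroring the abstract's qualitative conclusion.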
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
6
Wright JJ, Bourke PD. On the dynamics of cortical development: synchrony and synaptic self-organization. Front Comput Neurosci 2013; 7:4. [PMID: 23596410] [PMCID: PMC3573321] [DOI: 10.3389/fncom.2013.00004]
Abstract
We describe a model for cortical development that resolves long-standing difficulties of earlier models. It is proposed that, during embryonic development, synchronous firing of neurons and their competition for limited metabolic resources lead to selection of an array of neurons with ultra-small-world characteristics. Consequently, in the visual cortex, macrocolumns linked by superficial patchy connections emerge in anatomically realistic patterns, with an ante-natal arrangement which projects signals from the surrounding cortex onto each macrocolumn in a form analogous to the projection of a Euclidean plane onto a Möbius strip. This configuration reproduces typical cortical response maps, and simulations of signal flow explain cortical responses to moving lines as functions of stimulus velocity, length, and orientation. With the introduction of direct visual inputs, under the operation of Hebbian learning, development of mature selective response “tuning” to stimuli of given orientation, spatial frequency, and temporal frequency would then take place, overwriting the earlier ante-natal configuration. The model is provisionally extended to hierarchical interactions of the visual cortex with higher centers, and a general principle for cortical processing of spatio-temporal images is sketched.
Affiliation(s)
- James Joseph Wright
- Department of Psychological Medicine, Faculty of Medicine, The University of Auckland, Auckland, New Zealand; Liggins Institute, The University of Auckland, Auckland, New Zealand.
7
Sandberg A. Feasibility of Whole Brain Emulation. Studies in Applied Philosophy, Epistemology and Rational Ethics 2013. [DOI: 10.1007/978-3-642-31674-6_19]
8
Elliott T, Lagogiannis K. The Rise and Fall of Memory in a Model of Synaptic Integration. Neural Comput 2012; 24:2604-2654. [DOI: 10.1162/neco_a_00335]
Abstract
Plasticity-inducing stimuli must typically be presented many times before synaptic plasticity is expressed, perhaps because induction signals gradually accumulate before overt strength changes occur. We consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals before expressing plasticity. We find that the memory trace initially rises before reaching a maximum and then falling. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. In radical contrast, related but nonintegrative models exhibit only a highly problematic oblivescence. Synaptic integration mechanisms possess natural timescales, depending on the statistics of the induction signals. Together with neuromodulation, these timescales may therefore also begin to provide a natural account of the well-known spacing effect in the transition to late-phase plasticity. Finally, we propose experiments that could distinguish between integrative and nonintegrative synapses. Such experiments should further elucidate the synaptic signal processing mechanisms postulated by our model.
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
- Konstantinos Lagogiannis
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.
9
Elliott T. Cross-Talk Induces Bifurcations in Nonlinear Models of Synaptic Plasticity. Neural Comput 2012; 24:455-522. [DOI: 10.1162/neco_a_00224]
Abstract
Linear models of synaptic plasticity provide a useful starting-point for examining the dynamics of neuronal development and learning, but their inherent problems are well known. Models of synaptic plasticity that embrace the demands of biological realism are therefore typically nonlinear. Viewed from a more abstract perspective, nonlinear models of synaptic plasticity are a subset of nonlinear dynamical systems. As such, they may therefore exhibit bifurcations under the variation of control parameters, including noise and errors in synaptic updates. One source of noise or error is the cross-talk that occurs during otherwise Hebbian plasticity. Under cross-talk, stimulation of a set of synapses can induce or modify plasticity in adjacent, unstimulated synapses. Here, we analyze two nonlinear models of developmental synaptic plasticity and a model of independent component analysis in the presence of a simple model of cross-talk. We show that cross-talk does indeed induce bifurcations in these models, entirely destroying their ability to acquire either developmentally or learning-related patterns of fixed points. Importantly, the critical level of cross-talk required to induce bifurcations in these models is very sensitive to the statistics of the afferents’ activities and the number of afferents synapsing on a postsynaptic cell. In particular, the critical level can be made arbitrarily small. Because bifurcations are inevitable in nonlinear models, our results likely apply to many nonlinear models of synaptic plasticity, although the precise details vary by model. Hence, many nonlinear models of synaptic plasticity are potentially fatally compromised by the toxic influence of cross-talk and other sources of noise and errors more generally. We conclude by arguing that biologically realistic models of synaptic plasticity must be robust against noise-induced bifurcations and that biological systems may have evolved strategies to circumvent their possible dangers.
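The cross-talk mechanism described above can be sketched with a simple leakage rule (an illustrative model, not the paper's exact equations): each synapse keeps a fraction 1 − e of its own induced change, and the remainder spills evenly onto the other synapses on the same postsynaptic cell.

```python
def crosstalk_update(w, dw, e):
    """Apply a Hebbian weight change dw under a simple cross-talk model:
    each synapse retains a fraction (1 - e) of its own induced change,
    while the remaining fraction e is spread evenly over the other
    synapses. e = 0 recovers error-free Hebbian plasticity."""
    n = len(w)
    spill = [e * d / (n - 1) for d in dw]  # leakage from each synapse
    total_spill = sum(spill)
    # Each synapse receives the spill from all other synapses.
    return [w[i] + (1 - e) * dw[i] + (total_spill - spill[i])
            for i in range(n)]

# A change induced only at synapse 0 leaks into the unstimulated
# synapses 1..3 when cross-talk is present.
w0 = [0.0, 0.0, 0.0, 0.0]
dw = [1.0, 0.0, 0.0, 0.0]
clean = crosstalk_update(w0, dw, e=0.0)   # pure Hebbian update
noisy = crosstalk_update(w0, dw, e=0.3)   # cross-talk contaminated
```

Note that the total induced change is conserved under this rule; cross-talk only redistributes it, which is precisely why it acts like a structured error term rather than simple additive noise.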
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, U.K.