1. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024; 20:e1012220. PMID: 38950068. PMCID: PMC11244818. DOI: 10.1371/journal.pcbi.1012220.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse has access only to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule that co-exists with ongoing neural dynamics.
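To make "fully local" concrete, here is a toy sketch of a rule in this spirit; the update equation, rates, and threshold are illustrative assumptions, not the rule proposed in the paper. The synapse changes its weight using nothing but its own presynaptic and postsynaptic rates:

```python
def local_update(w, pre_rate, post_rate, lr=0.01, theta=0.5):
    """Toy Hebbian-style local rule (illustrative, not the paper's rule):
    the change depends only on the pre- and postsynaptic rates seen by
    this synapse. Potentiates when both are active; depresses when the
    postsynaptic rate falls below the threshold theta. Clipped at zero."""
    return max(0.0, w + lr * pre_rate * (post_rate - theta))

# Repeated co-activation of pre and post strengthens the synapse...
strong = 0.2
for _ in range(100):
    strong = local_update(strong, pre_rate=1.0, post_rate=1.0)

# ...while presynaptic activity with a silent postsynaptic neuron weakens it.
weak = 0.2
for _ in range(100):
    weak = local_update(weak, pre_rate=1.0, post_rate=0.0)
```

Nothing non-local (other neurons' rates, other synapses' weights, a global error signal) enters the update; that restriction is what the abstract means by a fully local rule.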
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
- Giancarlo La Camera
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America
- Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
2. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. bioRxiv [Preprint] 2024:2023.12.07.570692. PMID: 38106233. PMCID: PMC10723399. DOI: 10.1101/2023.12.07.570692.
Affiliation(s)
- Xiaoyu Yang
- Graduate Program in Physics and Astronomy, Stony Brook University
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
- Giancarlo La Camera
- Department of Neurobiology & Behavior, Stony Brook University
- Center for Neural Circuit Dynamics, Stony Brook University
3. Temporal progression along discrete coding states during decision-making in the mouse gustatory cortex. PLoS Comput Biol 2023; 19:e1010865. PMID: 36749734. PMCID: PMC9904478. DOI: 10.1371/journal.pcbi.1010865.
Abstract
The mouse gustatory cortex (GC) is involved in taste-guided decision-making in addition to sensory processing. Rodent GC exhibits metastable neural dynamics during ongoing and stimulus-evoked activity, but how these dynamics evolve in the context of a taste-based decision-making task remains unclear. Here we employ analytical and modeling approaches to i) extract metastable dynamics in ensemble spiking activity recorded from the GC of mice performing a perceptual decision-making task; ii) investigate the computational mechanisms underlying GC metastability in this task; and iii) establish a relationship between GC dynamics and behavioral performance. Our results show that activity in GC during perceptual decision-making is metastable and that this metastability may serve as a substrate for sequentially encoding sensory, abstract cue, and decision information over time. Perturbations of the model's metastable dynamics indicate that boosting inhibition in different coding epochs differentially impacts network performance, explaining a counterintuitive effect of GC optogenetic silencing on mouse behavior.
4
|
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Collapse
Affiliation(s)
- H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang
- Author to whom correspondence should be addressed.
- G. La Camera
- Author to whom correspondence should be addressed.
5. The Mean Field Approach for Populations of Spiking Neurons. Adv Exp Med Biol 2022; 1359:125-157. DOI: 10.1007/978-3-030-89439-9_6.
Abstract
Mean field theory is a device to analyze the collective behavior of a dynamical system comprising many interacting particles. The theory allows one to reduce the behavior of the system to the properties of a handful of parameters. In neural circuits, these parameters are typically the firing rates of distinct, homogeneous subgroups of neurons. Knowledge of the firing rates under conditions of interest can reveal essential information on both the dynamics of neural circuits and the way they can subserve brain function. The goal of this chapter is to provide an elementary introduction to the mean field approach for populations of spiking neurons. We introduce the general idea in networks of binary neurons, starting from the most basic results and then generalizing to more relevant situations. This allows one to derive the mean field equations in a simplified setting. We then derive the mean field equations for populations of integrate-and-fire neurons. An effort is made to derive the main equations of the theory using only elementary methods from calculus and probability theory. The chapter ends with a discussion of the assumptions of the theory and some of the consequences of violating those assumptions. This discussion includes an introduction to balanced and metastable networks and a brief catalogue of successful applications of the mean field approach to the study of neural circuits.
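The self-consistency idea at the heart of the approach can be sketched in a few lines. Assuming a sigmoidal gain function and a single homogeneous population of binary neurons (the gain function and parameter values below are illustrative choices, not taken from the chapter), the mean activity m solves m = f(Jm + h):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean_field_rate(J, h, m0, n_iter=500):
    """Iterate the self-consistency condition m = f(J*m + h), where m is
    the mean activity of one homogeneous population of binary neurons,
    J the mean recurrent coupling and h an external input. Fixed-point
    iteration converges to a stable solution near the starting point."""
    m = m0
    for _ in range(n_iter):
        m = sigmoid(J * m + h)
    return m

# With strong enough recurrent excitation the equation is bistable:
# two stable solutions coexist for the same parameters.
m_low = mean_field_rate(J=6.0, h=-3.0, m0=0.01)
m_high = mean_field_rate(J=6.0, h=-3.0, m0=0.99)
```

For strong enough recurrent coupling the self-consistency equation admits two stable solutions; such bistability is the single-population seed of the multistable and metastable regimes discussed at the end of the chapter.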
6. Expectation-induced modulation of metastable activity underlies faster coding of sensory stimuli. Nat Neurosci 2019; 22:787-796. PMID: 30936557. PMCID: PMC6516078. DOI: 10.1038/s41593-019-0364-9.
Abstract
Sensory stimuli can be recognized more rapidly when they are expected. This phenomenon depends on expectation affecting the cortical processing of sensory information. However, the mechanisms responsible for the effects of expectation on sensory circuits remain elusive. Here, we report a novel computational mechanism underlying the expectation-dependent acceleration of coding observed in the gustatory cortex of alert rats. We use a recurrent spiking network model with a clustered architecture capturing essential features of cortical activity, such as its intrinsically generated metastable dynamics. Relying on network theory and computer simulations, we propose that expectation exerts its function by modulating the intrinsically generated dynamics preceding taste delivery. Our model’s predictions were confirmed in the experimental data, demonstrating how the modulation of ongoing activity can shape sensory coding. Altogether, these results provide a biologically plausible theory of expectation and ascribe a new functional role to intrinsically generated, metastable activity.
7. Murray JM, Escola GS. Learning multiple variable-speed sequences in striatum via cortical tutoring. eLife 2017; 6. PMID: 28481200. PMCID: PMC5446244. DOI: 10.7554/eLife.26084.
Abstract
Sparse, sequential patterns of neural activity have been observed in numerous brain areas during timekeeping and motor sequence tasks. Inspired by such observations, we construct a model of the striatum, an all-inhibitory circuit where sequential activity patterns are prominent, addressing the following key challenges: (i) obtaining control over temporal rescaling of the sequence speed, with the ability to generalize to new speeds; (ii) facilitating flexible expression of distinct sequences via selective activation, concatenation, and recycling of specific subsequences; and (iii) enabling the biologically plausible learning of sequences, consistent with the decoupling of learning and execution suggested by lesion studies showing that cortical circuits are necessary for learning, but that subcortical circuits are sufficient to drive learned behaviors. The same mechanisms that we describe can also be applied to circuits with both excitatory and inhibitory populations, and hence may underlie general features of sequential neural activity pattern generation in the brain.
Affiliation(s)
- James M Murray
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- G Sean Escola
- Center for Theoretical Neuroscience, Columbia University, New York, United States
8. Lavigne F, Longrée D, Mayaffre D, Mellet S. Semantic integration by pattern priming: experiment and cortical network model. Cogn Neurodyn 2016; 10:513-533. PMID: 27891200. PMCID: PMC5106460. DOI: 10.1007/s11571-016-9410-4.
Abstract
Neural network models describe semantic priming effects by way of mechanisms of activation of neurons coding for words that rely strongly on synaptic efficacies between pairs of neurons. Biologically inspired Hebbian learning defines efficacy values as a function of the activity of pre- and post-synaptic neurons only. It generates only pair associations between words in the semantic network. However, the statistical analysis of large text databases points to the frequent occurrence not only of pairs of words (e.g., "the way") but also of patterns of more than two words (e.g., "by the way"). The learning of these frequent patterns of words is not reducible to associations between pairs of words but must take into account the higher level of coding of three-word patterns. The processing and learning of patterns of words challenge classical Hebbian learning algorithms used in biologically inspired models of priming. The aim of the present study was to test the effects of patterns on the semantic processing of words and to investigate how an inter-synaptic learning algorithm succeeds at reproducing the experimental data. The experiment manipulates the frequency of occurrence of patterns of three words in a multiple-paradigm protocol. Results show for the first time that target words benefit from more priming when embedded in a pattern with the two primes than when only associated with each prime in pairs. A biologically inspired inter-synaptic learning algorithm is tested that potentiates synapses as a function of the activation of more than two pre- and post-synaptic neurons. Simulations show that the network can learn patterns of three words to reproduce the experimental results.
Affiliation(s)
- Frédéric Lavigne
- BCL, UMR 7320 CNRS et Université de Nice-Sophia Antipolis, Campus Saint Jean d’Angely - SJA3/MSHS Sud-Est/BCL, 24 Avenue des diables bleus, 06357 Nice Cedex 4, France
9. Miller P. Itinerancy between attractor states in neural systems. Curr Opin Neurobiol 2016; 40:14-22. PMID: 27318972. DOI: 10.1016/j.conb.2016.05.005.
Abstract
Converging evidence from neural, perceptual and simulated data suggests that discrete attractor states form within neural circuits through learning and development. External stimuli may bias neural activity to one attractor state or cause activity to transition between several discrete states. Evidence for such transitions, whose timing can vary across trials, is best accrued through analyses that avoid any trial-averaging of data. One such method, hidden Markov modeling, has been effective in this context, revealing state transitions in many neural circuits during many tasks. Concurrently, modeling efforts have revealed computational benefits of stimulus processing via transitions between attractor states. This review describes the current state of the field, with comments on how its perceived limitations have been addressed.
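Noise-driven itinerancy between two attractor states can be caricatured by diffusion in a double-well potential. The sketch below (the dynamics and parameter values are illustrative assumptions, not a model from the review) counts well-to-well transitions:

```python
import math
import random

def count_transitions(noise, dt=0.01, steps=50000, seed=1):
    """Overdamped dynamics dx = (x - x**3) dt + noise * dW in the
    double-well potential V(x) = -x**2/2 + x**4/4. The two minima at
    x = -1 and x = +1 play the role of two attractor states; noise
    drives occasional transitions between them."""
    rng = random.Random(seed)
    x, state, transitions = 1.0, 1, 0
    for _ in range(steps):
        x += (x - x**3) * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # Register a state change only after crossing well past the barrier,
        # so jitter near the barrier top is not counted as a transition.
        if state == 1 and x < -0.5:
            state, transitions = -1, transitions + 1
        elif state == -1 and x > 0.5:
            state, transitions = 1, transitions + 1
    return transitions

noiseless = count_transitions(noise=0.0)  # stays in its attractor forever
noisy = count_transitions(noise=0.7)      # itinerant switching between wells
```

Because the transition times vary from run to run, averaging many runs would smear the discrete jumps; this is exactly why the review emphasizes single-trial analyses such as hidden Markov modeling.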
Affiliation(s)
- Paul Miller
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
10. The dynamics of memory retrieval in hierarchical networks. J Comput Neurosci 2016; 40:247-268. DOI: 10.1007/s10827-016-0595-7.
11. Mazzucato L, Fontanini A, La Camera G. Stimuli Reduce the Dimensionality of Cortical Activity. Front Syst Neurosci 2016; 10:11. PMID: 26924968. PMCID: PMC4756130. DOI: 10.3389/fnsys.2016.00011.
Abstract
The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models.
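One standard way to quantify dimensionality in this setting is the participation ratio of the covariance eigenvalues. The synthetic data below (ensemble size, sample counts, and the shared-fluctuation construction are assumptions for illustration, not the paper's recordings) show how pairwise correlations depress it, consistent with the inverse dependence stated in the abstract:

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of data X (samples x neurons) as the
    participation ratio of the covariance eigenvalues lam_i:
    d = (sum_i lam_i)**2 / sum_i lam_i**2.
    Close to N for isotropic data, close to 1 when one direction dominates."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative numerical values
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
N = 20
# Independent neurons: dimensionality close to the ensemble size.
d_iso = participation_ratio(rng.standard_normal((2000, N)))
# A strong shared fluctuation induces pairwise correlations and
# collapses the dimensionality toward 1.
shared = rng.standard_normal((2000, 1))
d_corr = participation_ratio(3.0 * shared + 0.3 * rng.standard_normal((2000, N)))
```

The empirical estimate also depends on the number of samples, mirroring the abstract's point that measured bounds depend on the number and duration of trials.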
Affiliation(s)
- Luca Mazzucato
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA
- Alfredo Fontanini
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY, USA
- Giancarlo La Camera
- Department of Neurobiology and Behavior, State University of New York at Stony Brook, Stony Brook, NY, USA; Graduate Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY, USA
12. Recanatesi S, Katkov M, Romani S, Tsodyks M. Neural Network Model of Memory Retrieval. Front Comput Neurosci 2016; 9:149. PMID: 26732491. PMCID: PMC4681782. DOI: 10.3389/fncom.2015.00149.
Abstract
Human memory can store large amounts of information. Nevertheless, recalling it is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items having a larger number of neurons in their representation are statistically easier to recall and reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
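The backbone of the model is Hopfield-style storage and retrieval. A minimal sketch of that backbone (pattern count, network size, and corruption level are arbitrary illustrative choices; the paper's model adds overlapping representations, oscillating inhibition, and noise on top of this) is:

```python
import numpy as np

def hopfield_retrieve(patterns, cue, steps=20):
    """Retrieve a stored pattern in a Hopfield network with Hebbian
    weights W = sum_mu p_mu p_mu^T / N, via synchronous sign updates
    starting from a (possibly corrupted) cue."""
    P = np.array(patterns, dtype=float)
    N = P.shape[1]
    W = P.T @ P / N
    np.fill_diagonal(W, 0.0)  # no self-coupling
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s

rng = np.random.default_rng(3)
N, n_pat = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(n_pat, N))
# Corrupt 15% of the first pattern and let the network clean it up.
cue = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
cue[flip] *= -1
out = hopfield_retrieve(patterns, cue)
overlap = float(out @ patterns[0]) / N
```

Starting from a corrupted cue, the network relaxes to the stored pattern, so the final overlap with the original pattern is close to 1; the paper's oscillating inhibition then destabilizes such states to move retrieval from one item to the next.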
Affiliation(s)
- Stefano Recanatesi
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Mikhail Katkov
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Sandro Romani
- Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Misha Tsodyks
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel; Department of Neurotechnologies, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
13. Giulioni M, Corradi F, Dante V, del Giudice P. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems. Sci Rep 2015; 5:14730. PMID: 26463272. PMCID: PMC4604465. DOI: 10.1038/srep14730.
Abstract
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a 'basin' of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
Affiliation(s)
- Federico Corradi
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Switzerland
- Vittorio Dante
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- Paolo del Giudice
- Department of Technologies and Health, Istituto Superiore di Sanità, Roma, Italy
- National Institute for Nuclear Physics, Rome, Italy
14. Lavigne F, Avnaïm F, Dumercy L. Inter-synaptic learning of combination rules in a cortical network model. Front Psychol 2014; 5:842. PMID: 25221529. PMCID: PMC4148068. DOI: 10.3389/fpsyg.2014.00842.
Abstract
Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the correlational structure of the stimuli and responses alone. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli trigger XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network.
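The non-separability problem this abstract starts from is easy to exhibit: no single linear threshold unit computes XOR, while one multiplicative co-activity term (a crude stand-in for non-linear dendritic integration; the grid search and the particular term are illustrative assumptions, not the IS algorithm itself) restores separability:

```python
from itertools import product

# XOR truth table: (input pair, desired response).
cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def linear_unit(w1, w2, b, x):
    return 1 if w1 * x[0] + w2 * x[1] + b > 0 else 0

# Exhaustive search over a grid of weights and biases: no single linear
# threshold unit reproduces XOR (the classes are not linearly separable).
grid = [k * 0.5 for k in range(-4, 5)]
solvable_linear = any(
    all(linear_unit(w1, w2, b, x) == y for x, y in cases)
    for w1, w2, b in product(grid, repeat=3)
)

# A single multiplicative "dendritic" term, capturing the co-activity of
# two synapses on the same dendrite, makes the problem separable.
def dendritic_unit(x):
    return 1 if x[0] + x[1] - 2 * (x[0] * x[1]) > 0 else 0

solvable_dendritic = all(dendritic_unit(x) == y for x, y in cases)
```

The grid is finite, but the conclusion holds for any weights: XOR's classes cannot be split by a single hyperplane, which is exactly the gap the dendritic non-linearity fills.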
Affiliation(s)
- Frédéric Lavigne
- UMR 7320 CNRS, BCL, Université Nice Sophia Antipolis, Nice, France
- Laurent Dumercy
- UMR 7320 CNRS, BCL, Université Nice Sophia Antipolis, Nice, France
15. Bernacchia A, La Camera G, Lavigne F. A latch on priming. Front Psychol 2014; 5:869. PMID: 25157236. PMCID: PMC4127813. DOI: 10.3389/fpsyg.2014.00869.
Affiliation(s)
- Alberto Bernacchia
- School of Engineering and Science, Jacobs University Bremen gGmbH, Bremen, Germany
- Giancarlo La Camera
- Department of Neurobiology and Behavior and Program in Neuroscience, State University of New York at Stony Brook, Stony Brook, NY, USA
- Frédéric Lavigne
- Laboratoire Bases, Corpus, Langage, UMR 7320 CNRS, Université de Nice - Sophia Antipolis, Nice, France
16. Hiratani N, Teramae JN, Fukai T. Associative memory model with long-tail-distributed Hebbian synaptic connections. Front Comput Neurosci 2013; 6:102. PMID: 23403536. PMCID: PMC3566427. DOI: 10.3389/fncom.2012.00102.
Abstract
The postsynaptic potentials of pyramidal neurons have a non-Gaussian amplitude distribution with a heavy tail in both hippocampus and neocortex. Such distributions of synaptic weights were recently shown to generate spontaneous internal noise optimal for spike propagation in recurrent cortical circuits. However, whether this internal noise generation by heavy-tailed weight distributions is possible for and beneficial to other computational functions remains unknown. To clarify this point, we construct an associative memory (AM) network model of spiking neurons that stores multiple memory patterns in a connection matrix with a lognormal weight distribution. In AM networks, non-retrieved memory patterns generate a cross-talk noise that severely disturbs memory recall. We demonstrate that neurons encoding a retrieved memory pattern and those encoding non-retrieved memory patterns have different subthreshold membrane-potential distributions in our model. Consequently, the probability of responding to inputs at strong synapses increases for the encoding neurons, whereas it decreases for the non-encoding neurons. Our results imply that heavy-tailed distributions of connection weights can generate noise useful for AM recall.
Affiliation(s)
- Naoki Hiratani
- Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo Kashiwa, Japan ; Laboratory for Neural Circuit Theory, RIKEN Brain Science Institute Wako, Japan
17
Dynamics of the semantic priming shift: behavioral experiments and cortical network model. Cogn Neurodyn 2012; 6:467-83. [PMID: 24294333 DOI: 10.1007/s11571-012-9206-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Received: 11/06/2011] [Revised: 05/10/2012] [Accepted: 05/30/2012] [Indexed: 10/28/2022]
Abstract
Multiple semantic priming processes between several related and/or unrelated words are at work during the processing of sequences of words. Multiple priming generates rich dynamics of effects depending on the relationship between the target word and the first and/or second prime previously presented. The experimental literature suggests that during the on-line processing of the primes, activation can shift from associates of the first prime to associates of the second prime. Though the semantic priming shift is central to the on-line and rapid updating of word meanings in working memory, its precise dynamics remain poorly understood, and modeling how it functions in the cerebral cortex remains a challenge. Four multiple priming experiments are proposed that cross-manipulate delays and association strength between the primes and the target. Results show for the first time that association strength determines complex dynamics of the semantic priming shift, ranging from an absence of a shift to a complete shift. A cortical network model of spike frequency adaptive neuron populations is proposed to account for the non-continuous evolution of the priming shift over time. It links the dynamics of the priming shift assessed at the behavioral level to the non-linear dynamics of the firing rates of neuron populations.
18
Huang Y, Amit Y. Capacity analysis in multi-state synaptic models: a retrieval probability perspective. J Comput Neurosci 2010; 30:699-720. [PMID: 20978831 DOI: 10.1007/s10827-010-0287-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Received: 04/13/2010] [Revised: 09/20/2010] [Accepted: 10/05/2010] [Indexed: 10/18/2022]
Abstract
We define the memory capacity of networks of binary neurons with finite-state synapses in terms of retrieval probabilities of learned patterns under standard asynchronous dynamics with a predetermined threshold. The threshold is set to control the proportion of non-selective neurons that fire. An optimal inhibition level is chosen to stabilize network behavior. For any local learning rule we provide a computationally efficient and highly accurate approximation to the retrieval probability of a pattern as a function of its age. The method is applied to the sequential models (Fusi and Abbott, Nat Neurosci 10:485-493, 2007) and meta-plasticity models (Fusi et al., Neuron 45(4):599-611, 2005; Leibold and Kempter, Cereb Cortex 18:67-77, 2008). We show that as the number of synaptic states increases, the capacity, as defined here, either plateaus or decreases. In the few cases where multi-state models exceed the capacity of binary synapse models the improvement is small.
Affiliation(s)
- Yibi Huang
- Department of Statistics, University of Chicago, 5734 S University Ave, Chicago, IL 60637, USA.
19
Lavigne F, Dumercy L, Darmon N. Determinants of multiple semantic priming: a meta-analysis and spike frequency adaptive model of a cortical network. J Cogn Neurosci 2010; 23:1447-74. [PMID: 20429855 DOI: 10.1162/jocn.2010.21504] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Indexed: 11/04/2022]
Abstract
Recall and language comprehension while processing sequences of words involve multiple semantic priming between several related and/or unrelated words. Accounting for multiple and interacting priming effects in terms of underlying neuronal structure and dynamics is a challenge for current models of semantic priming. Further elaboration of current models requires a quantifiable and reliable account of the simplest case of multiple priming resulting from two primes on a target. The meta-analytic approach offers a better understanding of the experimental data from studies on multiple priming regarding the additivity pattern of priming. The meta-analysis points to the effects of prime-target stimulus onset asynchronies on the pattern of underadditivity, overadditivity, or strict additivity of converging activation from multiple primes. The modeling approach is then constrained by the results of the meta-analysis. We propose a model of a cortical network embedding spike frequency adaptation, which allows frequency- and time-dependent modulation of neural activity. Model results give a comprehensive understanding of the meta-analysis results in terms of the dynamics of neuron populations. They also give predictions regarding how stimulus intensities, association strength, and spike frequency adaptation influence multiple priming effects.
Affiliation(s)
- Frédéric Lavigne
- Laboratoire de Psychologie Cognitive et Sociale, Université de Nice-Sophia Antipolis, Nice, France.
20
Abstract
Contextual recall in humans relies on the semantic relationships between items stored in memory. These relationships can be probed by priming experiments. Such experiments have revealed a rich phenomenology on how reaction times depend on various factors such as strength and nature of associations, time intervals between stimulus presentations, and so forth. Experimental protocols on humans present striking similarities with pair association task experiments in monkeys. Electrophysiological recordings of cortical neurons in such tasks have found two types of task-related activity, "retrospective" (related to a previously shown stimulus), and "prospective" (related to a stimulus that the monkey expects to appear, due to learned association between both stimuli). Mathematical models of cortical networks allow theorists to understand the link between the physiology of single neurons and synapses, and network behavior giving rise to retrospective and/or prospective activity. Here, we show that this type of network model can account for a large variety of priming effects. Furthermore, the model allows us to interpret semantic priming differences between the two hemispheres as depending on a single association strength parameter.
21
Huerta R, Nowotny T. Fast and robust learning by reinforcement signals: explorations in the insect brain. Neural Comput 2009; 21:2123-51. [PMID: 19538091 DOI: 10.1162/neco.2009.03-08-733] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.1] [Indexed: 11/04/2022]
Abstract
We propose a model for pattern recognition in the insect brain. Departing from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful to learn input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
Affiliation(s)
- Ramón Huerta
- Institute for Nonlinear Science, University of California San Diego, La Jolla CA 92093-0402, U.S.A
- Thomas Nowotny
- Centre for Computational Neuroscience and Robotics, Department of Informatics, University of Sussex, Falmer, Brighton, BN1 9QJ, U.K
22
La Camera G, Giugliano M, Senn W, Fusi S. The response of cortical neurons to in vivo-like input current: theory and experiment: I. Noisy inputs with stationary statistics. Biol Cybern 2008; 99:279-301. [PMID: 18985378 DOI: 10.1007/s00422-008-0272-7] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.1] [Received: 07/11/2008] [Accepted: 10/07/2008] [Indexed: 05/27/2023]
Abstract
The study of several aspects of the collective dynamics of interacting neurons can be highly simplified if one assumes that the statistics of the synaptic input is the same for a large population of similarly behaving neurons (mean field approach). In particular, under such an assumption, it is possible to determine and study all the equilibrium points of the network dynamics when the neuronal response to noisy, in vivo-like, synaptic currents is known. The response function can be computed analytically for simple integrate-and-fire neuron models and it can be measured directly in experiments in vitro. Here we review theoretical and experimental results about the neural response to noisy inputs with stationary statistics. These response functions are important to characterize the collective neural dynamics that are proposed to be the neural substrate of working memory, decision making and other cognitive functions. Applications to the case of time-varying inputs are reviewed in a companion paper (Giugliano et al. in Biol Cybern, 2008). We conclude that modified integrate-and-fire neuron models are good enough to reproduce faithfully many of the relevant dynamical aspects of the neuronal response measured in experiments on real neurons in vitro.
Affiliation(s)
- Giancarlo La Camera
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, 49 Convent Dr, Rm 1B80, Bethesda, MD 20892, USA.
23
Memory retrieval time and memory capacity of the CA3 network: role of gamma frequency oscillations. Learn Mem 2007; 14:795-806. [PMID: 18007022 DOI: 10.1101/lm.730207] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.2] [Indexed: 11/25/2022]
Abstract
The existence of recurrent synaptic connections in CA3 led to the hypothesis that CA3 is an autoassociative network similar to the Hopfield networks studied by theorists. CA3 undergoes gamma frequency periodic inhibition that prevents a persistent attractor state. This argues against the analogy to Hopfield nets, in which an attractor state can be used for working memory. However, we show that such periodic inhibition allows one cycle of recurrent excitatory activity and that this is sufficient for memory retrieval (within milliseconds). Thus, gamma oscillations are compatible with a long-term autoassociative memory function for CA3. A second goal of our work was to evaluate previous methods for estimating the memory capacity (P) of CA3. We confirm the equation P = c/a², where c is the probability that any two cells are recurrently connected and a is the fraction of cells representing a memory item. In applying this to CA3, we focus on CA3a, the subregion where recurrent connections are most numerous (c = 0.2) and approximate randomness. We estimate that a memory item is represented by approximately 225 of the 70,000 neurons in CA3a (a = 0.003) and that approximately 20,000 memory items can be stored. Our general conclusion is that the physiological and anatomical findings of CA3a are consistent with an autoassociative function. The nature of the information that is associated in CA3a is discussed. We also discuss how the autoassociative properties of CA3 and the heteroassociative properties of dentate synapses (linking sequential memories) form an integrated system for the storage and recall of item sequences. The recall process generates the phase precession in dentate, CA3, and entorhinal cortex.
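As a quick check — a sketch using only the numbers quoted in this abstract, not the paper's own code — the capacity estimate P = c/a² can be evaluated directly:

```python
# Capacity estimate P = c / a^2 with the values quoted in the abstract.
c = 0.2            # probability that two CA3a cells are recurrently connected
n_cells = 70_000   # approximate number of neurons in CA3a
n_active = 225     # approximate neurons representing one memory item

a = n_active / n_cells  # coding sparseness, ~0.003
P = c / a ** 2

print(f"a = {a:.4f}")        # a = 0.0032
print(f"P = {P:.0f} items")  # on the order of the ~20,000 items quoted
```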
24
Warden MR, Miller EK. The representation of multiple objects in prefrontal neuronal delay activity. Cereb Cortex 2007; 17 Suppl 1:i41-50. [PMID: 17726003 DOI: 10.1093/cercor/bhm070] [Citation(s) in RCA: 83] [Impact Index Per Article: 4.9] [Indexed: 11/14/2022]
Abstract
The ability to retain multiple items in short-term memory is fundamental for intelligent behavior, yet little is known about its neural basis. To explore the mechanisms underlying this ability, we trained 2 monkeys to remember a sequence of 2 objects across a short delay. We then recorded the activity of neurons from the lateral prefrontal cortex during task performance and found that most neurons had activity that depended on the identity of both objects while a minority reflected just one object. Further, the activity driven by a particular combination of objects was not a simple addition of the activity elicited by individual objects. Instead, the representation of the first object was altered by the addition of the second object to memory, and the form of this change was not systematically predictable. These results indicate that multiple objects are not stored in separate groups of prefrontal neurons. Rather, they are represented by a single population of neurons in a complex fashion. We also found that the strength of the memory trace associated with each object decayed over time, leading to a relatively stronger representation of more recently seen objects. This is a potential mechanism for representing the temporal order of objects.
Affiliation(s)
- Melissa R Warden
- The Picower Institute for Learning and Memory, RIKEN-MIT Neuroscience Research Center, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
25
Abstract
A fundamental problem in neuroscience is understanding how working memory—the ability to store information at intermediate timescales, like tens of seconds—is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons. A critical component of cognition is memory—the ability to store information, and to readily retrieve it on cue. Existing models postulate that recalled items are represented by self-sustained activity; that is, they are represented by activity that can exist in the absence of input. These models, however, are incomplete, in the sense that they do not explain two salient experimentally observed features of persistent activity: low firing rates and high neuronal variability. Here we propose a model that can explain both. 
The model makes two predictions: changes in synaptic weights during learning should be much smaller than the background weights, and the fraction of neurons selective for a memory should be above some threshold. Experimental confirmation of these predictions would provide strong support for the model, and constitute an important step toward a complete theory of memory storage and retrieval.
Affiliation(s)
- Yasser Roudi
- Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom.
26
Romani S, Amit DJ, Mongillo G. Mean-field analysis of selective persistent activity in presence of short-term synaptic depression. J Comput Neurosci 2006; 20:201-17. [PMID: 16699842 DOI: 10.1007/s10827-006-6308-x] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.7] [Received: 06/14/2005] [Revised: 10/05/2005] [Accepted: 11/21/2005] [Indexed: 11/24/2022]
Abstract
Mean-field theory is extended to recurrent networks of spiking neurons endowed with short-term depression (STD) of synaptic transmission. The extension involves the use of the distribution of interspike intervals of an integrate-and-fire neuron receiving as input a Gaussian current with a given mean and variance. This, in turn, is used to obtain an accurate estimate of the resulting postsynaptic current in the presence of STD. The stationary states of the network are obtained by requiring self-consistency for the currents: those driving the emission processes and those generated by the emitted spikes. The model network stores a randomly composed set of external stimuli in the distribution of two-state efficacies of excitatory-to-excitatory synapses. The resulting synaptic structure allows the network to exhibit selective persistent activity for each stimulus in the set. Theory predicts the onset of selective persistent, or working memory (WM), activity upon varying the constitutive parameters (e.g. the ratio of potentiated to depressed long-term efficacy, parameters associated with STD), and provides the average emission rates in the various steady states. Theoretical estimates are in remarkably good agreement with data "recorded" in computer simulations of the microscopic model.
Affiliation(s)
- Sandro Romani
- Dottorato di Ricerca in Neurofisiologia, Dip. di Fisiologia Umana, Università di Roma La Sapienza, Rome, Italy.
27
Mongillo G, Curti E, Romani S, Amit DJ. Learning in realistic networks of spiking neurons and spike-driven plastic synapses. Eur J Neurosci 2005; 21:3143-60. [PMID: 15978023 DOI: 10.1111/j.1460-9568.2005.04087.x] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.5] [Indexed: 11/29/2022]
Abstract
We have used simulations to study the learning dynamics of an autonomous, biologically realistic recurrent network of spiking neurons connected via plastic synapses, subjected to a stream of stimulus-delay trials, in which one of a set of stimuli is presented followed by a delay. Long-term plasticity, produced by the neural activity experienced during training, structures the network and endows it with active (working) memory, i.e. enhanced, selective delay activity for every stimulus in the training set. Short-term plasticity produces transient synaptic depression. Each stimulus used in training excites a selective subset of neurons in the network, and stimuli can share neurons (overlapping stimuli). Long-term plasticity dynamics are driven by presynaptic spikes and coincident postsynaptic depolarization; stability is ensured by a refresh mechanism. In the absence of stimulation, the acquired synaptic structure persists for a very long time. The dependence of long-term plasticity dynamics on the characteristics of the stimulus response (average emission rates, time course and synchronization), and on the single-cell emission statistics (coefficient of variation) is studied. The study clarifies the specific roles of short-term synaptic depression, NMDA receptors, stimulus representation overlaps, selective stimulation of inhibition, and spike asynchrony during stimulation. Patterns of network spiking activity before, during and after training reproduce most of the in vivo physiological observations in the literature.
Affiliation(s)
- Gianluigi Mongillo
- Dipartimento di Fisiologia Umana and Dottorato di Ricerca in Neurofisiologia, Università di Roma La Sapienza, Rome, Italy.