1. Corr PJ. Clarifying Problems in Behavioural Control: Interface, Lateness and Consciousness. European Journal of Personality 2010. [DOI: 10.1002/per.781]
Abstract
The target paper highlights a number of unresolved issues that, I believe, continue to impede the construction of a viable model of behavioural control in personality psychology; namely, (a) the relationship between controlled and automatic processing (the 'interface' problem) and (b) the time it takes for controlled processes, including consciousness, to be generated (the 'lateness' problem). The diversity of views expressed in the commentaries indicates that these are, indeed, real and unresolved problems. This response is structured around the following key questions. (1) How does long-term goal planning interface with the automatic machinery of behaviour? (2) What is the extent of the impact of the 'lateness' of controlled (including conscious) processes on models of behavioural control? (3) How best to characterise the personality traits associated with the FFFS, BIS and BAS? (4) How does the BIS control mismatch detection, the generation of error signals, and response inhibition and switching? (5) Is consciousness really a necessary explanatory construct in models of behavioural control? (6) Might neural 'crosstalk' between encapsulated action-goal response systems point to the functional significance of consciousness? (7) What are the implications of the issues raised in the target paper for lexical and social-cognitive approaches to personality? I conclude by reiterating the importance of the problems of 'lateness' and 'interface' for the construction of a viable model of behavioural control, one sufficient to foster theoretical integration within personality psychology and to build conceptual bridges with general psychology. Copyright © 2010 John Wiley & Sons, Ltd.

2. A biologically plausible network model for pattern storage and recall inspired by Dentate Gyrus. Neural Comput Appl 2020. [DOI: 10.1007/s00521-019-04670-3]

3. Rennó-Costa C, da Silva ACC, Blanco W, Ribeiro S. Computational models of memory consolidation and long-term synaptic plasticity during sleep. Neurobiol Learn Mem 2018;160:32-47. [PMID: 30321652] [DOI: 10.1016/j.nlm.2018.10.003]
Abstract
The brain stores memories by persistently changing the connectivity between neurons. Sleep is known to be critical for these changes to endure. Research on the neurobiology of sleep and the mechanisms of long-term synaptic plasticity has provided data in support of various theories of how brain activity during sleep affects long-term synaptic plasticity. The experimental findings - and therefore the theories - are apparently quite contradictory, with some evidence pointing to a role of sleep in the forgetting of irrelevant memories, whereas other results indicate that sleep supports the reinforcement of the most valuable recollections. A unified theoretical framework is needed. Computational modeling and simulation provide grounds for the quantitative testing and comparison of theoretical predictions and observed data, and might serve as a strategy to organize the rather complicated and diverse pool of data and methodologies used in sleep research. This review article outlines the emerging progress in the computational modeling and simulation of the main theories on the role of sleep in memory consolidation.
Affiliation(s)
- César Rennó-Costa
- BioMe - Bioinformatics Multidisciplinary Environment, Federal University of Rio Grande do Norte, Natal, Brazil; Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Ana Cláudia Costa da Silva
- BioMe - Bioinformatics Multidisciplinary Environment, Federal University of Rio Grande do Norte, Natal, Brazil; Digital Metropolis Institute, Federal University of Rio Grande do Norte, Natal, Brazil; Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil; Federal University of Paraiba, João Pessoa, Brazil
- Wilfredo Blanco
- BioMe - Bioinformatics Multidisciplinary Environment, Federal University of Rio Grande do Norte, Natal, Brazil; Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil; State University of Rio Grande do Norte, Natal, Brazil
- Sidarta Ribeiro
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil.

4. Bhalla US. Dendrites, deep learning, and sequences in the hippocampus. Hippocampus 2017;29:239-251. [PMID: 29024221] [DOI: 10.1002/hipo.22806]
Abstract
The hippocampus places us both in time and space. It does so over remarkably large spans: milliseconds to years, and centimeters to kilometers. This works for sensory representations, for memory, and for behavioral context. How does it fit in such wide ranges of time and space scales, and keep order among the many dimensions of stimulus context? A key organizing principle for a wide sweep of scales and stimulus dimensions is that of order in time, or sequences. Sequences of neuronal activity are ubiquitous in sensory processing, in motor control, in planning actions, and in memory. Against this strong evidence for the phenomenon, there are currently more models than definite experiments about how the brain generates ordered activity. The flip side of sequence generation is discrimination. Discrimination of sequences has been extensively studied at the behavioral, systems, and modeling level, but again physiological mechanisms are fewer. It is against this backdrop that I discuss two recent developments in neural sequence computation, that at face value share little beyond the label "neural." These are dendritic sequence discrimination, and deep learning. One derives from channel physiology and molecular signaling, the other from applied neural network theory - apparently extreme ends of the spectrum of neural circuit detail. I suggest that each of these topics has deep lessons about the possible mechanisms, scales, and capabilities of hippocampal sequence computation.
Affiliation(s)
- Upinder S Bhalla
- Neurobiology, National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bellary Road, Bangalore 560065, Karnataka, India

5. Stachenfeld KL, Botvinick MM, Gershman SJ. The hippocampus as a predictive map. Nat Neurosci 2017;20:1643-1653. [PMID: 28967910] [DOI: 10.1038/nn.4650]
Abstract
A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
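The "predictive representation" in this abstract is the successor representation (SR). As a minimal sketch (a random walk on a five-state 1-D track; the sizes and discount factor are illustrative choices, not the paper's simulations), the SR follows in closed form from the one-step transition matrix T as M = (I - gamma*T)^(-1):

```python
import numpy as np

# Successor representation (SR) for a random walk on a 1-D track.
# Row M[s] gives the expected discounted future occupancy of every state
# when starting from s -- the "predictive map".
n_states, gamma = 5, 0.9

T = np.zeros((n_states, n_states))
for s in range(n_states):
    for s2 in (s - 1, s + 1):          # step left or right (reflecting ends)
        if 0 <= s2 < n_states:
            T[s, s2] = 1.0
T /= T.sum(axis=1, keepdims=True)      # normalise rows to probabilities

M = np.linalg.inv(np.eye(n_states) - gamma * T)

# Every row sums to 1/(1 - gamma): the total discounted occupancy.
print(M.sum(axis=1))
```

Under a directed (policy-dependent) T rather than a random walk, the rows of M become asymmetric, which is the kind of policy dependence of place-cell-like representations the abstract refers to.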
Affiliation(s)
- Kimberly L Stachenfeld
- DeepMind, London, UK; Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
- Matthew M Botvinick
- DeepMind, London, UK; Gatsby Computational Neuroscience Unit, University College London, London, UK
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, Massachusetts, USA

6.
Abstract
Rational analyses of memory suggest that retrievability of past experience depends on its usefulness for predicting the future: memory is adapted to the temporal structure of the environment. Recent research has enriched this view by applying it to semantic memory and reinforcement learning. This paper describes how multiple forms of memory can be linked via common predictive principles, possibly subserved by a shared neural substrate in the hippocampus. Predictive principles offer an explanation for a wide range of behavioral and neural phenomena, including semantic fluency, temporal contiguity effects in episodic memory, and the topological properties of hippocampal place cells.
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University

7. Gershman SJ, Monfils MH, Norman KA, Niv Y. The computational nature of memory modification. eLife 2017;6:e23763. [PMID: 28294944] [PMCID: PMC5391211] [DOI: 10.7554/eLife.23763]
Abstract
Retrieving a memory can modify its influence on subsequent behavior. We develop a computational theory of memory modification, according to which modification of a memory trace occurs through classical associative learning, but which memory trace is eligible for modification depends on a structure learning mechanism that discovers the units of association by segmenting the stream of experience into statistically distinct clusters (latent causes). New memories are formed when the structure learning mechanism infers that a new latent cause underlies current sensory observations. By the same token, old memories are modified when old and new sensory observations are inferred to have been generated by the same latent cause. We derive this framework from probabilistic principles, and present a computational implementation. Simulations demonstrate that our model can reproduce the major experimental findings from studies of memory modification in the Pavlovian conditioning literature.

Our memories contain our expectations about the world that we can retrieve to make predictions about the future. For example, most people would expect a chocolate bar to taste good, because they have previously learned to associate chocolate with pleasure. When a surprising event occurs, such as tasting an unpalatable chocolate bar, the brain therefore faces a dilemma. Should it update the existing memory and overwrite the association between chocolate and pleasure? Or should it create an additional memory? In the latter case, the brain would form a new association between chocolate and displeasure that competes with, but does not overwrite, the original one between chocolate and pleasure. Previous studies have shown that surprising events tend to create new memories unless the existing memory is briefly reactivated before the surprising event occurs. In other words, retrieving old memories makes them more malleable. Gershman et al. have now developed a computational model for how the brain decides whether to update an old memory or create a new one. The idea at the heart of the model is that the brain will attempt to infer what caused the surprising event. The reason the chocolate bar tastes unpalatable, for example, might be because it was old and had spoiled. Every time the brain infers a new possible cause for a surprising event, it will create an additional memory to store this new set of expectations. In the future we will know that spoiled chocolate bars taste bad. However, if the brain cannot infer a new cause for the surprising event - because, for example, there appears to be nothing unusual about the unpalatable chocolate bar - it will instead opt to update the existing memory. The next time we buy a chocolate bar, we will have slightly lower expectations about how good it will taste. The dilemma of whether to update an existing memory or create a new one thus boils down to the question: is the surprising event the consequence of a new cause or an old one? This theory implies that retrieving a memory nudges the brain to infer that its associated cause is once again active and, since this is an old cause, it means that the memory will be eligible for updating. Many experiments have been performed on the topic of modifying memories, but this is the first computational model that offers a unifying explanation for the results. The next step is to work out how to apply the model, which is phrased in abstract terms, to networks of neurons that are more biologically realistic.
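The old-cause/new-cause decision described here can be caricatured in a few lines. This is a toy sketch under strong simplifying assumptions (one stored cause, isotropic Gaussian likelihoods, a CRP-style prior with concentration alpha; all names and numbers are illustrative, not the authors' implementation):

```python
import numpy as np

def posterior_new_cause(x, old_mean, old_count, alpha=1.0, sigma=1.0):
    """Probability that observation x was generated by a NEW latent cause
    rather than the single stored old cause (toy sketch).
    Prior: old cause proportional to its count, new cause proportional to
    the CRP concentration alpha. Likelihoods: isotropic Gaussians."""
    lik_old = np.exp(-np.sum((x - old_mean) ** 2) / (2 * sigma ** 2))
    # broader Gaussian stands in for the prior predictive of a fresh cause
    lik_new = np.exp(-np.sum(x ** 2) / (2 * (sigma ** 2 + 1.0)))
    p_old = old_count * lik_old
    p_new = alpha * lik_new
    return p_new / (p_old + p_new)

memory = np.zeros(2)   # feature vector of the stored memory's cause
# unsurprising observation -> old cause wins -> the old memory is updated
print(posterior_new_cause(np.array([0.1, 0.0]), memory, old_count=5))
# surprising observation -> new cause wins -> a new memory is formed
print(posterior_new_cause(np.array([3.0, 3.0]), memory, old_count=5))
```

The count-weighted prior is what makes a briefly reactivated (recently inferred) cause more likely to "capture" the surprising observation, and hence makes the old memory eligible for updating.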
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, United States
- Marie-H Monfils
- Department of Psychology, University of Texas, Austin, United States
- Kenneth A Norman
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, United States
- Yael Niv
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, United States

8.
Abstract
The "problem of serial order in behavior," as formulated and discussed by Lashley (1951), is arguably more pervasive and more profound both than originally stated and than currently appreciated. We spell out two complementary aspects of what we term the generalized problem of behavior: (i) multimodality, stemming from the disparate nature of the sensorimotor variables and processes that underlie behavior, and (ii) concurrency, which reflects the parallel unfolding in time of these processes and of their asynchronous interactions. We illustrate these on a number of examples, with a special focus on language, briefly survey the computational approaches to multimodal concurrency, offer some hypotheses regarding the manner in which brains address it, and discuss some of the broader implications of these as yet unresolved issues for cognitive science.
Affiliation(s)
- Oren Kolodny
- Department of Zoology, Tel Aviv University, Tel Aviv, Israel
- Shimon Edelman
- Department of Psychology, Cornell University, Ithaca, NY, USA.

9. Hippocampal Sequences and the Cognitive Map. Springer Series in Computational Neuroscience 2015. [DOI: 10.1007/978-1-4939-1969-7_5]

10. Hunsaker MR, Kesner RP. The operation of pattern separation and pattern completion processes associated with different attributes or domains of memory. Neurosci Biobehav Rev 2012;37:36-58. [PMID: 23043857] [DOI: 10.1016/j.neubiorev.2012.09.014]
Abstract
Pattern separation and pattern completion processes are central to how the brain processes information in an efficient manner. Research into these processes is escalating and deficient pattern separation is being implicated in a wide array of genetic disorders as well as in neurocognitive aging. Despite the quantity of research, there remains a controversy as to precisely which behavioral paradigms should be used to best tap into pattern separation and pattern completion processes, as well as to what constitute legitimate outcome measures reflecting impairments in pattern separation and pattern completion. This review will discuss a theory based on multiple memory systems that provides a framework upon which behavioral tasks can be designed and their results interpreted. Furthermore, this review will discuss the nature of pattern separation and pattern completion and extend these processes outside the hippocampus and across all domains of information processing. After these discussions, an optimal strategy for designing behavioral paradigms to evaluate pattern separation and pattern completion processes will be provided.
Affiliation(s)
- Michael R Hunsaker
- Department of Psychiatry and Behavioral Sciences, MIND Institute, University of California, Davis Medical Center, 2805 50th Street, Room 1415, Sacramento, CA 95817, USA.

11. Kryukov VI. Towards a unified model of Pavlovian conditioning: short review of trace conditioning models. Cogn Neurodyn 2012;6:377-398. [PMID: 24082960] [PMCID: PMC3438324] [DOI: 10.1007/s11571-012-9195-z]
Abstract
There are three basic paradigms of classical conditioning: delay, trace and context conditioning, where presentation of a conditioned stimulus (CS) or a context typically predicts an unconditioned stimulus (US). In delay conditioning, the CS and US normally coterminate, whereas in trace conditioning an interval of time exists between CS termination and US onset. The modeling of trace conditioning is a rather difficult computational problem and a challenge to behavioral and connectionist approaches, mainly due to the time gap between CS and US. To account for trace conditioning, Pavlov (Conditioned reflexes: an investigation of the physiological activity of the cerebral cortex, Oxford University Press, London, 1927) postulated the existence of a stimulus "trace" in the nervous system. Meanwhile, many other options exist for solving this association problem. There are several excellent reviews of computational models of classical conditioning, but none has thus far been devoted to trace conditioning. Eight representative models of trace conditioning are briefly reviewed below, with the aim of building a prospective model. As a result, one of them, comprising the most important features of its predecessors, can be suggested as a real candidate for a unified model of trace conditioning.
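Pavlov's "trace" postulate maps naturally onto an eligibility-trace mechanism, which several model families use in some form. A minimal sketch (a Rescorla-Wagner-style update applied to an exponentially decaying stimulus trace; the one-weight setup and all parameters are illustrative assumptions, not any specific reviewed model):

```python
import numpy as np

def train(lam, trials=200, alpha=0.05):
    """Learn a CS-US association across a trace interval.
    lam is the decay of the stimulus trace per time step (lam=0: no trace)."""
    T = 20
    cs = np.zeros(T); cs[:2] = 1.0    # brief CS at t = 0..1
    us = np.zeros(T); us[10] = 1.0    # US at t = 10, after a trace interval
    w = 0.0
    for _ in range(trials):
        e = 0.0
        for t in range(T):
            e = lam * e + cs[t]       # decaying stimulus "trace"
            delta = us[t] - w * e     # prediction error
            w += alpha * delta * e    # associative update gated by the trace
    return w

w_trace = train(lam=0.9)     # trace bridges the CS-US gap: w becomes positive
w_no_trace = train(lam=0.0)  # no trace left at US time: no association forms
print(w_trace, w_no_trace)
```

With a decaying trace the CS representation is still partially active when the US arrives, so a positive association is acquired; with no trace the CS representation is silent at US onset and the weight never grows.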
Affiliation(s)
- V. I. Kryukov
- St. Daniel Monastery, Danilovsky Val 22, 115191 Moscow, Russia

12. Gershman SJ, Moore CD, Todd MT, Norman KA, Sederberg PB. The successor representation and temporal context. Neural Comput 2012;24:1553-1568. [PMID: 22364500] [DOI: 10.1162/neco_a_00282]
Abstract
The successor representation was introduced into reinforcement learning by Dayan (1993) as a means of facilitating generalization between states with similar successors. Although reinforcement learning in general has been used extensively as a model of psychological and neural processes, the psychological validity of the successor representation has yet to be explored. An interesting possibility is that the successor representation can be used not only for reinforcement learning but for episodic learning as well. Our main contribution is to show that a variant of the temporal context model (TCM; Howard & Kahana, 2002), an influential model of episodic memory, can be understood as directly estimating the successor representation using the temporal difference learning algorithm (Sutton & Barto, 1998). This insight leads to a generalization of TCM and new experimental predictions. In addition to casting a new normative light on TCM, this equivalence suggests a previously unexplored point of contact between different learning systems.
Affiliation(s)
- Samuel J Gershman
- Department of Psychology and Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA.

13. Myers CE, Scharfman HE. Pattern separation in the dentate gyrus: a role for the CA3 backprojection. Hippocampus 2011;21:1190-1215. [PMID: 20683841] [PMCID: PMC2976779] [DOI: 10.1002/hipo.20828]
Abstract
Many theories of hippocampal function assume that area CA3 of hippocampus is capable of performing rapid pattern storage, as well as pattern completion when a partial version of a familiar pattern is presented, and that the dentate gyrus (DG) is a preprocessor that performs pattern separation, facilitating storage and recall in CA3. The latter assumption derives partly from the anatomical and physiological properties of DG. However, the major output of DG is from a large number of DG granule cells to a smaller number of CA3 pyramidal cells, which potentially negates the pattern separation performed in the DG. Here, we consider a simple CA3 network model, and consider how it might interact with a previously developed computational model of the DG. The resulting "standard" DG-CA3 model performs pattern storage and completion well, given a small set of sparse, randomly derived patterns representing entorhinal input to the DG and CA3. However, under many circumstances, the pattern separation achieved in the DG is not as robust in CA3, resulting in a low storage capacity for CA3, compared to previous mathematical estimates of the storage capacity for an autoassociative network of this size. We also examine an often-overlooked aspect of hippocampal anatomy that might increase functionality in the combined DG-CA3 model. Specifically, axon collaterals of CA3 pyramidal cells project "back" to the DG ("backprojections"), exerting inhibitory effects on granule cells that could potentially ensure that different subpopulations of granule cells are recruited to respond to similar patterns. In the model, addition of such backprojections improves both pattern separation and storage capacity. We also show that the DG-CA3 model with backprojections provides a better fit to empirical data than a model without backprojections. Therefore, we hypothesize that CA3 backprojections might play an important role in hippocampal function.
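The divergent expansion plus sparsification that makes the DG a pattern separator in such models can be sketched abstractly with a random projection and k-winners-take-all (the layer sizes, sparsity, and input patterns below are illustrative assumptions, not the Myers-Scharfman model itself):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_dg, k = 100, 1000, 20     # many more "granule cells" than inputs; sparse activity

W = rng.standard_normal((n_dg, n_in))   # fixed random projection

def dg_code(x):
    """Expand input x and keep only the k most activated units (k-WTA)."""
    h = W @ x
    code = np.zeros(n_dg, dtype=bool)
    code[np.argsort(h)[-k:]] = True
    return code

def overlap(a, b):
    return np.sum(a & b) / np.sum(a | b)   # Jaccard overlap of active sets

# Two similar input patterns: 30 active units each, 24 shared.
x1 = np.zeros(n_in); x1[:30] = 1.0
x2 = x1.copy(); x2[24:30] = 0.0; x2[30:36] = 1.0

in_overlap = overlap(x1.astype(bool), x2.astype(bool))
out_overlap = overlap(dg_code(x1), dg_code(x2))
print(in_overlap, out_overlap)   # output overlap is lower: pattern separation
```

The convergence back onto fewer CA3 units, which the abstract argues can undo this separation, would correspond here to projecting the sparse code down into a much smaller layer.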
Affiliation(s)
- Catherine E Myers
- Department of Psychology, Rutgers University, Newark, New Jersey, USA.

14. Byrnes S, Burkitt AN, Grayden DB, Meffin H. Learning a Sparse Code for Temporal Sequences Using STDP and Sequence Compression. Neural Comput 2011;23:2567-2598. [DOI: 10.1162/neco_a_00184]
Abstract
A spiking neural network that learns temporal sequences is described. A sparse code in which individual neurons represent sequences and subsequences enables multiple sequences to be stored without interference. The network is founded on a model of sequence compression in the hippocampus that is robust to variation in sequence element duration and well suited to learn sequences through spike-timing dependent plasticity (STDP). Three additions to the sequence compression model underlie the sparse representation: synapses connecting the neurons of the network that are subject to STDP, a competitive plasticity rule so that neurons specialize to individual sequences, and neural depolarization after spiking so that neurons have a memory. The response to new sequence elements is determined by the neurons that have responded to the previous subsequence, according to the competitively learned synaptic connections. Numerical simulations show that the model can learn sets of intersecting sequences, presented with widely differing frequencies, with elements of varying duration.
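The asymmetric STDP window that such sequence-learning models rely on can be written in a few lines (a standard pair-based exponential window; the amplitudes and time constant are illustrative, not this paper's parameters):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.010, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms):
    potentiation when the presynaptic spike precedes the postsynaptic one,
    depression otherwise."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Neurons firing in the order A -> B -> C, 10 ms apart: forward synapses
# (A->B, B->C) are strengthened, backward ones weakened, so repeated
# sequences carve a directed chain into the recurrent connectivity.
t = {"A": 0.0, "B": 10.0, "C": 20.0}
dw_forward = stdp_dw(t["B"] - t["A"])    # pre A before post B: positive
dw_backward = stdp_dw(t["A"] - t["B"])   # post A before pre B: negative
print(float(dw_forward), float(dw_backward))
```

The slightly larger depression amplitude is a common stability choice so that uncorrelated pre/post firing depresses weights on average.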
Affiliation(s)
- Sean Byrnes
- Bionic Ear Institute, East Melbourne, Victoria 3002, Australia, and Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia
- Anthony N. Burkitt
- Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia, and Bionic Ear Institute, East Melbourne, Victoria 3002, Australia
- David B. Grayden
- Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia, and Bionic Ear Institute, East Melbourne, Victoria 3002, Australia
- Hamish Meffin
- NICTA and Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia

15. Cell assembly sequences arising from spike threshold adaptation keep track of time in the hippocampus. J Neurosci 2011;31:2828-2834. [PMID: 21414904] [DOI: 10.1523/jneurosci.3773-10.2011]
Abstract
Hippocampal neurons can display reliable and long-lasting sequences of transient firing patterns, even in the absence of changing external stimuli. We suggest that time-keeping is an important function of these sequences, and propose a network mechanism for their generation. We show that sequences of neuronal assemblies recorded from rat hippocampal CA1 pyramidal cells can reliably predict elapsed time (15-20 s) during wheel running with a precision of 0.5 s. In addition, we demonstrate the generation of multiple reliable, long-lasting sequences in a recurrent network model. These sequences are generated in the presence of noisy, unstructured inputs to the network, mimicking stationary sensory input. Identical initial conditions generate similar sequences, whereas different initial conditions give rise to distinct sequences. The key ingredients responsible for sequence generation in the model are threshold-adaptation and a Mexican-hat-like pattern of connectivity among pyramidal cells. This pattern may arise from recurrent systems such as the hippocampal CA3 region or the entorhinal cortex. We hypothesize that mechanisms that evolved for spatial navigation also support tracking of elapsed time in behaviorally relevant contexts.

16.
Abstract
Traditionally, the hippocampal system has been studied in relation to the goal of retrieving memories about the past. Recent work in humans and rodents suggests that the hippocampal system may be better understood as a system that facilitates predictions about upcoming events. The hippocampus and associated cortical structures are active when people envision future events, and damage that includes the hippocampal region impairs this ability. In rats, hippocampal ensembles preplay and replay event sequences in the absence of overt behavior. If strung together in novel combinations, these sequences could provide the neural building blocks for simulating upcoming events during decision-making, planning, and when imagining novel scenarios. Moreover, in both humans and rodents, the hippocampal system is spontaneously active during task-free epochs and sleep, further suggesting that the system may use idle moments to derive new representations that set the context for future behaviors.
Affiliation(s)
- Randy L Buckner
- Howard Hughes Medical Institute at Harvard University, Cambridge, Massachusetts 02138, USA.

17. Byrnes S, Burkitt AN, Grayden DB, Meffin H. Spiking Neuron Model for Temporal Sequence Recognition. Neural Comput 2010;22:61-93. [DOI: 10.1162/neco.2009.12-07-679]
Abstract
A biologically inspired neuronal network that stores and recognizes temporal sequences of symbols is described. Each symbol is represented by excitatory input to distinct groups of neurons (symbol pools). Unambiguous storage of multiple sequences with common subsequences is ensured by partitioning each symbol pool into subpools that respond only when the current symbol has been preceded by a particular sequence of symbols. We describe synaptic structure and neural dynamics that permit the selective activation of subpools by the correct sequence. Symbols may have varying durations of the order of hundreds of milliseconds. Physiologically plausible plasticity mechanisms operate on a time scale of tens of milliseconds; an interaction of the excitatory input with periodic global inhibition bridges this gap so that neural events representing successive symbols occur on this much faster timescale. The network is shown to store multiple overlapping sequences of events. It is robust to variation in symbol duration, it is scalable, and its performance degrades gracefully with perturbation of its parameters.
Affiliation(s)
- Sean Byrnes
- Bionic Ear Institute, East Melbourne, Victoria 3002, Australia, and Department of Otolaryngology, University of Melbourne, Victoria 3010, Australia
- Anthony N. Burkitt
- Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia, and Bionic Ear Institute, East Melbourne, Victoria 3002, Australia
- David B. Grayden
- Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia, and Bionic Ear Institute, East Melbourne, Victoria 3002, Australia
- Hamish Meffin
- NICTA, Department of Electrical and Electronic Engineering, University of Melbourne, Victoria 3010, Australia

18. Hsu D, Chen W, Hsu M, Beggs JM. An open hypothesis: is epilepsy learned, and can it be unlearned? Epilepsy Behav 2008;13:511-522. [PMID: 18573694] [PMCID: PMC2611958] [DOI: 10.1016/j.yebeh.2008.05.007]
Abstract
Plasticity is central to the ability of a neural system to learn and also to its ability to develop spontaneous seizures. What is the connection between the two? Learning itself is known to be a destabilizing process at the algorithmic level. We have investigated necessary constraints on a spontaneously active Hebbian learning system and find that the ability to learn appears to confer an intrinsic vulnerability to epileptogenesis on that system. We hypothesize that epilepsy arises as an abnormal learned response of such a system to certain repeated provocations. This response is a network-level effect. If epilepsy really is a learned response, then it should be possible to reverse it, that is, to unlearn epilepsy. Unlearning epilepsy may then provide a new approach to its treatment.
Affiliation(s)
- David Hsu
- Department of Neurology, University of Wisconsin, Madison, WI 53792, USA.
- Wei Chen
- Department of Physics, Indiana University, Bloomington, IN
- Murielle Hsu
- Department of Neurology, University of Wisconsin, Madison, WI
- John M. Beggs
- Department of Physics, Indiana University, Bloomington, IN
19
Pastalkova E, Itskov V, Amarasingham A, Buzsáki G. Internally generated cell assembly sequences in the rat hippocampus. Science 2008; 321:1322-7. [PMID: 18772431] [DOI: 10.1126/science.1159775] [Citation(s) in RCA: 787]
Abstract
A long-standing conjecture in neuroscience is that aspects of cognition depend on the brain's ability to self-generate sequential neuronal activity. We found that reliably and continually changing cell assemblies in the rat hippocampus appeared not only during spatial navigation but also in the absence of changing environmental or body-derived inputs. During the delay period of a memory task, each moment in time was characterized by the activity of a particular assembly of neurons. Identical initial conditions triggered a similar assembly sequence, whereas different conditions gave rise to different sequences, thereby predicting behavioral choices, including errors. Such sequences were not formed in control (nonmemory) tasks. We hypothesize that neuronal representations, evolved for encoding distance in spatial navigation, also support episodic recall and the planning of action sequences.
Affiliation(s)
- Eva Pastalkova
- Center for Molecular and Behavioral Neuroscience, Rutgers, State University of New Jersey, 197 University Avenue, Newark, NJ 07102, USA
20
Wang R, Zhang Z, Chen G. Energy function and energy evolution on neuronal populations. IEEE Trans Neural Netw 2008; 19:535-8. [PMID: 18334373] [DOI: 10.1109/tnn.2007.914177] [Citation(s) in RCA: 35]
Abstract
Based on the principle of energy coding, an energy function is formulated for the electric potentials of a neuronal population in the cerebral cortex. The energy function describes the evolution of the population's energy over time and the coupling between neurons at subthreshold and suprathreshold states. A Hamiltonian equation of motion for the membrane potential is obtained from neuroelectrophysiological data contaminated by Gaussian white noise. The results show that the mean membrane potential is the exact solution of the membrane-potential equation of motion developed in a previously published paper, and that the Hamiltonian energy function derived in this brief is both correct and effective. In particular, an interesting finding based on the principle of energy coding is that, within a neural ensemble, some subsets of neurons fire action potentials at the suprathreshold level while others simultaneously remain active at the subthreshold level. Notably, this kind of coupling has not been found in other models of biological neural networks.
Affiliation(s)
- Rubin Wang
- Institute for Brain Information Processing and Cognitive Neurodynamics, School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, P R China.
21
Hocking AB, Levy WB. Theta-Modulated Input Reduces Intrinsic Gamma Oscillations in a Hippocampal Model. Neurocomputing 2007; 70:2074-2078. [PMID: 19593393] [PMCID: PMC2707940] [DOI: 10.1016/j.neucom.2006.10.086] [Citation(s) in RCA: 1]
Abstract
Introducing theta-modulated input into a minimal model of the CA3 region of the hippocampus has significant effects on gamma oscillations. In the absence of theta-modulated input, the gamma oscillations are robust across a range of parameters. Introducing theta-modulated input weakens the gamma oscillations to a power more consistent with power spectra acquired from laboratory animals. With these changes, simulations of the hippocampal model are able to reproduce hippocampal power spectra measured in awake mice.
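The power-reduction effect has a simple signal-level intuition (a toy spectral sketch, not the authors' CA3 network model; frequencies and band edges are illustrative): gating a 40 Hz rhythm with an 8 Hz envelope shifts part of its power into sidebands outside the gamma band, so measured gamma-band power falls.

```python
import numpy as np

# Toy spectral sketch (illustrative, not the authors' CA3 model):
# an 8 Hz "theta" envelope on a 40 Hz "gamma" rhythm moves power
# into 32/48 Hz sidebands, reducing power inside the gamma band.
fs = 1000.0                                         # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                        # 10 s of signal
gamma = np.sin(2 * np.pi * 40 * t)                  # continuous gamma
theta_gate = 0.5 * (1 + np.sin(2 * np.pi * 8 * t))  # theta envelope

def band_power(sig, lo=35.0, hi=45.0):
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    p = np.abs(np.fft.rfft(sig)) ** 2
    return p[(f > lo) & (f < hi)].sum()

continuous = band_power(gamma)
modulated = band_power(theta_gate * gamma)  # about a quarter of 'continuous'
```

Here the weakening is pure amplitude modulation; in the paper it emerges from network dynamics, but the spectral signature, lower in-band gamma power under theta drive, is the same.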
Affiliation(s)
- Ashlie B. Hocking
- Department of Neurosurgery, University of Virginia, Charlottesville, VA
- Department of Computer Science, University of Virginia, Charlottesville, VA
- William B. Levy
- Department of Neurosurgery, University of Virginia, Charlottesville, VA
22
Smolen P. A model of late long-term potentiation simulates aspects of memory maintenance. PLoS One 2007; 2:e445. [PMID: 17505541] [PMCID: PMC1865388] [DOI: 10.1371/journal.pone.0000445] [Citation(s) in RCA: 17]
Abstract
Late long-term potentiation (L-LTP) denotes long-lasting strengthening of synapses between neurons. L-LTP appears essential for the formation of long-term memory, with memories at least partly encoded by patterns of strengthened synapses. How memories are preserved for months or years, despite molecular turnover, is not well understood. Ongoing recurrent neuronal activity, during memory recall or during sleep, has been hypothesized to preferentially potentiate strong synapses, preserving memories. This hypothesis has not been evaluated in the context of a mathematical model representing ongoing activity and biochemical pathways important for L-LTP. In this study, ongoing activity was incorporated into two such models – a reduced model that represents some of the essential biochemical processes, and a more detailed published model. The reduced model represents synaptic tagging and gene induction simply and intuitively, and the detailed model adds activation of essential kinases by Ca2+. Ongoing activity was modeled as continual brief elevations of Ca2+. In each model, two stable states of synaptic strength/weight resulted. Positive feedback between synaptic weight and the amplitude of ongoing Ca2+ transients underlies this bistability. A tetanic or theta-burst stimulus switches a model synapse from a low basal weight to a high weight that is stabilized by ongoing activity. Bistability was robust to parameter variations in both models. Simulations illustrated that prolonged periods of decreased activity reset synaptic strengths to low values, suggesting a plausible forgetting mechanism. However, episodic activity with shorter inactive intervals maintained strong synapses. Both models support experimental predictions. Tests of these predictions are expected to further understanding of how neuronal activity is coupled to maintenance of synaptic strength. Further investigations that examine the dynamics of activity and synaptic maintenance can be expected to help in understanding how memories are preserved for up to a lifetime in animals including humans.
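The positive-feedback mechanism behind the bistability can be sketched as a minimal rate equation (illustrative equations and parameters, not Smolen's published model; the function names are mine): the weight w sets the amplitude of ongoing Ca2+ transients, which drive steep Hill-type potentiation opposed by passive decay, yielding two stable weights.

```python
# Minimal bistability sketch (illustrative; not Smolen's published
# equations). Synaptic weight w sets the Ca2+ transient amplitude,
# which drives steep potentiation opposed by passive decay.
def dwdt(w, activity=1.0):
    ca = activity * w                          # Ca2+ amplitude scales with w
    potentiation = ca**4 / (0.5**4 + ca**4)    # steep Hill-type activation
    return potentiation - 0.5 * w              # reinforcement minus decay

def settle(w0, activity=1.0, dt=0.01, steps=20_000):
    w = w0
    for _ in range(steps):                     # forward-Euler integration
        w += dt * dwdt(w, activity)
    return w

low = settle(0.1)                   # weak synapse decays toward zero
high = settle(1.5)                  # tetanized synapse settles near w ~ 2
reset = settle(high, activity=0.2)  # prolonged low activity erases the high state
```

The third call mirrors the paper's forgetting result: lowering ongoing activity removes the high fixed point, so the strong state decays back to baseline.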
Affiliation(s)
- Paul Smolen
- Department of Neurobiology and Anatomy, W.M. Keck Center for the Neurobiology of Learning and Memory, The University of Texas Medical School at Houston, Houston, Texas, United States of America.
23
24
25
Lawrence M, Trappenberg T, Fine A. Rapid learning and robust recall of long sequences in modular associator networks. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2005.12.003] [Citation(s) in RCA: 5]
26
Levy WB, Sanyal A, Rodriguez P, Sullivan DW, Wu XB. The formation of neural codes in the hippocampus: trace conditioning as a prototypical paradigm for studying the random recoding hypothesis. Biol Cybern 2005; 92:409-26. [PMID: 15965710] [DOI: 10.1007/s00422-005-0568-9] [Citation(s) in RCA: 16]
Abstract
The trace version of classical conditioning is used as a prototypical hippocampal-dependent task to study the recoding sequence prediction theory of hippocampal function. This theory conjectures that the hippocampus is a random recoder of sequences and that, once formed, the neuronal codes are suitable for prediction. As such, a trace conditioning paradigm, which requires a timely prediction, seems by far the simplest of the behaviorally relevant paradigms for studying hippocampal recoding. Parameters that affect the formation of these random codes include the temporal aspects of the behavioral/cognitive paradigm and certain basic characteristics of hippocampal region CA3 anatomy and physiology, such as connectivity and activity. Here we describe some of the dynamics of code formation and how biological and paradigmatic parameters affect the neural codes that are formed. In addition to a backward cascade of coding neurons, we point out, for the first time, a higher-order dynamic growing out of the backward cascade: a particular forward and backward stabilization of codes as training progresses. We also observe a performance compromise in the setting of activity levels due to the existence of three behavioral failure modes. Each of these failure modes exists in the computational model and, presumably, natural selection produced the compromise performance observed by psychologists. Thus, examining the parametric sensitivities of the codes and their dynamic formation gives insight into the constraints on natural computation and into the computational compromises ensuing from these constraints.
Affiliation(s)
- W B Levy
- Department of Neurosurgery, University of Virginia, P.O. Box 800420, Charlottesville, VA 22908, USA.