1. Gastaldi C, Gerstner W. A Computational Framework for Memory Engrams. Adv Neurobiol 2024; 38:237-257. [PMID: 39008019] [DOI: 10.1007/978-3-031-62983-9_13]
Abstract
Memory engrams in the mouse brain are potentially related to groups of concept cells in the human brain. A single concept cell in the human hippocampus responds, for example, not only to different images of the same object or person but also to its name written down in characters. Importantly, a single mental concept (object or person) is represented by several concept cells, and each concept cell can respond to more than one concept. Computational work shows how mental concepts can be embedded in recurrent artificial neural networks as memory engrams and how neurons that are shared between different engrams can lead to associations between concepts. Therefore, observations at the level of neurons can be linked to cognitive notions of memory recall and association chains between memory items.
Affiliation(s)
- Chiara Gastaldi
- Brain Mind Institute - School of Computer and Communication Sciences - School of Life Sciences, EPFL, Lausanne, Switzerland
- Wulfram Gerstner
- Brain Mind Institute - School of Computer and Communication Sciences - School of Life Sciences, EPFL, Lausanne, Switzerland
2. Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023; 33:533-572. [PMID: 36070199] [PMCID: PMC10946493] [DOI: 10.1002/hipo.23467]
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus, in the parahippocampal scene area. A second "where" driver of spatial view cells is parietal input, which, it is proposed, provides the idiothetic update for spatial view cells used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented through hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and of a highly developed temporal lobe for object and scene processing in primates, including humans, makes hippocampal spatial view cells key to understanding episodic memory in the primate and human hippocampus, and the roles of this system in primate, including human, navigation.
Affiliation(s)
- Edmund T. Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry, UK
3. Gastaldi C, Schwalger T, De Falco E, Quiroga RQ, Gerstner W. When shared concept cells support associations: Theory of overlapping memory engrams. PLoS Comput Biol 2021; 17:e1009691. [PMID: 34968383] [PMCID: PMC8754331] [DOI: 10.1371/journal.pcbi.1009691]
Abstract
Assemblies of neurons, called concept cells, encode acquired concepts in the human medial temporal lobe. Concept cells that are shared between two assemblies have been hypothesized to encode associations between concepts. Here we test this hypothesis in a computational model of attractor neural networks. We find that for concepts encoded in sparse neural assemblies there is a minimal fraction c_min of neurons shared between assemblies below which associations cannot be reliably implemented, and a maximal fraction c_max of shared neurons above which single concepts can no longer be retrieved. In the presence of a periodically modulated background signal, such as hippocampal oscillations, recall takes the form of association chains reminiscent of those postulated by theories of free recall of words. Predictions of an iterative overlap-generating model match experimental data on the number of concepts to which a neuron responds.

Experimental evidence suggests that associations between concepts are encoded in the hippocampus by cells shared between neuronal assemblies ("overlap" of concepts). What is the necessary overlap that ensures a reliable encoding of associations? Under which conditions can associations induce a simultaneous or a chain-like activation of concepts? Our theoretical model shows that the ideal overlap presents a tradeoff: the overlap should be larger than a minimum value in order to reliably encode associations, but lower than a maximum value to prevent loss of individual memories. Our theory explains experimental data from the human medial temporal lobe and provides a mechanism for chain-like recall in the presence of inhibition, while still allowing for simultaneous recall if inhibition is weak.
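A minimal numpy sketch of the retrieval regimes described above, assuming a binary attractor network with a covariance learning rule; the network size, sparseness, threshold, and cue quality are illustrative choices, not parameters from the paper. It shows the c_max side of the tradeoff: below a critical shared fraction, a degraded cue retrieves engram 1 alone, while above it the private cells of engram 2 are co-activated and the two memories merge (chain-like recall additionally requires the oscillating background signal studied in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
N, a = 2000, 0.05                         # network size, sparseness
n_active = int(a * N)                     # active cells per engram

def make_engrams(c):
    """Two sparse binary engrams sharing a fraction c of their active cells."""
    n_shared = int(c * n_active)
    idx = rng.permutation(N)
    p1 = np.zeros(N)
    p2 = np.zeros(N)
    p1[idx[:n_active]] = 1                                # shared + private cells
    p2[idx[:n_shared]] = 1                                # shared cells
    p2[idx[n_active:2 * n_active - n_shared]] = 1         # private cells of engram 2
    return p1, p2

def retrieve(cue, W, theta, steps=30):
    """Synchronous threshold dynamics."""
    s = cue.copy()
    for _ in range(steps):
        s = (W @ s > theta).astype(float)
    return s

for c in (0.0, 0.1, 0.4, 0.8):            # fraction of shared concept cells
    p1, p2 = make_engrams(c)
    W = (np.outer(p1 - a, p1 - a) + np.outer(p2 - a, p2 - a)) / N   # covariance rule
    np.fill_diagonal(W, 0.0)
    cue = p1 * (rng.random(N) < 0.7)      # degraded cue for engram 1
    s = retrieve(cue, W, theta=0.5 * a * (1 - a) ** 2)
    own2 = p2 * (1 - p1)                  # cells belonging to engram 2 only
    m1 = s @ p1 / n_active
    m2 = s @ own2 / max(own2.sum(), 1)
    print(f"c={c:.1f}: engram 1 retrieved {m1:.2f}, engram 2 co-activated {m2:.2f}")
```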
Affiliation(s)
- Chiara Gastaldi
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Tilo Schwalger
- Institut für Mathematik, Technische Universität Berlin, Berlin, Germany
- Emanuela De Falco
- School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Rodrigo Quian Quiroga
- Centre for Systems Neuroscience, University of Leicester, Leicester, United Kingdom
- Peng Cheng Laboratory, Shenzhen, China
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
4. Balkenius C, Tjøstheim TA, Johansson B, Gärdenfors P. From Focused Thought to Reveries: A Memory System for a Conscious Robot. Front Robot AI 2018; 5:29. [PMID: 33500916] [PMCID: PMC7805698] [DOI: 10.3389/frobt.2018.00029]
Abstract
We introduce a memory model for robots that can account for many aspects of an inner world, ranging from object permanence, episodic memory, and planning to imagination and reveries. It is modeled after neurophysiological data and includes parts of the cerebral cortex together with models of arousal systems that are relevant for consciousness. The three central components are an identification network, a localization network, and a working memory network. Attention serves as the interface between the inner and the external world. It directs the flow of information from sensory organs to memory, as well as controlling top-down influences on perception. It also compares external sensations to internal top-down expectations. The model is tested in a number of computer simulations that illustrate how it can operate as a component in various cognitive tasks including perception, the A-not-B test, delayed matching to sample, episodic recall, and vicarious trial and error.
Affiliation(s)
- Christian Balkenius
- Lund University Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
- Trond A Tjøstheim
- Lund University Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
- Birger Johansson
- Lund University Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
- Peter Gärdenfors
- Lund University Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
- University of Technology Sydney, Ultimo, NSW, Australia
5. Recanatesi S, Katkov M, Tsodyks M. Memory States and Transitions between Them in Attractor Neural Networks. Neural Comput 2017; 29:2684-2711. [PMID: 28777725] [DOI: 10.1162/neco_a_00998]
Abstract
Human memory is capable of retrieving memories similar to the one just retrieved. This associative ability underlies our everyday processing of information. Current models of memory have not pinpointed the mechanism that the brain could use to actively exploit similarities between memories. The prevailing idea is that, to induce transitions in attractor neural networks, it is necessary to extinguish the current memory. We introduce a novel mechanism for inducing transitions between memories, in which similarities between memories are actively exploited by the neural dynamics to retrieve a new memory. Populations of neurons that are selective for multiple memories play a crucial role in this mechanism by becoming attractors in their own right. The mechanism is based on the ability of the neural network to control the excitation-inhibition balance.
Affiliation(s)
- Stefano Recanatesi
- Neurobiology Department, Weizmann Institute of Science, Rehovot 76100, Israel
- Mikhail Katkov
- Neurobiology Department, Weizmann Institute of Science, Rehovot 76100, Israel
- Misha Tsodyks
- Neurobiology Department, Weizmann Institute of Science, Rehovot 76100, Israel
6. Katkov M, Romani S, Tsodyks M. Memory Retrieval from First Principles. Neuron 2017; 94:1027-1032. [DOI: 10.1016/j.neuron.2017.03.048]
7. Baglietto G, Gigante G, Del Giudice P. Density-based clustering: A 'landscape view' of multi-channel neural data for inference and dynamic complexity analysis. PLoS One 2017; 12:e0174918. [PMID: 28369106] [PMCID: PMC5378378] [DOI: 10.1371/journal.pone.0174918]
Abstract
Two partially interwoven hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve the inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated with the memories embedded in the synaptic matrix. In this context, we show that the neural states identified as cluster centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such a reduction, we define and analyze a measure of complexity of the neural time series.
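As a toy illustration of the state-space clustering step, a sketch using scikit-learn's MeanShift (a stand-in for the authors' own mean-shift variant) on synthetic binned activity that dwells near a few metastable states; the sizes, noise level, and bandwidth are invented for the demo.

```python
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
n_units, n_states, visits = 40, 3, 200
# Ground-truth metastable states: binary activity vectors of the network.
states = rng.integers(0, 2, size=(n_states, n_units)).astype(float)

# Noisy samples around each state, mimicking binned multi-channel recordings.
X = np.vstack([s + 0.15 * rng.standard_normal((visits, n_units)) for s in states])

ms = MeanShift(bandwidth=2.0).fit(X)          # bandwidth sets the kernel scale
centroids = (ms.cluster_centers_ > 0.5).astype(int)
recovered = all(any((c == s.astype(int)).all() for c in centroids) for s in states)
print(f"clusters found: {len(ms.cluster_centers_)}, states recovered: {recovered}")
```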
Affiliation(s)
- Gabriel Baglietto
- INFN-Roma1, Italian National Institute for Nuclear Research (INFN), Rome, Italy
- IFLYSIB Instituto de Física de Líquidos y Sistemas Biológicos (UNLP-CONICET), La Plata, Argentina
- Guido Gigante
- Italian Institute of Health (ISS), Rome, Italy
- Mperience srl, Rome, Italy
- Paolo Del Giudice
- INFN-Roma1, Italian National Institute for Nuclear Research (INFN), Rome, Italy
- Italian Institute of Health (ISS), Rome, Italy
8. Miller P. Itinerancy between attractor states in neural systems. Curr Opin Neurobiol 2016; 40:14-22. [PMID: 27318972] [DOI: 10.1016/j.conb.2016.05.005]
Abstract
Converging evidence from neural, perceptual and simulated data suggests that discrete attractor states form within neural circuits through learning and development. External stimuli may bias neural activity to one attractor state or cause activity to transition between several discrete states. Evidence for such transitions, whose timing can vary across trials, is best accrued through analyses that avoid any trial-averaging of data. One such method, hidden Markov modeling, has been effective in this context, revealing state transitions in many neural circuits during many tasks. Concurrently, modeling efforts have revealed computational benefits of stimulus processing via transitions between attractor states. This review describes the current state of the field, with comments on how its perceived limitations have been addressed.
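A sketch of the hidden Markov analysis mentioned above, using the third-party hmmlearn package on a synthetic single trial (so no trial-averaging is involved); the unit count, state means, and switching schedule are invented for the demo.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(2)
# Five units whose mean rates switch between two attractor states
# at hidden times within a single simulated trial.
means = np.array([[2.0, 8.0, 1.0, 5.0, 3.0],
                  [7.0, 1.0, 6.0, 2.0, 9.0]])
hidden = np.repeat([0, 1, 0, 1], 250)               # true state per time bin
X = means[hidden] + rng.standard_normal((1000, 5))  # noisy observed rates

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X)
decoded = model.predict(X)                          # most likely state sequence
# Estimated transitions are where the decoded state changes (truth: 250, 500, 750).
print("decoded transitions at bins:", np.flatnonzero(np.diff(decoded)) + 1)
```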
Affiliation(s)
- Paul Miller
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
9. Roach JP, Sander LM, Zochowski MR.
Abstract
The brain can reproduce memories from partial data; this ability is critical for memory recall. The process of memory recall has been studied using autoassociative networks such as the Hopfield model. This kind of model reliably converges to stored patterns that contain the memory. However, it is unclear how the behavior is controlled by the brain so that after convergence to one configuration, it can proceed with recognition of another one. In the Hopfield model, this happens only through unrealistic changes of an effective global temperature that destabilizes all stored configurations. Here we show that spike-frequency adaptation (SFA), a common mechanism affecting neuron activation in the brain, can provide state-dependent control of pattern retrieval. We demonstrate this in a Hopfield network modified to include SFA, and also in a model network of biophysical neurons. In both cases, SFA allows for selective stabilization of attractors with different basins of attraction, and also for temporal dynamics of attractor switching that is not possible in standard autoassociative schemes. The dynamics of our models give a plausible account of different sorts of memory retrieval.
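A minimal sketch of the headline effect in a standard ±1 Hopfield network, with an adaptation current subtracted from each neuron's input; parameters are illustrative, and the graded and biophysical aspects of the paper's models are omitted. Without adaptation, or with adaptation too weak to overcome the recurrent field, the cued memory persists indefinitely; beyond that point, the dwell time in the attractor becomes finite and is set by the adaptation strength and timescale.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P, tau = 500, 5, 20.0
xi = rng.choice([-1.0, 1.0], size=(P, N))     # stored random patterns
W = (xi.T @ xi) / N                           # Hebbian couplings
np.fill_diagonal(W, 0.0)

def dwell_time(g, max_steps=400):
    """Steps until the network leaves the cued attractor (overlap < 0.9)."""
    s = xi[0].copy()
    theta = np.zeros(N)                       # per-neuron adaptation variable
    for t in range(max_steps):
        if xi[0] @ s / N < 0.9:
            return t
        h = W @ s - g * theta                 # adaptation opposes the local field
        s = np.where(h >= 0, 1.0, -1.0)
        theta += ((s > 0) - theta) / tau      # theta builds up in active (+1) neurons
    return max_steps                          # did not leave within the window

for g in (0.0, 0.8, 1.2, 2.0):
    print(f"adaptation strength g={g}: dwell time = {dwell_time(g)} steps")
```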
Affiliation(s)
- James P. Roach
- Neuroscience Graduate Program, University of Michigan, Ann Arbor, Michigan 48109, USA
- Leonard M. Sander
- Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA
- Center for the Study of Complex Systems, University of Michigan, Ann Arbor, Michigan 48109, USA
- Michal R. Zochowski
- Department of Physics, University of Michigan, Ann Arbor, Michigan 48109, USA
- Center for the Study of Complex Systems, University of Michigan, Ann Arbor, Michigan 48109, USA
- Biophysics Program, University of Michigan, Ann Arbor, Michigan 48109, USA
10. Stratton P, Wiles J. Global segregation of cortical activity and metastable dynamics. Front Syst Neurosci 2015; 9:119. [PMID: 26379514] [PMCID: PMC4548222] [DOI: 10.3389/fnsys.2015.00119]
Abstract
Cortical activity exhibits persistent metastable dynamics. Assemblies of neurons transiently couple (integrate) and decouple (segregate) at multiple spatiotemporal scales; both integration and segregation are required to support metastability. Integration of distant brain regions can be achieved through long range excitatory projections, but the mechanism supporting long range segregation is not clear. We argue that the thalamocortical matrix connections, which project diffusely from the thalamus to the cortex and have long been thought to support cortical gain control, play an equally-important role in cortical segregation. We present a computational model of the diffuse thalamocortical loop, called the competitive cross-coupling (CXC) spiking network. Simulations of the model show how different levels of tonic input from the brainstem to the thalamus could control dynamical complexity in the cortex, directing transitions between sleep, wakefulness and high attention or vigilance. The model also explains how mutually-exclusive activity could arise across large portions of the cortex, such as between the default-mode and task-positive networks. It is robust to noise but does not require noise to autonomously generate metastability. We conclude that the long range segregation observed in brain activity and required for global metastable dynamics could be provided by the thalamocortical matrix, and is strongly modulated by brainstem input to the thalamus.
Affiliation(s)
- Peter Stratton
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD, Australia
- Centre for Clinical Research, The University of Queensland, Brisbane, QLD, Australia
- Janet Wiles
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD, Australia
11. Generating functionals for autonomous latching dynamics in attractor relict networks. Sci Rep 2013; 3:2042. [PMID: 23784373] [PMCID: PMC3687227] [DOI: 10.1038/srep02042]
Abstract
Coupling local, slowly adapting variables to an attractor network makes it possible to destabilize all attractors, turning them into attractor ruins. The resulting attractor relict network may show ongoing autonomous latching dynamics. We propose to use two generating functionals for the construction of attractor relict networks: a Hopfield energy functional, which generates a neural attractor network, and a functional based on information-theoretical principles, encoding the information content of the neural firing statistics, which induces latching transitions from one transiently stable attractor ruin to the next. We investigate the influence of stress, in terms of conflicting optimization targets, on the resulting dynamics. Objective function stress is absent when the target level for the mean of neural activities is identical for the two generating functionals, and the resulting latching dynamics is then found to be regular. Objective function stress is present when the respective target activity levels differ, inducing intermittent bursting latching dynamics.
12. Yakovlev V, Amit Y, Hochstein S. It's hard to forget: resetting memory in delay-match-to-multiple-image tasks. Front Hum Neurosci 2013; 7:765. [PMID: 24294199] [PMCID: PMC3827555] [DOI: 10.3389/fnhum.2013.00765]
Abstract
The Delay-Match-to-Sample (DMS) task has been used in countless studies of memory, undergoing numerous modifications that make the task more and more challenging for participants. The physiological correlate of memory is modified neural activity during the cue-to-match delay period, reflecting reverberating attractor activity in multiple interconnected cells. DMS tasks may use a fixed set of well-practiced stimulus images (allowing for the creation of attractors) or unlimited novel images, for which no attractor exists. Using well-learned stimuli requires that participants determine whether a remembered image was seen in the same or a preceding trial, responding only to the former. Thus, trial-to-trial transitions must include a "reset" mechanism to mark old images as such. We test two groups of monkeys on a delay-match-to-multiple-images task, one with well-trained and one with novel images. Only the first developed a reset mechanism. We then switched tasks between the groups. We find that introducing fixed images initiates development of reset, and once established, switching to novel images does not disable its use. Without reset, memory decays slowly, leaving ~40% of images recognizable after a minute. Here, the presence of reward further enhances memory of previously seen images.
Affiliation(s)
- Volodya Yakovlev
- Neurobiology Department, Life Sciences Institute and Safra Center for Brain Research, Safra Campus, Hebrew University, Jerusalem, Israel
- Yali Amit
- Departments of Statistics and Computer Science, University of Chicago, Chicago, IL, USA
- Shaul Hochstein
- Neurobiology Department, Life Sciences Institute and Safra Center for Brain Research, Safra Campus, Hebrew University, Jerusalem, Israel
13. Lansner A, Marklund P, Sikström S, Nilsson LG. Reactivation in working memory: an attractor network model of free recall. PLoS One 2013; 8:e73776. [PMID: 24023690] [PMCID: PMC3758294] [DOI: 10.1371/journal.pone.0073776]
Abstract
The dynamic nature of human working memory, the general-purpose system for processing continuous input while keeping information that is no longer externally available active in the background, is well captured in immediate free recall of supraspan word lists. Free recall tasks produce several benchmark memory phenomena, such as the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation, and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view.
Affiliation(s)
- Anders Lansner
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- School of Computer Science and Communication, Department of Computational Biology, KTH (Royal Institute of Technology), Stockholm, Sweden
- Stockholm Brain Institute, Stockholm, Sweden
- Petter Marklund
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Sverker Sikström
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Lars-Göran Nilsson
- Stockholm Brain Institute, Stockholm, Sweden
- Department of Psychology, Stockholm University, Stockholm, Sweden
14. Lerner I, Bentin S, Shriki O. Excessive attractor instability accounts for semantic priming in schizophrenia. PLoS One 2012; 7:e40663. [PMID: 22844407] [PMCID: PMC3402492] [DOI: 10.1371/journal.pone.0040663]
Abstract
One of the most pervasive findings in studies of schizophrenics with thought disorders is their peculiar pattern of semantic priming, which presumably reflects abnormal associative processes in the semantic system of these patients. Semantic priming is manifested by faster and more accurate recognition of a word-target when it is preceded by a semantically related prime, relative to an unrelated-prime condition. Compared to controls, semantic priming in schizophrenics is characterized by reduced priming effects at long prime-target Stimulus Onset Asynchrony (SOA) and, sometimes, augmented priming at short SOA. In addition, unlike controls, schizophrenics consistently show indirect (mediated) priming (such as from the prime ‘wedding’ to the target ‘finger’, mediated by ‘ring’). In a previous study, we developed a novel attractor neural network model with synaptic adaptation mechanisms that could account for semantic priming patterns in healthy individuals. Here, we examine the consequences of introducing attractor instability to this network, which is hypothesized to arise from the dysfunctional synaptic transmission known to occur in schizophrenia. In two simulated experiments, we demonstrate how such instability speeds up the network’s dynamics and, consequently, produces the full spectrum of priming effects previously reported in patients. The model also explains why augmented priming at short SOAs with directly related pairs is inconsistent across studies while indirect priming is consistent. Further, we discuss how the same mechanism could account for other symptoms of the disease, such as derailment (‘loose associations’) or the commonly observed difficulty patients have in utilizing context. Finally, we show how the model can statistically implement the overly broad wave of spreading activation previously presumed to characterize thought disorders in schizophrenia.
Affiliation(s)
- Itamar Lerner
- Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Jerusalem, Israel
- Shlomo Bentin
- Department of Psychology and Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Jerusalem, Israel
- Oren Shriki
- Section on Critical Brain Dynamics, National Institute of Mental Health, Bethesda, Maryland, United States of America
15. Russo E, Treves A. Cortical free-association dynamics: distinct phases of a latching network. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:051920. [PMID: 23004800] [DOI: 10.1103/physreve.85.051920]
Abstract
A Potts associative memory network has been proposed as a simplified model of macroscopic cortical dynamics, in which each Potts unit stands for a patch of cortex that can be activated in one of S local attractor states. The internal neuronal dynamics of the patch is not described by the model; rather, it is subsumed into an effective description in terms of graded Potts units, with adaptation effects both specific to each attractor state and generic to the patch. If each unit, or patch, receives effective (tensor) connections from C other units, the network has been shown to be able to store a large number p of global patterns, or network attractors, each with a fraction a of the units active, where the critical load p_c scales roughly as p_c ≈ C S²/(a ln(1/a)) (if the patterns are randomly correlated). Interestingly, after retrieving an externally cued attractor, the network can continue jumping, or latching, from attractor to attractor, driven by adaptation effects. The occurrence and duration of latching dynamics are found through simulations to depend critically on the strength of local attractor states, expressed in the Potts model by a parameter w. Here we describe with simulations, and then analytically, the boundaries between the distinct phases of no latching, transient latching, and sustained latching, deriving a phase diagram in the (w, T) plane, where T parametrizes thermal noise effects. Implications for real cortical dynamics are briefly reviewed in the conclusions.
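For a sense of scale, a worked evaluation of this capacity estimate with illustrative values (not taken from the paper): C = 150 tensor connections per unit, S = 7 local states, and sparsity a = 0.25 give

```latex
p_c \;\approx\; \frac{C S^2}{a \ln(1/a)}
  \;=\; \frac{150 \times 7^2}{0.25 \,\ln 4}
  \;\approx\; \frac{7350}{0.347}
  \;\approx\; 2.1 \times 10^4 ,
```

so the number of storable global patterns can far exceed the number of local attractor states per patch.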
Affiliation(s)
- Eleonora Russo
- SISSA, Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy
16. Network, cellular, and molecular mechanisms underlying long-term memory formation. Curr Top Behav Neurosci 2012; 15:73-115. [PMID: 22976275] [DOI: 10.1007/7854_2012_229]
Abstract
The neural network stores information through activity-dependent synaptic plasticity that occurs in populations of neurons. Persistent forms of synaptic plasticity may account for long-term memory storage, and the most salient forms are changes in the structure of synapses. The theory proposes that encoding should use a sparse code, and evidence suggests that this can be achieved through offline reactivation or by sparse initial recruitment of the network units. This idea implies that in some cases the neurons that underwent structural synaptic plasticity might be a subpopulation of those originally recruited; however, it is not yet clear whether all the neurons recruited during acquisition are the ones that undergo persistent forms of synaptic plasticity and are responsible for memory retrieval. To determine which neural units underlie long-term memory storage, we need to characterize the persistent forms of synaptic plasticity occurring in these neural ensembles, and the best hints so far are the molecular signals underlying structural modifications of the synapses. Structural synaptic plasticity can be achieved through the activity of various signal transduction pathways, including the NMDA-CaMKII and ACh-MAPK pathways. These pathways converge on the Rho family of GTPases and the consequent ERK 1/2 activation, which regulates multiple cellular functions such as protein translation, protein trafficking, and gene transcription. The most detailed explanation may come from models that allow us to determine the contribution of each piece of that fascinating puzzle, the neuron and the neural network.