1. Köster M, Gruber T. Rhythms of human attention and memory: An embedded process perspective. Front Hum Neurosci 2022; 16:905837. [PMID: 36277046; PMCID: PMC9579292; DOI: 10.3389/fnhum.2022.905837]
Abstract
It remains a dogma in cognitive neuroscience to separate human attention and memory into distinct modules and processes. Here we propose that brain rhythms reflect the embedded nature of these processes in the human brain, as evident from their shared neural signatures: gamma oscillations (30-90 Hz) reflect sensory information processing and activated neural representations (memory items). The theta rhythm (3-8 Hz) is a pacemaker of explicit control processes (central executive), structuring neural information processing, bit by bit, as reflected in the theta-gamma code. By representing memory items in a sequential and time-compressed manner, the theta-gamma code is hypothesized to solve key problems of neural computation: (1) attentional sampling (integrating and segregating information processing), (2) mnemonic updating (implementing Hebbian learning), and (3) predictive coding (advancing information processing ahead of real time to guide behavior). In this framework, reduced alpha oscillations (8-14 Hz) reflect activated semantic networks, involved in both explicit and implicit mnemonic processes. Linking recent theoretical accounts and empirical insights on neural rhythms to the embedded-process model advances our understanding of the integrated nature of attention and memory - as the bedrock of human cognition.
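The theta-gamma code described in the abstract can be sketched numerically. The frequencies, the rigid slot assignment, and the item labels below are our illustrative assumptions, not values from the article:

```python
# Sketch of a theta-gamma phase code: successive memory items occupy successive
# gamma cycles nested within one theta cycle, yielding a sequential,
# time-compressed representation of the item list.
THETA_HZ = 6.0    # theta pacemaker, within the 3-8 Hz band cited above
GAMMA_HZ = 42.0   # gamma carrier, within the 30-90 Hz band cited above

def gamma_slot_times(items, theta_hz=THETA_HZ, gamma_hz=GAMMA_HZ):
    """Map each item to the onset time (s) of its gamma cycle within one theta cycle."""
    theta_period = 1.0 / theta_hz
    gamma_period = 1.0 / gamma_hz
    slots = int(round(theta_period / gamma_period))  # 7 gamma slots per theta cycle here
    if len(items) > slots:
        raise ValueError("more items than gamma slots in one theta cycle")
    return {item: i * gamma_period for i, item in enumerate(items)}

slot_times = gamma_slot_times(["A", "B", "C"])
```

With these assumed frequencies, roughly seven items fit into one theta cycle, which is one way such a code could impose a capacity limit.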
Affiliation(s)
- Moritz Köster
- Faculty of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Institute of Psychology, University of Regensburg, Regensburg, Germany
- Thomas Gruber
- Institute of Psychology, Osnabrück University, Osnabrück, Germany
2. Printzlau FAB, Myers NE, Manohar SG, Stokes MG. Neural Reinstatement Tracks Spread of Attention between Object Features in Working Memory. J Cogn Neurosci 2022; 34:1681-1701. [PMID: 35704549; DOI: 10.1162/jocn_a_01879]
Abstract
Attention can be allocated in working memory (WM) to select and privilege relevant content. It is unclear whether attention selects individual features or whole objects in WM. Here, we used behavioral measures, eye-tracking, and EEG to test the hypothesis that attention spreads between an object's features in WM. Twenty-six participants completed a WM task that asked them to recall the angle of one of two oriented, colored bars after a delay while EEG and eye-tracking data were collected. During the delay, an orthogonal "incidental task" cued the color of one item for a match/mismatch judgment. On congruent trials (50%), the cued item was probed for subsequent orientation recall; on incongruent trials (50%), the other memory item was probed. As predicted, selecting the color of an object in WM brought other features of the cued object into an attended state as revealed by EEG decoding, oscillatory α-power, gaze bias, and improved orientation recall performance. Together, the results show that attentional selection spreads between an object's features in WM, consistent with object-based attentional selection. Analyses of neural processing at recall revealed that the selected object was automatically compared with the probe, whether it was the target for recall or not. This provides a potential mechanism for the observed benefits of nonpredictive cueing in WM, where a selected item is prioritized for subsequent decision-making.
3. Organization and Priming of Long-term Memory Representations with Two-phase Plasticity. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10021-7]
Abstract
Background / Introduction
In recurrent neural networks in the brain, memories are represented by so-called Hebbian cell assemblies. Such assemblies are groups of neurons with particularly strong synaptic connections, formed by synaptic plasticity and consolidated by synaptic tagging and capture (STC). To link these synaptic mechanisms to long-term memory at the level of cognition and behavior, their functional implications at the level of neural networks have to be understood.
Methods
We employ a biologically detailed recurrent network of spiking neurons featuring synaptic plasticity and STC to model the learning and consolidation of long-term memory representations. Using this, we investigate the effects of different organizational paradigms, and of priming stimulation, on the functionality of multiple memory representations. We quantify these effects by the spontaneous activation of memory representations driven by background noise.
Results
We find that the learning order of the memory representations significantly biases the likelihood of activation towards more recently learned representations, and that a hub-like overlap structure counters this effect. We identify long-term depression as the mechanism underlying these findings. Finally, we demonstrate that STC has functional consequences for the interaction of long-term memory representations: (1) intermediate consolidation between learning the individual representations strongly alters the previously described effects, and (2) STC enables the priming of a long-term memory representation on a timescale of minutes to hours.
Conclusion
Our findings show how synaptic and neuronal mechanisms can provide an explanatory basis for known cognitive effects.
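The two-phase scheme can be caricatured in a few lines. The split into a decaying early-phase weight and a stable late-phase weight filled by tagging and capture is the general STC picture; the time constant and the all-or-none capture step are our simplifying assumptions, not the article's model:

```python
import math

TAU_EARLY = 3600.0  # early-phase decay time constant (s), illustrative only

def stc_step(h, z, tagged, proteins, dt):
    """Advance one interval dt: the early-phase weight h decays unless the
    synapse is tagged and plasticity-related proteins are captured, in which
    case h is transferred into the persistent late-phase weight z."""
    if tagged and proteins:
        return 0.0, z + h                        # capture: change becomes long-lasting
    return h * math.exp(-dt / TAU_EARLY), z      # no capture: change fades

h, z = stc_step(1.0, 0.0, tagged=True, proteins=True, dt=10.0)
```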
4. Hribkova H, Svoboda O, Bartecku E, Zelinkova J, Horinkova J, Lacinova L, Piskacek M, Lipovy B, Provaznik I, Glover JC, Kasparek T, Sun YM. Clozapine Reverses Dysfunction of Glutamatergic Neurons Derived From Clozapine-Responsive Schizophrenia Patients. Front Cell Neurosci 2022; 16:830757. [PMID: 35281293; PMCID: PMC8904748; DOI: 10.3389/fncel.2022.830757]
Abstract
The cellular pathology of schizophrenia and the potential of antipsychotics to target underlying neuronal dysfunctions are still largely unknown. We employed glutamatergic neurons derived from induced pluripotent stem cells (iPSCs) obtained from schizophrenia patients with known histories of response to clozapine and from healthy controls to decipher the mechanisms of action of clozapine, spanning from the molecular (transcriptomic profiling) and cellular (electrophysiology) levels to observed clinical effects in living patients. Glutamatergic neurons derived from schizophrenia patients exhibited deficits in intrinsic electrophysiological properties, synaptic function, and network activity. Deficits in K+ and Na+ currents, network behavior, and glutamatergic synaptic signaling were restored by clozapine treatment, but only in neurons from clozapine-responsive patients. Moreover, neurons from clozapine-responsive patients exhibited a reciprocal dysregulation of gene expression, particularly related to glutamatergic and downstream signaling, which was reversed by clozapine treatment. Only neurons from clozapine responders returned to normal function and a normal transcriptomic profile. Our results underscore the importance of K+ and Na+ channels and glutamatergic synaptic signaling in the pathogenesis of schizophrenia and demonstrate that clozapine may act by normalizing perturbations in this signaling pathway. To our knowledge, this is the first study to demonstrate that schizophrenia iPSC-derived neurons exhibit a response phenotype correlated with clinical response to an antipsychotic. This opens a new avenue in the search for effective treatments tailored to the needs of individual patients.
Affiliation(s)
- Hana Hribkova
- Department of Biology, Masaryk University, Brno, Czechia
- Ondrej Svoboda
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czechia
- Elis Bartecku
- Department of Psychiatry, Faculty of Medicine and University Hospital Brno, Brno, Czechia
- Jana Zelinkova
- Department of Biology, Masaryk University, Brno, Czechia
- Jana Horinkova
- Department of Psychiatry, Faculty of Medicine and University Hospital Brno, Brno, Czechia
- Lubica Lacinova
- Center of Bioscience, Institute of Molecular Physiology and Genetics, Slovak Academy of Sciences, Bratislava, Slovakia
- Martin Piskacek
- Department of Pathological Physiology, Masaryk University, Brno, Czechia
- Bretislav Lipovy
- Department of Burns and Plastic Surgery, Faculty of Medicine and University Hospital Brno, Brno, Czechia
- Ivo Provaznik
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czechia
- Department of Physiology, Faculty of Medicine, Masaryk University, Brno, Czechia
- Joel C. Glover
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Norwegian Center for Stem Cell Research, Department of Immunology and Transfusion Medicine, Oslo University Hospital, Oslo, Norway
- Tomas Kasparek (correspondence)
- Department of Psychiatry, Faculty of Medicine and University Hospital Brno, Brno, Czechia
- Yuh-Man Sun
- Department of Biology, Masaryk University, Brno, Czechia
5. Boboeva V, Pezzotta A, Clopath C. Free recall scaling laws and short-term memory effects in a latching attractor network. Proc Natl Acad Sci U S A 2021; 118:e2026092118. [PMID: 34873052; PMCID: PMC8670499; DOI: 10.1073/pnas.2026092118]
Abstract
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories have a strong effect on recall capacity, the power laws governing recall capacity may still be expected to hold.
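The destabilizing role of firing-rate adaptation, the precondition for latching from memory to memory, can be shown in a toy network. The network size, adaptation gain, and binary sign dynamics below are our illustrative choices, far simpler than the article's model:

```python
import numpy as np

# Hopfield-style network with per-unit adaptation. Without adaptation the cued
# memory persists; with sufficiently strong adaptation the state is expelled
# from the attractor, which is what permits latching transitions.
rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N            # autoassociative Hebbian couplings
np.fill_diagonal(W, 0.0)

def overlap_trace(g_adapt, steps=40, tau=5.0):
    """Overlap with the cued memory while adaptation builds up in active units."""
    s = patterns[0].copy()               # cue memory 0
    a = np.zeros(N)                      # adaptation variable per unit
    trace = []
    for _ in range(steps):
        a += (s - a) / tau               # adaptation tracks recent activity
        s = np.sign(W @ s - g_adapt * a) # adaptation current opposes the attractor
        s[s == 0] = 1.0
        trace.append(float(patterns[0] @ s / N))
    return trace

stable = overlap_trace(g_adapt=0.0)      # no adaptation: the memory persists
expelled = overlap_trace(g_adapt=1.2)    # strong adaptation: the state is pushed out
```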
Affiliation(s)
- Vezha Boboeva
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom
- Alberto Pezzotta
- Developmental Dynamics Laboratory, The Francis Crick Institute, London NW1 1AT, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom
6. An Indexing Theory for Working Memory Based on Fast Hebbian Plasticity. eNeuro 2020; 7:ENEURO.0374-19.2020. [PMID: 32127347; PMCID: PMC7189483; DOI: 10.1523/eneuro.0374-19.2020]
Abstract
Working memory (WM) is a key component of human memory and cognition. Computational models have been used to study the underlying neural mechanisms, but have neglected the important role of short-term memory (STM) and long-term memory (LTM) interactions for WM. Here, we investigate these interactions using a novel multiarea spiking neural network model of prefrontal cortex (PFC) and two parietotemporal cortical areas based on macaque data. We propose a WM indexing theory that explains how PFC could associate, maintain, and update multimodal LTM representations. Our simulations demonstrate how simultaneous, brief multimodal memory cues could build a temporary joint memory representation as an “index” in PFC by means of fast Hebbian synaptic plasticity. This index can then reactivate spontaneously, and thereby also the associated LTM representations. Cueing one LTM item rapidly pattern-completes the associated uncued item via PFC. The PFC–STM network updates flexibly as new stimuli arrive, thereby gradually overwriting older representations.
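The indexing idea reduces to a small linear-algebra sketch. The pattern sizes, binary codes, one-shot outer-product binding, and thresholding below are our illustrative assumptions, standing in for the spiking model's fast Hebbian plasticity:

```python
import numpy as np

# A brief co-activation binds two LTM item patterns to a random PFC "index"
# pattern; cueing one item later reactivates the index, which pattern-completes
# the other, uncued item.
rng = np.random.default_rng(2)
n_item, n_idx = 50, 30
item_a = rng.choice([0.0, 1.0], n_item)   # LTM item in one parietotemporal area
item_b = rng.choice([0.0, 1.0], n_item)   # LTM item in the other area
index = rng.choice([0.0, 1.0], n_idx)     # transient PFC index pattern

W_a_to_idx = np.outer(index, item_a)      # fast Hebbian binding: item A -> index
W_idx_to_b = np.outer(item_b, index)      # fast Hebbian binding: index -> item B

# Cue item A: the index reactivates and pattern-completes the uncued item B.
idx_recalled = (W_a_to_idx @ item_a > 0).astype(float)
b_recalled = (W_idx_to_b @ idx_recalled > 0).astype(float)
```

Overwriting the outer products with those of a new stimulus pair would likewise mimic the flexible updating the abstract describes.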
7. Martinez RH, Lansner A, Herman P. Probabilistic associative learning suffices for learning the temporal structure of multiple sequences. PLoS One 2019; 14:e0220161. [PMID: 31369571; PMCID: PMC6675053; DOI: 10.1371/journal.pone.0220161]
Abstract
From memorizing a musical tune to navigating a well-known route, many of our behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, relying on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterisation of the relationship between the structure of the weight matrix, the dynamical network parameters, and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences, even in a noisy regime.
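The Bayesian-Hebbian flavor of the BCPNN rule can be sketched with running probability estimates. The simple symmetric exponential trace below stands in for the article's asymmetric synaptic traces, and the constants are our assumptions:

```python
import math

# BCPNN-style weight: the log ratio of joint to independent activation
# probability, estimated online from activity traces.
EPS = 1e-6  # floor that keeps log arguments positive when estimates are zero

def update_traces(p_i, p_j, p_ij, s_i, s_j, tau=100.0):
    """Leaky running estimates of P(i), P(j), and P(i,j) from binary activity."""
    p_i += (s_i - p_i) / tau
    p_j += (s_j - p_j) / tau
    p_ij += (s_i * s_j - p_ij) / tau
    return p_i, p_j, p_ij

def bcpnn_weight(p_i, p_j, p_ij):
    """Positive when units co-activate above chance, negative below chance."""
    return math.log((p_ij + EPS ** 2) / ((p_i + EPS) * (p_j + EPS)))
```

Making the pre- and post-synaptic traces decay with different time constants, as in the article, is what turns this symmetric rule into one that encodes temporal order.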
Affiliation(s)
- Ramon H. Martinez
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Anders Lansner
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Mathematics Department, Stockholm University, Stockholm, Sweden
- Pawel Herman
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
8. Fernandez-Leon JA, Hansen BJ, Dragoi V. Representation of Rapid Image Sequences in V4 Networks. Cereb Cortex 2018. [PMID: 28637171; DOI: 10.1093/cercor/bhx146]
Abstract
Natural viewing often consists of sequences of brief fixations on image patches of different structure. Whether and how briefly presented sequential stimuli are encoded according to their temporal position is poorly understood. Here, we performed multiple-electrode recordings in the visual cortex (area V4) of nonhuman primates (Macaca mulatta) viewing a sequence of 7 briefly flashed natural images, and measured correlations between the cue-triggered population responses in the presence and absence of the stimulus. Surprisingly, we found significant correlations for images occurring at the beginning and the end of a sequence, but not for those in the middle. The correlation strength increased with stimulus exposure and favored the image's position in the sequence rather than its identity. These results challenge the commonly held view that images are represented in visual cortex exclusively based on their informational content, and indicate that, in the absence of sensory information, neuronal populations exhibit reactivation of stimulus-evoked responses in a way that reflects temporal position within a stimulus sequence.
Affiliation(s)
- Jose A Fernandez-Leon
- Department of Neurobiology and Anatomy, University of Texas-Houston Medical School, Houston, TX, USA
- Department of Neurology, Brigham and Women's Hospital-Harvard Medical School, Boston, MA, USA
- Bryan J Hansen
- Department of Neurobiology and Anatomy, University of Texas-Houston Medical School, Houston, TX, USA
- In Vivo Pharmacology, Merck & Co., Inc., Kenilworth, NJ, USA
- Valentin Dragoi
- Department of Neurobiology and Anatomy, University of Texas-Houston Medical School, Houston, TX, USA
9. A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation. J Neurosci 2017; 37:83-96. [PMID: 28053032; PMCID: PMC5214637; DOI: 10.1523/jneurosci.1989-16.2016]
Abstract
A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and the associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for the encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. SIGNIFICANCE STATEMENT Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall, while being simultaneously compatible with experimental data on the structure, connectivity, and neurophysiology of the underlying cortical tissue. These findings are directly relevant to the ongoing paradigm shift in the WM field.
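The hypothesized mechanism can be shown in miniature: coincident activity rapidly potentiates a synapse, the potentiation decays silently over seconds rather than being maintained by persistent spiking, and a later cue reactivates the item while the trace survives. All parameter values below are our illustrative assumptions, not the article's:

```python
import math

TAU_STP = 8.0  # decay of the fast potentiation (s), illustrative

def encode(w, coincident, dw=0.5):
    """Fast Hebbian step: potentiate only on pre/post coincidence."""
    return w + dw if coincident else w

def after_delay(w, t):
    """The trace fades during an activity-silent delay of t seconds."""
    return w * math.exp(-t / TAU_STP)

def cued_recall(w, threshold=0.6):
    """Recall succeeds if the residual potentiation still clears threshold."""
    return w >= threshold

w = after_delay(encode(0.5, coincident=True), 4.0)   # encode, then 4 s silent delay
```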
10. Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016; 12:e1004954. [PMID: 27213810; PMCID: PMC4877102; DOI: 10.1371/journal.pcbi.1004954]
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.

From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions, along with the organization of the environment in which they are performed, can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitutes a compound mnemonic engram that can flexibly generate sequential, step-wise increases of activity within neural populations.
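The hetero-associative principle at the core of this model reduces to a short sketch: training on a sequence leaves asymmetric links from each element to its successor, and replay follows the strongest outgoing link. The one-step argmax dynamics below stand in for the model's meta-stable attractor transitions:

```python
import numpy as np

# Asymmetric (hetero-associative) weights learned from a training sequence,
# then greedy replay from a cued starting element.
sequence = [0, 1, 2, 3]
P = 4
W = np.zeros((P, P))
for a, b in zip(sequence, sequence[1:]):
    W[b, a] += 1.0                        # asymmetric association a -> b

def replay(start, steps):
    states = [start]
    for _ in range(steps):
        states.append(int(np.argmax(W[:, states[-1]])))
    return states
```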
11. Recanatesi S, Katkov M, Romani S, Tsodyks M. Neural Network Model of Memory Retrieval. Front Comput Neurosci 2016; 9:149. [PMID: 26732491; PMCID: PMC4681782; DOI: 10.3389/fncom.2015.00149]
Abstract
Human memory can store a large amount of information. Nevertheless, recall is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network, where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding (1) to single memory representations and (2) to intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predict the distribution of time intervals required to recall new memory items observed in experiments. The model shows that items with a larger number of neurons in their representation are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
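The similarity-driven retrieval walk can be sketched abstractly, in the spirit of the graphical model the abstract cites: each transition moves to the item most similar to the current one, excluding the item just left. The deterministic walk and random similarity matrix are our illustrative simplifications of the network dynamics:

```python
import numpy as np

# Greedy similarity-based recall walk over 16 stored items.
def recall_walk(sim, start=0, steps=50):
    prev, cur = -1, start
    recalled = [cur]
    for _ in range(steps):
        ranked = np.argsort(sim[cur])[::-1]       # most similar first
        cur, prev = next(int(j) for j in ranked if j not in (cur, prev)), cur
        recalled.append(cur)
    return recalled

rng = np.random.default_rng(1)
M = rng.random((16, 16))
similarity = (M + M.T) / 2                        # symmetric item-item similarity
recalled = recall_walk(similarity)
n_distinct = len(set(recalled))                   # the walk tends to revisit items
```

The tendency of such a walk to fall into revisitation cycles is one intuition for why recall capacity stays well below the number of stored items.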
Affiliation(s)
- Stefano Recanatesi
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Mikhail Katkov
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Sandro Romani
- Janelia Farm Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Misha Tsodyks
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Department of Neurotechnologies, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
12. Nyberg L, Eriksson J. Working Memory: Maintenance, Updating, and the Realization of Intentions. Cold Spring Harb Perspect Biol 2015; 8:a021816. [PMID: 26637287; DOI: 10.1101/cshperspect.a021816]
Abstract
"Working memory" refers to a vast set of mnemonic processes and associated brain networks, relates to basic intellectual abilities, and underlies many real-world functions. Working-memory maintenance involves frontoparietal regions and distributed representational areas, and can be based on persistent activity in reentrant loops, synchronous oscillations, or changes in synaptic strength. Manipulation of content of working memory depends on the dorsofrontal cortex, and updating is realized by a frontostriatal '"gating" function. Goals and intentions are represented as cognitive and motivational contexts in the rostrofrontal cortex. Different working-memory networks are linked via associative reinforcement-learning mechanisms into a self-organizing system. Normal capacity variation, as well as working-memory deficits, can largely be accounted for by the effectiveness and integrity of the basal ganglia and dopaminergic neurotransmission.
Affiliation(s)
- Lars Nyberg
- Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Johan Eriksson
- Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
13.
Abstract
A crucial role for working memory in temporary information processing and guidance of complex behavior has been recognized for many decades. There is emerging consensus that working-memory maintenance results from the interactions among long-term memory representations and basic processes, including attention, that are instantiated as reentrant loops between frontal and posterior cortical areas, as well as sub-cortical structures. The nature of such interactions can account for capacity limitations, lifespan changes, and restricted transfer after working-memory training. Recent data and models indicate that working memory may also be based on synaptic plasticity and that working memory can operate on non-consciously perceived information.
Affiliation(s)
- Johan Eriksson
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden
- Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Edward K Vogel
- Department of Psychology, Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA
- Anders Lansner
- Department of Computational Biology, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, 106 91 Stockholm, Sweden
- Fredrik Bergström
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden
- Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Lars Nyberg
- Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden
- Umeå Center for Functional Brain Imaging (UFBI), Umeå University, 901 87 Umeå, Sweden
- Department of Radiation Sciences, Umeå University, 901 87 Umeå, Sweden
14. Tessier S, Lambert A, Scherzer P, Jemel B, Godbout R. REM sleep and emotional face memory in typically-developing children and children with autism. Biol Psychol 2015. [DOI: 10.1016/j.biopsycho.2015.07.012]
15. Bill J, Buesing L, Habenschuss S, Nessler B, Maass W, Legenstein R. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition. PLoS One 2015; 10:e0134356. [PMID: 26284370; PMCID: PMC4540468; DOI: 10.1371/journal.pone.0134356]
Abstract
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning, and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition, in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
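The "spiking as sampling" reading of local competition can be sketched as a softmax winner-take-all. Reducing lateral inhibition to explicit divisive normalization, and a spike to one categorical draw, are our simplifying assumptions:

```python
import numpy as np

# Within a local competitive microcircuit, inhibition normalizes the excitation
# of competing cells, so the identity of the spiking cell can be read as a
# sample from a softmax posterior over the causes the cells encode.
def wta_sample(rng, log_evidence):
    p = np.exp(log_evidence - np.max(log_evidence))
    p /= p.sum()                       # divisive normalization via inhibition
    return rng.choice(len(p), p=p)     # one spike = one posterior sample

rng = np.random.default_rng(0)
log_evidence = np.array([0.0, np.log(3.0)])   # cause 1 has 3x the evidence
samples = [wta_sample(rng, log_evidence) for _ in range(10000)]
frac_one = sum(samples) / len(samples)        # should approach 3/4
```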
Affiliation(s)
- Johannes Bill
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria
- Lars Buesing
- Department of Statistics, Columbia University, New York, New York, United States of America
- Bernhard Nessler
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Wolfgang Maass
- Institute for Theoretical Computer Science, TU Graz, Graz, Austria
16
Vogginger B, Schüffny R, Lansner A, Cederström L, Partzsch J, Höppner S. Reducing the computational footprint for real-time BCPNN learning. Front Neurosci 2015; 9:2. [PMID: 25657618 PMCID: PMC4302947 DOI: 10.3389/fnins.2015.00002] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2014] [Accepted: 01/03/2015] [Indexed: 11/26/2022] Open
Abstract
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the pre-, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved first by rewriting the model, which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware.
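The core of the event-driven speedup can be sketched for a single low-pass trace (not the authors' implementation; spike times and constants are made up): the closed-form decay z(t) = z(t0)·exp(-(t - t0)/tau) lets the state be touched only at spike events, whereas explicit Euler must step at every dt.

```python
import math

tau = 20.0                    # trace time constant (ms)
dt = 1.0                      # Euler step size (ms)
spikes = [5.0, 12.0, 40.0]    # presynaptic spike times (ms), illustrative
T = 60.0                      # read-out time (ms)

def event_driven(spikes, T, tau):
    """Update the trace only at spikes, using the analytic decay."""
    z, t_last = 0.0, 0.0
    for t in spikes:
        z *= math.exp(-(t - t_last) / tau)  # decay analytically to spike time
        z += 1.0                            # instantaneous spike increment
        t_last = t
    return z * math.exp(-(T - t_last) / tau)

def euler(spikes, T, tau, dt):
    """Reference: fixed step size explicit Euler integration."""
    z, t = 0.0, 0.0
    pending = list(spikes)
    while t < T:
        if pending and pending[0] <= t + 1e-9:
            z += 1.0
            pending.pop(0)
        z += dt * (-z / tau)                # explicit Euler decay step
        t += dt
    return z

z_exact = event_driven(spikes, T, tau)      # 4 state updates in total
z_euler = euler(spikes, T, tau, dt)         # 60 state updates
```

The event-driven trace needs one update per spike plus one at read-out, independent of dt; chaining this closed form through the three filtering stages, and tabulating the exponential, is the source of the order-of-magnitude speedup the abstract describes.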
Affiliation(s)
- Bernhard Vogginger
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany
- René Schüffny
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology (KTH), Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Love Cederström
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany
- Johannes Partzsch
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany
- Sebastian Höppner
- Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Germany
17
Fiebig F, Lansner A. Memory consolidation from seconds to weeks: a three-stage neural network model with autonomous reinstatement dynamics. Front Comput Neurosci 2014; 8:64. [PMID: 25071536 PMCID: PMC4077014 DOI: 10.3389/fncom.2014.00064] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2014] [Accepted: 05/24/2014] [Indexed: 11/29/2022] Open
Abstract
Declarative long-term memories are not created in an instant. Gradual stabilization and temporally shifting dependence of acquired declarative memories in different brain regions, called systems consolidation, can be tracked in time by lesion experiments. The observation of temporally graded retrograde amnesia (RA) following hippocampal lesions points to a gradual transfer of memory from hippocampus to neocortical long-term memory. Spontaneous reactivations of hippocampal memories, as observed in place cell reactivations during slow-wave-sleep, are thought to drive neocortical reinstatements and facilitate this process. We propose a functional neural network implementation of these ideas and furthermore suggest an extended three-stage framework that includes the prefrontal cortex (PFC). It bridges the temporal chasm between working memory percepts on the scale of seconds and consolidated long-term memory on the scale of weeks or months. We show that our three-stage model can autonomously produce the necessary stochastic reactivation dynamics for successful episodic memory consolidation. The resulting learning system is shown to exhibit classical memory effects seen in experimental studies, such as retrograde and anterograde amnesia (AA) after simulated hippocampal lesioning; furthermore, the model reproduces peculiar biological findings on memory modulation, such as retrograde facilitation of memory after suppressed acquisition of new long-term memories, similar to the effects of benzodiazepines on memory.
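The three-stage logic can be caricatured in a few lines (a toy sketch, not the paper's spiking model; store names, decay rates, and replay dynamics are all hypothetical): memories enter a fast, volatile "PFC" store and an intermediate "HC" store, and stochastic HC reactivations gradually train a slow "CTX" store, so the memory survives a simulated hippocampal lesion.

```python
import random

random.seed(1)

DECAY = {"PFC": 0.5, "HC": 0.01, "CTX": 0.001}   # per-step decay, assumed
stores = {"PFC": {}, "HC": {}, "CTX": {}}

def encode(item):
    """New percept: strong trace in PFC, fast transfer to hippocampus."""
    stores["PFC"][item] = 1.0
    stores["HC"][item] = 1.0

def step(replay_rate=0.3, gain=0.1):
    for name, s in stores.items():           # passive, stage-specific decay
        for k in list(s):
            s[k] *= 1.0 - DECAY[name]
    for k, v in stores["HC"].items():        # stochastic replay -> neocortex
        if random.random() < replay_rate * v:
            stores["CTX"][k] = stores["CTX"].get(k, 0.0) + gain

encode("episode-A")
for _ in range(200):
    step()

stores["HC"].clear()                          # simulated hippocampal lesion
recall = stores["CTX"].get("episode-A", 0.0)  # consolidated trace remains
```

Temporally graded RA falls out of this picture: recently encoded items have had few replay opportunities and are lost with the lesion, while older items have accumulated neocortical strength.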
Affiliation(s)
- Florian Fiebig
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Institute for Adaptive and Neural Computation, School of Informatics, Edinburgh University, Edinburgh, Scotland
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden
- Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
18
Tully PJ, Hennig MH, Lansner A. Synaptic and nonsynaptic plasticity approximating probabilistic inference. Front Synaptic Neurosci 2014; 6:8. [PMID: 24782758 PMCID: PMC3986567 DOI: 10.3389/fnsyn.2014.00008] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2013] [Accepted: 03/20/2014] [Indexed: 12/28/2022] Open
Abstract
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects, taken in concert, are only partially understood.
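The cascade of temporally interacting traces can be sketched for one synapse (an illustrative sketch, not the authors' code): fast Z traces filter the spike trains, slower E (eligibility) and P (probability) traces filter the Z traces and their coincident product, and the weight follows the Hebbian-Bayesian log-odds form w = log(p_ij / (p_i p_j)) with intrinsic bias log(p_j). Time constants, input rates, and the floor constant are assumptions.

```python
import math

dt = 1.0
tau_z, tau_e, tau_p = 10.0, 100.0, 1000.0   # fast, eligibility, probability
eps = 1e-4                                   # floor keeping logs finite

def lowpass(y, target, tau):
    """One explicit Euler step of a first-order low-pass filter."""
    return y + dt * (target - y) / tau

zi = zj = 0.0
ei = ej = eij = 0.0
pi = pj = pij = eps

for t in range(2000):
    # perfectly correlated input: both units spike together every 50 steps
    s = 1.0 if t % 50 == 0 else 0.0
    zi = lowpass(zi, s / dt, tau_z)
    zj = lowpass(zj, s / dt, tau_z)
    ei = lowpass(ei, zi, tau_e)              # unit eligibility traces
    ej = lowpass(ej, zj, tau_e)
    eij = lowpass(eij, zi * zj, tau_e)       # coincidence eligibility trace
    pi = lowpass(pi, ei, tau_p)              # slow probability estimates
    pj = lowpass(pj, ej, tau_p)
    pij = lowpass(pij, eij, tau_p)

w = math.log((pij + eps**2) / ((pi + eps) * (pj + eps)))  # synaptic weight
bias = math.log(pj + eps)                                 # intrinsic current
```

For correlated units p_ij exceeds p_i·p_j, so the weight is positive; uncorrelated input would drive it toward zero, and the bias tracks the unit's own activation probability, which is the nonsynaptic, intrinsic side of the rule.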
Affiliation(s)
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Matthias H Hennig
- School of Informatics, Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology (KTH), Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden