1. Lohnas LJ, Howard MW. The influence of emotion on temporal context models. Cogn Emot 2024:1-29. PMID: 39007902. DOI: 10.1080/02699931.2024.2371075.
Abstract
Temporal context models (TCMs) have been influential in understanding episodic memory and its neural underpinnings. Recently, TCMs have been extended to explain emotional memory effects, one of the most clinically important findings in the field of memory research. This review covers recent advances in hypotheses for the neural representation of spatiotemporal context through the lens of TCMs, including their ability to explain the influence of emotion on episodic and temporal memory. In recent years, the limitations of the simplifying assumptions of "classical" TCMs - such as exponential trace decay and the mechanism by which temporal context is recovered - have become increasingly clear. The review also outlines how recent advances could be incorporated into a future TCM, beyond classical assumptions, to integrate emotional modulation.
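The core of any TCM is the gradual drift of a context vector as new items arrive. A minimal sketch of the classical update, with the norm-preserving rho of Howard-and-Kahana-style models (the value of beta here is illustrative):

```python
import numpy as np

def update_context(t_prev, t_in, beta=0.4):
    """One step of temporal context drift: t_i = rho*t_{i-1} + beta*t_in,
    with rho chosen so that the new context stays unit length."""
    t_in = t_in / np.linalg.norm(t_in)
    dot = float(t_prev @ t_in)
    # rho solves ||rho*t_prev + beta*t_in|| = 1 for unit-norm t_prev
    rho = np.sqrt(1 + beta**2 * (dot**2 - 1)) - beta * dot
    return rho * t_prev + beta * t_in

# Drift across a list of orthogonal item representations:
t = np.eye(8)[0]
contexts = []
for i in range(1, 8):
    t = update_context(t, np.eye(8)[i])
    contexts.append(t.copy())
```

Successive contexts overlap strongly and the overlap decays with lag, which is what gives TCMs their recency and contiguity effects.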
Affiliation(s)
- Lynn J Lohnas: Department of Psychology, Syracuse University, Syracuse, NY, USA
- Marc W Howard: Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
2. Rolls ET, Treves A. A theory of hippocampal function: New developments. Prog Neurobiol 2024;238:102636. PMID: 38834132. DOI: 10.1016/j.pneurobio.2024.102636.
Abstract
We develop further here the only quantitative theory of the storage of information in the hippocampal episodic memory system and its recall back to the neocortex. The theory is upgraded to account for a revolution in understanding of spatial representations in the primate, including human, hippocampus, that go beyond the place where the individual is located, to the location being viewed in a scene. This is fundamental to much primate episodic memory and navigation: functions supported in humans by pathways that build 'where' spatial view representations by feature combinations in a ventromedial visual cortical stream, separate from those for 'what' object and face information to the inferior temporal visual cortex, and for reward information from the orbitofrontal cortex. Key new computational developments include the capacity of the CA3 attractor network for storing whole charts of space; how the correlations inherent in self-organizing continuous spatial representations impact the storage capacity; how the CA3 network can combine continuous spatial and discrete object and reward representations; the roles of the rewards that reach the hippocampus in the later consolidation into long-term memory in part via cholinergic pathways from the orbitofrontal cortex; and new ways of analysing neocortical information storage using Potts networks.
Affiliation(s)
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
3. Sutton NM, Gutiérrez-Guzmán BE, Dannenberg H, Ascoli GA. A Continuous Attractor Model with Realistic Neural and Synaptic Properties Quantitatively Reproduces Grid Cell Physiology. Int J Mol Sci 2024;25:6059. PMID: 38892248. PMCID: PMC11173171. DOI: 10.3390/ijms25116059.
Abstract
Computational simulations with data-driven physiological detail can foster a deeper understanding of the neural mechanisms involved in cognition. Here, we utilize the wealth of cellular properties from Hippocampome.org to study neural mechanisms of spatial coding with a spiking continuous attractor network model of medial entorhinal cortex circuit activity. The primary goal is to investigate whether adding such realistic constraints can produce firing patterns similar to those measured in real neurons. Biological characteristics included in the work are excitability, connectivity, and synaptic signaling of neuron types defined primarily by their axonal and dendritic morphologies. We investigate the spiking dynamics in specific neuron types and the synaptic activities between groups of neurons. Modeling the rodent hippocampal formation keeps the simulations to a computationally reasonable scale while also anchoring the parameters and results to experimental measurements. Our model generates grid cell activity that closely matches the spacing, size, and firing rates of grid fields recorded in live behaving animals from both published datasets and new experiments performed for this study. Our simulations also recreate different scales of those properties, e.g., small and large, as found along the dorsoventral axis of the medial entorhinal cortex. Computational exploration of neuronal and synaptic model parameters reveals that a broad range of neural properties produce grid fields in the simulation. These results demonstrate that the continuous attractor network model of grid cells is compatible with a spiking neural network implementation sourcing data-driven biophysical and anatomical parameters from Hippocampome.org. The software (version 1.0) is released as open source to enable broad community reuse and encourage novel applications.
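For readers unfamiliar with continuous attractor dynamics, the following toy illustrates the core mechanism in a drastically simplified, rate-based form: a ring network with local excitation and broader inhibition settles into periodic activity bumps, the ring analogue of a grid cell's evenly spaced firing fields. All constants here are illustrative; the paper's actual model is a spiking 2D network with parameters sourced from Hippocampome.org.

```python
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# Shortest angular distance between every pair of neurons on the ring:
d = np.abs(np.angle(np.exp(1j * (theta[:, None] - theta[None, :]))))
# Narrow excitation minus broader inhibition ("Mexican hat") recurrence:
W = 40.0 * np.exp(-d**2 / 0.3) - 32.0 * np.exp(-d**2 / 1.5)

rng = np.random.default_rng(0)
r = 0.01 * rng.random(N)                    # small random initial rates
for _ in range(500):                        # relax toward the attractor
    u = W @ r / N + 0.1                     # recurrent input + weak drive
    r += 0.1 * (-r + np.clip(u, 0.0, 1.0))  # saturating rate dynamics

active = r > 0.5 * r.max()
# A localized, periodic pattern of activity bumps survives; the position
# of the pattern is what such a network uses to encode a spatial variable.
```

The uniform state is unstable here because inhibition dominates globally while excitation wins locally, so small fluctuations grow into discrete bumps.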
Affiliation(s)
- Nate M. Sutton: Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Blanca E. Gutiérrez-Guzmán: Bioengineering Department, George Mason University, Fairfax, VA 22030, USA
- Holger Dannenberg: Bioengineering Department, George Mason University, Fairfax, VA 22030, USA; Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
- Giorgio A. Ascoli: Bioengineering Department, George Mason University, Fairfax, VA 22030, USA; Interdisciplinary Program in Neuroscience, George Mason University, Fairfax, VA 22030, USA
4. Delamare G, Zaki Y, Cai DJ, Clopath C. Drift of neural ensembles driven by slow fluctuations of intrinsic excitability. eLife 2024;12:RP88053. PMID: 38712831. PMCID: PMC11076042. DOI: 10.7554/elife.88053.
Abstract
Representational drift refers to the dynamic nature of neural representations in the brain despite seemingly stable behavior. Although drift has been observed in many different brain regions, the mechanisms underlying it are not known. Since intrinsic neural excitability is suggested to play a key role in regulating memory allocation, fluctuations of excitability could bias the reactivation of previously stored memory ensembles and therefore act as a driver of drift. Here, we propose a rate-based plastic recurrent neural network with slow fluctuations of intrinsic excitability. We first show that subsequent reactivations of a neural ensemble can lead to drift of this ensemble. The model predicts that drift is induced by co-activation of previously active neurons along with neurons of high excitability, which leads to remodeling of the recurrent weights. Consistent with previous experimental work, the drifting ensemble is informative about its temporal history. Crucially, we show that the gradual nature of the drift is necessary for decoding temporal information from the activity of the ensemble. Finally, we show that the memory is preserved and can be decoded by an output neuron having plastic synapses with the main region.
Affiliation(s)
- Geoffroy Delamare: Department of Bioengineering, Imperial College London, London, United Kingdom
- Yosif Zaki: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
- Denise J Cai: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, United States
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, United Kingdom
5. Sutton N, Gutiérrez-Guzmán B, Dannenberg H, Ascoli GA. A Continuous Attractor Model with Realistic Neural and Synaptic Properties Quantitatively Reproduces Grid Cell Physiology. bioRxiv [Preprint] 2024:2024.04.29.591748. PMID: 38746202. PMCID: PMC11092518. DOI: 10.1101/2024.04.29.591748.
Abstract
Computational simulations with data-driven physiological detail can foster a deeper understanding of the neural mechanisms involved in cognition. Here, we utilize the wealth of cellular properties from Hippocampome.org to study neural mechanisms of spatial coding with a spiking continuous attractor network model of medial entorhinal cortex circuit activity. The primary goal was to investigate whether adding such realistic constraints could produce firing patterns similar to those measured in real neurons. Biological characteristics included in the work are excitability, connectivity, and synaptic signaling of neuron types defined primarily by their axonal and dendritic morphologies. We investigate the spiking dynamics in specific neuron types and the synaptic activities between groups of neurons. Modeling the rodent hippocampal formation keeps the simulations to a computationally reasonable scale while also anchoring the parameters and results to experimental measurements. Our model generates grid cell activity that closely matches the spacing, size, and firing rates of grid fields recorded in live behaving animals from both published datasets and new experiments performed for this study. Our simulations also recreate different scales of those properties, e.g., small and large, as found along the dorsoventral axis of the medial entorhinal cortex. Computational exploration of neuronal and synaptic model parameters reveals that a broad range of neural properties produce grid fields in the simulation. These results demonstrate that the continuous attractor network model of grid cells is compatible with a spiking neural network implementation sourcing data-driven biophysical and anatomical parameters from Hippocampome.org. The software is released as open source to enable broad community reuse and encourage novel applications.
Affiliation(s)
- Nate Sutton: Bioengineering Department, George Mason University
- Holger Dannenberg: Bioengineering Department, George Mason University; Interdisciplinary Program in Neuroscience, George Mason University
- Giorgio A. Ascoli: Bioengineering Department, George Mason University; Interdisciplinary Program in Neuroscience, George Mason University
6. Boboeva V, Pezzotta A, Clopath C, Akrami A. Unifying network model links recency and central tendency biases in working memory. eLife 2024;12:RP86725. PMID: 38656279. DOI: 10.7554/elife.86725.
Abstract
The central tendency bias, or contraction bias, is a phenomenon where the judgment of the magnitude of items held in working memory appears to be biased toward the average of past observations. It is assumed to be an optimal strategy by the brain and commonly thought of as an expression of the brain's ability to learn the statistical structure of sensory input. On the other hand, recency biases such as serial dependence are also commonly observed and are thought to reflect the content of working memory. Recent results from an auditory delayed comparison task in rats suggest that both biases may be more related than previously thought: when the posterior parietal cortex (PPC) was silenced, both short-term and contraction biases were reduced. By proposing a model of the circuit that may be involved in generating the behavior, we show that a volatile working memory content susceptible to shifting to the past sensory experience - producing short-term sensory history biases - naturally leads to contraction bias. The errors, occurring at the level of individual trials, are sampled from the full distribution of the stimuli and are not due to a gradual shift of the memory toward the sensory distribution's mean. Our results are consistent with a broad set of behavioral findings and provide predictions of performance across different stimulus distributions and timings, delay intervals, as well as neuronal dynamics in putative working memory areas. Finally, we validate our model by performing a set of human psychophysics experiments of an auditory parametric working memory task.
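The paper's central claim, that trial-history intrusions alone can produce contraction bias, can be illustrated with a toy simulation (the swap probability and the uniform stimulus range are hypothetical, not the paper's fitted values): on a fraction of trials the memory of the first stimulus is replaced by the previous trial's stimulus, and averaging over trials then makes low stimuli appear remembered too high and high stimuli too low, with no explicit prior anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, p_swap = 20000, 0.3
s1 = rng.uniform(0.0, 1.0, n_trials)       # first stimulus of each trial
remembered = s1.copy()
swap = rng.random(n_trials) < p_swap
# Intrusion: on swap trials, working memory holds the *previous* trial's s1.
remembered[1:][swap[1:]] = s1[:-1][swap[1:]]

# Contraction toward the distribution mean, measured at the two extremes:
low = remembered[s1 < 0.2].mean() - s1[s1 < 0.2].mean()    # biased upward
high = remembered[s1 > 0.8].mean() - s1[s1 > 0.8].mean()   # biased downward
```

Note that individual-trial errors here are full intrusions sampled from the stimulus distribution, not a gradual shift of each memory toward the mean, matching the abstract's characterization.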
Affiliation(s)
- Vezha Boboeva: Sainsbury Wellcome Centre, University College London, London, United Kingdom; Department of Bioengineering, Imperial College London, London, United Kingdom
- Alberto Pezzotta: Gatsby Computational Neuroscience Unit, University College London, London, United Kingdom; The Francis Crick Institute, London, United Kingdom
- Claudia Clopath: Sainsbury Wellcome Centre, University College London, London, United Kingdom; Department of Bioengineering, Imperial College London, London, United Kingdom
- Athena Akrami: Sainsbury Wellcome Centre, University College London, London, United Kingdom
7. McNamee DC. The generative neural microdynamics of cognitive processing. Curr Opin Neurobiol 2024;85:102855. PMID: 38428170. DOI: 10.1016/j.conb.2024.102855.
Abstract
The entorhinal cortex and hippocampus form a recurrent network that informs many cognitive processes, including memory, planning, navigation, and imagination. Neural recordings from these regions reveal spatially organized population codes corresponding to external environments and abstract spaces. Aligning the former cognitive functionalities with the latter neural phenomena is a central challenge in understanding the entorhinal-hippocampal circuit (EHC). Disparate experiments demonstrate a surprising level of complexity and apparent disorder in the intricate spatiotemporal dynamics of sequential non-local hippocampal reactivations, which occur particularly, though not exclusively, during immobile pauses and rest. We review these phenomena with a particular focus on their apparent lack of physical simulative realism. These observations are then integrated within a theoretical framework and proposed neural circuit mechanisms that normatively characterize this neural complexity by conceiving different regimes of hippocampal microdynamics as neuromarkers of diverse cognitive computations.
8. Mehrotra D, Dubé L. Accounting for multiscale processing in adaptive real-world decision-making via the hippocampus. Front Neurosci 2023;17:1200842. PMID: 37732307. PMCID: PMC10508350. DOI: 10.3389/fnins.2023.1200842.
Abstract
For adaptive real-time behavior in real-world contexts, the brain needs to let past information over multiple timescales influence current processing, so that a person can make choices that produce the best outcomes in everyday life. The neuroeconomics literature on value-based decision-making has formalized such choice through reinforcement learning models for two extreme strategies. These strategies are model-free (MF), which is an automatic, stimulus-response type of action, and model-based (MB), which bases choice on cognitive representations of the world and causal inference on environment-behavior structure. Work on the neural substrates of value-based decision-making has emphasized the striatum and prefrontal regions, especially with regard to "here and now" decision-making. Yet, such a dichotomy does not embrace all the dynamic complexity involved. In addition, despite robust research on the role of the hippocampus in memory and spatial learning, its contribution to value-based decision-making is just starting to be explored. This paper aims to better appreciate the role of the hippocampus in decision-making and advance the successor representation (SR) as a candidate mechanism for encoding state representations in the hippocampus, separate from reward representations. To this end, we review research that relates hippocampal sequences to SR models, showing that the implementation of such sequences in reinforcement learning agents improves their performance. This also enables the agents to perform multiscale temporal processing in a biologically plausible manner. Altogether, we articulate a framework to advance current striatal- and prefrontal-focused accounts of decision-making to better account for multiscale mechanisms underlying various real-world time-related concepts, such as the self that accumulates over a person's life course.
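The successor representation the abstract advocates has a compact closed form, M = sum_t gamma^t T^t = (I - gamma T)^(-1), which a short sketch makes concrete (the four-state ring environment and the reward vector are illustrative, not the paper's task):

```python
import numpy as np

n, gamma = 4, 0.9
T = np.zeros((n, n))
for s in range(n):                 # symmetric random walk on a ring
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

# Successor representation: expected discounted future state occupancies.
M = np.linalg.inv(np.eye(n) - gamma * T)

# With state and reward factored apart, values follow by one product:
R = np.array([0.0, 0.0, 1.0, 0.0])
V = M @ R                          # expected discounted reward per state
```

This factorization is why the SR is attractive as a hippocampal code: the state-occupancy predictions (M) can be learned and reused independently of where reward (R) currently is.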
Affiliation(s)
- Dhruv Mehrotra: Integrated Program in Neuroscience, McGill University, Montréal, QC, Canada; Montréal Neurological Institute, McGill University, Montréal, QC, Canada
- Laurette Dubé: Desautels Faculty of Management, McGill University, Montréal, QC, Canada; McGill Center for the Convergence of Health and Economics, McGill University, Montréal, QC, Canada
9. Li PY, Roxin A. Rapid memory encoding in a recurrent network model with behavioral time scale synaptic plasticity. PLoS Comput Biol 2023;19:e1011139. PMID: 37624848. PMCID: PMC10484462. DOI: 10.1371/journal.pcbi.1011139.
Abstract
Episodic memories are formed after a single exposure to novel stimuli. The plasticity mechanisms underlying such fast learning still remain largely unknown. Recently, it was shown that cells in area CA1 of the hippocampus of mice could form or shift their place fields after a single traversal of a virtual linear track. In-vivo intracellular recordings in CA1 cells revealed that previously silent inputs from CA3 could be switched on when they occurred within a few seconds of a dendritic plateau potential (PP) in the post-synaptic cell, a phenomenon dubbed Behavioral Time-scale Plasticity (BTSP). A recently developed computational framework for BTSP, in which the dynamics of synaptic traces related to the pre-synaptic activity and the post-synaptic PP are explicitly modelled, can account for these experimental findings. Here we show that this model of plasticity can be further simplified to a 1D map which describes changes to the synaptic weights after a single trial. We use a temporally symmetric version of this map to study the storage of a large number of spatial memories in a recurrent network, such as CA3. Specifically, the simplicity of the map allows us to calculate the correlation of the synaptic weight matrix with any given past environment analytically. We show that the calculated memory trace can be used to predict the emergence and stability of bump attractors in a high-dimensional neural network model endowed with BTSP.
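A caricature of such a single-trial 1D map (the kernel shape and constants are illustrative; the full BTSP model is bidirectional, whereas this sketch shows only the potentiation side): the after-trial weight depends only on the current weight and the time gap between presynaptic activity and the plateau potential, through a temporally symmetric, seconds-wide kernel.

```python
import numpy as np

def btsp_map(w, dt, w_max=1.0, eta=0.8, tau=2.0):
    """One-trial weight update: potentiation toward w_max, scaled by the
    overlap k(dt) of pre- and post-synaptic traces (symmetric in dt)."""
    k = np.exp(-abs(dt) / tau)        # eligibility overlap, tau ~ seconds
    return w + eta * k * (w_max - w)

w = 0.1
w_after_near = btsp_map(w, dt=0.5)    # input nearly coincident with plateau
w_after_far = btsp_map(w, dt=8.0)     # input many seconds from the plateau
# Inputs within a few seconds of the plateau are strengthened strongly,
# far-away inputs barely at all - the seconds-long window that makes this
# "behavioral time-scale" rather than millisecond STDP.
```

Because the map acts on w directly, repeated applications across environments can be composed analytically, which is what lets the paper compute the stored memory trace in closed form.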
Affiliation(s)
- Pan Ye Li: Centre de Recerca Matemàtica, Barcelona, Spain
- Alex Roxin: Centre de Recerca Matemàtica, Barcelona, Spain
10. Andreetta S, Spalla D, Treves A. Narratives need not end well; nor say it all. Behav Brain Sci 2023;46:e83. PMID: 37154132. DOI: 10.1017/s0140525x22002655.
Abstract
To fully embrace situations of radical uncertainty, we argue that the theory should abandon the requirements that narratives, in general, must lead to affective evaluation, and that they have to explain (and potentially simulate) all or even the bulk of the current decisional context. Evidence from studies of incidental learning shows that narrative schemata can bias decisions while remaining fragmentary, insufficient for prediction, and devoid of utility values.
Affiliation(s)
- Sara Andreetta: Cognitive Neuroscience, SISSA, 34136 Trieste, Italy
- Davide Spalla: Cognitive Neuroscience, SISSA, 34136 Trieste, Italy
- Alessandro Treves: Cognitive Neuroscience, SISSA, 34136 Trieste, Italy
11. Rolls ET. Hippocampal spatial view cells for memory and navigation, and their underlying connectivity in humans. Hippocampus 2023;33:533-572. PMID: 36070199. PMCID: PMC10946493. DOI: 10.1002/hipo.23467.
Abstract
Hippocampal and parahippocampal gyrus spatial view neurons in primates respond to the spatial location being looked at. The representation is allocentric, in that the responses are to locations "out there" in the world, and are relatively invariant with respect to retinal position, eye position, head direction, and the place where the individual is located. The underlying connectivity in humans is from ventromedial visual cortical regions to the parahippocampal scene area, leading to the theory that spatial view cells are formed by combinations of overlapping feature inputs self-organized based on their closeness in space. Thus, although spatial view cells represent "where" for episodic memory and navigation, they are formed by ventral visual stream feature inputs in the parahippocampal gyrus in what is the parahippocampal scene area. A second "where" driver of spatial view cells is parietal input, which, it is proposed, provides the idiothetic update for spatial view cells, used for memory recall and navigation when the spatial view details are obscured. Inferior temporal object "what" inputs and orbitofrontal cortex reward inputs connect to the human hippocampal system, and in macaques can be associated in the hippocampus with spatial view cell "where" representations to implement episodic memory. Hippocampal spatial view cells also provide a basis for navigation to a series of viewed landmarks, with the orbitofrontal cortex reward inputs to the hippocampus providing the goals for navigation, which can then be implemented by hippocampal connectivity in humans to parietal cortex regions involved in visuomotor actions in space. The presence of foveate vision and the highly developed temporal lobe for object and scene processing in primates including humans provide a basis for hippocampal spatial view cells to be key to understanding episodic memory in the primate and human hippocampus, and the roles of this system in primate including human navigation.
Affiliation(s)
- Edmund T. Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry, UK
12. Li C, Zhang X, Chen P, Zhou K, Yu J, Wu G, Xiang D, Jiang H, Wang M, Liu Q. Short-term synaptic plasticity in emerging devices for neuromorphic computing. iScience 2023;26:106315. PMID: 36950108. PMCID: PMC10025973. DOI: 10.1016/j.isci.2023.106315.
Abstract
Neuromorphic computing is a promising computing paradigm toward building next-generation artificial intelligence machines, in which diverse types of synaptic plasticity play an active role in information processing. Compared to long-term plasticity (LTP), which forms the foundation of learning and memory, short-term plasticity (STP) is essential for critical computational functions. So far, the practical applications of LTP have been widely investigated, whereas the implementation of STP in hardware is still elusive. Here, we review the development of STP by bridging the physics in emerging devices and biological behaviors. We explore the computational functions of various STP in biology and review their recent progress. Finally, we discuss the main challenges of introducing STP into synaptic devices and offer potential approaches to utilize STP to enrich systems' capabilities. This review is expected to provide prospective ideas for implementing STP in emerging devices and may promote the construction of high-level neuromorphic machines.
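As a biological reference point for device designers, the standard Tsodyks-Markram phenomenological model captures both facilitation and depression with two state variables per synapse (the parameter values below are illustrative, not tied to any device in the review): u tracks release probability (facilitation) and x tracks available resources (depression), and each spike's efficacy is their product.

```python
import numpy as np

def stp_train(spike_times, U=0.05, tau_f=1.0, tau_d=0.1):
    """Per-spike synaptic efficacies for a Tsodyks-Markram-style synapse."""
    u, x, t_last, eff = U, 1.0, None, []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            u = U + (u - U) * np.exp(-dt / tau_f)  # facilitation decays to U
            x = 1 + (x - 1) * np.exp(-dt / tau_d)  # resources recover to 1
        u = u + U * (1 - u)       # spike increments release probability
        eff.append(u * x)         # efficacy of this spike
        x = x * (1 - u)           # spike depletes resources
        t_last = t
    return eff

eff = stp_train([0.0, 0.02, 0.04, 0.06])                       # facilitating
eff_d = stp_train([0.0, 0.02, 0.04], U=0.5, tau_f=0.05, tau_d=0.5)  # depressing
```

Whether a synapse (or device) facilitates or depresses under a 50 Hz train comes down to the balance of U, tau_f, and tau_d, which is exactly the kind of behavior the review asks emerging devices to emulate.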
Affiliation(s)
- Chao Li: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Key Laboratory of Microelectronics Device & Integrated Technology, Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xumeng Zhang: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Pei Chen: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Keji Zhou: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Jie Yu: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China
- Guangjian Wu: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Du Xiang: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Hao Jiang: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Ming Wang: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
- Qi Liu: State Key Laboratory of Integrated Chip and System, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China; Zhangjiang Fudan International Innovation Center, Fudan University, Shanghai 200433, China; Shanghai Qi Zhi Institute, Shanghai 200232, China
13. Andreetta S, Soldatkina O, Boboeva V, Treves A. In poetry, if meter has to help memory, it takes its time. Open Research Europe 2023;1:59. PMID: 37645121. PMCID: PMC10445917. DOI: 10.12688/openreseurope.13663.2.
Abstract
To test the idea that poetic meter emerged as a cognitive schema to aid verbal memory, we focused on classical Italian poetry and on three components of meter: rhyme, accent, and verse length. Meaningless poems were generated by introducing prosody-invariant non-words into passages from Dante's Divina Commedia and Ariosto's Orlando Furioso. We then ablated rhymes, modified accent patterns, or altered the number of syllables. The resulting versions of each non-poem were presented to Italian native speakers, who were then asked to retrieve three target non-words. Surprisingly, we found that the integrity of Dante's meter has no significant effect on memory performance. With Ariosto, instead, removing each component downgrades memory proportionally to its contribution to perceived metric plausibility. Counterintuitively, the fully metric versions required longer reaction times, implying that activating metric schemata involves a cognitive cost. Within schema theories, this finding provides evidence for high-level interactions between procedural and episodic memory.
Affiliation(s)
- Vezha Boboeva: Cognitive Neuroscience, SISSA, Trieste, 34136, Italy; Bioengineering, Imperial College London, London, SW7 2AZ, UK
14. Ryom KI, Stendardi D, Ciaramelli E, Treves A. Computational constraints on the associative recall of spatial scenes. Hippocampus 2023;33:635-645. PMID: 36762712. DOI: 10.1002/hipo.23511.
Abstract
We consider a model of associative storage and retrieval of compositional memories in an extended cortical network. Our model network is composed of Potts units, which represent patches of cortex, interacting through long-range connections. The critical assumption is that a memory, for example of a spatial view, is composed of a limited number of items, each of which has a pre-established representation: storing a new memory only involves acquiring the connections, if novel, among the participating items. The model is shown to have a much lower storage capacity than when it stores simple unitary representations. It is also shown that an input from the hippocampus facilitates associative retrieval. When it is absent, it is advantageous to cue rare rather than frequent items. The implications of these results for emerging trends in empirical research are discussed.
Affiliation(s)
- Debora Stendardi: Dipartimento di Psicologia Renzo Canestrari, Università di Bologna, Bologna, Italy
- Elisa Ciaramelli: Dipartimento di Psicologia Renzo Canestrari, Università di Bologna, Bologna, Italy
15. Gao Y. A computational model of learning flexible navigation in a maze by layout-conforming replay of place cells. Front Comput Neurosci 2023;17:1053097. PMID: 36846726. PMCID: PMC9947252. DOI: 10.3389/fncom.2023.1053097.
Abstract
Recent experimental observations have shown that the reactivation of hippocampal place cells (PC) during sleep or wakeful immobility depicts trajectories that can go around barriers and can flexibly adapt to a changing maze layout. However, existing computational models of replay fall short of generating such layout-conforming replay, restricting their usage to simple environments, like linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strength during exploration. Then we use a continuous attractor network (CAN) with feedback inhibition to model the interaction among place cells and hippocampal interneurons. The activity bump of place cells drifts along paths in the maze, which models layout-conforming replay. During replay in sleep, the synaptic strengths from place cells to striatal medium spiny neurons (MSN) are learned by a novel dopamine-modulated three-factor rule to store place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal's location for path planning, and the trajectory leading to a maximal MSN activity is followed by the animal. We have implemented our model into a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments have demonstrated that its superior flexibility during navigation in a maze is due to a continuous re-learning of inter-PC and PC-MSN synaptic strength.
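The dopamine-modulated three-factor rule for PC-to-MSN synapses can be sketched in a generic form (the paper's exact rule differs; the names, sizes, and constants here are illustrative): coincident pre/post activity leaves an eligibility trace, and a later reward-triggered dopamine signal converts the trace into an actual weight change.

```python
import numpy as np

def three_factor_step(w, pre, post, elig, dopamine,
                      eta=0.1, tau_e=1.0, dt=0.1):
    """One step: Hebbian eligibility trace, gated into weights by dopamine."""
    elig = elig * np.exp(-dt / tau_e) + np.outer(post, pre)  # decaying trace
    w = w + eta * dopamine * elig                            # reward-gated update
    return w, elig

n_pc, n_msn = 5, 3
w = np.zeros((n_msn, n_pc))       # PC -> MSN weights
elig = np.zeros_like(w)
pre = np.array([0, 1, 0, 0, 0.0])   # one active place cell
post = np.array([1, 0, 0.0])        # one active MSN

w, elig = three_factor_step(w, pre, post, elig, dopamine=0.0)  # no reward yet
w_no_reward = w.copy()
w, elig = three_factor_step(w, pre, post, elig, dopamine=1.0)  # reward arrives
# Only the synapse with coincident pre/post activity is strengthened, and
# only once dopamine arrives - storing a place-reward association.
```

The separation of the trace from the update is the point: activity during replay marks candidate synapses, and reward decides, seconds later, which markings become memory.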
Affiliation(s)
- Yuanxiang Gao: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China; CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China
16. Larner A. Transient global amnesia: model, mechanism, hypothesis. Cortex 2022;149:137-147. DOI: 10.1016/j.cortex.2022.01.011.