1. Kern S, Nagel J, Gerchen MF, Gürsoy Ç, Meyer-Lindenberg A, Kirsch P, Dolan RJ, Gais S, Feld GB. Reactivation strength during cued recall is modulated by graph distance within cognitive maps. eLife 2024; 12:RP93357. PMID: 38810249; PMCID: PMC11136493; DOI: 10.7554/elife.93357.
Abstract
Declarative memory retrieval is thought to involve reinstatement of neuronal activity patterns elicited and encoded during a prior learning episode. Furthermore, it is suggested that two mechanisms operate during reinstatement, dependent on task demands: individual memory items can be reactivated simultaneously as a clustered occurrence or, alternatively, replayed sequentially as temporally separate instances. In the current study, participants learned associations between images that were embedded in a directed graph network and retained this information over a brief 8 min consolidation period. During a subsequent cued recall session, participants retrieved the learned information while undergoing magnetoencephalographic recording. Using a trained stimulus decoder, we found evidence for clustered reactivation of learned material. Reactivation strength of individual items during clustered reactivation decreased as a function of increasing graph distance, an ordering present solely for successful retrieval but not for retrieval failure. In line with previous research, we found evidence that sequential replay was dependent on retrieval performance and was most evident in low performers. The results provide evidence for distinct performance-dependent retrieval mechanisms, with graded clustered reactivation emerging as a plausible mechanism to search within abstract cognitive maps.
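The graded-reactivation result can be illustrated with a toy sketch (not the authors' MEG pipeline): items sit on a directed graph, graph distance from the cued item is computed by breadth-first search, and decoder evidence for each item is checked for a monotonic fall-off with distance. The graph, the decoder probabilities, and all names here are made up for illustration.

```python
from collections import deque

def graph_distances(edges, source):
    """BFS distances from `source` in a directed graph given as (u, v) edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

# Toy directed graph of learned image associations (A -> B means A cues B).
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
dist = graph_distances(edges, "A")

# Hypothetical decoder probabilities for each item on an A-cued trial.
decoder_prob = {"A": 0.55, "B": 0.40, "C": 0.28, "D": 0.19, "E": 0.12}

# Graded clustered reactivation: evidence falls off with graph distance.
ordered = sorted(dist, key=dist.get)
strengths = [decoder_prob[item] for item in ordered]
assert strengths == sorted(strengths, reverse=True)
```

In the study, this ordering held only on successfully retrieved trials, which is what distinguishes graded clustered reactivation from uniform co-reactivation.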
Affiliation(s)
- Simon Kern
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Juliane Nagel
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Martin F Gerchen
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Çağatay Gürsoy
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Andreas Meyer-Lindenberg
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Peter Kirsch
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
- Bernstein Center for Computational Neuroscience Heidelberg/Mannheim, Mannheim, Germany
- Raymond J Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Steffen Gais
- Institute of Medical Psychology and Behavioral Neurobiology, Eberhard-Karls-University Tübingen, Tübingen, Germany
- Gordon B Feld
- Clinical Psychology, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Addiction Behavior and Addiction Medicine, Central Institute of Mental Health, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
- Department of Psychology, Ruprecht Karl University of Heidelberg, Heidelberg, Germany
2. Zhou D, Bornstein AM. Expanding horizons in reinforcement learning for curious exploration and creative planning. Behav Brain Sci 2024; 47:e118. PMID: 38770877; DOI: 10.1017/s0140525x23003394.
Abstract
Curiosity and creativity are expressions of the trade-off between leveraging the familiar and seeking out novelty. Through the computational lens of reinforcement learning, we describe how formulating the value of information seeking and generation via their complementary effects on planning horizons formally captures a range of solutions to striking this balance.
Affiliation(s)
- Dale Zhou
- Neurobiology and Behavior, 519 Biological Sciences Quad, University of California, Irvine, CA, USA (dalezhou.com)
- Center for the Neurobiology of Learning and Memory, Qureshey Research Laboratory, University of California, Irvine, CA, USA
- Aaron M Bornstein
- Center for the Neurobiology of Learning and Memory, Qureshey Research Laboratory, University of California, Irvine, CA, USA (aaron.bornstein.org)
- Department of Cognitive Sciences, 2318 Social & Behavioral Sciences Gateway, University of California, Irvine, CA, USA
3. Alejandro RJ, Holroyd CB. Hierarchical control over foraging behavior by anterior cingulate cortex. Neurosci Biobehav Rev 2024; 160:105623. PMID: 38490499; DOI: 10.1016/j.neubiorev.2024.105623.
Abstract
Foraging is a natural behavior that involves making sequential decisions to maximize rewards while minimizing the costs incurred when doing so. The prevalence of foraging across species suggests that a common brain computation underlies its implementation. Although the anterior cingulate cortex (ACC) is believed to contribute to foraging behavior, its specific role has been contentious, with predominant theories arguing that it encodes either environmental value or choice difficulty. Additionally, recent attempts to characterize foraging have taken place within the reinforcement learning framework, with increasingly complex models scaling with task complexity. Here we review reinforcement learning foraging models, highlighting the hierarchical structure of many foraging problems. We extend this literature by proposing that the ACC guides foraging according to principles of model-based hierarchical reinforcement learning. This idea holds that ACC function is organized hierarchically along a rostral-caudal gradient, with rostral structures monitoring the status and completion of high-level task goals (like finding food), and midcingulate structures overseeing the execution of task options (subgoals, like harvesting fruit) and lower-level actions (such as grabbing an apple).
Affiliation(s)
- Clay B Holroyd
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
4. Galloni AR, Yuan Y, Zhu M, Yu H, Bisht RS, Wu CTM, Grienberger C, Ramanathan S, Milstein AD. Neuromorphic one-shot learning utilizing a phase-transition material. Proc Natl Acad Sci U S A 2024; 121:e2318362121. PMID: 38630718; PMCID: PMC11047090; DOI: 10.1073/pnas.2318362121.
Abstract
Design of hardware based on biological principles of neuronal computation and plasticity in the brain is a leading approach to realizing energy- and sample-efficient AI and learning machines. An important factor in selection of the hardware building blocks is the identification of candidate materials with physical properties suitable to emulate the large dynamic ranges and varied timescales of neuronal signaling. Previous work has shown that the all-or-none spiking behavior of neurons can be mimicked by threshold switches utilizing material phase transitions. Here, we demonstrate that devices based on a prototypical metal-insulator-transition material, vanadium dioxide (VO2), can be dynamically controlled to access a continuum of intermediate resistance states. Furthermore, the timescale of their intrinsic relaxation can be configured to match a range of biologically relevant timescales from milliseconds to seconds. We exploit these device properties to emulate three aspects of neuronal analog computation: fast (~1 ms) spiking in a neuronal soma compartment, slow (~100 ms) spiking in a dendritic compartment, and ultraslow (~1 s) biochemical signaling involved in temporal credit assignment for a recently discovered biological mechanism of one-shot learning. Simulations show that an artificial neural network using properties of VO2 devices to control an agent navigating a spatial environment can learn an efficient path to a reward in up to fourfold fewer trials than standard methods. The phase relaxations described in our study may be engineered in a variety of materials and can be controlled by thermal, electrical, or optical stimuli, suggesting further opportunities to emulate biological learning in neuromorphic hardware.
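The continuum of intermediate states with configurable relaxation can be caricatured as a leaky state variable with a tunable time constant. This is a minimal sketch of the abstract dynamics only; the constants are illustrative and are not measured VO2 device values.

```python
import math

def relax(state, dt, tau):
    """Exponential relaxation of a device state toward 0 with time constant tau (s)."""
    return state * math.exp(-dt / tau)

def stimulate(state, kick=0.2):
    """A pulse drives the device further into an intermediate state (capped at 1)."""
    return min(1.0, state + kick)

# Same update rule at three biologically relevant timescales, read out 50 ms
# after a single pulse: ~1 ms (soma), ~100 ms (dendrite), ~1 s (biochemical).
retained = {tau: relax(stimulate(0.0), dt=0.05, tau=tau)
            for tau in (1e-3, 0.1, 1.0)}
# fast devices have forgotten the pulse; slow devices still carry its trace
```

The point of the paper's ultraslow (~1 s) state is exactly this retained trace: it can bridge the delay between an action and its outcome for temporal credit assignment.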
Affiliation(s)
- Alessandro R. Galloni
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Yifan Yuan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Minning Zhu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907
- Ravindra S. Bisht
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Chung-Tse Michael Wu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Christine Grienberger
- Department of Neuroscience, Brandeis University, Waltham, MA 02453
- Department of Biology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453
- Shriram Ramanathan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Aaron D. Milstein
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
5. Heimer O, Hertz U. The spread of affective and semantic valence representations across states. Cognition 2024; 244:105714. PMID: 38176154; DOI: 10.1016/j.cognition.2023.105714.
Abstract
In many decision problems, outcomes are not reached after a single action but rather after a series of events or states. To optimize decisions over multiple states, representations of how good or bad the outcomes are, that is, the outcomes' valence, should spread across states. One mechanism for valence spreading is a temporal, state-independent process in which a single valence representation is updated when an outcome is experienced and fades away afterwards. Each state's valence is based on its temporal proximity to the experienced outcome. An alternative, state-dependent mechanism relies on the structure of transitions between states, updating a separate valence representation for each state according to its spatial distance from the outcomes. We examined how these mechanistic accounts shape the spread of two formats of valence representation, feelings (affective valence) and knowledge (semantic valence), between states. In two pre-registered experiments (N = 585), we used a novel task in which participants move in a four-state maze, one of which contains an outcome. The participants provide self-reports of affective and semantic valence throughout the maze and after finishing it. Results show that the affective representation of negative valence is more localized in state-space than the semantic representation. We also found evidence for the relative reliance of the affective valence on a temporal, state-independent mechanism and of the semantic valence on a structured, state-dependent mechanism. Our findings provide mechanistic accounts for the differences between affective and semantic valence representations and indicate how such representations may play a role in associative learning and decision-making.
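The two candidate mechanisms can be contrasted in a toy four-state maze (an illustrative sketch with assumed parameters, not the registered model): a state-independent trace that fades with time since the outcome, versus state-dependent values that fade with distance in the transition graph. The maze layout is chosen so the two accounts make opposite predictions.

```python
# Four states; the outcome occurs in state "D", which is entered last in time
# but lies adjacent to "A" in the transition graph.
visit_times = {"A": 0, "B": 1, "C": 2, "D": 3}   # order of first visits
graph_dist  = {"A": 1, "B": 2, "C": 3, "D": 0}   # steps from the outcome state
outcome_time, outcome_val = 3, -1.0

def temporal_valence(state, decay=0.5):
    """State-independent account: one fading trace, keyed to time since the outcome."""
    return outcome_val * decay ** abs(outcome_time - visit_times[state])

def structural_valence(state, decay=0.5):
    """State-dependent account: per-state values, keyed to graph distance."""
    return outcome_val * decay ** graph_dist[state]

# Opposite predictions: the temporal trace paints "C" (visited just before the
# outcome) most negative, while the structural account paints "A" (closest in
# the graph) most negative.
```

On this caricature, the paper's finding reads as affective valence behaving more like `temporal_valence` and semantic valence more like `structural_valence`.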
Affiliation(s)
- Orit Heimer
- Department of Psychology, University of Haifa, Haifa, Israel
- Uri Hertz
- Department of Cognitive Sciences, University of Haifa, Haifa, Israel
6. Jiang LP, Rao RPN. Dynamic predictive coding: A model of hierarchical sequence learning and prediction in the neocortex. PLoS Comput Biol 2024; 20:e1011801. PMID: 38330098; PMCID: PMC10880975; DOI: 10.1371/journal.pcbi.1011801.
Abstract
We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions of dynamics using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step) while higher levels form representations that encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where the top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, the lower-level model neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex while the higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network's hierarchical sequence representation exhibited both predictive and postdictive effects resembling those observed in visual motion processing in humans (e.g., in the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of an input sequence similar to activity recall in the visual cortex. When extended to three hierarchical levels, the model learned progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical processing and learning of sequences can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the visual world.
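The core idea of top-down modulation of lower-level dynamics can be sketched in miniature (an illustrative toy, not the paper's trained network): a lower level owns a small set of learned dynamics matrices, and a "higher level" adjusts mixture weights over them by gradient descent on the lower level's prediction error. All dynamics and learning-rate values here are invented.

```python
import math

def apply(M, x):
    """Apply a 2x2 matrix M to a 2-vector x."""
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def rot(theta):
    """2-D rotation matrix: a simple stand-in for a learned temporal dynamic."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

D = [rot(0.1), rot(0.5)]          # two learned lower-level dynamics (slow, fast)
w = [0.5, 0.5]                    # higher-level modulation: mixture weights
lr = 0.5

x = [1.0, 0.0]
for _ in range(50):
    x_next = apply(rot(0.5), x)   # the true sequence follows the fast dynamic
    preds = [apply(M, x) for M in D]
    pred = [w[0]*preds[0][i] + w[1]*preds[1][i] for i in range(2)]
    err = [x_next[i] - pred[i] for i in range(2)]
    for k in range(2):            # gradient step on w to reduce squared error
        w[k] += lr * sum(err[i] * preds[k][i] for i in range(2))
    x = x_next
# the higher level learns to weight the fast dynamic heavily (w[1] -> 1)
```

Because the weights change slowly relative to the state, they naturally carry information over longer timescales than the lower level, which is the hierarchy-of-timescales point of the model.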
Affiliation(s)
- Linxing Preston Jiang
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
- Rajesh P. N. Rao
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, Washington, United States of America
- Center for Neurotechnology, University of Washington, Seattle, Washington, United States of America
- Computational Neuroscience Center, University of Washington, Seattle, Washington, United States of America
7. Yang L, Jin F, Yang L, Li J, Li Z, Li M, Shang Z. The Hippocampus in Pigeons Contributes to the Model-Based Valuation and the Relationship between Temporal Context States. Animals (Basel) 2024; 14:431. PMID: 38338074; PMCID: PMC10854895; DOI: 10.3390/ani14030431.
Abstract
Model-based decision-making guides organism behavior by the representation of the relationships between different states. Previous studies have shown that the mammalian hippocampus (Hp) plays a key role in learning the structure of relationships among experiences. However, the hippocampal neural mechanisms of birds for model-based learning have rarely been reported. Here, we trained six pigeons to perform a two-step task and explore whether their Hp contributes to model-based learning. Behavioral performance and hippocampal multi-channel local field potentials (LFPs) were recorded during the task. We estimated the subjective values using a reinforcement learning model dynamically fitted to the pigeon's choice of behavior. The results show that the model-based learner can capture the behavioral choices of pigeons well throughout the learning process. Neural analysis indicated that high-frequency (12-100 Hz) power in Hp represented the temporal context states. Moreover, dynamic correlation and decoding results provided further support for the high-frequency dependence of model-based valuations. In addition, we observed a significant increase in hippocampal neural similarity at the low-frequency band (1-12 Hz) for common temporal context states after learning. Overall, our findings suggest that pigeons use model-based inferences to learn multi-step tasks, and multiple LFP frequency bands collaboratively contribute to model-based learning. Specifically, the high-frequency (12-100 Hz) oscillations represent model-based valuations, while the low-frequency (1-12 Hz) neural similarity is influenced by the relationship between temporal context states. These results contribute to our understanding of the neural mechanisms underlying model-based learning and broaden the scope of hippocampal contributions to avian behavior.
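A model-based learner for a two-step task can be sketched as follows (illustrative transition probabilities and learning rate, not the authors' fitted model): second-stage values are learned from reward, and first-stage values are derived by a Bellman backup through a learned transition model rather than learned directly.

```python
# Two first-stage actions lead probabilistically to two second-stage states.
T = {  # learned transition model: P(state | action)
    "a1": {"s1": 0.7, "s2": 0.3},
    "a2": {"s1": 0.3, "s2": 0.7},
}
Q2 = {"s1": 0.0, "s2": 0.0}      # second-stage values, learned from reward
alpha = 0.2                       # learning rate

def model_based_values():
    """First-stage values via Bellman backup through the transition model."""
    return {a: sum(p * Q2[s] for s, p in T[a].items()) for a in T}

# Experience: s2 is currently the rewarding second-stage state.
for _ in range(100):
    Q2["s2"] += alpha * (1.0 - Q2["s2"])   # rewarded visits to s2
    Q2["s1"] += alpha * (0.0 - Q2["s1"])   # unrewarded visits to s1

Q1 = model_based_values()
# a2 is preferred because the model says it more often reaches the valued state
```

The signature of model-based control, which this fitting approach detects in choice data, is that first-stage preferences track `T`-weighted second-stage values rather than directly reinforced first-stage actions.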
Affiliation(s)
- Lifang Yang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Fuli Jin
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Long Yang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Jiajia Li
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Zhihui Li
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Institute of Medical Engineering Technology and Data Mining, Zhengzhou University, Zhengzhou 450001, China
- Mengmeng Li
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Zhigang Shang
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou 450001, China
- Institute of Medical Engineering Technology and Data Mining, Zhengzhou University, Zhengzhou 450001, China
8. Son JY, Bhandari A, FeldmanHall O. Abstract cognitive maps of social network structure aid adaptive inference. Proc Natl Acad Sci U S A 2023; 120:e2310801120. PMID: 37963254; PMCID: PMC10666027; DOI: 10.1073/pnas.2310801120.
Abstract
Social navigation-such as anticipating where gossip may spread, or identifying which acquaintances can help land a job-relies on knowing how people are connected within their larger social communities. Problematically, for most social networks, the space of possible relationships is too vast to observe and memorize. Indeed, people's knowledge of these social relations is well known to be biased and error-prone. Here, we reveal that these biased representations reflect a fundamental computation that abstracts over individual relationships to enable principled inferences about unseen relationships. We propose a theory of network representation that explains how people learn inferential cognitive maps of social relations from direct observation, what kinds of knowledge structures emerge as a consequence, and why it can be beneficial to encode systematic biases into social cognitive maps. Leveraging simulations, laboratory experiments, and "field data" from a real-world network, we find that people abstract observations of direct relations (e.g., friends) into inferences of multistep relations (e.g., friends-of-friends). This multistep abstraction mechanism enables people to discover and represent complex social network structure, affording adaptive inferences across a variety of contexts, including friendship, trust, and advice-giving. Moreover, this multistep abstraction mechanism unifies a variety of otherwise puzzling empirical observations about social behavior. Our proposal generalizes the theory of cognitive maps to the fundamental computational problem of social inference, presenting a powerful framework for understanding the workings of a predictive mind operating within a complex social world.
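The friends-of-friends abstraction can be sketched with adjacency-matrix powers on a toy network (an illustration of the general computation, not the paper's model): summing discounted walk counts assigns positive relatedness to pairs that were never observed together, while still ranking direct ties above inferred ones. This discounted sum is the same family of computation as a successor-representation-style abstraction.

```python
# Toy friendship network: only 0-1, 1-2, 2-3 are observed direct relations.
n = 4
A = [[0] * n for _ in range(n)]
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i][j] = A[j][i] = 1

def matmul(X, Y):
    """Multiply two n x n matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Discounted sum over walk lengths: R = A + g*A^2 + g^2*A^3 + ...
g = 0.5
R = [row[:] for row in A]
P = [row[:] for row in A]
for step in range(1, 10):
    P = matmul(P, A)               # P is now A^(step+1)
    for i in range(n):
        for j in range(n):
            R[i][j] += (g ** step) * P[i][j]

# Persons 0 and 2 were never observed together, yet R[0][2] > 0: an inferred
# friend-of-a-friend relation; direct ties still score higher.
```

The "beneficial bias" claim corresponds to exactly this over-generalization: multistep entries in `R` are systematically inflated relative to the true observed relation matrix `A`, which supports inference about unseen ties.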
Affiliation(s)
- Jae-Young Son
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Apoorva Bhandari
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Oriel FeldmanHall
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Carney Institute for Brain Sciences, Brown University, Providence, RI 02912
9. Foldes T, Santamaria L, Lewis P. Sleep-related benefits to transitive inference are modulated by encoding strength and joint rank. Learn Mem 2023; 30:201-211. PMID: 37726142; PMCID: PMC10547378; DOI: 10.1101/lm.053787.123.
Abstract
Transitive inference is a measure of relational learning that has been shown to improve across sleep. Here, we examine this phenomenon further by studying the impact of encoding strength and joint rank. In experiment 1, participants learned adjacent premise pairs and were then tested on inferential problems derived from those pairs. In line with prior work, we found improved transitive inference performance after retention across a night of sleep compared with wake alone. Experiment 2 extended these findings using a within-subject design and found superior transitive inference performance on a hierarchy, consolidated across 27 h including sleep compared with just 3 h of wake. In both experiments, consolidation-related improvement was enhanced when presleep learning (i.e., encoding strength) was stronger. We also explored the interaction of these effects with the joint rank effect, in which items were scored according to their rank in the hierarchy, with more dominant item pairs having the lowest scores. Interestingly, the consolidation-related benefit was greatest for more dominant inference pairs (i.e., those with low joint rank scores). Overall, our findings provide further support for the improvement of transitive inference across a consolidation period that includes sleep. We additionally show that encoding strength and joint rank strongly modulate this effect.
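The task structure and the joint rank measure are easy to make concrete. A minimal sketch with a six-item hierarchy (the item labels and hierarchy length are illustrative; the scoring follows the abstract's description, with lower joint rank meaning a more dominant pair):

```python
# Hierarchy A > B > C > D > E > F: adjacent premise pairs are trained,
# non-adjacent inference pairs are tested, and each pair gets a joint rank
# (sum of item ranks; lower = more dominant pair).
items = ["A", "B", "C", "D", "E", "F"]
rank = {item: i + 1 for i, item in enumerate(items)}

premise_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
inference_pairs = [(items[i], items[j])
                   for i in range(len(items))
                   for j in range(i + 2, len(items))]

def joint_rank(pair):
    return rank[pair[0]] + rank[pair[1]]

# e.g. (B, D) is an inference pair with joint rank 2 + 4 = 6; a pair such as
# (A, C) (joint rank 4) is more dominant.
```

On the paper's account, consolidation-related improvement was largest for the low-joint-rank (more dominant) inference pairs produced by this kind of enumeration.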
Affiliation(s)
- Tamas Foldes
- Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff University, Cardiff, Wales CF24 4HQ, United Kingdom
- Lorena Santamaria
- Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff University, Cardiff, Wales CF24 4HQ, United Kingdom
- Penny Lewis
- Cardiff University Brain Research Imaging Centre (CUBRIC), Cardiff University, Cardiff, Wales CF24 4HQ, United Kingdom
10. Etter G, Carmichael JE, Williams S. Linking temporal coordination of hippocampal activity to memory function. Front Cell Neurosci 2023; 17:1233849. PMID: 37720546; PMCID: PMC10501408; DOI: 10.3389/fncel.2023.1233849.
Abstract
Oscillations in neural activity are widespread throughout the brain and can be observed at the population level through the local field potential. These rhythmic patterns are associated with cycles of excitability and are thought to coordinate networks of neurons, in turn facilitating effective communication both within local circuits and across brain regions. In the hippocampus, theta rhythms (4-12 Hz) could contribute to several key physiological mechanisms including long-range synchrony, plasticity, and at the behavioral scale, support memory encoding and retrieval. While neurons in the hippocampus appear to be temporally coordinated by theta oscillations, they also tend to fire in sequences that are developmentally preconfigured. Although loss of theta rhythmicity impairs memory, these sequences of spatiotemporal representations persist in conditions of altered hippocampal oscillations. The focus of this review is to disentangle the relative contribution of hippocampal oscillations from single-neuron activity in learning and memory. We first review cellular, anatomical, and physiological mechanisms underlying the generation and maintenance of hippocampal rhythms and how they contribute to memory function. We propose candidate hypotheses for how septohippocampal oscillations could support memory function while not contributing directly to hippocampal sequences. In particular, we explore how theta rhythms could coordinate the integration of upstream signals in the hippocampus to form future decisions, the relevance of such integration to downstream regions, as well as setting the stage for behavioral timescale synaptic plasticity. Finally, we leverage stimulation-based treatment in Alzheimer's disease conditions as an opportunity to assess the sufficiency of hippocampal oscillations for memory function.
Affiliation(s)
- Sylvain Williams
- Department of Psychiatry, Douglas Mental Health Research Institute, McGill University, Montreal, QC, Canada
11. Sinclair AH, Wang YC, Adcock RA. Instructed motivational states bias reinforcement learning and memory formation. Proc Natl Acad Sci U S A 2023; 120:e2304881120. PMID: 37490530; PMCID: PMC10401012; DOI: 10.1073/pnas.2304881120.
Abstract
Motivation influences goals, decisions, and memory formation. Imperative motivation links urgent goals to actions, narrowing the focus of attention and memory. Conversely, interrogative motivation integrates goals over time and space, supporting rich memory encoding for flexible future use. We manipulated motivational states via cover stories for a reinforcement learning task: The imperative group imagined executing a museum heist, whereas the interrogative group imagined planning a future heist. Participants repeatedly chose among four doors, representing different museum rooms, to sample trial-unique paintings with variable rewards (later converted to bonus payments). The next day, participants performed a surprise memory test. Crucially, only the cover stories differed between the imperative and interrogative groups; the reinforcement learning task was identical, and all participants had the same expectations about how and when bonus payments would be awarded. In an initial sample and a preregistered replication, we demonstrated that imperative motivation increased exploitation during reinforcement learning. Conversely, interrogative motivation increased directed (but not random) exploration, despite the cost to participants' earnings. At test, the interrogative group was more accurate at recognizing paintings and recalling associated values. In the interrogative group, higher value paintings were more likely to be remembered; imperative motivation disrupted this effect of reward modulating memory. Overall, we demonstrate that a prelearning motivational manipulation can bias learning and memory, bearing implications for education, behavior change, clinical interventions, and communication.
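The distinction between directed and random exploration can be sketched as follows (an illustrative toy with invented counts and values, not the study's fitted model): directed exploration adds an uncertainty bonus that specifically targets under-sampled doors, whereas random exploration merely flattens the choice distribution without targeting anything.

```python
import math
import random

random.seed(0)
counts = {"door1": 30, "door2": 3, "door3": 30, "door4": 30}   # samples so far
values = {"door1": 0.60, "door2": 0.50, "door3": 0.55, "door4": 0.58}
total = sum(counts.values())

def directed_choice(bonus=0.5):
    """UCB-style: value plus an uncertainty bonus for under-sampled doors."""
    score = {d: values[d] + bonus * math.sqrt(math.log(total) / counts[d])
             for d in counts}
    return max(score, key=score.get)

def random_exploration(temp=5.0):
    """High-temperature softmax: flatter choices, but not targeted ones."""
    weights = [math.exp(values[d] / temp) for d in counts]
    return random.choices(list(counts), weights=weights)[0]

# The directed explorer seeks out the rarely sampled door2 even though its
# mean value is lowest; the random explorer just chooses more uniformly.
```

In these terms, the paper's result is that the interrogative cover story shifted behavior toward the `directed_choice` pattern, at a measurable cost in earnings, without increasing the `random_exploration`-style noise.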
Affiliation(s)
- Alyssa H. Sinclair
- Department of Psychology & Neuroscience, Duke University, Durham, NC 27710
- Yuxi C. Wang
- Department of Psychology & Neuroscience, Duke University, Durham, NC 27710
- R. Alison Adcock
- Department of Psychology & Neuroscience, Duke University, Durham, NC 27710
- Department of Psychiatry & Behavioral Sciences, Duke University, Durham, NC 27710
12
Tarder-Stoll H, Baldassano C, Aly M. The brain hierarchically represents the past and future during multistep anticipation. bioRxiv 2023:2023.07.24.550399. [PMID: 37546761 PMCID: PMC10402095 DOI: 10.1101/2023.07.24.550399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
Memory for temporal structure enables both planning of future events and retrospection of past events. We investigated how the brain flexibly represents extended temporal sequences into the past and future during anticipation. Participants learned sequences of environments in immersive virtual reality. Pairs of sequences had the same environments in a different order, enabling context-specific learning. During fMRI, participants anticipated upcoming environments multiple steps into the future in a given sequence. Temporal structure was represented in the hippocampus and across visual regions (1) bidirectionally, with graded representations into the past and future and (2) hierarchically, with further events into the past and future represented in successively more anterior brain regions. Further, context-specific predictions were prioritized in the forward but not backward direction. Together, this work sheds light on how we flexibly represent sequential structure to enable planning over multiple timescales.
13
Ginosar G, Aljadeff J, Las L, Derdikman D, Ulanovsky N. Are grid cells used for navigation? On local metrics, subjective spaces, and black holes. Neuron 2023; 111:1858-1875. [PMID: 37044087 DOI: 10.1016/j.neuron.2023.03.027] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 11/18/2022] [Accepted: 03/20/2023] [Indexed: 04/14/2023]
Abstract
The symmetric, lattice-like spatial pattern of grid-cell activity is thought to provide a neuronal global metric for space. This view is compatible with grid cells recorded in empty boxes but inconsistent with data from more naturalistic settings. We review evidence arguing against the global-metric notion, including the distortion and disintegration of the grid pattern in complex and three-dimensional environments. We argue that deviations from lattice symmetry are key for understanding grid-cell function. We propose three possible functions for grid cells, which treat real-world grid distortions as a feature rather than a bug. First, grid cells may constitute a local metric for proximal space rather than a global metric for all space. Second, grid cells could form a metric for subjective action-relevant space rather than physical space. Third, distortions may represent salient locations. Finally, we discuss mechanisms that can underlie these functions. These ideas may transform our thinking about grid cells.
Affiliation(s)
- Gily Ginosar
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Johnatan Aljadeff
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA
- Liora Las
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Dori Derdikman
- Department of Neuroscience, Rappaport Faculty of Medicine and Research Institute, Technion, Haifa 31096, Israel
- Nachum Ulanovsky
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
14
Miller AMP, Jacob AD, Ramsaran AI, De Snoo ML, Josselyn SA, Frankland PW. Emergence of a predictive model in the hippocampus. Neuron 2023; 111:1952-1965.e5. [PMID: 37015224 PMCID: PMC10293047 DOI: 10.1016/j.neuron.2023.03.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 01/23/2023] [Accepted: 03/08/2023] [Indexed: 04/05/2023]
Abstract
The brain organizes experiences into memories that guide future behavior. Hippocampal CA1 population activity is hypothesized to reflect predictive models that contain information about future events, but little is known about how they develop. We trained mice on a series of problems with or without a common statistical structure to observe how memories are formed and updated. Mice that learned structured problems integrated their experiences into a predictive model that contained the solutions to upcoming novel problems. Retrieving the model during learning improved discrimination accuracy and facilitated learning. Using calcium imaging to track CA1 activity during learning, we found that hippocampal ensemble activity became more stable as mice formed a predictive model. The hippocampal ensemble was reactivated during training and incorporated new activity patterns from each training problem. These results show how hippocampal activity supports building predictive models by organizing new information with respect to existing memories.
Affiliation(s)
- Adam M P Miller
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada
- Alex D Jacob
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Adam I Ramsaran
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Mitchell L De Snoo
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Sheena A Josselyn
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Department of Physiology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada; Brain, Mind, & Consciousness Program, Canadian Institute for Advanced Research, Toronto, ON, Canada
- Paul W Frankland
- Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Department of Physiology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada; Child & Brain Development Program, Canadian Institute for Advanced Research, Toronto, ON, Canada
15
Crivelli-Decker J, Clarke A, Park SA, Huffman DJ, Boorman ED, Ranganath C. Goal-oriented representations in the human hippocampus during planning and navigation. Nat Commun 2023; 14:2946. [PMID: 37221176 PMCID: PMC10206082 DOI: 10.1038/s41467-023-35967-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 01/10/2023] [Indexed: 05/25/2023] Open
Abstract
Recent work in cognitive and systems neuroscience has suggested that the hippocampus might support planning, imagination, and navigation by forming cognitive maps that capture the abstract structure of physical spaces, tasks, and situations. Navigation involves disambiguating similar contexts and requires the planning and execution of a sequence of decisions to reach a goal. Here, we examine hippocampal activity patterns in humans during a goal-directed navigation task to investigate how contextual and goal information are incorporated in the construction and execution of navigational plans. During planning, hippocampal pattern similarity is enhanced across routes that share a context and a goal. During navigation, we observe prospective activation in the hippocampus that reflects the retrieval of pattern information related to a key decision point. These results suggest that, rather than simply representing overlapping associations or state transitions, hippocampal activity patterns are shaped by context and goals.
Affiliation(s)
- Jordan Crivelli-Decker
- Center for Neuroscience, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK
- Seongmin A Park
- Center for Neuroscience, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Derek J Huffman
- Center for Neuroscience, University of California, Davis, CA, USA
- Department of Psychology, Colby College, Waterville, ME, USA
- Erie D Boorman
- Center for Neuroscience, University of California, Davis, CA, USA
- Department of Psychology, University of Cambridge, Cambridge, UK
- Charan Ranganath
- Center for Neuroscience, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA
16
McFadyen J, Dolan RJ. Spatiotemporal Precision of Neuroimaging in Psychiatry. Biol Psychiatry 2023; 93:671-680. [PMID: 36376110 DOI: 10.1016/j.biopsych.2022.08.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 07/20/2022] [Accepted: 08/12/2022] [Indexed: 12/23/2022]
Abstract
Aberrant patterns of cognition, perception, and behavior seen in psychiatric disorders are thought to be driven by a complex interplay of neural processes that evolve at a rapid temporal scale. Understanding these dynamic processes in vivo in humans has been hampered by a trade-off between spatial and temporal resolutions inherent to current neuroimaging technology. A recent trend in psychiatric research has been the use of high temporal resolution imaging, particularly magnetoencephalography, often in conjunction with sophisticated machine learning decoding techniques. Developments here promise novel insights into the spatiotemporal dynamics of cognitive phenomena, including domains relevant to psychiatric illnesses such as reward and avoidance learning, memory, and planning. This review considers recent advances afforded by exploiting this increased spatiotemporal precision, with specific reference to applications that seek to drive a mechanistic understanding of psychopathology and the realization of preclinical translation.
Affiliation(s)
- Jessica McFadyen
- UCL Max Planck Centre for Computational Psychiatry and Ageing Research and Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom; State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Raymond J Dolan
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
17
Brunec IK, Nantais MM, Sutton JE, Epstein RA, Newcombe NS. Exploration patterns shape cognitive map learning. Cognition 2023; 233:105360. [PMID: 36549130 PMCID: PMC9983142 DOI: 10.1016/j.cognition.2022.105360] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 12/08/2022] [Accepted: 12/11/2022] [Indexed: 12/24/2022]
Abstract
Spontaneous, volitional spatial exploration is crucial for building up a cognitive map of the environment. However, decades of research have primarily measured the fidelity of cognitive maps after discrete, controlled learning episodes. We know little about how cognitive maps are formed during naturalistic free exploration. Here, we investigated whether exploration trajectories predicted cognitive map accuracy, and how these patterns were shaped by environmental structure. In two experiments, participants freely explored a previously unfamiliar virtual environment. We related their exploration trajectories to a measure of how long they spent in areas with high global environmental connectivity (integration, as assessed by space syntax). In both experiments, we found that participants who spent more time on paths that offered opportunities for integration formed more accurate cognitive maps. Interestingly, we found no support for our pre-registered hypothesis that self-reported trait differences in navigation ability would mediate this relationship. Our findings suggest that exploration patterns predict cognitive map accuracy, even for people who self-report low ability, and highlight the importance of considering both environmental structure and individual variability in formal theory- and model-building.
18
Fang C, Aronov D, Abbott LF, Mackevicius EL. Neural learning rules for generating flexible predictions and computing the successor representation. eLife 2023; 12:e80680. [PMID: 36928104 PMCID: PMC10019889 DOI: 10.7554/elife.80680] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 10/26/2022] [Indexed: 03/18/2023] Open
Abstract
The predictive nature of the hippocampus is thought to be useful for memory-guided cognitive behaviors. Inspired by the reinforcement learning literature, this notion has been formalized as a predictive map called the successor representation (SR). The SR captures a number of observations about hippocampal activity. However, the algorithm does not provide a neural mechanism for how such representations arise. Here, we show that the dynamics of a recurrent neural network naturally calculate the SR when the synaptic weights match the transition probability matrix. Interestingly, the predictive horizon can be flexibly modulated simply by changing the network gain. We derive simple, biologically plausible learning rules to learn the SR in a recurrent network. We test our model with realistic inputs and match hippocampal data recorded during random foraging. Taken together, our results suggest that the SR is more accessible in neural circuits than previously thought and can support a broad range of cognitive functions.
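The core identity behind this abstract, that a linear recurrent network whose weights match the transition matrix settles into the successor representation, with the discount acting like a network gain, can be sketched numerically. This is an illustrative toy only (the 4-state ring environment and all parameter values are invented here, not taken from the paper):

```python
import numpy as np

# Toy 4-state ring environment; T[i, j] = P(next state j | current state i).
T = np.array([
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.5, 0.0, 0.5, 0.0],
])
gamma = 0.7  # discount; plays the role of the network gain

# Closed-form SR: M = sum_k gamma^k T^k = (I - gamma * T)^-1
M = np.linalg.inv(np.eye(4) - gamma * T)

# Linear recurrent dynamics with weights T and gain gamma: iterating
# x <- input + gamma * x @ T converges to the SR row of the input state.
x = np.zeros(4)
onehot = np.eye(4)[0]  # activity injected at state 0
for _ in range(200):
    x = onehot + gamma * x @ T
print(np.allclose(x, M[0]))  # prints True: steady state equals SR row 0
```

Raising gamma toward 1 lengthens the predictive horizon of the steady state, which is the gain-modulation point the abstract makes.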
Affiliation(s)
- Ching Fang
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Dmitriy Aronov
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- LF Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Emily L Mackevicius
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Basis Research Institute, New York, United States
19
Bono J, Zannone S, Pedrosa V, Clopath C. Learning predictive cognitive maps with spiking neurons during behavior and replays. eLife 2023; 12:e80671. [PMID: 36927625 PMCID: PMC10019888 DOI: 10.7554/elife.80671] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 01/12/2023] [Indexed: 03/18/2023] Open
Abstract
The hippocampus has been proposed to encode environments using a representation that contains predictive information about likely future states, called the successor representation. However, it is not clear how such a representation could be learned in the hippocampal circuit. Here, we propose a plasticity rule that can learn this predictive map of the environment using a spiking neural network. We connect this biologically plausible plasticity rule to reinforcement learning, mathematically and numerically showing that it implements the TD-lambda algorithm. By spanning these different levels, we show how our framework naturally encompasses behavioral activity and replays, smoothly moving from rate to temporal coding, and allows learning over behavioral timescales with a plasticity rule acting on a timescale of milliseconds. We discuss how biological parameters such as dwelling times at states, neuronal firing rates and neuromodulation relate to the delay discounting parameter of the TD algorithm, and how they influence the learned representation. We also find that, in agreement with psychological studies and contrary to reinforcement learning theory, the discount factor decreases hyperbolically with time. Finally, our framework suggests a role for replays, in both aiding learning in novel environments and finding shortcut trajectories that were not experienced during behavior, in agreement with experimental data.
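The TD-lambda algorithm that this abstract connects to its plasticity rule can be sketched in tabular form. Note the hedge: the paper's contribution is a spiking-network rule, whereas this is only the abstract's reference algorithm, TD(lambda) with eligibility traces learning the SR on an invented deterministic 5-state loop:

```python
import numpy as np

# Invented 5-state deterministic loop for illustration.
n = 5
def step(s):
    return (s + 1) % n  # walk one step around the loop

gamma, lam, alpha = 0.9, 0.8, 0.1
M = np.zeros((n, n))  # successor representation estimate
e = np.zeros(n)       # eligibility trace over recently visited states

s = 0
for _ in range(5000):
    s_next = step(s)
    e = gamma * lam * e   # decay all traces
    e[s] += 1.0           # mark the current state as eligible
    # TD error per successor feature j: 1[s == j] + gamma * M[s'] - M[s]
    delta = np.eye(n)[s] + gamma * M[s_next] - M[s]
    M += alpha * np.outer(e, delta)  # credit all eligible predecessors
    s = s_next

# Ground truth for this deterministic loop: M* = (I - gamma * T)^-1
T = np.roll(np.eye(n), 1, axis=1)
M_true = np.linalg.inv(np.eye(n) - gamma * T)
print(np.max(np.abs(M - M_true)))  # small residual error
```

The eligibility-trace decay gamma * lam is where the delay-discounting parameters discussed in the abstract enter the update.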
Affiliation(s)
- Jacopo Bono
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Sara Zannone
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Victor Pedrosa
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
20
Stoewer P, Schilling A, Maier A, Krauss P. Neural network based formation of cognitive maps of semantic spaces and the putative emergence of abstract concepts. Sci Rep 2023; 13:3644. [PMID: 36871003 PMCID: PMC9985610 DOI: 10.1038/s41598-023-30307-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Accepted: 02/21/2023] [Indexed: 03/06/2023] Open
Abstract
How do we make sense of the input from our sensory organs and put the perceived information into the context of our past experiences? The hippocampal-entorhinal complex plays a major role in the organization of memory and thought. The formation of, and navigation in, cognitive maps of arbitrary mental spaces via place and grid cells can serve as a representation of memories and experiences and their relations to each other. The multi-scale successor representation has been proposed as the mathematical principle underlying place and grid cell computations. Here, we present a neural network that learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors. The network successfully learns the similarities between different animal species and constructs a cognitive map of 'animal space' based on the principle of successor representations, with an accuracy of around 30%, which is close to the theoretical maximum given that every animal species has more than one possible successor, i.e., nearest neighbor in feature space. Furthermore, a hierarchical structure, i.e., different scales of cognitive maps, can be modeled based on multi-scale successor representations. We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class, i.e., amphibians, mammals, and insects. This could be a putative mechanism enabling the emergence of new, abstract semantic concepts. Finally, even completely new or incomplete input can be represented by interpolation of the representations from the cognitive map, with remarkably high accuracy of up to 95%. We conclude that the successor representation can serve as a weighted pointer to past memories and experiences, and may therefore be a crucial building block for including prior knowledge and deriving context knowledge from novel input. Thus, our model provides a new tool to complement contemporary deep learning approaches on the road towards artificial general intelligence.
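The multi-scale idea in this abstract, that the discount parameter of the successor representation sets the granularity of the map (fine-grained vs. coarse, cluster-level), can be illustrated with a toy example. The 8-item ring below is an invented stand-in, not the paper's 32-animal feature space:

```python
import numpy as np

# Invented toy "semantic space": 8 items on a ring, where each item
# transitions to one of its two nearest neighbors in feature space.
n = 8
T = 0.5 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))

def successor_map(gamma):
    # SR with discount gamma: M = (I - gamma * T)^-1; gamma sets the scale.
    return np.linalg.inv(np.eye(n) - gamma * T)

fine = successor_map(0.3)     # short horizon: sharp, local map
coarse = successor_map(0.95)  # long horizon: broad, cluster-level map

def spread(M):
    # How many items hold more than 5% of item 0's successor mass.
    return int((M[0] / M[0].sum() > 0.05).sum())

print(spread(fine), spread(coarse))  # prints: 3 8
```

The short-horizon map concentrates successor mass on immediate neighbors, while the long-horizon map spreads it across the whole ring, mirroring the fine-grained vs. coarse-grained contrast described in the abstract.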
Affiliation(s)
- Paul Stoewer
- Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Achim Schilling
- Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Patrick Krauss
- Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany
21
Gao Y. A computational model of learning flexible navigation in a maze by layout-conforming replay of place cells. Front Comput Neurosci 2023; 17:1053097. [PMID: 36846726 PMCID: PMC9947252 DOI: 10.3389/fncom.2023.1053097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Accepted: 01/16/2023] [Indexed: 02/11/2023] Open
Abstract
Recent experimental observations have shown that the reactivation of hippocampal place cells (PC) during sleep or wakeful immobility depicts trajectories that can go around barriers and can flexibly adapt to a changing maze layout. However, existing computational models of replay fall short of generating such layout-conforming replay, restricting their usage to simple environments, such as linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strength during exploration. Then we use a continuous attractor network (CAN) with feedback inhibition to model the interaction among place cells and hippocampal interneurons. The activity bump of place cells drifts along paths in the maze, which models layout-conforming replay. During replay in sleep, the synaptic strengths from place cells to striatal medium spiny neurons (MSN) are learned by a novel dopamine-modulated three-factor rule to store place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal's location for path planning, and the trajectory leading to a maximal MSN activity is followed by the animal. We have implemented our model in a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments demonstrate that its superior flexibility during navigation in a maze is due to the continual re-learning of inter-PC and PC-MSN synaptic strengths.
Affiliation(s)
- Yuanxiang Gao
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China; CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China
22
Ekman M, Kusch S, de Lange FP. Successor-like representation guides the prediction of future events in human visual cortex and hippocampus. eLife 2023; 12:78904. [PMID: 36729024 PMCID: PMC9894584 DOI: 10.7554/elife.78904] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 01/13/2023] [Indexed: 02/03/2023] Open
Abstract
Human agents build models of their environment, which enable them to anticipate and plan upcoming events. However, little is known about the properties of such predictive models. Recently, it has been proposed that hippocampal representations take the form of a predictive map-like structure, the so-called successor representation (SR). Here, we used human functional magnetic resonance imaging to probe whether activity in the early visual cortex (V1) and hippocampus adhere to the postulated properties of the SR after visual sequence learning. Participants were exposed to an arbitrary spatiotemporal sequence consisting of four items (A-B-C-D). We found that after repeated exposure to the sequence, merely presenting single sequence items (e.g., - B - -) resulted in V1 activation at the successor locations of the full sequence (e.g., C-D), but not at the predecessor locations (e.g., A). This highlights that visual representations are skewed toward future states, in line with the SR. Similar results were also found in the hippocampus. Moreover, the hippocampus developed a coactivation profile that showed sensitivity to the temporal distance in sequence space, with fading representations for sequence events in the more distant past and future. V1, in contrast, showed a coactivation profile that was only sensitive to spatial distance in stimulus space. Taken together, these results provide empirical evidence for the proposition that both visual and hippocampal cortex represent a predictive map of the visual world akin to the SR.
Affiliation(s)
- Matthias Ekman
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Sarah Kusch
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Floris P de Lange
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
23
Linton P, Morgan MJ, Read JCA, Vishwanath D, Creem-Regehr SH, Domini F. New Approaches to 3D Vision. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210443. [PMID: 36511413 PMCID: PMC9745878 DOI: 10.1098/rstb.2021.0443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 10/25/2022] [Indexed: 12/15/2022] Open
Abstract
New approaches to 3D vision are enabling new advances in artificial intelligence and autonomous vehicles, a better understanding of how animals navigate the 3D world, and new insights into human perception in virtual and augmented reality. Whilst traditional approaches to 3D vision in computer vision (SLAM: simultaneous localization and mapping), animal navigation (cognitive maps), and human vision (optimal cue integration) start from the assumption that the aim of 3D vision is to provide an accurate 3D model of the world, the new approaches to 3D vision explored in this issue challenge this assumption. Instead, they investigate the possibility that computer vision, animal navigation, and human vision can rely on partial or distorted models or no model at all. This issue also highlights the implications for artificial intelligence, autonomous vehicles, human perception in virtual and augmented reality, and the treatment of visual disorders, all of which are explored by individual articles. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA
- Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA
- Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Michael J. Morgan
- Department of Optometry and Visual Sciences, City, University of London, Northampton Square, London EC1V 0HB, UK
- Jenny C. A. Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne & Wear NE2 4HH, UK
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife KY16 9JP, UK
- Fulvio Domini
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912-9067, USA
24
Momennejad I. A rubric for human-like agents and NeuroAI. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210446. [PMID: 36511409 PMCID: PMC9745874 DOI: 10.1098/rstb.2021.0446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2022] [Accepted: 10/27/2022] [Indexed: 12/15/2022] Open
Abstract
Researchers across cognitive, neuro- and computer sciences increasingly reference 'human-like' artificial intelligence and 'neuroAI'. However, the scope and use of the terms are often inconsistent. Contributed research ranges widely from mimicking behaviour, to testing machine learning methods as neurally plausible hypotheses at the cellular or functional levels, or solving engineering problems. However, it cannot be assumed nor expected that progress on one of these three goals will automatically translate to progress in others. Here, a simple rubric is proposed to clarify the scope of individual contributions, grounded in their commitments to human-like behaviour, neural plausibility or benchmark/engineering/computer science goals. This is clarified using examples of weak and strong neuroAI and human-like agents, and discussing the generative, corroborative, and corrective ways in which the three dimensions interact with one another. The author maintains that future progress in artificial intelligence will need strong interactions across the disciplines, with iterative feedback loops and meticulous validity tests, leading to both known and yet-unknown advances that may span decades to come. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ida Momennejad
- Microsoft Research NYC, Reinforcement Learning Station, 300 Lafayette, New York, NY 10012, USA
25
Barack DL, Bakkour A, Shohamy D, Salzman CD. Visuospatial information foraging describes search behavior in learning latent environmental features. Sci Rep 2023; 13:1126. [PMID: 36670132 PMCID: PMC9860038 DOI: 10.1038/s41598-023-27662-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 01/05/2023] [Indexed: 01/22/2023] Open
Abstract
In the real world, making sequences of decisions to achieve goals often depends upon the ability to learn aspects of the environment that are not directly perceptible. Learning these so-called latent features requires seeking information about them. Prior efforts to study latent feature learning often used single decisions, used few features, and failed to distinguish between reward-seeking and information-seeking. To overcome this, we designed a task in which humans and monkeys made a series of choices to search for shapes hidden on a grid. On our task, the effects of reward and information outcomes from uncovering parts of shapes could be disentangled. Members of both species adeptly learned the shapes and preferred to select tiles expected to be informative earlier in trials than previously rewarding ones, searching a part of the grid until their outcomes dropped below the average information outcome, a pattern consistent with foraging behavior. In addition, how quickly humans learned the shapes was predicted by how well their choice sequences matched the foraging pattern, revealing an unexpected connection between foraging and learning. This adaptive search for information may underlie the ability in humans and monkeys to learn latent features to support goal-directed behavior in the long run.
Affiliation(s)
- David L Barack
- Department of Neuroscience, Columbia University, New York, USA
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Akram Bakkour
- Department of Psychology, University of Chicago, Chicago, USA
- Daphna Shohamy
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Department of Psychology, Columbia University, New York, USA
- Kavli Institute for Brain Sciences, Columbia University, New York, USA
- C Daniel Salzman
- Department of Neuroscience, Columbia University, New York, USA
- Mortimer B. Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, USA
- Kavli Institute for Brain Sciences, Columbia University, New York, USA
- Department of Psychiatry, Columbia University, New York, USA
- New York State Psychiatric Institute, New York, USA
26
Billig AJ, Lad M, Sedley W, Griffiths TD. The hearing hippocampus. Prog Neurobiol 2022; 218:102326. [PMID: 35870677 PMCID: PMC10510040 DOI: 10.1016/j.pneurobio.2022.102326]
Abstract
The hippocampus has a well-established role in spatial and episodic memory but a broader function has been proposed including aspects of perception and relational processing. Neural bases of sound analysis have been described in the pathway to auditory cortex, but wider networks supporting auditory cognition are still being established. We review what is known about the role of the hippocampus in processing auditory information, and how the hippocampus itself is shaped by sound. In examining imaging, recording, and lesion studies in species from rodents to humans, we uncover a hierarchy of hippocampal responses to sound including during passive exposure, active listening, and the learning of associations between sounds and other stimuli. We describe how the hippocampus' connectivity and computational architecture allow it to track and manipulate auditory information - whether in the form of speech, music, or environmental, emotional, or phantom sounds. Functional and structural correlates of auditory experience are also identified. The extent of auditory-hippocampal interactions is consistent with the view that the hippocampus makes broad contributions to perception and cognition, beyond spatial and episodic memory. More deeply understanding these interactions may unlock applications including entraining hippocampal rhythms to support cognition, and intervening in links between hearing loss and dementia.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- William Sedley
- Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, USA
27
Berens SC, Bird CM. Hippocampal and medial prefrontal cortices encode structural task representations following progressive and interleaved training schedules. PLoS Comput Biol 2022; 18:e1010566. [PMID: 36251731 PMCID: PMC9612823 DOI: 10.1371/journal.pcbi.1010566]
Abstract
Memory generalisations may be underpinned by either encoding- or retrieval-based generalisation mechanisms and different training schedules may bias some learners to favour one of these mechanisms over the other. We used a transitive inference task to investigate whether generalisation is influenced by progressive vs randomly interleaved training, and overnight consolidation. On consecutive days, participants learnt pairwise discriminations from two transitive hierarchies before being tested during fMRI. Inference performance was consistently better following progressive training, and for pairs further apart in the transitive hierarchy. BOLD pattern similarity correlated with hierarchical distances in the left hippocampus (HIP) and medial prefrontal cortex (MPFC) following both training schedules. These results are consistent with the use of structural representations that directly encode hierarchical relationships between task features. However, such effects were only observed in the MPFC for recently learnt relationships. Furthermore, the MPFC appeared to maintain structural representations in participants who performed at chance on the inference task. We conclude that humans preferentially employ encoding-based mechanisms to store map-like relational codes that can be used for memory generalisation. These codes are expressed in the HIP and MPFC following both progressive and interleaved training but are not sufficient for accurate inference.
Affiliation(s)
- Sam C. Berens
- School of Psychology, University of Sussex, Brighton, United Kingdom
- Chris M. Bird
- School of Psychology, University of Sussex, Brighton, United Kingdom
28
Gatti D, Marelli M, Vecchi T, Rinaldi L. Spatial Representations Without Spatial Computations. Psychol Sci 2022; 33:1947-1958. [PMID: 36201754 DOI: 10.1177/09567976221094863]
Abstract
Cognitive maps are assumed to be fundamentally spatial and grounded only in perceptual processes, as supported by the discovery of functionally dedicated cell types in the human brain, which tile the environment in a maplike fashion. Challenging this view, we demonstrate that spatial representations, such as large-scale geographical maps, can be retrieved with high confidence from natural language through cognitively plausible artificial-intelligence models on the basis of nonspatial associative-learning mechanisms. More critically, we show that linguistic information accounts for the specific distortions observed in tasks in which college-age adults judge the geographical positions of cities, even when these positions are estimated on real maps. These findings indicate that language experience can encode and reproduce cognitive maps without the need for a dedicated spatial-representation system, thus suggesting that the formation of these maps is the result of a strict interplay between spatial- and nonspatial-learning principles.
Affiliation(s)
- Daniele Gatti
- Department of Brain and Behavioral Sciences, University of Pavia
- Marco Marelli
- Department of Psychology, University of Milano-Bicocca; NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Tomaso Vecchi
- Department of Brain and Behavioral Sciences, University of Pavia; Cognitive Psychology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Luca Rinaldi
- Department of Brain and Behavioral Sciences, University of Pavia; Cognitive Psychology Unit, IRCCS Mondino Foundation, Pavia, Italy
29
Zang W, Yao P, Song D. Underwater gliders linear trajectory tracking: The experience breeding actor-critic approach. ISA Trans 2022; 129:415-423. [PMID: 35039155 DOI: 10.1016/j.isatra.2021.12.029]
Abstract
This paper studies underwater glider trajectory tracking in a current field. The objective is to ensure that trajectories fit the straight target track. An underwater glider model is introduced to demonstrate the vehicle's dynamic properties. Given the disturbance from currents and the uncertain status of a glider controlled by complicated roll policies, the trajectory tracking task can be classified as a model-free optimization problem, which is difficult to solve with mathematical analysis. This work transforms underwater glider trajectory tracking into a Markov decision process by specifying the actions, observations, and rewards. On this basis, a neural-network control framework called experience breeding actor-critic (EBAC) is proposed to handle trajectory tracking. EBAC enhances exploration of potentially high-reward areas and steers the glider heading precisely so as to counteract the influence of currents. In simulations, EBAC shows the desired performance in controlling the gliders to accurately fit the target track.
Affiliation(s)
- Wenchuan Zang
- College of Information Science and Engineering, Ocean University of China, No. 238 Songling Rd, Qingdao, 266100, Shandong, China
- Peng Yao
- College of Engineering, Ocean University of China, No. 238 Songling Rd, Qingdao, 266100, Shandong, China
- Dalei Song
- College of Engineering, Ocean University of China, No. 238 Songling Rd, Qingdao, 266100, Shandong, China; Institute for Advanced Ocean Study, Ocean University of China, No. 238 Songling Rd, Qingdao, 266100, Shandong, China
30
Abstract
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviors for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unraveling the learning and neural representation of such a map has become a central focus of neuroscience. In recent years, many models have been developed to explain cellular responses in the hippocampus and other brain areas. Because it can be difficult to see how these models differ, how they relate and what each model can contribute, this Review aims to organize these models into a clear ontology. This ontology reveals parallels between existing empirical results, and implies new approaches to understand hippocampal-cortical interactions and beyond.
31
Oversampled and undersolved: Depressive rumination from an active inference perspective. Neurosci Biobehav Rev 2022; 142:104873. [PMID: 36116573 DOI: 10.1016/j.neubiorev.2022.104873]
Abstract
Rumination is a widely recognized cognitive deviation in depression. Despite the recognition, researchers have struggled to explain why patients cannot disengage from the process, although it depresses their mood and fails to lead to effective problem-solving. We rethink rumination as repetitive but unsuccessful problem-solving attempts. Appealing to an active inference account, we suggest that adaptive problem-solving is based on the generation, evaluation, and performance of candidate policies that increase an organism's knowledge of its environment. We argue that the problem-solving process is distorted during rumination. Specifically, rumination is understood as engaging in excessive yet unsuccessful oversampling of policy candidates that do not resolve uncertainty. Because candidates are sampled from policies that were selected in states resembling one's current state, "bad" starting points (e.g., depressed mood, physical inactivity) make the problem-solving process vulnerable to generating a ruminative "halting problem". This problem leads to high opportunity costs, learned helplessness and diminished overt behavior. Besides reviewing evidence for the conceptual paths of this model, we discuss its neurophysiological correlates and point towards clinical implications.
32
Pudhiyidath A, Morton NW, Viveros Duran R, Schapiro AC, Momennejad I, Hinojosa-Rowland DM, Molitor RJ, Preston AR. Representations of Temporal Community Structure in Hippocampus and Precuneus Predict Inductive Reasoning Decisions. J Cogn Neurosci 2022; 34:1736-1760. [PMID: 35579986 PMCID: PMC10262802 DOI: 10.1162/jocn_a_01864]
Abstract
Our understanding of the world is shaped by inferences about underlying structure. For example, at the gym, you might notice that the same people tend to arrive around the same time and infer that they are friends that work out together. Consistent with this idea, after participants are presented with a temporal sequence of objects that follows an underlying community structure, they are biased to infer that objects from the same community share the same properties. Here, we used fMRI to measure neural representations of objects after temporal community structure learning and examine how these representations support inference about object relationships. We found that community structure learning affected inferred object similarity: When asked to spatially group items based on their experience, participants tended to group together objects from the same community. Neural representations in perirhinal cortex predicted individual differences in object grouping, suggesting that high-level object representations are affected by temporal community learning. Furthermore, participants were biased to infer that objects from the same community would share the same properties. Using computational modeling of temporal learning and inference decisions, we found that inductive reasoning is influenced by both detailed knowledge of temporal statistics and abstract knowledge of the temporal communities. The fidelity of temporal community representations in hippocampus and precuneus predicted the degree to which temporal community membership biased reasoning decisions. Our results suggest that temporal knowledge is represented at multiple levels of abstraction, and that perirhinal cortex, hippocampus, and precuneus may support inference based on this knowledge.
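The temporal community structure paradigm summarized above can be made concrete with a small simulation. This is an illustrative sketch, not the study's actual stimulus graph: the number of communities, community size, and boundary wiring below are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical community-structured graph: 3 communities of 5 objects each.
# Every object links to the other 4 in its community; the last node of each
# community also links to the first node of the next, forming a ring.
n_comm, size = 3, 5
n = n_comm * size
A = np.zeros((n, n), dtype=int)
for c in range(n_comm):
    members = range(c * size, (c + 1) * size)
    for i in members:
        for j in members:
            if i != j:
                A[i, j] = 1  # fully connect within the community
for c in range(n_comm):
    a = c * size + size - 1                 # last node of community c
    b = ((c + 1) % n_comm) * size           # first node of community c + 1
    A[a, b] = A[b, a] = 1                   # boundary link between communities

# A random walk on this graph produces the object sequences participants see:
# transitions mostly stay within a community, the statistic learners pick up on.
rng = np.random.default_rng(1)
seq, s = [], 0
for _ in range(1000):
    s = rng.choice(np.flatnonzero(A[s]))    # step to a uniformly random neighbor
    seq.append(int(s))

within = sum(seq[t] // size == seq[t + 1] // size for t in range(len(seq) - 1))
print(f"within-community transition fraction: {within / (len(seq) - 1):.2f}")
```

Because only two boundary nodes per community carry a single cross-community edge, the walk stays inside a community roughly 90% of the time, which is the temporal regularity that biases the inferred object similarity described in the abstract.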
33
Neural network based successor representations to form cognitive maps of space and language. Sci Rep 2022; 12:11233. [PMID: 35787659 PMCID: PMC9253065 DOI: 10.1038/s41598-022-14916-1]
Abstract
How does the mind organize thoughts? The hippocampal-entorhinal complex is thought to support domain-general representation and processing of structural knowledge of arbitrary state, feature and concept spaces. In particular, it enables the formation of cognitive maps, and navigation on these maps, thereby broadly contributing to cognition. It has been proposed that the concept of multi-scale successor representations provides an explanation of the underlying computations performed by place and grid cells. Here, we present a neural network based approach to learn such representations, and its application to different scenarios: a spatial exploration task based on supervised learning, a spatial navigation task based on reinforcement learning, and a non-spatial task where linguistic constructions have to be inferred by observing sample sentences. In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations. Furthermore, the resulting neural firing patterns are strikingly similar to experimentally observed place and grid cell firing patterns. We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning towards artificial general intelligence.
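The successor representation referenced in this abstract has a simple tabular form that a sketch can illustrate (this is the textbook tabular version, not the paper's neural-network model; the 4-state ring world, discount factor, and learning rate are my own assumptions):

```python
import numpy as np

# Toy 4-state ring world; T is the transition matrix of an unbiased random walk.
n = 4
T = np.zeros((n, n))
for s in range(n):
    T[s, (s + 1) % n] = 0.5  # step clockwise
    T[s, (s - 1) % n] = 0.5  # step counter-clockwise
gamma = 0.9  # discount factor

# Closed-form successor representation: M = (I - gamma * T)^(-1), where
# M[s, s'] is the discounted expected number of future visits to s' from s.
M_closed = np.linalg.inv(np.eye(n) - gamma * T)

# The same matrix learned online with a TD(0) update from sampled transitions.
M = np.zeros((n, n))
alpha = 0.05
rng = np.random.default_rng(0)
s = 0
for _ in range(50_000):
    s_next = rng.choice(n, p=T[s])
    M[s] += alpha * (np.eye(n)[s] + gamma * M[s_next] - M[s])
    s = s_next

print("max |TD estimate - closed form|:", np.abs(M - M_closed).max())
```

The learned matrix decays with graph distance (states one step away get more successor weight than states two steps away), which is the kind of map-like structure the abstract argues place- and grid-cell patterns reflect.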
34
Abstract
People with damage to the orbitofrontal cortex (OFC) have specific problems making decisions, whereas their other cognitive functions are spared. Neurophysiological studies have shown that OFC neurons fire in proportion to the value of anticipated outcomes. Thus, a central role of the OFC is to guide optimal decision-making by signalling values associated with different choices. Until recently, this view of OFC function dominated the field. New data, however, suggest that the OFC may have a much broader role in cognition by representing cognitive maps that can be used to guide behaviour and that value is just one of many variables that are important for behavioural control. In this Review, we critically evaluate these two alternative accounts of OFC function and examine how they might be reconciled.
Affiliation(s)
- Eric B Knudsen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, USA
- Joni D Wallis
- Department of Psychology and Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, USA
35
Chen BW, Yang SH, Kuo CH, Chen JW, Lo YC, Kuo YT, Lin YC, Chang HC, Lin SH, Yu X, Qu B, Ro SCV, Lai HY, Chen YY. Neuro-inspired reinforcement learning to improve trajectory prediction in reward-guided behavior. Int J Neural Syst 2022; 32:2250038. [DOI: 10.1142/s0129065722500381]
36
Zhou D, Lynn CW, Cui Z, Ciric R, Baum GL, Moore TM, Roalf DR, Detre JA, Gur RC, Gur RE, Satterthwaite TD, Bassett DS. Efficient coding in the economics of human brain connectomics. Netw Neurosci 2022; 6:234-274. [PMID: 36605887 PMCID: PMC9810280 DOI: 10.1162/netn_a_00223]
Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior, beyond the conventional network efficiency metric, for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
Affiliation(s)
- Dale Zhou
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christopher W. Lynn
- Initiative for the Theoretical Sciences, Graduate Center, City University of New York, New York, NY, USA; Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ, USA
- Zaixu Cui
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Rastko Ciric
- Department of Bioengineering, Schools of Engineering and Medicine, Stanford University, Stanford, CA, USA
- Graham L. Baum
- Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Tyler M. Moore
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- David R. Roalf
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John A. Detre
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ruben C. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Raquel E. Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Theodore D. Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Dani S. Bassett
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Santa Fe Institute, Santa Fe, NM, USA
37
Momennejad I. Collective minds: social network topology shapes collective cognition. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200315. [PMID: 34894735 PMCID: PMC8666914 DOI: 10.1098/rstb.2020.0315]
Abstract
Human cognition is not solitary; it is shaped by collective learning and memory. Unlike swarms or herds, human social networks have diverse topologies, serving diverse modes of collective cognition and behaviour. Here, we review research that combines network structure with psychological and neural experiments and modelling to understand how the topology of social networks shapes collective cognition. First, we review graph-theoretical approaches to behavioural experiments on collective memory, belief propagation and problem solving. These results show that different topologies of communication networks synchronize or integrate knowledge differently, serving diverse collective goals. Second, we discuss neuroimaging studies showing that human brains encode the topology of one's larger social network and exhibit neural patterns similar to those of our friends and community ties (e.g. when watching movies). Third, we discuss cognitive similarities between learning social and non-social topologies, e.g. in spatial and associative learning, as well as common brain regions involved in processing social and non-social topologies. Finally, we discuss recent machine learning approaches to collective communication and cooperation in multi-agent artificial networks. Combining network science with cognitive, neural and computational approaches empowers investigating how social structures shape collective cognition, which can in turn help design goal-directed social network topologies. This article is part of a discussion meeting issue 'The emergence of collective knowledge and cumulative culture in animals, humans and machines'.
38
Brunec IK, Momennejad I. Predictive Representations in Hippocampal and Prefrontal Hierarchies. J Neurosci 2022; 42:299-312. [PMID: 34799416 PMCID: PMC8802932 DOI: 10.1523/jneurosci.1327-21.2021]
Abstract
As we navigate the world, we use learned representations of relational structures to explore and to reach goals. Studies of how relational knowledge enables inference and planning are typically conducted in controlled small-scale settings. It remains unclear, however, how people use stored knowledge in continuously unfolding navigation (e.g., walking long distances in a city). We hypothesized that multiscale predictive representations guide naturalistic navigation in humans, and these scales are organized along posterior-anterior prefrontal and hippocampal hierarchies. We conducted model-based representational similarity analyses of neuroimaging data collected while male and female participants navigated realistically long paths in virtual reality. We tested the pattern similarity of each point, along each path, to a weighted sum of its successor points within predictive horizons of different scales. We found that anterior PFC showed the largest predictive horizons, posterior hippocampus the smallest, with the anterior hippocampus and orbitofrontal regions in between. Our findings offer novel insights into how cognitive maps support hierarchical planning at multiple scales. SIGNIFICANCE STATEMENT: Whenever we navigate the world, we represent our journey at multiple horizons: from our immediate surroundings to our distal goal. How are such cognitive maps at different horizons simultaneously represented in the brain? Here, we applied a reinforcement learning-based analysis to neuroimaging data acquired while participants virtually navigated their hometown. We investigated neural patterns in the hippocampus and PFC, key cognitive map regions. We uncovered predictive representations with multiscale horizons in prefrontal and hippocampal gradients, with the longest predictive horizons in anterior PFC and the shortest in posterior hippocampus. These findings provide empirical support for the computational hypothesis that multiscale neural representations guide goal-directed navigation. This advances our understanding of hierarchical planning in everyday navigation of realistic distances.
Affiliation(s)
- Iva K Brunec
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
39
Nyberg N, Duvelle É, Barry C, Spiers HJ. Spatial goal coding in the hippocampal formation. Neuron 2022; 110:394-422. [PMID: 35032426 DOI: 10.1016/j.neuron.2021.12.012]
Abstract
The mammalian hippocampal formation contains several distinct populations of neurons involved in representing self-position and orientation. These neurons, which include place, grid, head direction, and boundary-vector cells, are thought to collectively instantiate cognitive maps supporting flexible navigation. However, to flexibly navigate, it is necessary to also maintain internal representations of goal locations, such that goal-directed routes can be planned and executed. Although it has remained unclear how the mammalian brain represents goal locations, multiple neural candidates have recently been uncovered during different phases of navigation. For example, during planning, sequential activation of spatial cells may enable simulation of future routes toward the goal. During travel, modulation of spatial cells by the prospective route, or by distance and direction to the goal, may allow maintenance of route and goal-location information, supporting navigation on an ongoing basis. As the goal is approached, an increased activation of spatial cells may enable the goal location to become distinctly represented within cognitive maps, aiding goal localization. Lastly, after arrival at the goal, sequential activation of spatial cells may represent the just-taken route, enabling route learning and evaluation. Here, we review and synthesize these and other evidence for goal coding in mammalian brains, relate the experimental findings to predictions from computational models, and discuss outstanding questions and future challenges.
Affiliation(s)
- Nils Nyberg
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
- Éléonore Duvelle
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Caswell Barry
- Department of Cell and Developmental Biology, University College London, London, UK
- Hugo J Spiers
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
40
Houser TM. Spatialization of Time in the Entorhinal-Hippocampal System. Front Behav Neurosci 2022; 15:807197. [PMID: 35069143 PMCID: PMC8770534 DOI: 10.3389/fnbeh.2021.807197]
Abstract
The functional role of the entorhinal-hippocampal system has been a long-standing mystery. One key theory that has become most popular is that the entorhinal-hippocampal system represents space to facilitate navigation in one's surroundings. In this Perspective article, I introduce a novel idea that undermines the inherent uniqueness of spatial information in favor of time driving entorhinal-hippocampal activity. Specifically, by spatializing events that occur in succession (i.e., across time), the entorhinal-hippocampal system is critical for all types of cognitive representations. I back up this argument with empirical evidence that hints at a role for the entorhinal-hippocampal system in non-spatial representation, and computational models of the logarithmic compression of time in the brain.
Affiliation(s)
- Troy M. Houser
- Department of Psychology, University of Oregon, Eugene, OR, United States
41
Collins AGE, Shenhav A. Advances in modeling learning and decision-making in neuroscience. Neuropsychopharmacology 2022; 47:104-118. [PMID: 34453117 PMCID: PMC8617262 DOI: 10.1038/s41386-021-01126-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Revised: 07/14/2021] [Accepted: 07/22/2021] [Indexed: 02/07/2023]
Abstract
An organism's survival depends on its ability to learn about its environment and to make adaptive decisions in the service of achieving the best possible outcomes in that environment. To study the neural circuits that support these functions, researchers have increasingly relied on models that formalize the computations required to carry them out. Here, we review the recent history of computational modeling of learning and decision-making, and how these models have been used to advance understanding of prefrontal cortex function. We discuss how such models have advanced from their origins in basic algorithms of updating and action selection to increasingly account for complexities in the cognitive processes required for learning and decision-making, and the representations over which they operate. We further discuss how a deeper understanding of the real-world complexities in these computations has shed light on the fundamental constraints on optimal behavior, and on the complex interactions between corticostriatal pathways to determine such behavior. The continuing and rapid development of these models holds great promise for understanding the mechanisms by which animals adapt to their environments, and what leads to maladaptive forms of learning and decision-making within clinical populations.
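The "basic algorithms of updating and action selection" from which the review traces these models can be illustrated with a minimal sketch (the parameter values and reward probabilities below are hypothetical, not drawn from the review): a delta-rule value update paired with softmax action selection.

```python
import math
import random

def delta_update(value, reward, alpha=0.1):
    """Delta-rule (Rescorla-Wagner) update: nudge the value toward the reward."""
    return value + alpha * (reward - value)

def softmax_choice(values, beta=5.0, rng=random):
    """Softmax action selection: higher beta means more deterministic choices."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r < cum:
            return i
    return len(values) - 1

# Toy two-armed bandit: option 1 pays off 80% of the time, option 0 only 20%.
rng = random.Random(0)
values = [0.0, 0.0]
for _ in range(300):
    choice = softmax_choice(values, rng=rng)
    p_reward = 0.8 if choice == 1 else 0.2
    reward = 1.0 if rng.random() < p_reward else 0.0
    values[choice] = delta_update(values[choice], reward)
# After learning, the value estimates reflect the true payoff ordering.
```

With these illustrative settings the learned values come to track the underlying reward probabilities, the starting point from which the review's more complex models of prefrontal and corticostriatal function depart.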
Affiliation(s)
- Anne G E Collins
- Department of Psychology and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
- Amitai Shenhav
- Department of Cognitive, Linguistic, & Psychological Sciences and Carney Institute for Brain Science, Brown University, Providence, RI, USA.
42
Polti I, Nau M, Kaplan R, van Wassenhove V, Doeller CF. Rapid encoding of task regularities in the human hippocampus guides sensorimotor timing. eLife 2022; 11:79027. [PMID: 36317500 PMCID: PMC9625083 DOI: 10.7554/elife.79027] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 10/02/2022] [Indexed: 11/17/2022] Open
Abstract
The brain encodes the statistical regularities of the environment in a task-specific yet flexible and generalizable format. Here, we seek to understand this process by bridging two parallel lines of research, one centered on sensorimotor timing and the other on cognitive mapping in the hippocampal system. By combining functional magnetic resonance imaging (fMRI) with a fast-paced time-to-contact (TTC) estimation task, we found that the hippocampus, along with reward-processing regions, signaled the behavioral feedback received in each trial as well as performance improvements across trials. Critically, it signaled performance improvements independent of the tested intervals, and its activity accounted for the trial-wise regression-to-the-mean biases in TTC estimation. This is in line with the idea that the hippocampus supports the rapid encoding of temporal context even on short time scales in a behavior-dependent manner. Our results emphasize the central role of the hippocampus in statistical learning and position it at the core of a brain-wide network that updates sensorimotor representations in real time for flexible behavior.
Affiliation(s)
- Ignacio Polti
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer’s Disease, Norwegian University of Science and Technology, Trondheim, Norway; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Matthias Nau
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer’s Disease, Norwegian University of Science and Technology, Trondheim, Norway; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Raphael Kaplan
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer’s Disease, Norwegian University of Science and Technology, Trondheim, Norway; Department of Basic Psychology, Clinical Psychology, and Psychobiology, Universitat Jaume I, Castellón de la Plana, Spain
- Virginie van Wassenhove
- CEA DRF/Joliot, NeuroSpin; INSERM, Cognitive Neuroimaging Unit; CNRS, Université Paris-Saclay, Gif-sur-Yvette, France
- Christian F Doeller
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer’s Disease, Norwegian University of Science and Technology, Trondheim, Norway; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Wilhelm Wundt Institute of Psychology, Leipzig University, Leipzig, Germany
43
Jabir B, Rabhi L, Falih N. RNN- and CNN-based weed detection for crop improvement: An overview. FOODS AND RAW MATERIALS 2021. [DOI: 10.21603/2308-4057-2021-2-387-396] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Introduction. Deep learning is a modern technique for image processing and data analysis with promising results and great potential. Successfully applied in various fields, it has recently entered the field of agriculture to address such problems as disease identification, fruit/plant classification, fruit counting, pest identification, and weed detection. The latter was the subject of our work. Weeds are harmful plants that grow among crops, competing for resources such as sunlight and water and causing crop yield losses. Traditional data processing techniques have several limitations and are time-consuming. Therefore, we aimed to take inventory of the deep learning networks used in agriculture and conduct experiments to reveal the most efficient ones for weed control.
Study objects and methods. We used new advanced algorithms based on deep learning to process data in real time with high precision and efficiency. These algorithms were trained on a dataset containing real images of weeds taken from Moroccan fields.
Results and discussion. The analysis of deep learning methods and algorithms trained to detect weeds showed that the convolutional neural network is the most widely used in agriculture and the most efficient at weed detection compared to alternatives such as the recurrent neural network.
Conclusion. Since the Convolutional Neural Network demonstrated excellent accuracy in weed detection, we adopted it in building a smart system for detecting weeds and spraying them in place.
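The core operation behind the convolutional networks the study favors can be sketched without any framework (a toy illustration, not the authors' detection system): a small kernel slides over the image, and each output value is a weighted sum of the pixels beneath it.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out

# A vertical-edge kernel responds only at the dark-to-bright boundary,
# the kind of low-level feature a CNN's first layer learns to detect.
img = [[0, 0, 1, 1]] * 4             # left half dark, right half bright
edge_kernel = [[-1, 1]]
response = conv2d(img, edge_kernel)  # each row: [0.0, 1.0, 0.0]
```

A full CNN stacks such learned kernels with nonlinearities and pooling; this sketch only shows the convolution step that gives the architecture its name.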
44
Hayes TL, Krishnan GP, Bazhenov M, Siegelmann HT, Sejnowski TJ, Kanan C. Replay in Deep Learning: Current Approaches and Missing Biological Elements. Neural Comput 2021; 33:2908-2950. [PMID: 34474476 PMCID: PMC9074752 DOI: 10.1162/neco_a_01433] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Accepted: 05/28/2021] [Indexed: 11/04/2022]
Abstract
Replay is the reactivation of one or more neural patterns that are similar to the activation patterns experienced during past waking experiences. Replay was first observed in biological neural networks during sleep, and it is now thought to play a critical role in memory formation, retrieval, and consolidation. Replay-like mechanisms have been incorporated in deep artificial neural networks that learn over time to avoid catastrophic forgetting of previous knowledge. Replay algorithms have been successfully used in a wide range of deep learning methods within supervised, unsupervised, and reinforcement learning paradigms. In this letter, we provide the first comprehensive comparison between replay in the mammalian brain and replay in artificial neural networks. We identify multiple aspects of biological replay that are missing in deep learning systems and hypothesize how they could be used to improve artificial neural networks.
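A minimal version of the experience-replay mechanism such deep learning systems use (a generic sketch, not the implementation of any system reviewed here) stores past transitions in a bounded buffer and revisits them in random mini-batches, interleaving old experience with new to mitigate catastrophic forgetting.

```python
import random
from collections import deque

class ReplayBuffer:
    """Bounded store of past transitions, sampled uniformly at random."""

    def __init__(self, capacity, seed=0):
        self.transitions = deque(maxlen=capacity)  # oldest entries evicted first
        self.rng = random.Random(seed)

    def add(self, state, action, reward, next_state):
        self.transitions.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlations
        # that make sequentially collected training data non-i.i.d.
        return self.rng.sample(list(self.transitions), batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(250):                  # the 150 oldest transitions get evicted
    buf.add(t, 0, 0.0, t + 1)
batch = buf.sample(32)                # mini-batch mixing old and recent experience
```

Uniform sampling from a sliding window is the simplest variant; as the letter notes, biological replay is far more selective, which is one of the gaps the authors highlight.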
Affiliation(s)
- Tyler L Hayes
- Rochester Institute of Technology, Rochester, NY 14623, U.S.A.
- Giri P Krishnan
- University of California at San Diego, La Jolla, CA 92093, U.S.A.
- Maxim Bazhenov
- University of California at San Diego, La Jolla, CA 92093, U.S.A.
- Terrence J Sejnowski
- University of California at San Diego, La Jolla, CA 92093, U.S.A., and Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
- Christopher Kanan
- Rochester Institute of Technology, Rochester, NY 14623, U.S.A.; Paige, New York, NY 10036, U.S.A.; and Cornell Tech, New York, NY 10044, U.S.A.
45
Son JY, Bhandari A, FeldmanHall O. Cognitive maps of social features enable flexible inference in social networks. Proc Natl Acad Sci U S A 2021; 118:e2021699118. [PMID: 34518372 PMCID: PMC8488581 DOI: 10.1073/pnas.2021699118] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2021] [Indexed: 11/18/2022] Open
Abstract
In order to navigate a complex web of relationships, an individual must learn and represent the connections between people in a social network. However, the sheer size and complexity of the social world makes it impossible to acquire firsthand knowledge of all relations within a network, suggesting that people must make inferences about unobserved relationships to fill in the gaps. Across three studies (n = 328), we show that people can encode information about social features (e.g., hobbies, clubs) and subsequently deploy this knowledge to infer the existence of unobserved friendships in the network. Using computational models, we test various feature-based mechanisms that could support such inferences. We find that people's ability to successfully generalize depends on two representational strategies: a simple but inflexible similarity heuristic that leverages homophily, and a complex but flexible cognitive map that encodes the statistical relationships between social features and friendships. Together, our studies reveal that people can build cognitive maps encoding arbitrary patterns of latent relations in many abstract feature spaces, allowing social networks to be represented in a flexible format. Moreover, these findings shed light on open questions across disciplines about how people learn and represent social networks and may have implications for generating more human-like link prediction in machine learning algorithms.
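The "simple but inflexible similarity heuristic that leverages homophily" can be sketched as a feature-overlap score (the names, features, and threshold below are hypothetical, purely for illustration): the more social features two people share, the more likely an unobserved friendship is inferred.

```python
def jaccard(a, b):
    """Feature overlap: |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical people and their social features (hobbies, clubs).
features = {
    "ana":  {"chess", "running", "choir"},
    "ben":  {"chess", "running", "climbing"},
    "cleo": {"pottery", "choir"},
}

def infer_friendship(p, q, threshold=0.4):
    # Homophily heuristic: predict an unobserved link when overlap is high.
    return jaccard(features[p], features[q]) >= threshold

ana_ben = jaccard(features["ana"], features["ben"])  # 2 shared / 4 total = 0.5
```

A cognitive-map strategy, by contrast, would learn which features statistically predict friendship in a given network rather than weighting all shared features equally, which is what gives it the flexibility the studies report.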
Affiliation(s)
- Jae-Young Son
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Apoorva Bhandari
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Oriel FeldmanHall
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
- Carney Institute for Brain Sciences, Brown University, Providence, RI 02912
46
Abstract
Entorhinal cortical grid cells fire in a periodic pattern that tiles space, which is suggestive of a spatial coordinate system. However, irregularities in the grid pattern as well as responses of grid cells in contexts other than spatial navigation have presented a challenge to existing models of entorhinal function. In this Perspective, we propose that hippocampal input provides a key informative drive to the grid network in both spatial and non-spatial circumstances, particularly around salient events. We build on previous models in which neural activity propagates through the entorhinal-hippocampal network in time. This temporal contiguity in network activity points to temporal order as a necessary characteristic of representations generated by the hippocampal formation. We advocate that interactions in the entorhinal-hippocampal loop build a topological representation that is rooted in the temporal order of experience. In this way, the structure of grid cell firing supports a learned topology rather than a rigid coordinate frame that is bound to measurements of the physical world.
47
Wittkuhn L, Chien S, Hall-McMaster S, Schuck NW. Replay in minds and machines. Neurosci Biobehav Rev 2021; 129:367-388. [PMID: 34371078 DOI: 10.1016/j.neubiorev.2021.08.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 07/19/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022]
Abstract
Experience-related brain activity patterns reactivate during sleep, wakeful rest, and brief pauses from active behavior. In parallel, machine learning research has found that experience replay can lead to substantial performance improvements in artificial agents. Together, these lines of research suggest replay has a variety of computational benefits for decision-making and learning. Here, we provide an overview of putative computational functions of replay as suggested by machine learning and neuroscientific research. We show that replay can lead to faster learning, less forgetting, reorganization or augmentation of experiences, and support planning and generalization. In addition, we highlight the benefits of reactivating abstracted internal representations rather than veridical memories, and discuss how replay could provide a mechanism to build internal representations that improve learning and decision-making.
Affiliation(s)
- Lennart Wittkuhn
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany.
- Samson Chien
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Sam Hall-McMaster
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany.
48
Abstract
An organism's survival can depend on its ability to recall and navigate to spatial locations associated with rewards, such as food or a home. Accumulating research has revealed that computations of reward and its prediction occur on multiple levels across a complex set of interacting brain regions, including those that support memory and navigation. However, how the brain coordinates the encoding, recall and use of reward information to guide navigation remains incompletely understood. In this Review, we propose that the brain's classical navigation centres - the hippocampus and the entorhinal cortex - are ideally suited to coordinate this larger network by representing both physical and mental space as a series of states. These states may be linked to reward via neuromodulatory inputs to the hippocampus-entorhinal cortex system. Hippocampal outputs can then broadcast sequences of states to the rest of the brain to store reward associations or to facilitate decision-making, potentially engaging additional value signals downstream. This proposal is supported by recent advances in both experimental and theoretical neuroscience. By discussing the neural systems traditionally tied to navigation and reward at their intersection, we aim to offer an integrated framework for understanding navigation to reward as a fundamental feature of many cognitive processes.
49
Wise T, Liu Y, Chowdhury F, Dolan RJ. Model-based aversive learning in humans is supported by preferential task state reactivation. SCIENCE ADVANCES 2021; 7:eabf9616. [PMID: 34321205 PMCID: PMC8318377 DOI: 10.1126/sciadv.abf9616] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 06/10/2021] [Indexed: 06/13/2023]
Abstract
Harm avoidance is critical for survival, yet little is known regarding the neural mechanisms supporting avoidance in the absence of trial-and-error experience. Flexible avoidance may be supported by a mental model (i.e., model-based), a process for which neural reactivation and sequential replay have emerged as candidate mechanisms. During an aversive learning task, combined with magnetoencephalography, we show prospective and retrospective reactivation during planning and learning, respectively, coupled to evidence for sequential replay. Specifically, when individuals plan in an aversive context, we find preferential reactivation of subsequently chosen goal states. Stronger reactivation is associated with greater hippocampal theta power. At outcome receipt, unchosen goal states are reactivated regardless of outcome valence. Replay of paths leading to goal states was modulated by outcome valence, with aversive outcomes associated with stronger reverse replay than safe outcomes. Our findings suggest that avoidance involves simulation of unexperienced states through hippocampally mediated reactivation and replay.
Affiliation(s)
- Toby Wise
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK.
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA
- Yunzhe Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Fatima Chowdhury
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Queen Square MS Centre, Department of Neuroinflammation, UCL Queen Square Institute of Neurology, London, UK
- Raymond J Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
50
Soutschek A, Moisa M, Ruff CC, Tobler PN. Frontopolar theta oscillations link metacognition with prospective decision making. Nat Commun 2021; 12:3943. [PMID: 34168135 PMCID: PMC8225860 DOI: 10.1038/s41467-021-24197-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Accepted: 05/26/2021] [Indexed: 11/16/2022] Open
Abstract
Prospective decision making considers the future consequences of actions and therefore requires agents to represent their present subjective preferences reliably across time. Here, we test the link of frontopolar theta oscillations to both metacognitive ability and prospective choice behavior. We target these oscillations with transcranial alternating current stimulation while participants make decisions between smaller-sooner and larger-later monetary rewards and rate their choice confidence after each decision. Stimulation designed to enhance frontopolar theta oscillations increases metacognitive accuracy in reports of subjective uncertainty in intertemporal decisions. Moreover, the stimulation also enhances the willingness of participants to restrict their future access to short-term gratification by strengthening the awareness of potential preference reversals. Our results suggest a mechanistic link between frontopolar theta oscillations and metacognitive knowledge about the stability of subjective value representations, providing a potential explanation for why frontopolar cortex also shields prospective decision making against future temptation.
Affiliation(s)
- Marius Moisa
- Zurich Center for Neuroeconomics, University of Zurich, Zurich, Switzerland
- Christian C Ruff
- Zurich Center for Neuroeconomics, University of Zurich, Zurich, Switzerland
- Zurich Center for Neuroscience, University of Zurich and ETH Zurich, Zurich, Switzerland
- Philippe N Tobler
- Zurich Center for Neuroeconomics, University of Zurich, Zurich, Switzerland
- Zurich Center for Neuroscience, University of Zurich and ETH Zurich, Zurich, Switzerland