51. Qin S, Farashahi S, Lipshutz D, Sengupta AM, Chklovskii DB, Pehlevan C. Coordinated drift of receptive fields in Hebbian/anti-Hebbian network models during noisy representation learning. Nat Neurosci 2023; 26:339-349. PMID: 36635497. DOI: 10.1038/s41593-022-01225-z.
Abstract
Recent experiments have revealed that neural population codes in many brain areas continuously change even when animals have fully learned and stably perform their tasks. This representational 'drift' naturally leads to questions about its causes, dynamics and functions. Here we explore the hypothesis that neural representations optimize a representational objective with a degenerate solution space, and that noisy synaptic updates drive the network to explore this (near-)optimal space, causing representational drift. We illustrate this idea and explore its consequences in simple, biologically plausible Hebbian/anti-Hebbian network models of representation learning. We find that the drifting receptive fields of individual neurons can be characterized by a coordinated random walk, with effective diffusion constants depending on parameters such as the learning rate, noise amplitude and input statistics. Despite such drift, the representational similarity of population codes is stable over time. Our model recapitulates experimental observations in the hippocampus and posterior parietal cortex and makes testable predictions that can be probed in future experiments.
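The abstract's core picture, drift as a random walk within a degenerate solution space, can be illustrated with a toy simulation. This is not the authors' Hebbian/anti-Hebbian model; it is a minimal sketch in which every unit-norm weight vector is treated as an equally good solution, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_drift(noise_amp, eta=0.05, n_dim=50, n_steps=200):
    """Random walk of a 'receptive field' on the unit sphere.

    Noisy synaptic updates followed by renormalization make the weight
    vector wander over the (degenerate) space of unit-norm solutions.
    Returns the angular displacement from the starting vector at each step.
    """
    w0 = rng.normal(size=n_dim)
    w0 /= np.linalg.norm(w0)
    w = w0.copy()
    disp = np.empty(n_steps)
    for t in range(n_steps):
        w = w + eta * noise_amp * rng.normal(size=n_dim)  # noisy update
        w /= np.linalg.norm(w)    # project back onto the solution manifold
        disp[t] = np.arccos(np.clip(w @ w0, -1.0, 1.0))
    return disp

# Larger synaptic noise -> larger effective diffusion -> faster drift.
slow = simulate_drift(noise_amp=0.5)
fast = simulate_drift(noise_amp=2.0)
print(float(slow[20]), float(fast[20]))
```

Mirroring the paper's qualitative claim, the displacement grows diffusively at first and grows faster for larger noise amplitude, even though every intermediate state is an equally valid solution.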
Affiliation(s)
- Shanshan Qin: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
- Shiva Farashahi: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- David Lipshutz: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA
- Anirvan M Sengupta: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA; Department of Physics and Astronomy, Rutgers University, New Brunswick, NJ, USA
- Dmitri B Chklovskii: Center for Computational Neuroscience, Flatiron Institute, New York, NY, USA; NYU Langone Medical Center, New York, NY, USA
- Cengiz Pehlevan: John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; Center for Brain Science, Harvard University, Cambridge, MA, USA
52. Roüast NM, Schönauer M. Continuously changing memories: a framework for proactive and non-linear consolidation. Trends Neurosci 2023; 46:8-19. PMID: 36428193. DOI: 10.1016/j.tins.2022.10.013.
Abstract
The traditional view of long-term memory is that memory traces mature in a predetermined 'linear' process: their neural substrate shifts from rapidly plastic medial temporal regions towards stable neocortical networks. We propose that memories remain malleable, not by repeated reinstantiations of this linear process but instead via dynamic routes of proactive and non-linear consolidation: memories change, their trajectory is flexible and reversible, and their physical basis develops continuously according to anticipated demands. Studies demonstrating memory updating, increasing hippocampal dependence to support adaptive use, and rapid neocortical plasticity provide evidence for continued non-linear consolidation. Although anticipated demand can affect all stages of memory formation, the extent to which it shapes the physical memory trace repeatedly and proactively will require further dedicated research.
Affiliation(s)
- Nora Malika Roüast: Institute for Psychology, Neuropsychology, University of Freiburg, Freiburg, Germany
- Monika Schönauer: Institute for Psychology, Neuropsychology, University of Freiburg, Freiburg, Germany
53. Shan H, Sompolinsky H. Minimum perturbation theory of deep perceptual learning. Phys Rev E 2022; 106:064406. PMID: 36671118. DOI: 10.1103/PhysRevE.106.064406.
Abstract
Perceptual learning (PL) involves long-lasting improvement in perceptual tasks following extensive training and is accompanied by modified neuronal responses in sensory cortical areas in the brain. Understanding the dynamics of PL and the resultant synaptic changes is important for causally connecting PL to the observed neural plasticity. This is theoretically challenging because learning-related changes are distributed across many stages of the sensory hierarchy. In this paper, we modeled the sensory hierarchy as a deep nonlinear neural network and studied PL of fine discrimination, a common and well-studied paradigm of PL. Using tools from statistical physics, we developed a mean-field theory of the network in the limit of a large number of neurons and large number of examples. Our theory suggests that, in this thermodynamic limit, the input-output function of the network can be exactly mapped to that of a deep linear network, allowing us to characterize the space of solutions for the task. Surprisingly, we found that modifying synaptic weights in the first layer of the hierarchy is both sufficient and necessary for PL. To address the degeneracy of the space of solutions, we postulate that PL dynamics are constrained by a normative minimum perturbation (MP) principle, which favors weight matrices with minimal changes relative to their prelearning values. Interestingly, MP plasticity induces changes to weights and neural representations in all layers of the network, except for the readout weight vector. While weight changes in higher layers are not necessary for learning, they help reduce overall perturbation to the network. In addition, such plasticity can be learned simply through slow learning. We further elucidate the properties of MP changes and compare them against experimental findings. Overall, our statistical mechanics theory of PL provides mechanistic and normative understanding of several important empirical findings of PL.
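The minimum perturbation (MP) idea, choosing the smallest weight change compatible with the new task, has a simple closed form in a single linear layer. This is a much-reduced sketch of the paper's deep, nonlinear setting, with all dimensions and values chosen as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 20, 5
W = rng.normal(size=(n_out, n_in))   # pre-learning weights
x = rng.normal(size=n_in)            # trained stimulus
y_star = rng.normal(size=n_out)      # required post-learning response

# Minimum-perturbation update: the smallest (Frobenius-norm) weight change
# satisfying (W + dW) @ x == y_star is a rank-one correction along x.
dW = np.outer(y_star - W @ x, x) / (x @ x)

# Any other valid solution differs by a component in the null space of x,
# orthogonal to dW, and is therefore strictly larger in norm.
null_dir = rng.normal(size=n_in)
null_dir -= (null_dir @ x) / (x @ x) * x     # orthogonalize against x
alt = dW + np.outer(rng.normal(size=n_out), null_dir)

print(np.linalg.norm(dW), np.linalg.norm(alt))
```

The degenerate directions (here, the null space of the trained stimulus) are exactly what an MP principle prunes away: among all weight changes that solve the task, it selects the one closest to the pre-learning weights.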
Affiliation(s)
- Haozhe Shan: Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA; Program in Neuroscience, Harvard Medical School, Boston, Massachusetts 02115, USA
- Haim Sompolinsky: Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA; Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem 9190401, Israel
54. Jensen KT, Kadmon Harpaz N, Dhawale AK, Wolff SBE, Ölveczky BP. Long-term stability of single neuron activity in the motor system. Nat Neurosci 2022; 25:1664-1674. PMID: 36357811. PMCID: PMC11152193. DOI: 10.1038/s41593-022-01194-3.
Abstract
How an established behavior is retained and consistently produced by a nervous system in constant flux remains a mystery. One possible solution to ensure long-term stability in motor output is to fix the activity patterns of single neurons in the relevant circuits. Alternatively, activity in single cells could drift over time provided that the population dynamics are constrained to produce the same behavior. To arbitrate between these possibilities, we recorded single-unit activity in motor cortex and striatum continuously for several weeks as rats performed stereotyped motor behaviors-both learned and innate. We found long-term stability in single neuron activity patterns across both brain regions. A small amount of drift in neural activity, observed over weeks of recording, could be explained by concomitant changes in task-irrelevant aspects of the behavior. These results suggest that long-term stable behaviors are generated by single neuron activity patterns that are themselves highly stable.
Affiliation(s)
- Kristopher T Jensen: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK
- Naama Kadmon Harpaz: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Ashesh K Dhawale: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA; Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- Steffen B E Wolff: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA; Department of Pharmacology, University of Maryland School of Medicine, Baltimore, MD, USA
- Bence P Ölveczky: Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
55. Ordering in heterogeneous connectome weights for visual information processing. Proc Natl Acad Sci U S A 2022; 119:e2216092119. PMID: 36409900. PMCID: PMC9860139. DOI: 10.1073/pnas.2216092119.
56. Tuning instability of non-columnar neurons in the salt-and-pepper whisker map in somatosensory cortex. Nat Commun 2022; 13:6611. PMID: 36329010. PMCID: PMC9633707. DOI: 10.1038/s41467-022-34261-1.
Abstract
Rodent sensory cortex contains salt-and-pepper maps of sensory features, whose structure is not fully known. Here we investigated the structure of the salt-and-pepper whisker somatotopic map among L2/3 pyramidal neurons in somatosensory cortex, in awake mice performing one-vs-all whisker discrimination. Neurons tuned for columnar (CW) and non-columnar (non-CW) whiskers were spatially intermixed, with co-tuned neurons forming local (20 µm) clusters. Whisker tuning was markedly unstable in expert mice, with 35-46% of pyramidal cells significantly shifting tuning over 5-18 days. Tuning instability was highly concentrated in non-CW tuned neurons, and thus was structured in the map. Instability of non-CW neurons was unchanged during chronic whisker paralysis and when mice discriminated individual whiskers, suggesting it is an inherent feature. Thus, L2/3 combines two distinct components: a stable columnar framework of CW-tuned cells that may promote spatial perceptual stability, plus an intermixed, non-columnar surround with highly unstable tuning.
57. Aitken K, Garrett M, Olsen S, Mihalas S. The geometry of representational drift in natural and artificial neural networks. PLoS Comput Biol 2022; 18:e1010716. PMID: 36441762. PMCID: PMC9731438. DOI: 10.1371/journal.pcbi.1010716.
Abstract
Neurons in sensory areas encode and represent stimuli. Surprisingly, recent studies have suggested that, even during persistent performance, these representations are not stable and change over the course of days and weeks. We examine stimulus representations from fluorescence recordings across hundreds of neurons in the visual cortex using in vivo two-photon calcium imaging, and we corroborate previous studies finding that such representations change as experimental trials are repeated across days. This phenomenon has been termed "representational drift". In this study we geometrically characterize the properties of representational drift in the primary visual cortex of mice in two open datasets from the Allen Institute and propose a potential mechanism behind such drift. We observe representational drift both for passively presented stimuli and for stimuli that are behaviorally relevant. Across experiments, the drift differs from in-session variance and most often occurs along directions that have the most in-class variance, leading to a significant turnover in the neurons used for a given representation. Interestingly, despite this significant change due to drift, linear classifiers trained to distinguish neuronal representations show little to no degradation in performance across days. The features we observe in the neural data are similar to properties of artificial neural networks whose representations are updated by continual learning in the presence of dropout, i.e. a random masking of nodes or weights, but not other types of noise. We therefore conclude that representational drift in biological networks may be driven by underlying dropout-like noise during continual learning, and that such a mechanism may be computationally advantageous for the brain in the same way it is for artificial neural networks, e.g. by preventing overfitting.
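The proposed dropout-plus-continual-learning mechanism can be caricatured in a few lines. This is an illustrative toy, not the study's analysis pipeline; the network size, learning rate and dropout rate are arbitrary assumptions. A redundant two-layer linear network keeps training on a fixed task with dropout: its hidden weights keep turning over, yet the input-output mapping stays accurate.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 10, 40
u = rng.normal(size=n_in)                 # fixed target mapping y = u . x
W = 0.1 * rng.normal(size=(n_hid, n_in))  # hidden weights ("tuning")
a = 0.1 * rng.normal(size=n_hid)          # linear readout
lr, p_keep = 0.005, 0.5

def task_mse(W, a, n=500):
    """Test error of the full (no-dropout) network on the fixed task."""
    x = rng.normal(size=(n, n_in))
    return float(np.mean((x @ W.T @ a - x @ u) ** 2))

mse_start = task_mse(W, a)
snapshots = {}
for t in range(6000):
    x = rng.normal(size=n_in)
    mask = (rng.random(n_hid) < p_keep) / p_keep  # dropout, inverted scaling
    h = mask * (W @ x)
    err = a @ h - u @ x
    grad_a = err * h                      # SGD through the kept units only
    grad_W = err * np.outer(a * mask, x)
    a -= lr * grad_a
    W -= lr * grad_W
    if t in (2000, 5999):
        snapshots[t] = W.copy()

# Hidden weights keep moving ("drift") while task performance is preserved.
drift = (np.linalg.norm(snapshots[5999] - snapshots[2000])
         / np.linalg.norm(snapshots[2000]))
mse_end = task_mse(W, a)
print(drift, mse_start, mse_end)
```

The redundancy (40 hidden units for a rank-one task) creates a degenerate solution space; dropout noise moves the weights within it without degrading the readout, echoing the stable-decoder-despite-drift observation in the abstract.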
Affiliation(s)
- Kyle Aitken: MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Marina Garrett: MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Shawn Olsen: MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Stefan Mihalas: MindScope Program, Allen Institute, Seattle, Washington, United States of America
58. Zaki Y, Mau W, Cincotta C, Monasterio A, Odom E, Doucette E, Grella SL, Merfeld E, Shpokayte M, Ramirez S. Hippocampus and amygdala fear memory engrams re-emerge after contextual fear relapse. Neuropsychopharmacology 2022; 47:1992-2001. PMID: 35941286. PMCID: PMC9485238. DOI: 10.1038/s41386-022-01407-0.
Abstract
The formation and extinction of fear memories represent two forms of learning that each engage the hippocampus and amygdala. How cell populations in these areas contribute to fear relapse, however, remains unclear. Here, we demonstrate that, in male mice, cells active during fear conditioning in the dentate gyrus of hippocampus exhibit decreased activity during extinction and are re-engaged after contextual fear relapse. In vivo calcium imaging reveals that relapse drives population dynamics in the basolateral amygdala to revert to a network state similar to the state present during fear conditioning. Finally, we find that optogenetic inactivation of neuronal ensembles active during fear conditioning in either the hippocampus or amygdala is sufficient to disrupt fear expression after relapse, while optogenetic stimulation of these same ensembles after extinction is insufficient to artificially mimic fear relapse. These results suggest that fear relapse triggers a partial re-emergence of the original fear memory representation, providing new insight into the neural substrates of fear relapse.
Affiliation(s)
- Yosif Zaki: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- William Mau: Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Christine Cincotta: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Amy Monasterio: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Emma Odom: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Emily Doucette: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Stephanie L Grella: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Emily Merfeld: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Monika Shpokayte: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
- Steve Ramirez: Department of Psychological and Brain Sciences, Boston University, Boston, MA, 02215, USA
59. Whittington JCR, McCaffary D, Bakermans JJW, Behrens TEJ. How to build a cognitive map. Nat Neurosci 2022; 25:1257-1272. PMID: 36163284. DOI: 10.1038/s41593-022-01153-y.
Abstract
Learning and interpreting the structure of the environment is an innate feature of biological systems, and is integral to guiding flexible behaviors for evolutionary viability. The concept of a cognitive map has emerged as one of the leading metaphors for these capacities, and unraveling the learning and neural representation of such a map has become a central focus of neuroscience. In recent years, many models have been developed to explain cellular responses in the hippocampus and other brain areas. Because it can be difficult to see how these models differ, how they relate and what each model can contribute, this Review aims to organize these models into a clear ontology. This ontology reveals parallels between existing empirical results, and implies new approaches to understand hippocampal-cortical interactions and beyond.
Affiliation(s)
- James C R Whittington: Department of Applied Physics, Stanford University, Stanford, CA, USA; Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- David McCaffary: Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Jacob J W Bakermans: Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Timothy E J Behrens: Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK; Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London, London, UK
60. Tsao A, Yousefzadeh SA, Meck WH, Moser MB, Moser EI. The neural bases for timing of durations. Nat Rev Neurosci 2022; 23:646-665. PMID: 36097049. DOI: 10.1038/s41583-022-00623-3.
Abstract
Durations are defined by a beginning and an end, and a major distinction is drawn between durations that start in the present and end in the future ('prospective timing') and durations that start in the past and end either in the past or the present ('retrospective timing'). Different psychological processes are thought to be engaged in each of these cases. The former is thought to engage a clock-like mechanism that accurately tracks the continuing passage of time, whereas the latter is thought to engage a reconstructive process that utilizes both temporal and non-temporal information from the memory of past events. We propose that, from a biological perspective, these two forms of duration 'estimation' are supported by computational processes that are both reliant on population state dynamics but are nevertheless distinct. Prospective timing is effectively carried out in a single step where the ongoing dynamics of population activity directly serve as the computation of duration, whereas retrospective timing is carried out in two steps: the initial generation of population state dynamics through the process of event segmentation and the subsequent computation of duration utilizing the memory of those dynamics.
Affiliation(s)
- Albert Tsao: Department of Biology, Stanford University, Stanford, CA, USA
- Warren H Meck: Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- May-Britt Moser: Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
- Edvard I Moser: Centre for Neural Computation, Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Norway
61. Miehl C, Onasch S, Festa D, Gjorgjieva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2022. PMID: 36068723. DOI: 10.1113/JP282750.
Abstract
In the brain, patterns of neural activity represent sensory information and store it in non-random synaptic connectivity. A prominent theoretical hypothesis states that assemblies, groups of neurons that are strongly connected to each other, are the key computational units underlying perception and memory formation. Compatible with these hypothesised assemblies, experiments have revealed groups of neurons that display synchronous activity, either spontaneously or upon stimulus presentation, and exhibit behavioural relevance. While it remains unclear how assemblies form in the brain, theoretical work has contributed greatly to the understanding of the various interacting mechanisms in this process. Here, we review the recent theoretical literature on assembly formation by categorising the involved mechanisms into four components: synaptic plasticity, symmetry breaking, competition and stability. We highlight different approaches and assumptions behind assembly formation and discuss recent ideas of assemblies as the key computational unit in the brain.
Abstract figure legend: Assemblies are groups of strongly connected neurons formed by the interaction of multiple mechanisms, with broad computational implications. Four interacting components are thought to drive assembly formation: synaptic plasticity, symmetry breaking, competition and stability.
Affiliation(s)
- Christoph Miehl: Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Sebastian Onasch: Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Dylan Festa: Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
- Julijana Gjorgjieva: Computation in Neural Circuits, Max Planck Institute for Brain Research, 60438 Frankfurt, Germany; School of Life Sciences, Technical University of Munich, 85354 Freising, Germany
62. Patel S, Johnson K, Adank D, Rosas-Vidal LE. Longitudinal monitoring of prefrontal cortical ensemble dynamics reveals new insights into stress habituation. Neurobiol Stress 2022; 20:100481. PMID: 36160815. PMCID: PMC9489534. DOI: 10.1016/j.ynstr.2022.100481.
Abstract
The prefrontal cortex is highly susceptible to the detrimental effects of stress and has been implicated in the pathogenesis of stress-related psychiatric disorders. It is not well understood, however, how stress is represented at the neuronal level in prefrontal cortical neuronal ensembles. Even less understood is how the representation of stress changes over time with repeated exposure. Here we show that the prelimbic prefrontal neuronal ensemble representation of foot shock stress exhibits rapid spatial drift within and between sessions. Despite this rapid spatial drift of the ensemble, the representation of the stressor itself stabilizes over days. Our results suggest that stress is represented by rapidly drifting ensembles and that, despite this drift, important features of the neuronal representation are stabilized, pointing to a neural correlate of stress habituation within prefrontal cortical neuron populations.
Affiliation(s)
- Sachin Patel: Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Keenan Johnson: Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Danielle Adank: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Interdisciplinary Program in Neuroscience, Vanderbilt University, Nashville, TN, USA
- Luis E Rosas-Vidal: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
63. Sadeh S, Clopath C. Contribution of behavioural variability to representational drift. eLife 2022; 11:e77907. PMID: 36040010. PMCID: PMC9481246. DOI: 10.7554/eLife.77907.
Abstract
Neuronal responses to similar stimuli change dynamically over time, raising the question of how internal representations can provide a stable substrate for neural coding. Recent work has suggested a large degree of drift in neural representations even in sensory cortices, which are believed to store stable representations of the external world. While the drift of these representations is mostly characterized in relation to external stimuli, the behavioural state of the animal (for instance, the level of arousal) is also known to strongly modulate neural activity. We therefore asked how the variability of such modulatory mechanisms can contribute to representational changes. We analysed large-scale recordings of neural activity from the Allen Brain Observatory, which were previously used to document representational drift in the mouse visual cortex. We found that, within these datasets, behavioural variability significantly contributes to representational changes. This effect was broadcast across various cortical areas in the mouse, including the primary visual cortex, higher-order visual areas, and even regions not primarily linked to vision, such as the hippocampus. Our computational modelling suggests that these results are consistent with independent modulation of neural activity by behaviour over slower timescales. Importantly, our analysis suggests that reliable but variable modulation of neural representations by behaviour can be misinterpreted as representational drift if neuronal representations are only characterized in the stimulus space and marginalized over behavioural parameters.
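The confound described in the abstract, slow behavioural modulation masquerading as drift once responses are marginalized over behaviour, is easy to reproduce in a toy model. This is not the authors' analysis of the Allen Brain Observatory data; all numbers are illustrative assumptions. Fixed stimulus tuning multiplied by a session-specific behavioural gain looks unstable in stimulus space, but stability reappears once the gain is normalized out:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_stim = 100, 8
tuning = rng.random((n_neurons, n_stim))   # fixed stimulus tuning curves
# Session-specific multiplicative "behavioural state" gain per neuron.
gain_day1 = rng.lognormal(0.0, 0.8, size=n_neurons)
gain_day2 = rng.lognormal(0.0, 0.8, size=n_neurons)

def record(gain, noise=0.05):
    """Trial-averaged responses: gain-scaled tuning plus measurement noise."""
    return gain[:, None] * tuning + noise * rng.normal(size=(n_neurons, n_stim))

day1, day2 = record(gain_day1), record(gain_day2)

def mean_popvec_corr(r1, r2):
    """Mean across stimuli of the across-day population-vector correlation."""
    return float(np.mean([np.corrcoef(r1[:, s], r2[:, s])[0, 1]
                          for s in range(n_stim)]))

raw_sim = mean_popvec_corr(day1, day2)           # looks like strong "drift"
norm1 = day1 / day1.mean(axis=1, keepdims=True)  # divide out per-neuron gain
norm2 = day2 / day2.mean(axis=1, keepdims=True)
norm_sim = mean_popvec_corr(norm1, norm2)        # stability reappears
print(raw_sim, norm_sim)
```

Here the underlying tuning never changes; the low raw similarity is produced entirely by the behavioural gains, which is the paper's point about characterizing representations in stimulus space alone.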
Affiliation(s)
- Sadra Sadeh: Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, United Kingdom
64. Takehara-Nishiuchi K. Flexibility of memory for future-oriented cognition. Curr Opin Neurobiol 2022; 76:102622. PMID: 35994840. DOI: 10.1016/j.conb.2022.102622.
Abstract
Memories of daily experiences contain incidental details unique to each experience as well as common latent patterns shared with others. Neural representations focusing on the latter aspect can be reinstated by similar new experiences even though their perceptual features do not match the original experiences perfectly. Such flexible memory use allows for faster learning and better decision-making in novel situations. Here, I review evidence from rodent and primate electrophysiological studies to discuss how memory flexibility is implemented in the spiking activity of neuronal ensembles. These findings uncovered innate and learned coding properties and their potential refinement during sleep that support flexible integration and application of memories for better future adaptation.
Affiliation(s)
- Kaori Takehara-Nishiuchi: Department of Psychology, University of Toronto, Toronto, M5S 3G3, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto, M5S 3G3, Canada; Neuroscience Program, University of Toronto, Toronto, M5S 3G3, Canada
65. Driscoll LN, Duncker L, Harvey CD. Representational drift: Emerging theories for continual learning and experimental future directions. Curr Opin Neurobiol 2022; 76:102609. PMID: 35939861. DOI: 10.1016/j.conb.2022.102609.
Abstract
Recent work has revealed that the neural activity patterns correlated with sensation, cognition, and action often are not stable and instead undergo large-scale changes over days and weeks, a phenomenon called representational drift. Here, we highlight recent observations of drift, how drift is unlikely to be explained by experimental confounds, and how the brain can likely compensate for drift to allow stable computation. We propose that drift might have important roles in neural computation to allow continual learning, both for separating and relating memories that occur at distinct times. Finally, we present an outlook on future experimental directions that are needed to further characterize drift and to test emerging theories for drift's role in computation.
Affiliation(s)
- Laura N Driscoll: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Lea Duncker: Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
66. Hennig MH. The sloppy relationship between neural circuit structure and function. J Physiol 2022. PMID: 35876720. DOI: 10.1113/JP282757.
Abstract
Investigating and describing the relationships between the structure of a circuit and its function has a long tradition in neuroscience. Since neural circuits acquire their structure through sophisticated developmental programmes, and memories and experiences are maintained through synaptic modification, it is to be expected that structure is closely linked to function. Recent findings challenge this hypothesis from three different angles: function does not strongly constrain circuit parameters, many parameters in neural circuits are irrelevant and contribute little to function, and circuit parameters are unstable and subject to constant random drift. At the same time, however, recent work has also shown that dynamics in neural circuit activity related to function are robust over time and across individuals. Here this apparent contradiction is addressed by considering the properties of neural manifolds that restrict circuit activity to functionally relevant subspaces, and it is suggested that degenerate, anisotropic and unstable parameter spaces are closely related to the structure and implementation of functionally relevant neural manifolds. Abstract figure legend: What are the relationships between noisy and highly variable microscopic neural circuit variables on the one hand and the generation of behaviour on the other? Here it is proposed that an intermediate level of description exists where this relationship can be understood in terms of low-dimensional dynamics. Recordings of neural activity during unconstrained behaviour and the development of new machine learning methods will help to uncover these links.
Affiliation(s)
- Matthias H Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh
67
de Wit MM, Matheson HE. Context-sensitive computational mechanistic explanation in cognitive neuroscience. Front Psychol 2022; 13:903960. [PMID: 35936251 PMCID: PMC9355036 DOI: 10.3389/fpsyg.2022.903960] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Accepted: 06/27/2022] [Indexed: 11/17/2022] Open
Abstract
Mainstream cognitive neuroscience aims to build mechanistic explanations of behavior by mapping abilities described at the organismal level via the subpersonal level of computation onto specific brain networks. We provide an integrative review of these commitments and their mismatch with empirical research findings. Context-dependent neural tuning, neural reuse, degeneracy, plasticity, functional recovery, and the neural correlates of enculturated skills each show that there is a lack of stable mappings between organismal, computational, and neural levels of analysis. We furthermore highlight recent research suggesting that task context at the organismal level determines the dynamic parcellation of functional components at the neural level. Such instability prevents the establishment of specific computational descriptions of neural function, which remains a central goal of many brain mappers - including those who are sympathetic to the notion of many-to-many mappings between organismal and neural levels. This between-level instability presents a deep epistemological challenge and requires a reorientation of methodological and theoretical commitments within cognitive neuroscience. We demonstrate the need for change to brain mapping efforts in the face of instability if cognitive neuroscience is to maintain its central goal of constructing computational mechanistic explanations of behavior; we show that such explanations must be contextual at all levels.
Affiliation(s)
- Matthieu M. de Wit
- Department of Neuroscience, Muhlenberg College, Allentown, PA, United States
- Heath E. Matheson
- Department of Psychology, University of Northern British Columbia, Prince George, BC, Canada
68
Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. [PMID: 35868858 PMCID: PMC9347310 DOI: 10.1523/eneuro.0031-22.2022] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/29/2022] Open
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure, such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes of responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those of PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than pure tones. These findings support a role of the auditory cortex in representing CxS identity across time. Significance statement: The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
Affiliation(s)
- Harini Suri
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
69
Animal-to-Animal Variability in Partial Hippocampal Remapping in Repeated Environments. J Neurosci 2022; 42:5268-5280. [PMID: 35641190 PMCID: PMC9236289 DOI: 10.1523/jneurosci.3221-20.2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 04/11/2022] [Accepted: 05/10/2022] [Indexed: 12/31/2022] Open
Abstract
Hippocampal place cells form a map of the environment of an animal. Changes in the hippocampal map can be brought about in a number of ways, including changes to the environment, task, internal state of the subject, and the passage of time. These changes in the hippocampal map have been called remapping. In this study, we examine remapping during repeated exposure to the same environment. Different animals can have different remapping responses to the same changes. This variability across animals in remapping behavior is not well understood. In this work, we analyzed electrophysiological recordings from the CA3 region of the hippocampus performed by Alme et al. (2014), in which five male rats were exposed to 11 different environments, including a variety of repetitions of those environments. To compare the hippocampal maps between two experiences, we computed average rate map correlation coefficients. We found changes in the hippocampal maps between different sessions in the same environment. These changes consisted of partial remapping, a form of remapping in which some place cells maintain their place fields, whereas other place cells remap their place fields. Each animal exhibited partial remapping differently. We discovered that the heterogeneity in hippocampal representational changes across animals is structured; individual animals had consistently different levels of partial remapping across a range of independent comparisons. Our findings highlight that partial hippocampal remapping between repeated environments depends on animal-specific factors. SIGNIFICANCE STATEMENT: Context identification is a difficult problem. Animals are not provided with objective context identity labels, so they must infer which experiences come from which contexts. Different animals may have different strategies for performing this inference. We find that different animals have stereotypically different extents of partial hippocampal remapping, a neural correlate of subjective assessment of context identity.
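The session-to-session comparison described in this abstract can be illustrated with a toy sketch (not the authors' code; the map shapes, the NaN convention for unvisited bins, and the averaging across cells are illustrative assumptions):

```python
import numpy as np

def rate_map_correlation(map_a, map_b):
    """Pearson correlation between two spatial firing-rate maps.

    Bins unvisited in either session (NaN) are excluded before
    correlating, as is common for place-field comparisons.
    """
    a, b = map_a.ravel(), map_b.ravel()
    valid = ~(np.isnan(a) | np.isnan(b))
    return np.corrcoef(a[valid], b[valid])[0, 1]

def mean_remapping_score(maps_session1, maps_session2):
    """Average rate-map correlation across simultaneously recorded cells.

    Values near 1 indicate a preserved map; intermediate values are
    consistent with partial remapping (some cells stable, some not).
    """
    scores = [rate_map_correlation(m1, m2)
              for m1, m2 in zip(maps_session1, maps_session2)]
    return float(np.mean(scores))

# Toy example: one stable cell and one fully remapped cell.
rng = np.random.default_rng(0)
stable = rng.random((20, 20))
remapped = rng.random((20, 20))
print(mean_remapping_score([stable, stable], [stable, remapped]))
```

With one perfectly stable and one remapped cell, the average sits near 0.5, the kind of intermediate value the study interprets as partial remapping.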
70
Biswas T, Fitzgerald JE. Geometric framework to predict structure from function in neural networks. PHYSICAL REVIEW RESEARCH 2022; 4:023255. [PMID: 37635906 PMCID: PMC10456994 DOI: 10.1103/physrevresearch.4.023255] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 08/29/2023]
Abstract
Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
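As a rough illustration of the network class analyzed in this paper, a steady-state response of a recurrent threshold-linear network can be found by fixed-point iteration. This is a generic numerical sketch, not the authors' analytical construction; the weight matrices and input below are hypothetical:

```python
import numpy as np

def steady_state(W, F, x, n_iter=500):
    """Iterate r <- max(0, W r + F x) for a threshold-linear recurrent
    network. Converges when the recurrence W is sufficiently weak."""
    r = np.zeros(W.shape[0])
    for _ in range(n_iter):
        r = np.maximum(0.0, W @ r + F @ x)
    return r

# Hypothetical 2-neuron network with weak recurrence (spectral radius < 1).
W = np.array([[0.0, 0.2],
              [0.1, 0.0]])   # recurrent weights
F = np.eye(2)                # feedforward weights
x = np.array([1.0, 0.5])     # network input

r = steady_state(W, F, x)
# The fixed point satisfies r = max(0, W r + F x) to numerical precision.
print(r)
```

The paper's question can then be phrased as the inverse problem: given specified responses r for known inputs x, characterize the set of (W, F) consistent with this fixed-point equation.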
Affiliation(s)
- Tirthabir Biswas
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
- Department of Physics, Loyola University, New Orleans, Louisiana 70118, USA
- James E. Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
71
Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? BIOLOGICAL CYBERNETICS 2022; 116:253-266. [PMID: 34993613 DOI: 10.1007/s00422-021-00916-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
Affiliation(s)
- Paul Masset
- Center for Brain Science, Harvard University, Cambridge, MA, USA.
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA.
- Shanshan Qin
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Physics, Harvard University, Cambridge, MA, USA
72
Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. [PMID: 35544518 PMCID: PMC9094569 DOI: 10.1371/journal.pone.0265808] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 03/08/2022] [Indexed: 11/18/2022] Open
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis on circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit-motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. 
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems.
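A minimal sketch of the kind of dopamine-inspired, reward-modulated update this abstract describes: synapses tagged by recent pre/post coactivity (an eligibility trace) are adjusted in the direction of a later reward signal. The eligibility-trace formulation and parameter values here are generic assumptions, not the paper's specific rule:

```python
import numpy as np

def reward_modulated_update(w, eligibility, reward, lr=0.01):
    """Synapses with nonzero eligibility are strengthened when reward
    arrives and weakened on punishment (negative reward)."""
    return w + lr * reward * eligibility

rng = np.random.default_rng(1)
w = rng.random(5)                            # weights from visual to motor units
elig = np.array([1.0, 0.5, 0.0, 0.0, 0.2])   # recently coactive synapses

w_rewarded = reward_modulated_update(w, elig, reward=+1.0)  # racket hit the ball
w_punished = reward_modulated_update(w, elig, reward=-1.0)  # racket missed
# Only synapses with nonzero eligibility change, in the direction of the reward.
```

The temporal credit-assignment problem in the game then amounts to maintaining the eligibility trace between an action and the delayed reward or punishment.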
Affiliation(s)
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Simon Caby
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Salvador Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Daniel Hasegan
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Matt Deible
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Sara Grunblatt
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- George L. Chadderdon
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Cliff C. Kerr
- Dept Physics, University of Sydney, Sydney, Australia
- Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
- William W. Lytton
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Dept Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
- Hananel Hazan
- Dept of Biology, Tufts University, Medford, Massachusetts, United States of America
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
73
Keinath AT, Mosser CA, Brandon MP. The representation of context in mouse hippocampus is preserved despite neural drift. Nat Commun 2022; 13:2415. [PMID: 35504915 PMCID: PMC9065029 DOI: 10.1038/s41467-022-30198-7] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Accepted: 04/19/2022] [Indexed: 12/18/2022] Open
Abstract
The hippocampus is thought to mediate episodic memory through the instantiation and reinstatement of context-specific cognitive maps. However, recent longitudinal experiments have challenged this view, reporting that most hippocampal cells change their tuning properties over days even in the same environment. Often referred to as neural or representational drift, these dynamics raise questions about the capacity and content of the hippocampal code. One such question is whether and how these long-term dynamics impact the hippocampal code for context. To address this, we image large CA1 populations over more than a month of daily experience as freely behaving mice participate in an extended geometric morph paradigm. We find that long-timescale changes in population activity occur orthogonally to the representation of context in network space, allowing for consistent readout of contextual information across weeks. This population-level structure is supported by heterogeneous patterns of activity at the level of individual cells, where we observe evidence of a positive relationship between interpretable contextual coding and long-term stability. Together, these results demonstrate that long-timescale changes to the CA1 spatial code preserve the relative structure of contextual representation.
Affiliation(s)
- Alexandra T Keinath
- Department of Psychiatry, Douglas Hospital Research Centre, McGill University, 6875 Boulevard LaSalle, Verdun, QC, H4H 1R3, Canada.
- Coralie-Anne Mosser
- Department of Psychiatry, Douglas Hospital Research Centre, McGill University, 6875 Boulevard LaSalle, Verdun, QC, H4H 1R3, Canada
- Mark P Brandon
- Department of Psychiatry, Douglas Hospital Research Centre, McGill University, 6875 Boulevard LaSalle, Verdun, QC, H4H 1R3, Canada.
74
Johnson C, Kretsge LN, Yen WW, Sriram B, O'Connor A, Liu RS, Jimenez JC, Phadke RA, Wingfield KK, Yeung C, Jinadasa TJ, Nguyen TPH, Cho ES, Fuchs E, Spevack ED, Velasco BE, Hausmann FS, Fournier LA, Brack A, Melzer S, Cruz-Martín A. Highly unstable heterogeneous representations in VIP interneurons of the anterior cingulate cortex. Mol Psychiatry 2022; 27:2602-2618. [PMID: 35246635 PMCID: PMC11128891 DOI: 10.1038/s41380-022-01485-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 02/07/2022] [Accepted: 02/09/2022] [Indexed: 11/09/2022]
Abstract
A hallmark of the anterior cingulate cortex (ACC) is its functional heterogeneity. Functional and imaging studies revealed its importance in the encoding of anxiety-related and social stimuli, but it is unknown how microcircuits within the ACC encode these distinct stimuli. One type of inhibitory interneuron, which is positive for vasoactive intestinal peptide (VIP), is known to modulate the activity of pyramidal cells in local microcircuits, but it is unknown whether VIP cells in the ACC (VIPACC) are engaged by particular contexts or stimuli. Additionally, recent studies demonstrated that neuronal representations in other cortical areas can change over time at the level of the individual neuron. However, it is not known whether stimulus representations in the ACC remain stable over time. Using in vivo Ca2+ imaging and miniscopes in freely behaving mice to monitor neuronal activity with cellular resolution, we identified individual VIPACC that preferentially activated to distinct stimuli across diverse tasks. Importantly, although the population-level activity of the VIPACC remained stable across trials, the stimulus-selectivity of individual interneurons changed rapidly. These findings demonstrate marked functional heterogeneity and instability within interneuron populations in the ACC. This work contributes to our understanding of how the cortex encodes information across diverse contexts and provides insight into the complexity of neural processes involved in anxiety and social behavior.
Affiliation(s)
- Connor Johnson
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Lisa N Kretsge
- The Graduate Program for Neuroscience, Boston University, Boston, MA, USA
- Neurophotonics Center, Boston University, Boston, MA, USA
- William W Yen
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Alexandra O'Connor
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Ruichen Sky Liu
- MS in Statistical Practice Program, Boston University, Boston, MA, USA
- Jessica C Jimenez
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Rhushikesh A Phadke
- Molecular Biology, Cell Biology and Biochemistry Program, Boston University, Boston, MA, USA
- Kelly K Wingfield
- Neurophotonics Center, Boston University, Boston, MA, USA
- Department of Pharmacology and Experimental Therapeutics, Boston University, Boston, MA, USA
- Charlotte Yeung
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Tushare J Jinadasa
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Thanh P H Nguyen
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Eun Seon Cho
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Erelle Fuchs
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Eli D Spevack
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Berta Escude Velasco
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Frances S Hausmann
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Luke A Fournier
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Alison Brack
- Molecular Biology, Cell Biology and Biochemistry Program, Boston University, Boston, MA, USA
- Sarah Melzer
- Department of Neurobiology, Howard Hughes Medical Institute, Harvard Medical School, Boston, MA, USA
- Alberto Cruz-Martín
- Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA.
- Neurophotonics Center, Boston University, Boston, MA, USA.
- Molecular Biology, Cell Biology and Biochemistry Program, Boston University, Boston, MA, USA.
- Center for Systems Neuroscience, Boston University, Boston, MA, USA.
- The Center for Network Systems Biology, Boston University, Boston, MA, USA.
75
Takehara-Nishiuchi K. Neuronal Code for Episodic Time in the Lateral Entorhinal Cortex. Front Integr Neurosci 2022; 16:899412. [PMID: 35573446 PMCID: PMC9099416 DOI: 10.3389/fnint.2022.899412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 04/11/2022] [Indexed: 11/13/2022] Open
Affiliation(s)
- Kaori Takehara-Nishiuchi
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada
- Neuroscience Program, University of Toronto, Toronto, ON, Canada
- Correspondence: Kaori Takehara-Nishiuchi
76
Pinotsis DA, Miller EK. Beyond dimension reduction: Stable electric fields emerge from and allow representational drift. Neuroimage 2022; 253:119058. [PMID: 35272022 DOI: 10.1016/j.neuroimage.2022.119058] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 01/18/2023] Open
Abstract
It is known that the exact neurons maintaining a given memory (the neural ensemble) change from trial to trial. This raises the question of how the brain achieves stability in the face of this representational drift. Here, we demonstrate that this stability emerges at the level of the electric fields that arise from neural activity. We show that electric fields carry information about working memory content. The electric fields, in turn, can act as "guard rails" that funnel higher dimensional variable neural activity along stable lower dimensional routes. We obtained the latent space associated with each memory. We then confirmed the stability of the electric field by mapping the latent space to different cortical patches (that comprise a neural ensemble) and reconstructing information flow between patches. Stable electric fields can allow latent states to be transferred between brain areas, in accord with modern engram theory.
Affiliation(s)
- Dimitris A Pinotsis
- Centre for Mathematical Neuroscience and Psychology and Department of Psychology, City, University of London, London EC1V 0HB, United Kingdom; The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
77
Rule ME, O'Leary T. Self-healing codes: How stable neural populations can track continually reconfiguring neural representations. Proc Natl Acad Sci U S A 2022; 119:e2106692119. [PMID: 35145024 PMCID: PMC8851551 DOI: 10.1073/pnas.2106692119] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 12/29/2021] [Indexed: 12/19/2022] Open
Abstract
As an adaptive system, the brain must retain a faithful representation of the world while continuously integrating new information. Recent experiments have measured population activity in cortical and hippocampal circuits over many days and found that patterns of neural activity associated with fixed behavioral variables and percepts change dramatically over time. Such "representational drift" raises the question of how malleable population codes can interact coherently with stable long-term representations that are found in other circuits and with relatively rigid topographic mappings of peripheral sensory and motor signals. We explore how known plasticity mechanisms can allow single neurons to reliably read out an evolving population code without external error feedback. We find that interactions between Hebbian learning and single-cell homeostasis can exploit redundancy in a distributed population code to compensate for gradual changes in tuning. Recurrent feedback of partially stabilized readouts could allow a pool of readout cells to further correct inconsistencies introduced by representational drift. This shows how relatively simple, known mechanisms can stabilize neural tuning in the short term and provides a plausible explanation for how plastic neural codes remain integrated with consolidated, long-term representations.
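The proposed interaction between Hebbian learning and single-cell homeostasis can be caricatured in a toy simulation: a population encodes one latent variable along a tuning direction that drifts randomly, while a readout updated by a Hebbian term with Oja-style homeostatic decay tracks it without any error feedback. Every detail below (rule, dimensions, rates) is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
E = rng.standard_normal(n)
E /= np.linalg.norm(E)        # encoding (tuning) direction of the population
E0 = E.copy()                 # remember the initial code
d = E.copy()                  # readout weights start aligned with the code

lr, drift = 0.1, 0.005
for _ in range(5000):
    E += drift * rng.standard_normal(n)   # slow random drift of the code
    E /= np.linalg.norm(E)                # redundancy: only the direction matters
    s = rng.standard_normal()             # latent variable to be read out
    r = E * s                             # population activity
    y = d @ r                             # readout estimate of s
    d += lr * y * (r - y * d)             # Hebbian growth + homeostatic decay (Oja-like)

alignment = abs(d @ E) / np.linalg.norm(d)
print(abs(E @ E0), alignment)  # code has drifted far, readout stays aligned
```

The first printed number shows the code has largely decorrelated from its starting point, while the second shows the readout remains nearly aligned with the current code, the qualitative point of the paper's single-neuron analysis.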
Affiliation(s)
- Michael E Rule
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Timothy O'Leary
- Engineering Department, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
78
Learning-induced biases in the ongoing dynamics of sensory representations predict stimulus generalization. Cell Rep 2022; 38:110340. [PMID: 35139386 DOI: 10.1016/j.celrep.2022.110340] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 11/16/2021] [Accepted: 01/14/2022] [Indexed: 11/22/2022] Open
Abstract
Sensory stimuli have long been thought to be represented in the brain as activity patterns of specific neuronal assemblies. However, we still know relatively little about the long-term dynamics of sensory representations. Using chronic in vivo calcium imaging in the mouse auditory cortex, we find that sensory representations undergo continuous recombination, even under behaviorally stable conditions. Auditory cued fear conditioning introduces a bias into these ongoing dynamics, resulting in a long-lasting increase in the number of stimuli activating the same subset of neurons. This plasticity is specific for stimuli sharing representational similarity to the conditioned sound prior to conditioning and predicts behaviorally observed stimulus generalization. Our findings demonstrate that learning-induced plasticity leading to a representational linkage between the conditioned stimulus and non-conditioned stimuli weaves into ongoing dynamics of the brain rather than acting on an otherwise static substrate.
79
Davis GP, Katz GE, Gentili RJ, Reggia JA. NeuroLISP: High-level symbolic programming with attractor neural networks. Neural Netw 2021; 146:200-219. [PMID: 34894482 DOI: 10.1016/j.neunet.2021.11.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 11/04/2021] [Accepted: 11/09/2021] [Indexed: 10/19/2022]
Abstract
Despite significant improvements in contemporary machine learning, symbolic methods currently outperform artificial neural networks on tasks that involve compositional reasoning, such as goal-directed planning and logical inference. This illustrates a computational explanatory gap between cognitive and neurocomputational algorithms that obscures the neurobiological mechanisms underlying cognition and impedes progress toward human-level artificial intelligence. Because of the strong relationship between cognition and working memory control, we suggest that the cognitive abilities of contemporary neural networks are limited by biologically-implausible working memory systems that rely on persistent activity maintenance and/or temporal nonlocality. Here we present NeuroLISP, an attractor neural network that can represent and execute programs written in the LISP programming language. Unlike previous approaches to high-level programming with neural networks, NeuroLISP features a temporally-local working memory based on itinerant attractor dynamics, top-down gating, and fast associative learning, and implements several high-level programming constructs such as compositional data structures, scoped variable binding, and the ability to manipulate and execute programmatic expressions in working memory (i.e., programs can be treated as data). Our computational experiments demonstrate the correctness of the NeuroLISP interpreter, and show that it can learn non-trivial programs that manipulate complex derived data structures (multiway trees), perform compositional string manipulation operations (PCFG SET task), and implement high-level symbolic AI algorithms (first-order unification). We conclude that NeuroLISP is an effective neurocognitive controller that can replace the symbolic components of hybrid models, and serves as a proof of concept for further development of high-level symbolic programming in neural networks.
Affiliation(s)
- Gregory P Davis
- Department of Computer Science, University of Maryland, College Park, MD, USA.
- Garrett E Katz
- Department of Elec. Engr. and Comp. Sci., Syracuse University, Syracuse, NY, USA.
- Rodolphe J Gentili
- Department of Kinesiology, University of Maryland, College Park, MD, USA.
- James A Reggia
- Department of Computer Science, University of Maryland, College Park, MD, USA.
80
Cao L, Varga V, Chen ZS. Uncovering spatial representations from spatiotemporal patterns of rodent hippocampal field potentials. CELL REPORTS METHODS 2021; 1:100101. [PMID: 34888543 PMCID: PMC8654278 DOI: 10.1016/j.crmeth.2021.100101] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 07/27/2021] [Accepted: 09/28/2021] [Indexed: 12/23/2022]
Abstract
Spatiotemporal patterns of large-scale spiking and field potentials of the rodent hippocampus encode spatial representations during maze runs, immobility, and sleep. Here, we show that multisite hippocampal field potential amplitude at ultra-high-frequency band (FPAuhf), a generalized form of multiunit activity, provides not only a fast and reliable reconstruction of the rodent's position when awake, but also a readout of replay content during sharp-wave ripples. This FPAuhf feature may serve as a robust real-time decoding strategy from large-scale recordings in closed-loop experiments. Furthermore, we develop unsupervised learning approaches to extract low-dimensional spatiotemporal FPAuhf features during run and ripple periods and to infer latent dynamical structures from lower-rank FPAuhf features. We also develop an optical flow-based method to identify propagating spatiotemporal LFP patterns from multisite array recordings, which can be used as a decoding application. Finally, we develop a prospective decoding strategy to predict an animal's future decision in goal-directed navigation.
Affiliation(s)
- Liang Cao
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Physics, East China Normal University, Shanghai 200241, China
- Viktor Varga
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Institute of Experimental Medicine, 43 Szigony Street, 1083 Budapest, Hungary
- Zhe S. Chen
- The Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Psychiatry, Department of Neuroscience and Physiology, New York University Grossman School of Medicine, New York, NY 10016, USA
81
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation. Proc Natl Acad Sci U S A 2021; 118:2023832118. [PMID: 34772802 DOI: 10.1073/pnas.2023832118] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/11/2021] [Indexed: 11/18/2022] Open
Abstract
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage these assemblies are assumed to consist of the same neurons over time. Here we propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity and spontaneous synaptic turnover induce neuron exchange. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness as individual parts may constantly change.
82
Bauer J, Rose T. Mouse vision: Variability and stability across the visual processing hierarchy. Curr Biol 2021; 31:R1129-R1132. [PMID: 34637715 DOI: 10.1016/j.cub.2021.08.071] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The response of individual neurons to stable sensory input or behavioral output can change over time. A new study provides evidence from the mouse visual system that such drift does not follow the hierarchy of information flow across the brain.
Affiliation(s)
- Joel Bauer
- Max Planck Institute of Neurobiology, Am Klopferspitz 18, 82152 Martinsried, Germany
- Tobias Rose
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Venusberg-Campus 1, 53127 Bonn, Germany.
83
Deitch D, Rubin A, Ziv Y. Representational drift in the mouse visual cortex. Curr Biol 2021; 31:4327-4339.e6. [PMID: 34433077 DOI: 10.1016/j.cub.2021.07.062] [Citation(s) in RCA: 67] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Revised: 07/02/2021] [Accepted: 07/26/2021] [Indexed: 10/20/2022]
Abstract
Recent studies have shown that neuronal representations gradually change over time despite no changes in the stimulus, environment, or behavior. However, such representational drift has been assumed to be a property of high-level brain structures, whereas earlier circuits, such as sensory cortices, have been assumed to stably encode information over time. Here, we analyzed large-scale optical and electrophysiological recordings from six visual cortical areas in behaving mice that were repeatedly presented with the same natural movies. Contrary to the prevailing notion, we found representational drift over timescales spanning minutes to days across multiple visual areas, cortical layers, and cell types. Notably, neural-code stability did not reflect the hierarchy of information flow across areas. Although individual neurons showed time-dependent changes in their coding properties, the structure of the relationships between population activity patterns remained stable and stereotypic. Such population-level organization may underlie stable visual perception despite continuous changes in neuronal responses.
Affiliation(s)
- Daniel Deitch
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Alon Rubin
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel
- Yaniv Ziv
- Department of Neurobiology, Weizmann Institute of Science, Rehovot 76100, Israel.
84
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
85
Xia J, Marks TD, Goard MJ, Wessel R. Stable representation of a naturalistic movie emerges from episodic activity with gain variability. Nat Commun 2021; 12:5170. [PMID: 34453045 PMCID: PMC8397750 DOI: 10.1038/s41467-021-25437-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 08/11/2021] [Indexed: 01/08/2023] Open
Abstract
Visual cortical responses are known to be highly variable across trials within an experimental session. However, the long-term stability of visual cortical responses is poorly understood. Here using chronic imaging of V1 in mice we show that neural responses to repeated natural movie clips are unstable across weeks. Individual neuronal responses consist of sparse episodic activity that is stable in time but unstable in gain across weeks. Further, we find that the individual episode, rather than the neuron, serves as the basic unit of week-to-week fluctuation. To investigate how population activity encodes the stimulus, we extract a stable one-dimensional representation of the time in the natural movie, using an unsupervised method. Most week-to-week fluctuation is perpendicular to the stimulus-encoding direction, leaving the stimulus representation largely unaffected. We propose that precise episodic activity with coordinated gain changes is key to maintaining a stable stimulus representation in V1.
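The geometric claim in the final sentences, that fluctuations orthogonal to the stimulus-encoding direction leave the representation largely unaffected, can be sketched with a toy population (all dimensions and noise scales below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, weeks = 50, 8   # neurons, imaging sessions (illustrative)

coding = rng.normal(size=N)
coding /= np.linalg.norm(coding)   # assumed stimulus-encoding direction
base = rng.normal(size=N)          # mean population response

responses = []
for _ in range(weeks):
    # Week-to-week fluctuation confined (mostly) to directions
    # orthogonal to the coding axis, plus a little isotropic noise
    noise = rng.normal(size=N)
    noise -= (noise @ coding) * coding          # remove component along coding axis
    responses.append(base + 2.0 * noise + 0.1 * rng.normal(size=N))

R = np.stack(responses)                         # weeks x neurons
fluct = R - R.mean(axis=0)
along = np.var(fluct @ coding)                  # fluctuation along the coding axis
total = fluct.var(axis=0).sum()                 # total fluctuation
frac_along = along / total
print(frac_along)
```

Because almost all of the variance lives orthogonal to `coding`, a readout projecting onto the coding axis sees a nearly constant signal despite large week-to-week changes in single-neuron gains.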
Affiliation(s)
- Ji Xia
- Department of Physics, Washington University in St. Louis, St. Louis, MO, USA.
- Tyler D Marks
- Neuroscience Research Institute, University of California, Santa Barbara, CA, USA
- Michael J Goard
- Neuroscience Research Institute, University of California, Santa Barbara, CA, USA
- Department of Molecular, Cellular, and Developmental Biology, University of California, Santa Barbara, CA, USA
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ralf Wessel
- Department of Physics, Washington University in St. Louis, St. Louis, MO, USA
86
Pérez-Ortega J, Alejandre-García T, Yuste R. Long-term stability of cortical ensembles. eLife 2021; 10:e64449. [PMID: 34328414 PMCID: PMC8376248 DOI: 10.7554/elife.64449] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Accepted: 07/29/2021] [Indexed: 12/25/2022] Open
Abstract
Neuronal ensembles, coactive groups of neurons found in spontaneous and evoked cortical activity, are causally related to memories and perception, but it is still unknown how stable or flexible they are over time. We used two-photon multiplane calcium imaging to track over weeks the activity of the same pyramidal neurons in layer 2/3 of the visual cortex from awake mice and recorded their spontaneous and visually evoked responses. Less than half of the neurons remained active across any two imaging sessions. These stable neurons formed ensembles that lasted weeks, but some ensembles were also transient and appeared only in one single session. Stable ensembles preserved most of their neurons for up to 46 days, our longest imaged period, and these 'core' cells had stronger functional connectivity. Our results demonstrate that neuronal ensembles can last for weeks and could, in principle, serve as a substrate for long-lasting representation of perceptual states or memories.
Affiliation(s)
- Jesús Pérez-Ortega
- Department of Biological Sciences, Columbia University, New York, United States
- Rafael Yuste
- Department of Biological Sciences, Columbia University, New York, United States
87
Computational roles of intrinsic synaptic dynamics. Curr Opin Neurobiol 2021; 70:34-42. [PMID: 34303124 DOI: 10.1016/j.conb.2021.06.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 12/26/2022]
Abstract
Conventional theories assume that long-term information storage in the brain is implemented by modifying synaptic efficacy. Recent experimental findings challenge this view by demonstrating that dendritic spine sizes, or their corresponding synaptic weights, are highly volatile even in the absence of neural activity. Here, we review previous computational works on the roles of these intrinsic synaptic dynamics. We first present the possibility for neuronal networks to sustain stable performance in their presence, and we then hypothesize that intrinsic dynamics could be more than mere noise to withstand, but they may improve information processing in the brain.
88
Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521 DOI: 10.1016/j.conb.2021.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 05/06/2021] [Accepted: 05/25/2021] [Indexed: 10/21/2022]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA.
89
Schoonover CE, Ohashi SN, Axel R, Fink AJP. Representational drift in primary olfactory cortex. Nature 2021; 594:541-546. [PMID: 34108681 DOI: 10.1038/s41586-021-03628-7] [Citation(s) in RCA: 103] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Accepted: 05/11/2021] [Indexed: 02/05/2023]
Abstract
Perceptual constancy requires the brain to maintain a stable representation of sensory input. In the olfactory system, activity in primary olfactory cortex (piriform cortex) is thought to determine odour identity1-5. Here we present the results of electrophysiological recordings of single units maintained over weeks to examine the stability of odour-evoked responses in mouse piriform cortex. Although activity in piriform cortex could be used to discriminate between odorants at any moment in time, odour-evoked responses drifted over periods of days to weeks. The performance of a linear classifier trained on the first recording day approached chance levels after 32 days. Fear conditioning did not stabilize odour-evoked responses. Daily exposure to the same odorant slowed the rate of drift, but when exposure was halted the rate increased again. This demonstration of continuous drift poses the question of the role of piriform cortex in odour perception. This instability might reflect the unstructured connectivity of piriform cortex6-12, and may be a property of other unstructured cortices.
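The decoding result, that odours stay discriminable at any moment while a classifier trained on day 0 degrades over time, can be reproduced in a toy model of drifting odour responses (population size, drift rate, and noise levels are illustrative assumptions; the paper's analysis used real piriform recordings):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_odors, n_trials, n_days = 20, 4, 30, 8
drift_sd, trial_sd = 1.0, 0.5   # assumed drift and trial-noise amplitudes

templates = rng.normal(size=(n_odors, n_cells))   # day-0 odour responses

def record(tpl):
    """Simulate noisy single trials around the current templates."""
    return tpl[:, None, :] + trial_sd * rng.normal(size=(n_odors, n_trials, n_cells))

def nearest_centroid_acc(X, centroids):
    """Fraction of trials assigned to the correct odour centroid."""
    d = ((X[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
    return float((d.argmin(-1) == np.arange(n_odors)[:, None]).mean())

day0_centroids = record(templates).mean(axis=1)   # decoder fit on day 0 only

acc = []
for _ in range(n_days):
    templates += drift_sd * rng.normal(size=templates.shape)  # random-walk drift
    acc.append(nearest_centroid_acc(record(templates), day0_centroids))

# Same-day decoding stays easy: fit fresh centroids on half the final-day trials
X = record(templates)
same_day = nearest_centroid_acc(X[:, n_trials // 2:], X[:, :n_trials // 2].mean(axis=1))
print(acc, same_day)
```

With these toy parameters the day-0 decoder degrades substantially over "days" while same-day decoding stays near ceiling, mirroring the paper's contrast between momentary discriminability and long-term drift.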
Affiliation(s)
- Carl E Schoonover
- Howard Hughes Medical Institute, Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
- Sarah N Ohashi
- Howard Hughes Medical Institute, Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA
- Immunobiology Graduate Program, Yale School of Medicine, New Haven, CT, USA
- Richard Axel
- Howard Hughes Medical Institute, Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
- Andrew J P Fink
- Howard Hughes Medical Institute, Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY, USA.
90
Goaillard JM, Marder E. Ion Channel Degeneracy, Variability, and Covariation in Neuron and Circuit Resilience. Annu Rev Neurosci 2021; 44:335-357. [PMID: 33770451 DOI: 10.1146/annurev-neuro-092920-121538] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The large number of ion channels found in all nervous systems poses fundamental questions concerning how the characteristic intrinsic properties of single neurons are determined by the specific subsets of channels they express. All neurons display many different ion channels with overlapping voltage- and time-dependent properties. We speculate that these overlapping properties promote resilience in neuronal function. Individual neurons of the same cell type show variability in ion channel conductance densities even though they can generate reliable and similar behavior. This complicates a simple assignment of function to any conductance and is associated with variable responses of neurons of the same cell type to perturbations, deletions, and pharmacological manipulation. Ion channel genes often show strong positively correlated expression, which may result from the molecular and developmental rules that determine which ion channels are expressed in a given cell type.
Affiliation(s)
- Eve Marder
- Volen Center and Department of Biology, Brandeis University, Waltham, Massachusetts 02454, USA
91
Feulner B, Clopath C. Neural manifold under plasticity in a goal driven learning behaviour. PLoS Comput Biol 2021; 17:e1008621. [PMID: 33544700 PMCID: PMC7864452 DOI: 10.1371/journal.pcbi.1008621] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 12/08/2020] [Indexed: 11/19/2022] Open
Abstract
Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
Affiliation(s)
- Barbara Feulner
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, United Kingdom
92
Sweis BM, Mau W, Rabinowitz S, Cai DJ. Dynamic and heterogeneous neural ensembles contribute to a memory engram. Curr Opin Neurobiol 2020; 67:199-206. [PMID: 33388602 DOI: 10.1016/j.conb.2020.11.017] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 11/23/2020] [Accepted: 11/26/2020] [Indexed: 01/08/2023]
Abstract
In the century since the notion of the 'engram' was first introduced to describe the physical manifestation of memory, new technologies for identifying cellular activity have enabled us to deepen our understanding of the possible physical substrate of memory. A number of studies have shown that memories are stored in a sparse population of neurons known as a neural ensemble or engram cells. While earlier investigations highlighted that the stability of neural ensembles underlies a memory representation, recent studies have found that neural ensembles are more dynamic and fluid than previously understood. Additionally, a number of studies have begun to dissect the cellular and molecular diversity of functionally distinct subpopulations of cells contained within an engram. We propose that ensemble fluidity and compositional heterogeneity support memory flexibility and functional diversity.
Affiliation(s)
- Brian M Sweis
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, NY, 10029, United States; Icahn School of Medicine at Mount Sinai, Department of Psychiatry, New York, NY, 10029, United States
- William Mau
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, NY, 10029, United States
- Sima Rabinowitz
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, NY, 10029, United States
- Denise J Cai
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, NY, 10029, United States.
93
Mau W, Hasselmo ME, Cai DJ. The brain in motion: How ensemble fluidity drives memory-updating and flexibility. eLife 2020; 9:e63550. [PMID: 33372892 PMCID: PMC7771967 DOI: 10.7554/elife.63550] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 12/12/2020] [Indexed: 12/18/2022] Open
Abstract
While memories are often thought of as flashbacks to a previous experience, they do not simply conserve veridical representations of the past but must continually integrate new information to ensure survival in dynamic environments. Therefore, 'drift' in neural firing patterns, typically construed as disruptive 'instability' or an undesirable consequence of noise, may actually be useful for updating memories. In our view, continual modifications in memory representations reconcile classical theories of stable memory traces with neural drift. Here we review how memory representations are updated through dynamic recruitment of neuronal ensembles on the basis of excitability and functional connectivity at the time of learning. Overall, we emphasize the importance of considering memories not as static entities, but instead as flexible network states that reactivate and evolve across time and experience.
Affiliation(s)
- William Mau
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
- Denise J Cai
- Neuroscience Department, Icahn School of Medicine at Mount Sinai, New York, United States
94
Donato F. A gatekeeper for learning. Science 2020; 370:1410-1411. [DOI: 10.1126/science.abf4523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
During associative learning, the perirhinal cortex controls burst firing of sensory neurons.
95
Saxe A, Nelli S, Summerfield C. If deep learning is the answer, what is the question? Nat Rev Neurosci 2020; 22:55-67. [PMID: 33199854 DOI: 10.1038/s41583-020-00395-8] [Citation(s) in RCA: 112] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/02/2020] [Indexed: 11/09/2022]
Abstract
Neuroscience research is undergoing a minor revolution. Recent advances in machine learning and artificial intelligence research have opened up new ways of thinking about neural computation. Many researchers are excited by the possibility that deep neural networks may offer theories of perception, cognition and action for biological brains. This approach has the potential to radically reshape our approach to understanding neural systems, because the computations performed by deep networks are learned from experience, and not endowed by the researcher. If so, how can neuroscientists use deep networks to model and understand biological brains? What is the outlook for neuroscientists who seek to characterize computations or neural codes, or who wish to understand perception, attention, memory and executive functions? In this Perspective, our goal is to offer a road map for systems neuroscience research in the age of deep learning. We discuss the conceptual and methodological challenges of comparing behaviour, learning dynamics and neural representations in artificial and biological systems, and we highlight new research questions that have emerged for neuroscience as a direct consequence of recent advances in machine learning.
Affiliation(s)
- Andrew Saxe
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
- Stephanie Nelli
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
96
Cowley BR, Snyder AC, Acar K, Williamson RC, Yu BM, Smith MA. Slow Drift of Neural Activity as a Signature of Impulsivity in Macaque Visual and Prefrontal Cortex. Neuron 2020; 108:551-567.e8. [PMID: 32810433 PMCID: PMC7822647 DOI: 10.1016/j.neuron.2020.07.021] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Revised: 06/15/2020] [Accepted: 07/17/2020] [Indexed: 12/22/2022]
Abstract
An animal's decision depends not only on incoming sensory evidence but also on its fluctuating internal state. This state embodies multiple cognitive factors, such as arousal and fatigue, but it is unclear how these factors influence the neural processes that encode sensory stimuli and form a decision. We discovered that, unprompted by task conditions, animals slowly shifted their likelihood of detecting stimulus changes over the timescale of tens of minutes. Neural population activity from visual area V4, as well as from prefrontal cortex, slowly drifted together with these behavioral fluctuations. We found that this slow drift, rather than altering the encoding of the sensory stimulus, acted as an impulsivity signal, overriding sensory evidence to dictate the final decision. Overall, this work uncovers an internal state embedded in population activity across multiple brain areas and sheds further light on how internal states contribute to the decision-making process.
Affiliation(s)
- Benjamin R Cowley
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Adam C Snyder
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14642, USA; Department of Neuroscience, University of Rochester, Rochester, NY 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Katerina Acar
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Center for Neuroscience, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Ryan C Williamson
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213, USA; University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Byron M Yu
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Matthew A Smith
- Center for the Neural Basis of Cognition, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA.
97
Levy SJ, Kinsky NR, Mau W, Sullivan DW, Hasselmo ME. Hippocampal spatial memory representations in mice are heterogeneously stable. Hippocampus 2020; 31:244-260. [PMID: 33098619] [DOI: 10.1002/hipo.23272]
Abstract
The population of hippocampal neurons actively coding space continually changes across days as mice repeatedly perform tasks. Many hippocampal place cells become inactive while other previously silent neurons become active, challenging the idea that stable behaviors and memory representations are supported by stable patterns of neural activity. Active cell replacement may disambiguate unique episodes that contain overlapping memory cues, and could contribute to reorganization of memory representations. How active cell replacement affects the evolution of representations of different behaviors within a single task is unknown. We trained mice to perform a delayed nonmatching to place task over multiple weeks, and performed calcium imaging in area CA1 of the dorsal hippocampus using head-mounted miniature microscopes. Cells active on the central stem of the maze "split" their calcium activity according to the animal's upcoming turn direction (left or right), the current task phase (study or test), or both task dimensions, even while spatial cues remained unchanged. We found that, among reliably active cells, different splitter neuron populations were replaced at unequal rates, resulting in an increasing number of cells modulated by turn direction and a decreasing number of cells with combined modulation by both turn direction and task phase. Despite continual reorganization, the ensemble code stably segregated these task dimensions. These results show that hippocampal memories can heterogeneously reorganize even while behavior is unchanging.
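As a toy illustration of how "splitter" activity of the kind described above might be identified (entirely hypothetical data, effect size, and threshold; not the authors' analysis), one can test whether a cell's central-stem activity differs between upcoming turn directions with a permutation test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: mean central-stem calcium activity of one cell on 100
# trials, half followed by a left turn and half by a right turn.
n_trials = 100
turn_is_left = rng.permutation(np.repeat([True, False], n_trials // 2))
activity = rng.normal(1.0, 0.3, n_trials)
activity[turn_is_left] += 0.5  # this simulated cell fires more before left turns

def is_splitter(activity, labels, n_perm=2000, alpha=0.01):
    """Permutation test: does mean activity differ between trial types?"""
    observed = abs(activity[labels].mean() - activity[~labels].mean())
    null = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(labels)
        null[i] = abs(activity[shuffled].mean() - activity[~shuffled].mean())
    p = (1 + np.sum(null >= observed)) / (1 + n_perm)
    return bool(p < alpha)

print(is_splitter(activity, turn_is_left))  # → True for this simulated cell
```

The same test applied with task-phase labels (study vs. test), or with both label sets, would sort cells into the turn-, phase-, and conjunctively-modulated populations the abstract describes.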
Affiliation(s)
- Samuel J Levy
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA; Graduate Program in Neuroscience, Boston University, Boston, Massachusetts, USA
- Nathaniel R Kinsky
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA; Graduate Program in Neuroscience, Boston University, Boston, Massachusetts, USA; Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, Michigan, USA
- William Mau
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA; Graduate Program in Neuroscience, Boston University, Boston, Massachusetts, USA; Icahn School of Medicine at Mount Sinai, New York, New York, USA
- David W Sullivan
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA; Graduate Program in Neuroscience, Boston University, Boston, Massachusetts, USA
- Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts, USA; Graduate Program in Neuroscience, Boston University, Boston, Massachusetts, USA
98
Chen L, Cummings KA, Mau W, Zaki Y, Dong Z, Rabinowitz S, Clem RL, Shuman T, Cai DJ. The role of intrinsic excitability in the evolution of memory: Significance in memory allocation, consolidation, and updating. Neurobiol Learn Mem 2020; 173:107266. [PMID: 32512183] [PMCID: PMC7429265] [DOI: 10.1016/j.nlm.2020.107266]
Abstract
Memory is a dynamic process that is continuously regulated by both synaptic and intrinsic neural mechanisms. While numerous studies have shown that synaptic plasticity is important in various types and phases of learning and memory, neuronal intrinsic excitability has received relatively less attention, especially regarding the dynamic nature of memory. In this review, we present evidence demonstrating the importance of intrinsic excitability in memory allocation, consolidation, and updating. We also consider the intricate interaction between intrinsic excitability and synaptic plasticity in shaping memory, supporting both memory stability and flexibility.
Affiliation(s)
- Lingxuan Chen
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Kirstie A Cummings
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- William Mau
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Yosif Zaki
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Zhe Dong
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Sima Rabinowitz
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Roger L Clem
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Tristan Shuman
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
- Denise J Cai
- Icahn School of Medicine at Mount Sinai, Department of Neuroscience, New York, New York, 10029, United States
99
Rule ME, Loback AR, Raman DV, Driscoll LN, Harvey CD, O'Leary T. Stable task information from an unstable neural population. eLife 2020; 9:e51121. [PMID: 32660692] [PMCID: PMC7392606] [DOI: 10.7554/elife.51121]
Abstract
Over days and weeks, neural activity representing an animal's position and movement in sensorimotor cortex has been found to continually reconfigure or 'drift' during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. Analyzing long-term calcium imaging recordings from posterior parietal cortex in mice (Mus musculus), we show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
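A minimal simulation in the spirit of the result above (all names and parameters are hypothetical; this sketches the idea, not the paper's model): a linear population code whose tuning both random-walks and turns over, read out either by a fixed decoder or by one that is continually re-fit, standing in for compensatory plasticity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model (hypothetical parameters): 100 neurons encode a 1-D behavioral
# variable x through tuning weights w.  Each day the tuning random-walks
# slightly and a small fraction of cells "turns over", resampling its tuning
# entirely -- a cartoon of representational drift with cell replacement.
n_neurons, n_days, n_trials = 100, 30, 500
noise_sd, walk_sd, turnover = 0.1, 0.02, 0.05
w = rng.standard_normal(n_neurons)

def simulate_day(w):
    x = rng.uniform(-1, 1, n_trials)                      # behavior
    r = np.outer(x, w) + noise_sd * rng.standard_normal((n_trials, n_neurons))
    return x, r

def fit_readout(x, r):
    # least-squares linear readout d, chosen so that r @ d ≈ x
    return np.linalg.lstsq(r, x, rcond=None)[0]

x0, r0 = simulate_day(w)
fixed = fit_readout(x0, r0)       # trained once on day 0, never updated

for _ in range(n_days):
    w = w + walk_sd * rng.standard_normal(n_neurons)      # coordinated drift
    swap = rng.random(n_neurons) < turnover
    w[swap] = rng.standard_normal(swap.sum())             # cell turnover
    x, r = simulate_day(w)
    tracking = fit_readout(x, r)  # re-fit daily, standing in for plasticity
    mse_fixed = np.mean((r @ fixed - x) ** 2)
    mse_tracking = np.mean((r @ tracking - x) ** 2)

print(f"day {n_days}: fixed readout MSE {mse_fixed:.3f}, "
      f"re-fit readout MSE {mse_tracking:.4f}")
```

As in the paper's framing, the fixed readout degrades steadily as the code reorganizes, while a modest amount of ongoing readout plasticity keeps decoding error near the noise floor.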
Affiliation(s)
- Michael E Rule
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Adrianna R Loback
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
100
Sanders H, Wilson MA, Gershman SJ. Hippocampal remapping as hidden state inference. eLife 2020; 9:e51140. [PMID: 32515352] [PMCID: PMC7282808] [DOI: 10.7554/elife.51140]
Abstract
Cells in the hippocampus tuned to spatial location (place cells) typically change their tuning when an animal changes context, a phenomenon known as remapping. A fundamental challenge to understanding remapping is the fact that what counts as a 'context change' has never been precisely defined. Furthermore, different remapping phenomena have been classified on the basis of how much the tuning changes after different types and degrees of context change, but the relationship between these variables is not clear. We address these ambiguities by formalizing remapping in terms of hidden state inference. According to this view, remapping does not directly reflect objective, observable properties of the environment, but rather subjective beliefs about the hidden state of the environment. We show how the hidden state framework can resolve a number of puzzles about the nature of remapping.
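The hidden-state view can be sketched in a few lines (a toy two-context example with assumed Gaussian cue likelihoods and made-up parameters, not the authors' full model): remapping corresponds to the posterior over contexts switching discretely even as the observable cue morphs gradually.

```python
import numpy as np

# Toy example: each hidden context predicts a scalar sensory cue value (e.g.,
# box colour) with Gaussian noise, and the animal infers which context it
# occupies from the observed cue.
means = np.array([0.0, 1.0])   # cue expected in context A vs. context B
sigma = 0.2                    # assumed cue noise
prior = np.array([0.5, 0.5])   # flat prior over the two contexts

def posterior_context(cue):
    """Posterior over hidden contexts given a single observed cue."""
    likelihood = np.exp(-0.5 * ((cue - means) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

# As the cue morphs gradually from A-like (0.0) to B-like (1.0), the
# posterior -- and hence the expressed spatial map -- switches abruptly
# near the midpoint, mirroring the discreteness of remapping.
for cue in [0.0, 0.3, 0.5, 0.7, 1.0]:
    print(f"cue={cue:.1f}  P(context B)={posterior_context(cue)[1]:.3f}")
```

In this framing, whether a manipulation counts as a 'context change' is not a property of the stimulus alone but of the inferred hidden state given the animal's priors.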
Affiliation(s)
- Honi Sanders
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Matthew A Wilson
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Picower Institute for Learning and Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Samuel J Gershman
- Center for Brains Minds and Machines, Harvard University, Cambridge, United States; Department of Psychology, Harvard University, Cambridge, United States