1. Squadrani L, Wert-Carvajal C, Müller-Komorowska D, Bohmbach K, Henneberger C, Verzelli P, Tchumatchenko T. Astrocytes enhance plasticity response during reversal learning. Commun Biol 2024; 7:852. PMID: 38997325. DOI: 10.1038/s42003-024-06540-8.
Abstract
Astrocytes play a key role in the regulation of synaptic strength and are thought to orchestrate synaptic plasticity and memory. Yet how astrocytes and their neuroactive transmitters specifically control learning and memory remains an open question. Recent experiments have uncovered an astrocyte-mediated feedback loop in CA1 pyramidal neurons, which is started by the release of endocannabinoids from active neurons and closed by astrocytic regulation of D-serine levels at the dendrites. D-serine is a co-agonist of the NMDA receptor that regulates the strength and direction of synaptic plasticity. Activity-dependent, astrocyte-mediated D-serine release is therefore a candidate mechanism for switching between long-term synaptic depression (LTD) and potentiation (LTP) during learning. Here, we show that a mathematical description of this mechanism leads to a biophysical model of synaptic plasticity consistent with the phenomenological model known as the BCM model. The resulting mathematical framework can explain the learning deficit observed in mice upon disruption of the D-serine regulatory mechanism. It shows that D-serine enhances plasticity during reversal learning, ensuring fast responses to changes in the external environment. The model provides new testable predictions about the learning process, advancing our understanding of the functional role of neuron-glia interactions in learning.
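The mapping to the BCM rule can be pictured with a minimal rate-based sketch (a textbook BCM illustration, not the authors' biophysical model; all parameter values are arbitrary). The weight change is dw = eta * x * y * (y - theta), and the modification threshold theta slides toward a running average of y^2, the ingredient that sets the direction of plasticity, a role the abstract attributes to astrocytic D-serine regulation:

```python
def bcm_dw(x, y, theta, eta=0.01):
    """BCM weight change for presynaptic rate x and postsynaptic rate y:
    depression (LTD) when y < theta, potentiation (LTP) when y > theta."""
    return eta * x * y * (y - theta)

def slide_threshold(theta, y, tau=50.0):
    """The modification threshold tracks a running average of y**2,
    raising the bar for potentiation after periods of high activity."""
    return theta + (y**2 - theta) / tau

# the same pairing yields LTD or LTP depending on where y sits
# relative to the sliding threshold
dw_ltd = bcm_dw(x=1.0, y=0.5, theta=1.0)  # sub-threshold pairing: dw < 0
dw_ltp = bcm_dw(x=1.0, y=2.0, theta=1.0)  # supra-threshold pairing: dw > 0
```

Because theta rises after high activity and falls after low activity, the rule self-stabilizes rather than potentiating without bound.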
Affiliation(s)
- Lorenzo Squadrani: Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Carlos Wert-Carvajal: Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Kirsten Bohmbach: Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany
- Christian Henneberger: Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Pietro Verzelli: Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Tatjana Tchumatchenko: Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
2. Cone I, Clopath C, Shouval HZ. Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time. Nat Commun 2024; 15:5856. PMID: 38997276. DOI: 10.1038/s41467-024-50205-3.
Abstract
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) learning, whereby certain units signal reward prediction errors (RPEs). The TD algorithm has traditionally been mapped onto the dopaminergic system, as the firing properties of dopamine neurons can resemble RPEs. However, certain predictions of TD learning are inconsistent with experimental results, and previous implementations of the algorithm have made unscalable assumptions regarding stimulus-specific fixed temporal bases. We propose an alternative framework to describe dopamine signaling in the brain, FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, dopamine release is similar, but not identical, to RPE, leading to predictions that contrast with those of TD. While FLEX itself is a general theoretical framework, we describe a specific, biophysically plausible implementation whose results are consistent with a preponderance of both existing and reanalyzed experimental data.
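For context, the textbook TD(0) rule that FLEX is contrasted with computes an RPE, delta = r + gamma * V(s') - V(s), and nudges the value estimate toward the bootstrapped target (a generic sketch of standard TD learning, not the FLEX model; states, rewards, and constants are invented for illustration):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9, terminal=False):
    """One TD(0) step: compute the reward prediction error (RPE) and
    move V[s] toward the bootstrapped target r + gamma * V[s_next]."""
    target = r if terminal else r + gamma * V[s_next]
    delta = target - V[s]   # the RPE, the quantity dopamine firing is said to resemble
    V[s] += alpha * delta
    return delta

# toy two-state chain: s0 -> s1 -> terminal reward of 1
V = [0.0, 0.0]
for _ in range(200):
    td0_update(V, 0, 0.0, 1)                          # no reward at s0
    td0_update(V, 1, 1.0, None, terminal=True)        # reward when leaving s1
# V converges to [gamma * 1, 1] = [0.9, 1.0]
```

Once values converge, delta shrinks to zero for fully predicted rewards, which is exactly the TD prediction that some dopamine recordings contradict.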
Affiliation(s)
- Ian Cone: Department of Bioengineering, Imperial College London, London, UK; Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA; Applied Physics Program, Rice University, Houston, TX, USA
- Claudia Clopath: Department of Bioengineering, Imperial College London, London, UK
- Harel Z Shouval: Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
3. Caya-Bissonnette L, Béïque JC. Half a century legacy of long-term potentiation. Curr Biol 2024; 34:R640-R662. PMID: 38981433. DOI: 10.1016/j.cub.2024.05.008.
Abstract
In 1973, two papers from Bliss and Lømo and from Bliss and Gardner-Medwin reported that high-frequency synaptic stimulation in the dentate gyrus of rabbits resulted in a long-lasting increase in synaptic strength. This form of synaptic plasticity, commonly referred to as long-term potentiation (LTP), was immediately considered an attractive mechanism to account for the ability of the brain to store information. In this historical piece looking back over the past 50 years, we discuss how these two landmark contributions directly motivated a colossal research effort and detail some of the resulting milestones that have shaped our evolving understanding of the molecular and cellular underpinnings of LTP. We highlight the main features of LTP, cover key experiments that defined its induction and expression mechanisms, and outline the evidence supporting a potential role of LTP in learning and memory. We also briefly explore some ramifications of LTP on network stability, consider current limitations of LTP as a model of associative memory, and entertain future research orientations.
Affiliation(s)
- Léa Caya-Bissonnette: Graduate Program in Neuroscience, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada; Brain and Mind Research Institute's Centre for Neural Dynamics and Artificial Intelligence, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada; Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
- Jean-Claude Béïque: Brain and Mind Research Institute's Centre for Neural Dynamics and Artificial Intelligence, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada; Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, 451 ch. Smyth Road (3501N), Ottawa, ON K1H 8M5, Canada
4. Schütt HH, Kim D, Ma WJ. Reward prediction error neurons implement an efficient code for reward. Nat Neurosci 2024; 27:1333-1339. PMID: 38898182. DOI: 10.1038/s41593-024-01671-x.
Abstract
We use efficient coding principles borrowed from sensory neuroscience to derive the optimal neural population to encode a reward distribution. We show that the responses of dopaminergic reward prediction error neurons in mouse and macaque are similar to those of the efficient code in the following ways: the neurons have a broad distribution of midpoints covering the reward distribution; neurons with higher thresholds have higher gains, more convex tuning functions and lower slopes; and their slope is higher when the reward distribution is narrower. Furthermore, we derive learning rules that converge to the efficient code. The learning rule for the position of the neuron on the reward axis closely resembles distributional reinforcement learning. Thus, reward prediction error neuron responses may be optimized to broadcast an efficient reward signal, forming a connection between efficient coding and reinforcement learning, two of the most successful theories in computational neuroscience.
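The "broad distribution of midpoints covering the reward distribution" has a simple constructive reading (my illustration, not the paper's derivation; the quantile placement and neuron count are assumptions): put tuning-curve midpoints at evenly spaced quantiles of the observed rewards. A narrower reward distribution then packs the midpoints more tightly, so each neuron covers a smaller reward range, consistent with the higher slopes the abstract reports:

```python
import numpy as np

def efficient_midpoints(reward_samples, n_neurons):
    """Place tuning-curve midpoints at evenly spaced quantiles of the
    empirical reward distribution, so the population tiles the rewards."""
    qs = (np.arange(n_neurons) + 0.5) / n_neurons   # 1/2N, 3/2N, ...
    return np.quantile(reward_samples, qs)

rng = np.random.default_rng(0)
wide = efficient_midpoints(rng.normal(0.0, 2.0, 10_000), 8)    # broad rewards
narrow = efficient_midpoints(rng.normal(0.0, 0.5, 10_000), 8)  # narrow rewards
# the narrow distribution's midpoints span a much smaller reward range
```

This is also the flavor of distributional reinforcement learning the abstract connects to: each neuron effectively encodes a different quantile-like statistic of the reward distribution.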
Affiliation(s)
- Heiko H Schütt: Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; Department of Behavioural and Cognitive Sciences, Université du Luxembourg, Esch-Belval, Luxembourg
- Dongjae Kim: Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; Department of AI-Based Convergence, Dankook University, Yongin, Republic of Korea
- Wei Ji Ma: Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
5. Sosis B, Rubin JE. Distinct dopaminergic spike-timing-dependent plasticity rules are suited to different functional roles. bioRxiv [Preprint] 2024:2024.06.24.600372. PMID: 38979377. PMCID: PMC11230239. DOI: 10.1101/2024.06.24.600372.
Abstract
Various mathematical models have been formulated to describe the changes in synaptic strengths resulting from spike-timing-dependent plasticity (STDP). A subset of these models include a third factor, dopamine, which interacts with the timing of pre- and postsynaptic spiking to contribute to plasticity at specific synapses, notably those from cortex to striatum at the input layer of the basal ganglia. Theoretical work to analyze these plasticity models has largely focused on abstract issues, such as the conditions under which they may promote synchronization and the weight distributions induced by inputs with simple correlation structures, rather than on scenarios associated with specific tasks, and has generally not considered dopamine-dependent forms of STDP. In this paper, we analyze, mathematically and with simulations, three forms of dopamine-modulated STDP in three scenarios that are relevant to corticostriatal synapses. Two of the models considered comprise previously proposed STDP rules with modifications to incorporate dopamine, while the third is a corticostriatal dopamine-dependent STDP rule adapted from a similar one already in the literature. We test the ability of each of the three models to maintain its weights in the face of noise and to complete simple reward prediction and action selection tasks, studying the learned weight distributions and corresponding task performance in each setting. Interestingly, we find that each of the three plasticity rules is well suited to a subset of the scenarios studied but falls short in others. These results show that different tasks may require different forms of synaptic plasticity, yielding the prediction that the precise form of the STDP mechanism may vary across regions of the striatum, and other brain areas impacted by dopamine, that are involved in distinct computational functions. 
Author summary
Learning from feedback is a crucial ability that allows humans and other animals to respond and adapt to their environments. One important locus for such learning is the basal ganglia, where dopamine-modulated corticostriatal plasticity shapes the dynamics of the cortico-basal ganglia-thalamic network in response to feedback signals to promote adaptive behavior. In this paper we ask: what learning rule is best suited to modeling this dopamine-modulated plasticity? To that end, we investigate three learning rules that incorporate spike-timing-dependent plasticity as well as dopaminergic modulation. We study their performance in several settings meant to model the kinds of tasks and scenarios that striatal neurons are likely to be involved in. Each plasticity rule we examined performs well in some settings but fails in others. Different plasticity mechanisms may therefore be better suited to different functional roles and potentially to different regions of the brain.
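A common skeleton shared by dopamine-modulated STDP rules is the three-factor scheme: pre/post spike pairings write to a decaying eligibility trace, and weight changes are committed only when dopamine arrives. The sketch below is a generic three-factor illustration of that idea, not a reproduction of any of the three rules compared in the paper; the event encoding and all constants are invented for illustration:

```python
import math

def run_three_factor(events, tau_elig=0.5, a_plus=0.02, a_minus=0.021,
                     tau_stdp=0.02, w0=0.5):
    """Three-factor STDP sketch for one synapse.

    events: time-sorted tuples, either (t, "pair", dt) for a pre/post pairing
    with lag dt = t_post - t_pre (seconds), or (t, "dopamine").
    Pairings write to an eligibility trace that decays with tau_elig;
    dopamine converts whatever eligibility remains into a weight change.
    """
    w, elig, t_last = w0, 0.0, 0.0
    for t, kind, *args in events:
        elig *= math.exp(-(t - t_last) / tau_elig)   # trace decays between events
        t_last = t
        if kind == "pair":
            dt = args[0]
            if dt >= 0:   # pre before post: positive (Hebbian) eligibility
                elig += a_plus * math.exp(-dt / tau_stdp)
            else:         # post before pre: negative eligibility
                elig -= a_minus * math.exp(dt / tau_stdp)
        else:             # "dopamine": commit the eligibility to the weight
            w += elig
            elig = 0.0
    return w

# a causal pairing followed quickly by dopamine strengthens the synapse;
# the same pairing with late dopamine has little effect (the trace has decayed)
w_fast = run_three_factor([(0.0, "pair", 0.005), (0.1, "dopamine")])
w_slow = run_three_factor([(0.0, "pair", 0.005), (3.0, "dopamine")])
```

The eligibility time constant controls how long the synapse can wait for reward, which is one axis along which the rules analyzed in the paper differ.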
6. Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. PMID: 38776950. DOI: 10.1162/neco_a_01671.
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity and yields specific testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
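The REINFORCE prototype analyzed in the review has a compact core: weight changes proportional to reward times the gradient of the log-probability of the emitted action. A minimal sketch for a single Bernoulli (fire/no-fire) unit follows (my simplification for illustration, not the review's formulation; reward probabilities, learning rate, and trial count are arbitrary):

```python
import math, random

def reinforce_bernoulli(prob_reward=(0.2, 0.8), eta=0.1, trials=3000, seed=1):
    """REINFORCE for one Bernoulli unit: action a in {0, 1} is sampled with
    p(a=1) = sigmoid(theta), and each trial applies the score-function update
    theta += eta * r * d/dtheta log pi(a), which for a Bernoulli policy is
    simply eta * r * (a - p)."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(trials):
        p = 1.0 / (1.0 + math.exp(-theta))                 # probability of firing
        a = 1 if rng.random() < p else 0                   # sample an action
        r = 1.0 if rng.random() < prob_reward[a] else 0.0  # stochastic binary reward
        theta += eta * r * (a - p)                         # REINFORCE update
    return 1.0 / (1.0 + math.exp(-theta))

p_final = reinforce_bernoulli()  # action 1 is rewarded more often, so p grows
```

The update uses only locally available quantities (the unit's own action, its firing probability, and a global reward), which is why REINFORCE serves as a useful benchmark for biological plausibility in the review's desiderata.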
Affiliation(s)
- Colin Bredenberg: Center for Neural Science, New York University, New York, NY 10003, USA; Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin: Center for Neural Science, New York University, New York, NY 10003, USA; Center for Data Science, New York University, New York, NY 10011, USA
7. Urbin MA. Adaptation in the spinal cord after stroke: Implications for restoring cortical control over the final common pathway. J Physiol 2024. PMID: 38787922. DOI: 10.1113/jp285563.
Abstract
Control of voluntary movement is predicated on integration between circuits in the brain and spinal cord. Although damage is often restricted to supraspinal or spinal circuits in cases of neurological injury, both spinal motor neurons and the axons linking these cells to the cortical origins of descending motor commands begin showing changes soon after the brain is injured by stroke. The concept of 'transneuronal degeneration' is not new and has been documented in histological, imaging and electrophysiological studies dating back over a century. Taken together, evidence from these studies agrees more with a system attempting to survive than with one passively surrendering to degeneration. There tends to be at least some preservation of fibres at the brainstem origin and along the spinal course of the descending white matter tracts, even in severe cases. Myelin-associated proteins are observed in the spinal cord years after stroke onset. Spinal motor neurons remain morphometrically unaltered. Skeletal muscle fibres once innervated by neurons that lose their source of trophic input receive collaterals from adjacent neurons, causing spinal motor units to consolidate and increase in size. Although some level of excitability within the distributed brain network mediating voluntary movement is needed to facilitate recovery, minimal structural connectivity between cortical and spinal motor neurons can support meaningful distal limb function. Restoring access to the final common pathway via the descending input that remains in the spinal cord therefore represents a viable target for directed plasticity, particularly in light of recent advances in rehabilitation medicine.
Affiliation(s)
- Michael A Urbin: Human Engineering Research Laboratories, VA RR&D Center of Excellence, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania, USA
8. Sayegh FJP, Mouledous L, Macri C, Pi Macedo J, Lejards C, Rampon C, Verret L, Dahan L. Ventral tegmental area dopamine projections to the hippocampus trigger long-term potentiation and contextual learning. Nat Commun 2024; 15:4100. PMID: 38773091. PMCID: PMC11109191. DOI: 10.1038/s41467-024-47481-4.
Abstract
In most models of neuronal plasticity and memory, dopamine is thought to promote the long-term maintenance of long-term potentiation (LTP) underlying memory processes, but not the initiation of plasticity or new information storage. Here, we used optogenetic manipulation of midbrain dopamine neurons in male DAT::Cre mice and discovered that stimulating the Schaffer collaterals (the glutamatergic axons connecting the CA3 and CA1 regions) of the dorsal hippocampus concomitantly with midbrain dopamine terminals, within a 200-millisecond time window, triggers LTP at glutamatergic synapses. Moreover, we showed that stimulation of this dopaminergic pathway facilitates contextual learning in awake behaving mice, while its inhibition hinders it. Thus, activation of midbrain dopamine can operate as a teaching signal that triggers neo-Hebbian LTP and promotes supervised learning.
Affiliation(s)
- Fares J P Sayegh: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Lionel Mouledous: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Catherine Macri: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Juliana Pi Macedo: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Camille Lejards: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Claire Rampon: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Laure Verret: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Lionel Dahan: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
9. Becker S, Modirshanechi A, Gerstner W. Computational models of intrinsic motivation for curiosity and creativity. Behav Brain Sci 2024; 47:e94. PMID: 38770870. DOI: 10.1017/s0140525x23003424.
Abstract
We link Ivancovsky et al.'s novelty-seeking model (NSM) to computational models of intrinsically motivated behavior and learning. We argue that dissociating different forms of curiosity, creativity, and memory based on the involvement of distinct intrinsic motivations (e.g., surprise and novelty) is essential to empirically test the conceptual claims of the NSM.
Affiliation(s)
- Sophia Becker: Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Alireza Modirshanechi: Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner: Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
10. Vignoud G, Venance L, Touboul JD. Anti-Hebbian plasticity drives sequence learning in striatum. Commun Biol 2024; 7:555. PMID: 38724614. PMCID: PMC11082161. DOI: 10.1038/s42003-024-06203-8.
Abstract
Spatio-temporal activity patterns have been observed in a variety of brain areas in spontaneous activity, prior to or during action, or in response to stimuli. The biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining spike history in memory and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing-dependent plasticity (STDP), as observed at corticostriatal synapses, can naturally lead to learning spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP and non-associative potentiation for a subset of the presented patterns called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, contribute to increased accuracy by allowing better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
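The sign convention is the crux here: an anti-Hebbian STDP window is the mirror image of the classic Hebbian curve, so causal (pre-before-post) pairings depress the synapse and anti-causal pairings potentiate it. A sketch of the window shape only (amplitudes and the 20 ms time constant are arbitrary; the paper's full model also includes non-associative potentiation, spiking latency, and collateral inhibition):

```python
import math

def anti_hebbian_stdp(dt, a_plus=0.01, a_minus=0.01, tau=0.02):
    """Weight change for a pairing with lag dt = t_post - t_pre (seconds).

    Anti-Hebbian convention, as reported at corticostriatal synapses:
    causal pairings (dt >= 0, pre before post) -> depression (LTD),
    anti-causal pairings (dt < 0)              -> potentiation (LTP).
    """
    if dt >= 0:
        return -a_minus * math.exp(-dt / tau)   # LTD for pre-before-post
    return a_plus * math.exp(dt / tau)          # LTP for post-before-pre

dw_causal = anti_hebbian_stdp(0.01)   # negative: depression
dw_anti = anti_hebbian_stdp(-0.01)    # positive: potentiation
```

Under this rule, inputs that reliably precede the neuron's own spike are progressively silenced, which is what lets the model neuron become selective for the end of a specific sequence rather than for any input that drives it.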
Affiliation(s)
- Gaëtan Vignoud: Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Laurent Venance: Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Jonathan D Touboul: Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
11. Cahill MK, Collard M, Tse V, Reitman ME, Etchenique R, Kirst C, Poskanzer KE. Network-level encoding of local neurotransmitters in cortical astrocytes. Nature 2024; 629:146-153. PMID: 38632406. PMCID: PMC11062919. DOI: 10.1038/s41586-024-07311-5.
Abstract
Astrocytes, the most abundant non-neuronal cell type in the mammalian brain, are crucial circuit components that respond to and modulate neuronal activity through calcium (Ca2+) signalling [1-7]. Astrocyte Ca2+ activity is highly heterogeneous and occurs across multiple spatiotemporal scales, from fast, subcellular activity [3,4] to slow, synchronized activity across connected astrocyte networks [8-10], to influence many processes [5,7,11]. However, the inputs that drive astrocyte network dynamics remain unclear. Here we used ex vivo and in vivo two-photon astrocyte imaging while mimicking neuronal neurotransmitter inputs at multiple spatiotemporal scales. We find that brief, subcellular inputs of GABA and glutamate lead to widespread, long-lasting astrocyte Ca2+ responses beyond an individual stimulated cell. Further, we find that a key subset of Ca2+ activity, propagative activity, differentiates astrocyte network responses to these two main neurotransmitters and may influence responses to future inputs. Together, our results demonstrate that local, transient neurotransmitter inputs are encoded by broad cortical astrocyte networks over a minutes-long time course, contributing to accumulating evidence that substantial astrocyte-neuron communication occurs across slow, network-level spatiotemporal scales [12-14]. These findings will enable future studies to investigate the link between specific astrocyte Ca2+ activity and specific functional outputs, which could build a consistent framework for astrocytic modulation of neuronal activity.
Affiliation(s)
- Michelle K Cahill: Department of Biochemistry & Biophysics, University of California, San Francisco, CA, USA; Neuroscience Graduate Program, University of California, San Francisco, CA, USA
- Max Collard: Department of Biochemistry & Biophysics, University of California, San Francisco, CA, USA; Neuroscience Graduate Program, University of California, San Francisco, CA, USA
- Vincent Tse: Department of Biochemistry & Biophysics, University of California, San Francisco, CA, USA
- Michael E Reitman: Department of Biochemistry & Biophysics, University of California, San Francisco, CA, USA; Neuroscience Graduate Program, University of California, San Francisco, CA, USA
- Roberto Etchenique: Departamento de Química Inorgánica, Analítica y Química Física, INQUIMAE, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, CONICET, Buenos Aires, Argentina
- Christoph Kirst: Neuroscience Graduate Program, University of California, San Francisco, CA, USA; Department of Anatomy, University of California, San Francisco, CA, USA; Kavli Institute for Fundamental Neuroscience, San Francisco, CA, USA; Lawrence Berkeley National Laboratory, Berkeley, CA, USA
- Kira E Poskanzer: Department of Biochemistry & Biophysics, University of California, San Francisco, CA, USA; Neuroscience Graduate Program, University of California, San Francisco, CA, USA; Kavli Institute for Fundamental Neuroscience, San Francisco, CA, USA
12. Parnas M, Manoim JE, Lin AC. Sensory encoding and memory in the mushroom body: signals, noise, and variability. Learn Mem 2024; 31:a053825. PMID: 38862174. PMCID: PMC11199953. DOI: 10.1101/lm.053825.123.
Abstract
To survive in changing environments, animals need to learn to associate specific sensory stimuli with positive or negative valence. How do they form stimulus-specific memories to distinguish between positively/negatively associated stimuli and other irrelevant stimuli? Solving this task is one of the functions of the mushroom body, the associative memory center in insect brains. Here we summarize recent work on sensory encoding and memory in the Drosophila mushroom body, highlighting general principles such as pattern separation, sparse coding, noise and variability, coincidence detection, and spatially localized neuromodulation, and placing the mushroom body in comparative perspective with mammalian memory systems.
Affiliation(s)
- Moshe Parnas: Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
- Julia E Manoim: Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
- Andrew C Lin: School of Biosciences, University of Sheffield, Sheffield S10 2TN, United Kingdom; Neuroscience Institute, University of Sheffield, Sheffield S10 2TN, United Kingdom
13. Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. PMID: 38648479. PMCID: PMC11067032. DOI: 10.1073/pnas.2312992121.
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability may arise from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience of stimulus-evoked samples to inference when some or all sensory information is missing, suggesting a computational role for spontaneous activity as a representation of priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may subserve the brain's function as a Bayesian generative model.
Affiliation(s)
- Yu Terada: Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan; Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093; The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi: Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
14. Galloni AR, Yuan Y, Zhu M, Yu H, Bisht RS, Wu CTM, Grienberger C, Ramanathan S, Milstein AD. Neuromorphic one-shot learning utilizing a phase-transition material. Proc Natl Acad Sci U S A 2024; 121:e2318362121. PMID: 38630718. PMCID: PMC11047090. DOI: 10.1073/pnas.2318362121.
Abstract
Design of hardware based on biological principles of neuronal computation and plasticity in the brain is a leading approach to realizing energy- and sample-efficient AI and learning machines. An important factor in selection of the hardware building blocks is the identification of candidate materials with physical properties suitable to emulate the large dynamic ranges and varied timescales of neuronal signaling. Previous work has shown that the all-or-none spiking behavior of neurons can be mimicked by threshold switches utilizing material phase transitions. Here, we demonstrate that devices based on a prototypical metal-insulator-transition material, vanadium dioxide (VO2), can be dynamically controlled to access a continuum of intermediate resistance states. Furthermore, the timescale of their intrinsic relaxation can be configured to match a range of biologically relevant timescales from milliseconds to seconds. We exploit these device properties to emulate three aspects of neuronal analog computation: fast (~1 ms) spiking in a neuronal soma compartment, slow (~100 ms) spiking in a dendritic compartment, and ultraslow (~1 s) biochemical signaling involved in temporal credit assignment for a recently discovered biological mechanism of one-shot learning. Simulations show that an artificial neural network using properties of VO2 devices to control an agent navigating a spatial environment can learn an efficient path to a reward in up to fourfold fewer trials than standard methods. The phase relaxations described in our study may be engineered in a variety of materials and can be controlled by thermal, electrical, or optical stimuli, suggesting further opportunities to emulate biological learning in neuromorphic hardware.
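The three emulated timescales can be pictured with a toy relaxation model (a schematic of configurable exponential decay, not the VO2 device physics; the compartment names and time constants are taken loosely from the approximate values quoted above):

```python
import math

def relax(state, dt, tau):
    """Exponential relaxation of a device state toward baseline after dt
    seconds, with tau configured to emulate a biological timescale."""
    return state * math.exp(-dt / tau)

# hypothetical time constants for the three emulated processes
TAUS = {"soma": 1e-3, "dendrite": 0.1, "biochemical": 1.0}

# 10 ms after an input: the fast somatic state has decayed almost fully,
# while the slow biochemical trace has barely changed
remaining = {name: relax(1.0, 0.01, tau) for name, tau in TAUS.items()}
```

A slowly relaxing state of this kind is what allows a late outcome signal to be credited back to earlier activity, the role the one-shot learning mechanism assigns to the ultraslow component.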
Collapse
Affiliation(s)
- Alessandro R. Galloni
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Yifan Yuan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Minning Zhu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Haoming Yu
- School of Materials Engineering, Purdue University, West Lafayette, IN 47907
| | - Ravindra S. Bisht
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Chung-Tse Michael Wu
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Christine Grienberger
- Department of Neuroscience, Brandeis University, Waltham, MA 02453
- Department of Biology and Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02453
| | - Shriram Ramanathan
- Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| | - Aaron D. Milstein
- Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
- Center for Advanced Biotechnology and Medicine, Rutgers, The State University of New Jersey, Piscataway, NJ 08854
| |
Collapse
|
15
|
Mollard S, Wacongne C, Bohte SM, Roelfsema PR. Recurrent neural networks that learn multi-step visual routines with reinforcement learning. PLoS Comput Biol 2024; 20:e1012030. [PMID: 38683837 PMCID: PMC11081502 DOI: 10.1371/journal.pcbi.1012030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 05/09/2024] [Accepted: 04/01/2024] [Indexed: 05/02/2024] Open
Abstract
Many cognitive problems can be decomposed into series of subproblems that are solved sequentially by the brain. When subproblems are solved, relevant intermediate results need to be stored by neurons and propagated to the next subproblem, until the overarching goal has been completed. Here, we consider visual tasks, which can be decomposed into sequences of elemental visual operations. Experimental evidence suggests that intermediate results of the elemental operations are stored in working memory as an enhancement of neural activity in the visual cortex. The focus of enhanced activity is then available for subsequent operations to act upon. The main question at stake is how the elemental operations and their sequencing can emerge in neural networks that are trained with only rewards, in a reinforcement learning setting. Here we propose a new recurrent neural network architecture that can learn composite visual tasks that require the application of successive elemental operations. Specifically, we selected three tasks for which electrophysiological recordings of monkeys' visual cortex are available. To train the networks, we used RELEARNN, a biologically plausible four-factor Hebbian learning rule, which is local both in time and space. We report that networks learn elemental operations, such as contour grouping and visual search, and execute sequences of operations, solely based on the characteristics of the visual stimuli and the reward structure of a task. After training was completed, the activity of the units of the neural network elicited by behaviorally relevant image items was stronger than that elicited by irrelevant ones, just as has been observed in the visual cortex of monkeys solving the same tasks. Relevant information that needed to be exchanged between subroutines was maintained as a focus of enhanced activity and passed on to the subsequent subroutines. Our results demonstrate how a biologically plausible learning rule can train a recurrent neural network on multistep visual tasks.
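The general shape of a four-factor rule like the one used above can be illustrated schematically. The snippet below is a minimal sketch, not RELEARNN itself: it multiplies presynaptic activity, postsynaptic activity, a feedback-derived credit factor, and a global scalar reward; all names are illustrative.

```python
import numpy as np

def four_factor_update(w, pre, post, feedback, reward, lr=0.1):
    """Schematic four-factor Hebbian update: the weight change is the product
    of presynaptic activity, postsynaptic activity, a feedback/credit factor,
    and a global scalar reward, so the rule stays local in time and space
    except for the broadcast reward signal."""
    return w + lr * reward * feedback * np.outer(post, pre)
```

Flipping the sign of the reward flips the direction of plasticity, which is how a single local rule can both reinforce and suppress the operations that led to an outcome.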
Collapse
Affiliation(s)
- Sami Mollard
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
| | - Catherine Wacongne
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- AnotherBrain, Paris, France
| | - Sander M. Bohte
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
| | - Pieter R. Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Laboratory of Visual Brain Therapy, Sorbonne Université, Institut National de la Santé et de la Recherche Médicale, Centre National de la Recherche Scientifique, Institut de la Vision, Paris, France
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, VU University, Amsterdam, The Netherlands
- Department of Neurosurgery, Academic Medical Center, Amsterdam, The Netherlands
| |
Collapse
|
16
|
Li G, McLaughlin DW, Peskin CS. A biochemical description of postsynaptic plasticity-with timescales ranging from milliseconds to seconds. Proc Natl Acad Sci U S A 2024; 121:e2311709121. [PMID: 38324573 PMCID: PMC10873618 DOI: 10.1073/pnas.2311709121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 12/29/2023] [Indexed: 02/09/2024] Open
Abstract
Synaptic plasticity [long-term potentiation/depression (LTP/D)] is a cellular mechanism underlying learning. Two distinct types of early LTP/D (E-LTP/D), acting on very different time scales, have been observed experimentally: spike-timing-dependent plasticity (STDP), on time scales of tens of ms, and behavioral time scale synaptic plasticity (BTSP), on time scales of seconds. BTSP is a candidate for a mechanism underlying rapid learning of spatial location by place cells. Here, a computational model of the induction of E-LTP/D at a spine head of a synapse of a hippocampal pyramidal neuron is developed. The single-compartment model represents two interacting biochemical pathways for the activation (phosphorylation) of the kinase (CaMKII) with a phosphatase, with ion inflow through channels (NMDAR, CaV1, Na). The biochemical reactions are represented by a deterministic system of differential equations, with a detailed description of the activation of CaMKII that includes the opening of the compact state of CaMKII. This single model captures realistic responses (temporal profiles with the differing timescales) of STDP and BTSP and their asymmetries. The simulations distinguish several mechanisms underlying STDP vs. BTSP, including i) the flow of Ca2+ through NMDAR vs. CaV1 channels, and ii) the origin of several time scales in the activation of CaMKII. The model also realizes a priming mechanism for E-LTP that is induced by Ca2+ flow through CaV1.3 channels. Once in the spine head, this small additional Ca2+ opens the compact state of CaMKII, placing CaMKII ready for subsequent induction of LTP.
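A drastically simplified cartoon of the kinase-phosphatase competition can convey how a fast calcium transient produces a slowly decaying active-kinase fraction. This is not the paper's detailed reaction system: the rates and the quadratic calcium cooperativity below are illustrative assumptions.

```python
import numpy as np

def simulate_kinase(ca_input, dt=1e-3, k_act=5.0, k_deact=0.5):
    """Toy single-compartment kinetics: the active kinase fraction p is driven
    by a calcium time course, with activation proportional to Ca^2
    (cooperativity) and first-order deactivation by a phosphatase.
    Forward-Euler integration; all rate constants are illustrative."""
    p = 0.0
    out = []
    for ca in ca_input:
        dp = k_act * ca**2 * (1.0 - p) - k_deact * p
        p += dt * dp
        out.append(p)
    return np.array(out)
```

A 200 ms calcium pulse activates the kinase within the pulse, while the slower phosphatase rate leaves an activation tail lasting seconds, the separation of timescales the model above exploits.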
Collapse
Affiliation(s)
- Guanchun Li
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
| | - David W. McLaughlin
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
- Center for Neural Science, Department of Neural Science, New York University, New York, NY 10012
- Institute of Mathematical Science, Mathematics Department, New York University-Shanghai, Shanghai 200122, China
- Neuroscience Institute of New York University Langone Health, New York University, New York, NY 10016
| | - Charles S. Peskin
- Courant Institute and Center for Neural Science, Department of Mathematics, New York University, New York, NY 10012
- Center for Neural Science, Department of Neural Science, New York University, New York, NY 10012
| |
Collapse
|
17
|
Barry MLLR, Gerstner W. Fast adaptation to rule switching using neuronal surprise. PLoS Comput Biol 2024; 20:e1011839. [PMID: 38377112 PMCID: PMC10906910 DOI: 10.1371/journal.pcbi.1011839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/01/2024] [Accepted: 01/18/2024] [Indexed: 02/22/2024] Open
Abstract
In humans and animals, surprise is a physiological reaction to an unexpected event, but how surprise can be linked to plausible models of neuronal activity is an open problem. We propose a self-supervised spiking neural network model where a surprise signal is extracted from an increase in neural activity after an imbalance of excitation and inhibition. The surprise signal modulates synaptic plasticity via a three-factor learning rule which increases plasticity at moments of surprise. The surprise signal remains small when transitions between sensory events follow a previously learned rule but increases immediately after rule switching. In a spiking network with several modules, previously learned rules are protected against overwriting, as long as the number of modules is larger than the total number of rules-making a step towards solving the stability-plasticity dilemma in neuroscience. Our model relates the subjective notion of surprise to specific predictions on the circuit level.
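The core mechanism above, plasticity gated by a surprise signal read out from an excitation-inhibition imbalance, can be sketched in a few lines. This is a scalar caricature with illustrative names and gains, not the paper's spiking model.

```python
def surprise_modulated_lr(excitation, inhibition, base_lr=0.01, gain=1.0):
    """Toy surprise signal: the rectified excess of excitation over inhibition.
    A three-factor rule scales its learning rate by (1 + gain * surprise),
    so plasticity increases right after a rule switch, when the learned
    predictions fail and inhibition no longer cancels excitation."""
    surprise = max(0.0, excitation - inhibition)
    return base_lr * (1.0 + gain * surprise)
```

When transitions follow the learned rule, excitation and inhibition balance and learning stays slow, which is what protects previously stored rules from being overwritten.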
Collapse
Affiliation(s)
- Martin L. L. R. Barry
- School of Computer and Communication Sciences and School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
18
|
Palchaudhuri S, Osypenko D, Schneggenburger R. Fear Learning: An Evolving Picture for Plasticity at Synaptic Afferents to the Amygdala. Neuroscientist 2024; 30:87-104. [PMID: 35822657 DOI: 10.1177/10738584221108083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Unraveling the neuronal mechanisms of fear learning might allow neuroscientists to make links between a learned behavior and the underlying plasticity at specific synaptic connections. In fear learning, an innocuous sensory event such as a tone (called the conditioned stimulus, CS) acquires an emotional value when paired with an aversive outcome (unconditioned stimulus, US). Here, we review earlier studies that have shown that synaptic plasticity at thalamic and cortical afferents to the lateral amygdala (LA) is critical for the formation of auditory-cued fear memories. Despite the early progress, it has remained unclear whether there are separate synaptic inputs that carry US information to the LA to act as a teaching signal for plasticity at CS-coding synapses. Recent findings have begun to fill this gap by showing, first, that thalamic and cortical auditory afferents can also carry US information; second, that the release of neuromodulators contributes to US-driven teaching signals; and third, that synaptic plasticity additionally happens at connections up- and downstream of the LA. Together, a picture emerges in which coordinated synaptic plasticity in serial and parallel circuits enables the formation of a finely regulated fear memory.
Collapse
Affiliation(s)
- Shriya Palchaudhuri
- Laboratory of Synaptic Mechanisms, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Denys Osypenko
- Laboratory of Synaptic Mechanisms, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Ralf Schneggenburger
- Laboratory of Synaptic Mechanisms, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| |
Collapse
|
19
|
de Brito CSN, Gerstner W. Learning what matters: Synaptic plasticity with invariance to second-order input correlations. PLoS Comput Biol 2024; 20:e1011844. [PMID: 38346073 PMCID: PMC10890752 DOI: 10.1371/journal.pcbi.1011844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Revised: 02/23/2024] [Accepted: 01/18/2024] [Indexed: 02/25/2024] Open
Abstract
Cortical populations of neurons develop sparse representations adapted to the statistics of the environment. To learn efficient population codes, synaptic plasticity mechanisms must differentiate relevant latent features from spurious input correlations, which are omnipresent in cortical networks. Here, we develop a theory for sparse coding and synaptic plasticity that is invariant to second-order correlations in the input. Going beyond classical Hebbian learning, our learning objective explains the functional form of observed excitatory plasticity mechanisms, showing how Hebbian long-term depression (LTD) cancels the sensitivity to second-order correlations so that receptive fields become aligned with features hidden in higher-order statistics. Invariance to second-order correlations enhances the versatility of biologically realistic learning models, supporting optimal decoding from noisy inputs and sparse population coding from spatially correlated stimuli. In a spiking model with triplet spike-timing-dependent plasticity (STDP), we show that individual neurons can learn localized oriented receptive fields, circumventing the need for input preprocessing, such as whitening, or population-level lateral inhibition. The theory advances our understanding of local unsupervised learning in cortical circuits, offers new interpretations of the Bienenstock-Cooper-Munro and triplet STDP models, and assigns a specific functional role to synaptic LTD mechanisms in pyramidal neurons.
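Since the theory offers a new interpretation of the Bienenstock-Cooper-Munro model, a minimal BCM sketch helps fix ideas. This is the standard textbook form of the rule with a sliding threshold, not the paper's learning objective.

```python
import numpy as np

def bcm_step(w, x, theta, lr=0.01, tau_theta=100.0):
    """One step of the BCM rule: LTP when the postsynaptic rate y exceeds the
    sliding modification threshold theta, LTD below it. theta tracks the
    recent average of y^2, which stabilises the rule."""
    y = float(w @ x)
    w = w + lr * y * (y - theta) * x            # LTP above theta, LTD below
    theta = theta + (y**2 - theta) / tau_theta  # sliding threshold update
    return w, theta
```

In this framing, the LTD branch (y below theta) is exactly the term the paper assigns a functional role: cancelling sensitivity to second-order input correlations.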
Collapse
Affiliation(s)
- Carlos Stein Naves de Brito
- École Polytechnique Fédérale de Lausanne, EPFL, Lausanne, Switzerland
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
| | - Wulfram Gerstner
- École Polytechnique Fédérale de Lausanne, EPFL, Lausanne, Switzerland
| |
Collapse
|
20
|
Sanchez-Bornot J, Sotero RC, Kelso JAS, Şimşek Ö, Coyle D. Solving large-scale MEG/EEG source localisation and functional connectivity problems simultaneously using state-space models. Neuroimage 2024; 285:120458. [PMID: 37993002 DOI: 10.1016/j.neuroimage.2023.120458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Revised: 09/28/2023] [Accepted: 11/14/2023] [Indexed: 11/24/2023] Open
Abstract
State-space models are widely employed across various research disciplines to study unobserved dynamics. Conventional estimation techniques, such as Kalman filtering and expectation maximisation, offer valuable insights but incur high computational costs in large-scale analyses. Sparse inverse covariance estimators can mitigate these costs, but at the expense of a trade-off between enforced sparsity and increased estimation bias, necessitating careful assessment in low signal-to-noise ratio (SNR) situations. To address these challenges, we propose a three-fold solution: (1) Introducing multiple penalised state-space (MPSS) models that leverage data-driven regularisation; (2) Developing novel algorithms derived from backpropagation, gradient descent, and alternating least squares to solve MPSS models; (3) Presenting a K-fold cross-validation extension for evaluating regularisation parameters. We validate this MPSS regularisation framework through lower and more complex simulations under varying SNR conditions, including a large-scale synthetic magneto- and electro-encephalography (MEG/EEG) data analysis. In addition, we apply MPSS models to concurrently solve brain source localisation and functional connectivity problems for real event-related MEG/EEG data, encompassing thousands of sources on the cortical surface. The proposed methodology overcomes the limitations of existing approaches, such as constraints to small-scale and region-of-interest analyses. Thus, it may enable a more accurate and detailed exploration of cognitive brain functions.
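For orientation, the conventional baseline that the regularised MPSS framework is contrasted with, Kalman filtering, reduces in the scalar case to a few lines. This is a generic textbook filter, not the proposed method; the model `x_t = a x_{t-1} + w_t`, `y_t = x_t + v_t` and all parameters are illustrative.

```python
import numpy as np

def kalman_filter(y, a=0.9, q=0.1, r=1.0):
    """Scalar Kalman filter for x_t = a x_{t-1} + w_t, y_t = x_t + v_t,
    with process variance q and observation variance r."""
    x_est, p = 0.0, 1.0
    estimates = []
    for obs in y:
        # predict
        x_pred = a * x_est
        p_pred = a * a * p + q
        # update
        k = p_pred / (p_pred + r)           # Kalman gain
        x_est = x_pred + k * (obs - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x_est)
    return np.array(estimates)
```

The cost of running such updates for thousands of coupled cortical sources is what motivates the paper's data-driven regularisation and cross-validation machinery.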
Collapse
Affiliation(s)
- Jose Sanchez-Bornot
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee campus, Derry~Londonderry, United Kingdom.
| | - Roberto C Sotero
- Department of Radiology and Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| | - J A Scott Kelso
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee campus, Derry~Londonderry, United Kingdom; Human Brain & Behavior laboratory, Center for Complex Systems & Brain Sciences, Florida Atlantic University, Boca Raton, FL, USA
| | - Özgür Şimşek
- Bath Institute for the Augmented Human, University of Bath, Bath, BA2 7AY, United Kingdom
| | - Damien Coyle
- Intelligent Systems Research Centre, School of Computing, Engineering and Intelligent Systems, Ulster University, Magee campus, Derry~Londonderry, United Kingdom; Bath Institute for the Augmented Human, University of Bath, Bath, BA2 7AY, United Kingdom
| |
Collapse
|
21
|
Cahill MK, Collard M, Tse V, Reitman ME, Etchenique R, Kirst C, Poskanzer KE. Network-level encoding of local neurotransmitters in cortical astrocytes. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.12.01.568932. [PMID: 38106119 PMCID: PMC10723263 DOI: 10.1101/2023.12.01.568932] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
Astrocytes, the most abundant non-neuronal cell type in the mammalian brain, are crucial circuit components that respond to and modulate neuronal activity via calcium (Ca2+) signaling1-8. Astrocyte Ca2+ activity is highly heterogeneous and occurs across multiple spatiotemporal scales: from fast, subcellular activity3,4 to slow, synchronized activity that travels across connected astrocyte networks9-11. Furthermore, astrocyte network activity has been shown to influence a wide range of processes5,8,12. While astrocyte network activity has important implications for neuronal circuit function, the inputs that drive astrocyte network dynamics remain unclear. Here we used ex vivo and in vivo two-photon Ca2+ imaging of astrocytes while mimicking neuronal neurotransmitter inputs at multiple spatiotemporal scales. We find that brief, subcellular inputs of GABA and glutamate lead to widespread, long-lasting astrocyte Ca2+ responses beyond an individual stimulated cell. Further, we find that a key subset of Ca2+ activity, propagative events, differentiates astrocyte network responses to these two major neurotransmitters, and gates responses to future inputs. Together, our results demonstrate that local, transient neurotransmitter inputs are encoded by broad cortical astrocyte networks over the course of minutes, contributing to accumulating evidence across multiple model organisms that significant astrocyte-neuron communication occurs across slow, network-level spatiotemporal scales13-15. We anticipate that this study will be a starting point for future studies investigating the link between specific astrocyte Ca2+ activity and specific astrocyte functional outputs, which could build a consistent framework for astrocytic modulation of neuronal activity.
Collapse
|
22
|
Sato Y, Sakai Y, Hirata S. State-transition-free reinforcement learning in chimpanzees (Pan troglodytes). Learn Behav 2023; 51:413-427. [PMID: 37369920 DOI: 10.3758/s13420-023-00591-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/07/2023] [Indexed: 06/29/2023]
Abstract
The outcome of an action often occurs after a delay. One solution for learning appropriate actions from delayed outcomes is to rely on a chain of state transitions. Another solution, which does not rest on state transitions, is to use an eligibility trace (ET) that directly bridges a current outcome and multiple past actions via transient memories. Previous studies revealed that humans (Homo sapiens) learned appropriate actions in a behavioral task in which solutions based on the ET were effective but transition-based solutions were ineffective. This suggests that ET may be used in human learning systems. However, no studies have examined nonhuman animals with an equivalent behavioral task. We designed a task for nonhuman animals following a previous human study. In each trial, participants chose one of two stimuli that were randomly selected from three stimulus types: a stimulus associated with a food reward delivered immediately, a stimulus associated with a reward delivered after a few trials, and a stimulus associated with no reward. The presented stimuli did not vary according to the participants' choices. To maximize the total reward, participants had to learn the value of the stimulus associated with a delayed reward. Five chimpanzees (Pan troglodytes) performed the task using a touchscreen. Two chimpanzees were able to learn successfully, indicating that learning mechanisms that do not depend on state transitions were involved in the learning processes. The current study extends previous ET research by proposing a behavioral task and providing empirical data from chimpanzees.
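The eligibility-trace solution used in the task above can be sketched in a few lines: each chosen stimulus leaves a decaying memory, and any later reward is credited to all recently chosen stimuli in proportion to their traces, with no state-transition model. This is an illustrative update, not the authors' analysis code; the learning rate and trace decay are assumptions.

```python
import numpy as np

def learn_with_traces(episodes, n_stim, lr=0.1, decay=0.5):
    """Value learning without state transitions: choices leave decaying
    eligibility traces, and delayed rewards are credited to all recently
    chosen stimuli in proportion to their traces.
    episodes: list of (stimulus_index, reward) tuples in trial order."""
    values = np.zeros(n_stim)
    trace = np.zeros(n_stim)
    for stim, reward in episodes:
        trace *= decay                 # traces fade over trials
        trace[stim] += 1.0             # mark the current choice
        values += lr * reward * trace  # credit recent choices at reward time
    return values
```

A stimulus chosen two trials before a reward still gains value through its residual trace, which is exactly the behavior that distinguishes ET-based learners from transition-based ones in this task.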
Collapse
Grants
- 16H06283 Ministry of Education, Culture, Sports, Science, Japan Society for the Promotion of Science
- 18H05524 Ministry of Education, Culture, Sports, Science, Japan Society for the Promotion of Science
- 19J22889 Ministry of Education, Culture, Sports, Science, Japan Society for the Promotion of Science
- 26245069 Ministry of Education, Culture, Sports, Science, Japan Society for the Promotion of Science
- U04 Program for Leading Graduate Schools
Collapse
Affiliation(s)
- Yutaro Sato
- Wildlife Research Center, Kyoto University, Kyoto, Japan.
- University Administration Office, Headquarters for Management Strategy, Niigata University, Niigata, Japan.
| | - Yutaka Sakai
- Brain Science Institute, Tamagawa University, Tokyo, Japan
| | - Satoshi Hirata
- Wildlife Research Center, Kyoto University, Kyoto, Japan
| |
Collapse
|
23
|
Haimerl C, Ruff DA, Cohen MR, Savin C, Simoncelli EP. Targeted V1 comodulation supports task-adaptive sensory decisions. Nat Commun 2023; 14:7879. [PMID: 38036519 PMCID: PMC10689451 DOI: 10.1038/s41467-023-43432-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 11/09/2023] [Indexed: 12/02/2023] Open
Abstract
Sensory-guided behavior requires reliable encoding of stimulus information in neural populations, and flexible, task-specific readout. The former has been studied extensively, but the latter remains poorly understood. We introduce a theory for adaptive sensory processing based on functionally-targeted stochastic modulation. We show that responses of neurons in area V1 of monkeys performing a visual discrimination task exhibit low-dimensional, rapidly fluctuating gain modulation, which is stronger in task-informative neurons and can be used to decode from neural activity after few training trials, consistent with observed behavior. In a simulated hierarchical neural network model, such labels are learned quickly and can be used to adapt downstream readout, even after several intervening processing stages. Consistently, we find the modulatory signal estimated in V1 is also present in the activity of simultaneously recorded MT units, and is again strongest in task-informative neurons. These results support the idea that co-modulation facilitates task-adaptive hierarchical information routing.
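The logic of using shared gain fluctuations as functional labels can be illustrated with synthetic data. The toy generative sketch below is our assumption, not the paper's analysis: neurons share a stochastic modulator with coupling proportional to their task informativeness, and correlating each neuron's activity with the modulator recovers those labels.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_comodulation(n_neurons=50, n_trials=400):
    """Toy targeted comodulation: each neuron's gain fluctuates with a shared
    modulator m_t, with coupling proportional to an informativeness label.
    Correlating activity with m_t recovers the labels, which a downstream
    readout could use as decoding weights. All parameters are illustrative."""
    labels = np.linspace(0.0, 1.0, n_neurons)         # informativeness
    m = rng.normal(0.0, 1.0, n_trials)                # shared modulator
    rates = 5.0 * (1.0 + 0.5 * np.outer(m, labels))   # gain-modulated rates
    rates += rng.normal(0.0, 0.5, rates.shape)        # private noise
    est = np.array([np.corrcoef(m, rates[:, i])[0, 1]
                    for i in range(n_neurons)])
    return labels, est
```

Because the label estimate needs only a correlation with the shared fluctuation, it can be formed from few trials, consistent with the fast task adaptation reported above.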
Collapse
Affiliation(s)
- Caroline Haimerl
- Center for Neural Science, New York University, New York, NY, 10003, USA.
- Champalimaud Centre for the Unknown, Lisbon, Portugal.
| | - Douglas A Ruff
- Department of Neurobiology, University of Chicago, Chicago, IL, 60637, US
| | - Marlene R Cohen
- Department of Neurobiology, University of Chicago, Chicago, IL, 60637, US
| | - Cristina Savin
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Center for Data Science, New York University, New York, NY, 10011, USA
| | - Eero P Simoncelli
- Center for Neural Science, New York University, New York, NY, 10003, USA
- Center for Data Science, New York University, New York, NY, 10011, USA
- Flatiron Institute, Simons Foundation, New York, NY, 10010, USA
| |
Collapse
|
24
|
Bohnstingl T, Wozniak S, Pantazi A, Eleftheriou E. Online Spatio-Temporal Learning in Deep Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:8894-8908. [PMID: 35294357 DOI: 10.1109/tnnls.2022.3153985] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Biological neural networks are equipped with an inherent capability to continuously adapt through online learning. This aspect remains in stark contrast to learning with error backpropagation through time (BPTT) that involves offline computation of the gradients due to the need to unroll the network through time. Here, we present an alternative online learning algorithm ic framework for deep recurrent neural networks (RNNs) and spiking neural networks (SNNs), called online spatio-temporal learning (OSTL). It is based on insights from biology and proposes the clear separation of spatial and temporal gradient components. For shallow SNNs, OSTL is gradient equivalent to BPTT enabling for the first time online training of SNNs with BPTT-equivalent gradients. In addition, the proposed formulation unveils a class of SNN architectures trainable online at low time complexity. Moreover, we extend OSTL to a generic form, applicable to a wide range of network architectures, including networks comprising long short-term memory (LSTM) and gated recurrent units (GRUs). We demonstrate the operation of our algorithm ic framework on various tasks from language modeling to speech recognition and obtain results on par with the BPTT baselines.
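The core idea, carrying the temporal gradient component forward in time instead of unrolling the network, can be shown for a single leaky unit. This is an RTRL-style scalar sketch in the spirit of OSTL, not the paper's algorithm: the trace `e_t` accumulates dh/dw forward, so every weight update is local in time.

```python
def train_leaky_unit_online(xs, targets, alpha=0.9, lr=0.05, steps=200):
    """Online training of a leaky unit h_t = alpha*h_{t-1} + w*x_t, without
    unrolling: the derivative dh_t/dw is carried forward as a trace
    e_t = alpha*e_{t-1} + x_t, and each step takes a gradient step on the
    squared error using only current quantities (illustrative rule)."""
    w = 0.0
    for _ in range(steps):
        h, e = 0.0, 0.0
        for x, y in zip(xs, targets):
            h = alpha * h + w * x
            e = alpha * e + x        # forward-accumulated dh/dw
            w -= lr * (h - y) * e    # online gradient step
    return w
```

Because the trace replaces the backward pass through time, memory cost stays constant in sequence length, the property that makes such rules attractive for online learning.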
Collapse
|
25
|
Ma H, Khaled HG, Wang X, Mandelberg NJ, Cohen SM, He X, Tsien RW. Excitation-transcription coupling, neuronal gene expression and synaptic plasticity. Nat Rev Neurosci 2023; 24:672-692. [PMID: 37773070 DOI: 10.1038/s41583-023-00742-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/24/2023] [Indexed: 09/30/2023]
Abstract
Excitation-transcription coupling (E-TC) links synaptic and cellular activity to nuclear gene transcription. It is generally accepted that E-TC makes a crucial contribution to learning and memory through its role in underpinning long-lasting synaptic enhancement in late-phase long-term potentiation and has more recently been linked to late-phase long-term depression: both processes require de novo gene transcription, mRNA translation and protein synthesis. E-TC begins with the activation of glutamate-gated N-methyl-D-aspartate-type receptors and voltage-gated L-type Ca2+ channels at the membrane and culminates in the activation of transcription factors in the nucleus. These receptors and ion channels mediate E-TC through mechanisms that include long-range signalling from the synapse to the nucleus and local interactions within dendritic spines, among other possibilities. Growing experimental evidence links these E-TC mechanisms to late-phase long-term potentiation and learning and memory. These advances in our understanding of the molecular mechanisms of E-TC mean that future efforts can focus on understanding its mesoscale functions and how it regulates neuronal network activity and behaviour in physiological and pathological conditions.
Collapse
Affiliation(s)
- Huan Ma
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, China.
- Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-Machine Integration, State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China.
- NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, China.
- Research Units for Emotion and Emotional Disorders, Chinese Academy of Medical Sciences, Beijing, China.
| | - Houda G Khaled
- NYU Neuroscience Institute and Department of Neuroscience and Physiology, NYU Langone Medical Center, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
| | - Xiaohan Wang
- NYU Neuroscience Institute and Department of Neuroscience and Physiology, NYU Langone Medical Center, New York, NY, USA
| | - Nataniel J Mandelberg
- NYU Neuroscience Institute and Department of Neuroscience and Physiology, NYU Langone Medical Center, New York, NY, USA
| | - Samuel M Cohen
- NYU Neuroscience Institute and Department of Neuroscience and Physiology, NYU Langone Medical Center, New York, NY, USA
| | - Xingzhi He
- Department of Neurobiology, Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-Machine Integration, State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
- NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, China
- Research Units for Emotion and Emotional Disorders, Chinese Academy of Medical Sciences, Beijing, China
| | - Richard W Tsien
- NYU Neuroscience Institute and Department of Neuroscience and Physiology, NYU Langone Medical Center, New York, NY, USA.
- Center for Neural Science, New York University, New York, NY, USA.
| |
Collapse
|
26
|
Ma G, Yan R, Tang H. Exploiting noise as a resource for computation and learning in spiking neural networks. PATTERNS (NEW YORK, N.Y.) 2023; 4:100831. [PMID: 37876899 PMCID: PMC10591140 DOI: 10.1016/j.patter.2023.100831] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 07/06/2023] [Accepted: 08/07/2023] [Indexed: 10/26/2023]
Abstract
Networks of spiking neurons underpin the extraordinary information-processing capabilities of the brain and have become pillar models in neuromorphic artificial intelligence. Despite extensive research on spiking neural networks (SNNs), most studies are established on deterministic models, overlooking the inherent non-deterministic, noisy nature of neural computations. This study introduces the noisy SNN (NSNN) and the noise-driven learning (NDL) rule by incorporating noisy neuronal dynamics to exploit the computational advantages of noisy neural processing. The NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation and learning. We demonstrate that this framework leads to spiking neural models with competitive performance, improved robustness against challenging perturbations compared with deterministic SNNs, and better reproducing probabilistic computation in neural coding. Generally, this study offers a powerful and easy-to-use tool for machine learning, neuromorphic intelligence practitioners, and computational neuroscience researchers.
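The premise of non-deterministic neural computation can be illustrated with an escape-noise spiking neuron: instead of a hard threshold, the membrane potential sets a spike probability. This is a generic construction, not the NSNN model itself, and the parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_lif_step(v, i_in, v_th=1.0, leak=0.9, beta=5.0):
    """One step of a leaky integrate-and-fire neuron with escape noise:
    the neuron spikes with a sigmoidal probability of the membrane potential,
    making the spike train stochastic and the spike probability smooth,
    which is the kind of property noise-driven learning rules exploit."""
    v = leak * v + i_in
    p_spike = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    spike = float(rng.random() < p_spike)
    v = v * (1.0 - spike)  # reset on spike
    return v, spike, p_spike
```

Averaging many stochastic runs recovers a graded rate code from binary spikes, one way noise can serve as a computational resource rather than a nuisance.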
Collapse
Affiliation(s)
- Gehua Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
| | - Rui Yan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, PRC
| | - Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, PRC
| |
Collapse
|
27
|
Schmid D, Jarvers C, Neumann H. Canonical circuit computations for computer vision. BIOLOGICAL CYBERNETICS 2023; 117:299-329. [PMID: 37306782 PMCID: PMC10600314 DOI: 10.1007/s00422-023-00966-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 05/18/2023] [Indexed: 06/13/2023]
Abstract
Advanced computer vision mechanisms have been inspired by neuroscientific findings. However, with the focus on improving benchmark achievements, technical solutions have been shaped by application and engineering constraints. This includes the training of neural networks which led to the development of feature detectors optimally suited to the application domain. However, the limitations of such approaches motivate the need to identify computational principles, or motifs, in biological vision that can enable further foundational advances in machine vision. We propose to utilize structural and functional principles of neural systems that have been largely overlooked. They potentially provide new inspirations for computer vision mechanisms and models. Recurrent feedforward, lateral, and feedback interactions characterize general principles underlying processing in mammals. We derive a formal specification of core computational motifs that utilize these principles. These are combined to define model mechanisms for visual shape and motion processing. We demonstrate how such a framework can be adopted to run on neuromorphic brain-inspired hardware platforms and can be extended to automatically adapt to environment statistics. We argue that the identified principles and their formalization inspire sophisticated computational mechanisms with improved explanatory scope. These and other elaborated, biologically inspired models can be employed to design computer vision solutions for different tasks and to advance neural network architectures for learning.
Collapse
Affiliation(s)
- Daniel Schmid
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
| | - Christian Jarvers
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
| | - Heiko Neumann
- Institute for Neural Information Processing, Ulm University, James-Franck-Ring, Ulm, 89081 Germany
| |
Collapse
|
28
|
Borland MS, Buell EP, Riley JR, Carroll AM, Moreno NA, Sharma P, Grasse KM, Buell JM, Kilgard MP, Engineer CT. Precise sound characteristics drive plasticity in the primary auditory cortex with VNS-sound pairing. Front Neurosci 2023; 17:1248936. [PMID: 37732302 PMCID: PMC10508341 DOI: 10.3389/fnins.2023.1248936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Accepted: 08/22/2023] [Indexed: 09/22/2023] Open
Abstract
INTRODUCTION Repeatedly pairing a tone with vagus nerve stimulation (VNS) alters frequency tuning across the auditory pathway. Pairing VNS with speech sounds selectively enhances the primary auditory cortex response to the paired sounds. It is not yet known how altering the speech sounds paired with VNS alters responses. In this study, we test the hypothesis that the sounds that are presented and paired with VNS will influence the neural plasticity observed following VNS-sound pairing. METHODS To explore the relationship between acoustic experience and neural plasticity, responses were recorded from primary auditory cortex (A1) after VNS was repeatedly paired with the speech sounds 'rad' and 'lad' or paired with only the speech sound 'rad' while 'lad' was an unpaired background sound. RESULTS Pairing both sounds with VNS increased the response strength and neural discriminability of the paired sounds in the primary auditory cortex. Surprisingly, pairing only 'rad' with VNS did not alter A1 responses. DISCUSSION These results suggest that the specific acoustic contrasts associated with VNS can powerfully shape neural activity in the auditory pathway. Methods to promote plasticity in the central auditory system represent a new therapeutic avenue to treat auditory processing disorders. Understanding how different sound contrasts and neural activity patterns shape plasticity could have important clinical implications.
Collapse
Affiliation(s)
- Michael S. Borland
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Elizabeth P. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Jonathan R. Riley
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Alan M. Carroll
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Nicole A. Moreno
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Pryanka Sharma
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Katelyn M. Grasse
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
| | - John M. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Michael P. Kilgard
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| | - Crystal T. Engineer
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
| |
Collapse
|
29
|
Syed GS, Zhou Y, Warner J, Bhaskaran H. Atomically thin optomemristive feedback neurons. NATURE NANOTECHNOLOGY 2023; 18:1036-1043. [PMID: 37142710 DOI: 10.1038/s41565-023-01391-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 03/24/2023] [Indexed: 05/06/2023]
Abstract
Cognitive functions such as learning in mammalian brains have been attributed to the presence of neuronal circuits with feed-forward and feedback topologies. Such networks have interactions within and between neurons that provide excitatory and inhibitory modulation effects. In neuromorphic computing, neurons that combine and broadcast both excitatory and inhibitory signals using one nanoscale device are still an elusive goal. Here we introduce a type-II, two-dimensional heterojunction-based optomemristive neuron, using a stack of MoS2, WS2 and graphene, that demonstrates both of these effects via optoelectronic charge-trapping mechanisms. We show that such neurons provide a nonlinear and rectified integration of information that can be optically broadcast. Such a neuron has applications in machine learning, particularly in winner-take-all networks. We then apply such networks in simulations to establish unsupervised competitive learning for data partitioning, as well as cooperative learning in solving combinatorial optimization problems.
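The winner-take-all computation mentioned above can be sketched with a few lines of mutual-inhibition dynamics: each unit is excited by its own activity and inhibited by the summed activity of the others, a software analogue of a combined excitatory/inhibitory broadcast. The gain and inhibition parameters are illustrative assumptions, not device characteristics.

```python
import numpy as np

def winner_take_all(scores, inhibition=1.0, gain=0.1, n_iter=50):
    """Soft winner-take-all via mutual inhibition (sketch): iterating
    self-excitation minus pooled inhibition silences all units except
    the one with the largest input."""
    a = scores.astype(float).copy()
    for _ in range(n_iter):
        total = a.sum()
        a = np.clip(a + gain * (a - inhibition * (total - a)), 0.0, 1.0)
    return a

a = winner_take_all(np.array([0.2, 0.9, 0.4]))
print(a)  # converges to a one-hot vector on the strongest input
```

In a competitive-learning setting, the surviving unit would be the one whose weights are then updated toward the current input.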
Collapse
Affiliation(s)
- Ghazi Sarwat Syed
- IBM Research - Europe, Rüschlikon, Switzerland.
- Department of Materials, University of Oxford, Oxford, UK.
| | - Yingqiu Zhou
- Department of Materials, University of Oxford, Oxford, UK
- Technical University of Denmark, Lyngby, Denmark
| | - Jamie Warner
- Walker Department of Mechanical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
| | | |
Collapse
|
30
|
Zhang T, Cheng X, Jia S, Li CT, Poo MM, Xu B. A brain-inspired algorithm that mitigates catastrophic forgetting of artificial and spiking neural networks with low computational cost. SCIENCE ADVANCES 2023; 9:eadi2947. [PMID: 37624895 PMCID: PMC10456855 DOI: 10.1126/sciadv.adi2947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2023] [Accepted: 07/27/2023] [Indexed: 08/27/2023]
Abstract
Neuromodulators in the brain act globally on many forms of synaptic plasticity, represented as metaplasticity, which is rarely considered by existing spiking (SNNs) and nonspiking artificial neural networks (ANNs). Here, we report an efficient brain-inspired computing algorithm for SNNs and ANNs, referred to here as neuromodulation-assisted credit assignment (NACA), which uses expectation signals to induce defined levels of neuromodulators at selected synapses, whereby long-term synaptic potentiation and depression are modified in a nonlinear manner depending on the neuromodulator level. The NACA algorithm achieved high recognition accuracy with substantially reduced computational cost in learning spatial and temporal classification tasks. Notably, NACA was also verified as efficient for learning five different class-continual learning tasks with varying degrees of complexity, markedly mitigating catastrophic forgetting at low computational cost. Mapping synaptic weight changes showed that these benefits could be explained by the sparse and targeted synaptic modifications attributed to expectation-based global neuromodulation.
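The mechanism described, an expectation-derived global neuromodulator that nonlinearly scales potentiation and depression at co-active synapses, can be sketched as a three-factor update. The function name, the tanh nonlinearity, and the rate coding are illustrative assumptions, not the published NACA rule.

```python
import numpy as np

def naca_like_update(w, pre, post, expected, actual, lr=0.05):
    """Sketch of an expectation-gated three-factor update: a global
    'neuromodulator' level derived from the expectation mismatch scales a
    Hebbian co-activity term nonlinearly (tanh), so the magnitude and sign
    of LTP/LTD depend on the neuromodulator level."""
    modulator = np.tanh(expected - actual)   # signed, saturating global signal
    hebb = np.outer(post, pre)               # co-activity selects synapses
    return w + lr * modulator * hebb

pre = np.array([1.0, 0.0, 1.0])    # presynaptic rates
post = np.array([1.0, 0.5])        # postsynaptic rates
w = naca_like_update(np.zeros((2, 3)), pre, post, expected=1.0, actual=0.2)
print(w)  # only synapses with co-active pre and post are modified
```

Because the Hebbian term is zero wherever either factor is silent, the resulting weight changes are sparse and targeted, matching the qualitative claim of the abstract.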
Collapse
Affiliation(s)
- Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
| | - Xiang Cheng
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Shuncheng Jia
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Chengyu T Li
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
| | - Mu-ming Poo
- Shanghai Center for Brain Science and Brain-inspired Technology, Lingang Laboratory, Shanghai 200031, China
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
| | - Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
| |
Collapse
|
31
|
Saponati M, Vinck M. Sequence anticipation and spike-timing-dependent plasticity emerge from a predictive learning rule. Nat Commun 2023; 14:4985. [PMID: 37604825 PMCID: PMC10442404 DOI: 10.1038/s41467-023-40651-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 08/03/2023] [Indexed: 08/23/2023] Open
Abstract
Intelligent behavior depends on the brain's ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, providing a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory signalling and recall in a recurrent network. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle to orchestrate learning and synaptic plasticity in single neurons.
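The amplification of predictive synapses can be illustrated with a toy reduction: one synapse always fires a step before another, and the neuron treats its membrane potential as a prediction of the next step's total input. This is an illustrative simplification under stated assumptions, not the paper's actual rule.

```python
import numpy as np

def train_predictive(n_trials=200, lr=0.1):
    """Toy predictive plasticity: synapse A is active one time step before
    synapse B. With membrane potential v as a prediction of the next step's
    input, the local update dw = lr * x_t * (input_{t+1} - v_t) strengthens
    synapses whose activity anticipates later input."""
    w = np.zeros(2)
    seq = [np.array([1.0, 0.0]),   # A fires
           np.array([0.0, 1.0]),   # then B fires
           np.array([0.0, 0.0])]   # then silence
    for _ in range(n_trials):
        for t in range(len(seq) - 1):
            v = w @ seq[t]
            w += lr * seq[t] * (seq[t + 1].sum() - v)
    return w

w = train_predictive()
print(w)  # the anticipatory synapse A ends up strong; B stays at zero
```

The neuron's response therefore shifts toward the earliest input in the sequence, the behavior described in the abstract.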
Collapse
Affiliation(s)
- Matteo Saponati
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt Am Main, Germany.
- IMPRS for Neural Circuits, Max-Planck Institute for Brain Research, 60438, Frankfurt Am Main, Germany.
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, 6525, Nijmegen, The Netherlands.
| | - Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt Am Main, Germany.
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, 6525, Nijmegen, The Netherlands.
| |
Collapse
|
32
|
Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. ARXIV 2023:arXiv:2308.04988v1. [PMID: 37608931 PMCID: PMC10441445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 08/24/2023]
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models, REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
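One of the two prototype models named above, REINFORCE, has a very compact core: a stochastic unit's weight is updated by reward times the score function of its own action, using only locally available quantities. Below is a minimal sketch on a one-armed task; the specific task and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reinforce_bandit(n_steps=2000, lr=0.2):
    """Minimal REINFORCE: a stochastic unit emits action 1 with probability
    sigmoid(w); only action 1 is rewarded. The update
    dw = lr * r * d(log p(a))/dw is local and reward-modulated, which is why
    REINFORCE serves as a prototype normative plasticity model."""
    w = 0.0
    for _ in range(n_steps):
        p = sigmoid(w)
        a = rng.random() < p            # stochastic 'spike'/action
        r = 1.0 if a else 0.0           # reward favors action 1
        grad_logp = (1.0 - p) if a else -p
        w += lr * r * grad_logp
    return sigmoid(w)

p_final = reinforce_bandit()
print(p_final > 0.9)  # the rewarded action becomes dominant
```

The same three-factor structure (pre/post activity and a global reward signal) is what makes the rule a candidate for biological plausibility in the review's sense.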
Collapse
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1
| | - Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
| |
Collapse
|
33
|
Wärnberg E, Kumar A. Feasibility of dopamine as a vector-valued feedback signal in the basal ganglia. Proc Natl Acad Sci U S A 2023; 120:e2221994120. [PMID: 37527344 PMCID: PMC10410740 DOI: 10.1073/pnas.2221994120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 06/08/2023] [Indexed: 08/03/2023] Open
Abstract
It is well established that midbrain dopaminergic neurons support reinforcement learning (RL) in the basal ganglia by transmitting a reward prediction error (RPE) to the striatum. In particular, different computational models and experiments have shown that a striatum-wide RPE signal can support RL over a small discrete set of actions (e.g., go/no-go, choose left/right). However, there is accumulating evidence that the basal ganglia functions not as a selector between predefined actions but rather as a dynamical system with graded, continuous outputs. To reconcile this view with RL, there is a need to explain how dopamine could support learning of continuous outputs, rather than discrete action values. Inspired by the recent observations that besides RPE, the firing rates of midbrain dopaminergic neurons correlate with motor and cognitive variables, we propose a model in which the dopamine signal in the striatum carries a vector-valued error feedback signal (a loss gradient) instead of a homogeneous scalar error (a loss). We implement a local, "three-factor" corticostriatal plasticity rule involving the presynaptic firing rate, a postsynaptic factor, and the unique dopamine concentration perceived by each striatal neuron. With this learning rule, we show that such a vector-valued feedback signal results in an increased capacity to learn a multidimensional series of real-valued outputs. Crucially, we demonstrate that this plasticity rule does not require precise nigrostriatal synapses but remains compatible with experimental observations of random placement of varicosities and diffuse volume transmission of dopamine.
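The contrast between a scalar RPE and a vector-valued error can be sketched in a few lines: each "striatal" neuron sees its own dopamine concentration standing in for one component of a loss gradient, and the weight update uses only locally available factors. The identity postsynaptic factor and the toy linear task are simplifying assumptions, not the paper's network.

```python
import numpy as np

def three_factor_step(W, x, d, lr=0.1):
    """One step of a local three-factor corticostriatal rule (sketch):
    dW[i, j] = lr * d[i] * x[j], where x are presynaptic cortical rates and
    d[i] is the dopamine concentration perceived by striatal neuron i,
    carrying one component of a vector-valued error."""
    return W + lr * np.outer(d, x)

# Learn a continuous 2-D output from a 3-D cortical input.
W = np.zeros((2, 3))
x = np.array([1.0, 0.5, -0.5])
target = np.array([0.8, -0.3])
for _ in range(200):
    d = target - W @ x        # per-neuron 'dopamine' = local error component
    W = three_factor_step(W, x, d)
print(W @ x)  # converges to the continuous target vector
```

A single scalar `d` broadcast to both output neurons could not drive them to different targets, which is the capacity argument the abstract makes for vector-valued feedback.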
Collapse
Affiliation(s)
- Emil Wärnberg
- Department of Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
| | - Arvind Kumar
- Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
| |
Collapse
|
34
|
Li PY, Roxin A. Rapid memory encoding in a recurrent network model with behavioral time scale synaptic plasticity. PLoS Comput Biol 2023; 19:e1011139. [PMID: 37624848 PMCID: PMC10484462 DOI: 10.1371/journal.pcbi.1011139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 09/07/2023] [Accepted: 07/10/2023] [Indexed: 08/27/2023] Open
Abstract
Episodic memories are formed after a single exposure to novel stimuli. The plasticity mechanisms underlying such fast learning still remain largely unknown. Recently, it was shown that cells in area CA1 of the hippocampus of mice could form or shift their place fields after a single traversal of a virtual linear track. In-vivo intracellular recordings in CA1 cells revealed that previously silent inputs from CA3 could be switched on when they occurred within a few seconds of a dendritic plateau potential (PP) in the post-synaptic cell, a phenomenon dubbed Behavioral Time-scale Plasticity (BTSP). A recently developed computational framework for BTSP, in which the dynamics of synaptic traces related to the pre-synaptic activity and the post-synaptic PP are explicitly modelled, can account for these experimental findings. Here we show that this model of plasticity can be further simplified to a 1D map which describes changes to the synaptic weights after a single trial. We use a temporally symmetric version of this map to study the storage of a large number of spatial memories in a recurrent network, such as CA3. Specifically, the simplicity of the map allows us to calculate the correlation of the synaptic weight matrix with any given past environment analytically. We show that the calculated memory trace can be used to predict the emergence and stability of bump attractors in a high dimensional neural network model endowed with BTSP.
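The idea of a one-trial 1D weight map with a temporally symmetric kernel can be sketched as follows: the weight change after a single pairing depends only on the current weight and on the interval between presynaptic activity and the plateau potential. Kernel shape and parameter values here are assumptions for illustration, not the fitted map from the paper.

```python
import numpy as np

def btsp_map(w, dt_sec, w_max=1.0, tau=2.0, k=0.8):
    """Illustrative one-trial BTSP weight map: dt_sec is the interval
    between presynaptic activity and the dendritic plateau potential. The
    overlap decays symmetrically with |dt| (a temporally symmetric kernel),
    and the weight is pulled toward w_max in proportion to it, so a single
    pairing within seconds of the plateau produces a large change."""
    overlap = k * np.exp(-np.abs(dt_sec) / tau)
    return w + overlap * (w_max - w)

print(btsp_map(0.1, dt_sec=0.0))    # near-coincident input: strong potentiation
print(btsp_map(0.1, dt_sec=10.0))   # distant input: weight barely changes
```

Because the map is one-dimensional in `w`, iterating it over a sequence of environments gives the analytically tractable memory trace the abstract exploits.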
Collapse
Affiliation(s)
- Pan Ye Li
- Centre de Recerca Matemàtica, Barcelona, Spain
| | - Alex Roxin
- Centre de Recerca Matemàtica, Barcelona, Spain
| |
Collapse
|
35
|
Vautrelle N, Coizet V, Leriche M, Dahan L, Schulz JM, Zhang YF, Zeghbib A, Overton PG, Bracci E, Redgrave P, Reynolds JN. Sensory Reinforced Corticostriatal Plasticity. Curr Neuropharmacol 2023; 22:CN-EPUB-133306. [PMID: 37533245 PMCID: PMC11097983 DOI: 10.2174/1570159x21666230801110359] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 02/04/2023] [Accepted: 02/10/2023] [Indexed: 08/04/2023] Open
Abstract
BACKGROUND Regional changes in corticostriatal transmission induced by phasic dopaminergic signals are an essential feature of the neural network responsible for instrumental reinforcement during discovery of an action. However, the timing of signals that are thought to contribute to the induction of corticostriatal plasticity is difficult to reconcile within the framework of behavioural reinforcement learning, because the reinforcer is normally delayed relative to the selection and execution of causally-related actions. OBJECTIVE While recent studies have started to address the relevance of delayed reinforcement signals and their impact on corticostriatal processing, our objective was to establish a model in which a sensory reinforcer triggers appropriately delayed reinforcement signals relayed to the striatum via intact neuronal pathways and to investigate the effects on corticostriatal plasticity. METHODS We measured corticostriatal plasticity with electrophysiological recordings using a light flash as a natural sensory reinforcer, and pharmacological manipulations were applied in an in vivo anesthetized rat model preparation. RESULTS We demonstrate that the spiking of striatal neurons evoked by single-pulse stimulation of the motor cortex can be potentiated by a natural sensory reinforcer, operating through intact afferent pathways, with signal timing approximating that required for behavioural reinforcement. The pharmacological blockade of dopamine receptors attenuated the observed potentiation of corticostriatal neurotransmission. CONCLUSION This novel in vivo model of corticostriatal plasticity offers a behaviourally relevant framework to address the physiological, anatomical, cellular, and molecular bases of instrumental reinforcement learning.
Collapse
Affiliation(s)
- Nicolas Vautrelle
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - Véronique Coizet
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Institut des Neurosciences de Grenoble, Université Joseph Fourier, Inserm, U1216, 38706 La Tronche Cedex, France
| | - Mariana Leriche
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - Lionel Dahan
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Centre de Recherches sur la Cognition Animale, Université de Toulouse, UPS, 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France
| | - Jan M. Schulz
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Biomedicine, University of Basel, CH - 4056 Basel, Switzerland
| | - Yan-Feng Zhang
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Clinical and Biomedical Sciences, University of Exeter Medical School, Hatherly Laboratories, Exeter EX4 4PS, United Kingdom
| | - Abdelhafid Zeghbib
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - Paul G. Overton
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - Enrico Bracci
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - Peter Redgrave
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
| | - John N.J. Reynolds
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
| |
Collapse
|
36
|
Zajzon B, Duarte R, Morrison A. Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. Front Integr Neurosci 2023; 17:935177. [PMID: 37396571 PMCID: PMC10310927 DOI: 10.3389/fnint.2023.935177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 05/15/2023] [Indexed: 07/04/2023] Open
Abstract
To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on these, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We demonstrate a limitation of the model consisting in the hard-wiring of the sequence order in the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically-plausible constraints.
Collapse
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
| | - Renato Duarte
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
| |
Collapse
|
37
|
Rozells J, Gavornik JP. Optogenetic manipulation of inhibitory interneurons can be used to validate a model of spatiotemporal sequence learning. Front Comput Neurosci 2023; 17:1198128. [PMID: 37362060 PMCID: PMC10288026 DOI: 10.3389/fncom.2023.1198128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 05/24/2023] [Indexed: 06/28/2023] Open
Abstract
The brain uses temporal information to link discrete events into memory structures supporting recognition, prediction, and a wide variety of complex behaviors. It is still an open question how experience-dependent synaptic plasticity creates memories including temporal and ordinal information. Various models have been proposed to explain how this could work, but these are often difficult to validate in a living brain. A recent model developed to explain sequence learning in the visual cortex encodes intervals in recurrent excitatory synapses and uses a learned offset between excitation and inhibition to generate precisely timed "messenger" cells that signal the end of an instance of time. This mechanism suggests that the recall of stored temporal intervals should be particularly sensitive to the activity of inhibitory interneurons that can be easily targeted in vivo with standard optogenetic tools. In this work we examined how simulated optogenetic manipulations of inhibitory cells modifies temporal learning and recall based on these mechanisms. We show that disinhibition and excess inhibition during learning or testing cause characteristic errors in recalled timing that could be used to validate the model in vivo using either physiological or behavioral measurements.
Collapse
Affiliation(s)
| | - Jeffrey P. Gavornik
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA, United States
| |
Collapse
|
38
|
Brea J, Clayton NS, Gerstner W. Computational models of episodic-like memory in food-caching birds. Nat Commun 2023; 14:2979. [PMID: 37221167 DOI: 10.1038/s41467-023-38570-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 05/08/2023] [Indexed: 05/25/2023] Open
Abstract
Birds of the crow family adapt food-caching strategies to anticipated needs at the time of cache recovery and rely on memory of the what, where and when of previous caching events to recover their hidden food. It is unclear if this behavior can be explained by simple associative learning or if it relies on higher cognitive processes like mental time-travel. We present a computational model and propose a neural implementation of food-caching behavior. The model has hunger variables for motivational control, reward-modulated update of retrieval and caching policies and an associative neural network for remembering caching events with a memory consolidation mechanism for flexible decoding of the age of a memory. Our methodology of formalizing experimental protocols is transferable to other domains and facilitates model evaluation and experiment design. Here, we show that memory-augmented, associative reinforcement learning without mental time-travel is sufficient to explain the results of 28 behavioral experiments with food-caching birds.
Collapse
Affiliation(s)
- Johanni Brea
- School of Computer and Communication Science, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- School of Life Science, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Nicola S Clayton
- Department of Psychology, University of Cambridge, Cambridge, UK
- Wulfram Gerstner
- School of Computer and Communication Science, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- School of Life Science, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
39
Schmidgall S, Hays J. Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks. Front Neurosci 2023; 17:1183321. [PMID: 37250397 PMCID: PMC10213417 DOI: 10.3389/fnins.2023.1183321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 04/06/2023] [Indexed: 05/31/2023] Open
Abstract
We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in Spiking Neural Networks (SNNs) with gradient descent via a framework of learning-to-learn to address challenging online learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
Affiliation(s)
- Samuel Schmidgall
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joe Hays
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
40
Mishra R, Suri M. A survey and perspective on neuromorphic continual learning systems. Front Neurosci 2023; 17:1149410. [PMID: 37214407 PMCID: PMC10194827 DOI: 10.3389/fnins.2023.1149410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Accepted: 04/03/2023] [Indexed: 05/24/2023] Open
Abstract
With the advent of low-power neuromorphic computing systems, new possibilities have emerged for deployment in sectors, such as healthcare and transport, that require intelligent autonomous applications. These applications require reliable low-power solutions for sequentially adapting to new relevant data without loss of learning. Neuromorphic systems are inherently inspired by biological neural networks and have the potential to offer an efficient solution to the challenge of continual learning. With increasing attention in this area, we present the first comprehensive review of state-of-the-art neuromorphic continual learning (NCL) paradigms. The significance of our study is multi-fold. We summarize recent progress and propose a plausible roadmap for developing end-to-end NCL systems. We also attempt to identify the gap between research and real-world deployment of NCL systems in multiple applications. We do so by assessing recent contributions in neuromorphic continual learning at multiple levels: applications, algorithms, architectures, and hardware. We discuss the relevance of NCL systems and draw out application-specific requisites. We analyze the biological underpinnings that are used for acquiring high-level performance. At the hardware level, we assess the ability of current neuromorphic platforms and emerging nano-device-based architectures to support these algorithms under several constraints. Further, we propose refinements to continual learning metrics so that they can be applied to NCL systems. Finally, the review identifies gaps and possible solutions that have not yet been addressed for deploying application-specific NCL systems in real-life scenarios.
41
Zeng J, Li X, Zhang R, Lv M, Wang Y, Tan K, Xia X, Wan J, Jing M, Zhang X, Li Y, Yang Y, Wang L, Chu J, Li Y, Li Y. Local 5-HT signaling bi-directionally regulates the coincidence time window for associative learning. Neuron 2023; 111:1118-1135.e5. [PMID: 36706757 PMCID: PMC11152601 DOI: 10.1016/j.neuron.2022.12.034] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 10/03/2022] [Accepted: 12/30/2022] [Indexed: 01/27/2023]
Abstract
The coincidence between conditioned stimulus (CS) and unconditioned stimulus (US) is essential for associative learning; however, the mechanism regulating the duration of this temporal window remains unclear. Here, we found that serotonin (5-HT) bi-directionally regulates the coincidence time window of olfactory learning in Drosophila and affects synaptic plasticity of Kenyon cells (KCs) in the mushroom body (MB). Utilizing GPCR-activation-based (GRAB) neurotransmitter sensors, we found that KC-released acetylcholine (ACh) activates a serotonergic dorsal paired medial (DPM) neuron, which in turn provides inhibitory feedback to KCs. Physiological stimuli induce spatially heterogeneous 5-HT signals, which proportionally gate the intrinsic coincidence time windows of different MB compartments. Artificially reducing or increasing the DPM neuron-released 5-HT shortens or prolongs the coincidence window, respectively. In a sequential trace conditioning paradigm, this serotonergic neuromodulation helps to bridge the CS-US temporal gap. Altogether, we report a model circuitry for perceiving the temporal coincidence and determining the causal relationship between environmental events.
Affiliation(s)
- Jianzhi Zeng
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; Institute of Molecular Physiology, Shenzhen Bay Laboratory, Shenzhen 518132, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Hefei National Laboratory for Physical Sciences at the Microscale, CAS Key Laboratory of Brain Function and Disease, School of Life Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, Anhui, China.
- Xuelin Li
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Renzimo Zhang
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Yuanpei College, Peking University, Beijing 100871, China
- Mingyue Lv
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Yipan Wang
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Ke Tan
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China
- Xiju Xia
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; PKU-THU-NIBS Joint Graduate Program, Beijing 100871, China
- Jinxia Wan
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Miao Jing
- Chinese Institute for Brain Research, Beijing 102206, China
- Xiuning Zhang
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
- Yu Li
- School of Medicine, The Chinese University of Hong Kong, Shenzhen 518172, China
- Yang Yang
- Institute of Biophysics, State Key Laboratory of Brain and Cognitive Science, Center for Excellence in Biomacromolecules, Chinese Academy of Sciences, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Liang Wang
- Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology & Center for Biomedical Optics and Molecular Imaging & CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jun Chu
- Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology & Center for Biomedical Optics and Molecular Imaging & CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yan Li
- Institute of Biophysics, State Key Laboratory of Brain and Cognitive Science, Center for Excellence in Biomacromolecules, Chinese Academy of Sciences, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yulong Li
- State Key Laboratory of Membrane Biology, School of Life Sciences, Peking University, Beijing 100871, China; Institute of Molecular Physiology, Shenzhen Bay Laboratory, Shenzhen 518132, China; PKU-IDG/McGovern Institute for Brain Research, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Yuanpei College, Peking University, Beijing 100871, China; PKU-THU-NIBS Joint Graduate Program, Beijing 100871, China; Chinese Institute for Brain Research, Beijing 102206, China.
42
Yagishita S. Cellular bases for reward-related dopamine actions. Neurosci Res 2023; 188:1-9. [PMID: 36496085 DOI: 10.1016/j.neures.2022.12.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 11/09/2022] [Accepted: 12/06/2022] [Indexed: 12/12/2022]
Abstract
Dopamine neurons exhibit transient increases and decreases in their firing rate upon reward and punishment during learning. This bidirectional modulation of dopamine dynamics occurs on the order of hundreds of milliseconds and is sensitively detected to reinforce the preceding sensorimotor events. These observations indicate that the mechanisms of dopamine detection at the projection sites are remarkably precise in both time and concentration. A major target of dopamine projections is the striatum, including the ventral region of the nucleus accumbens, which mainly comprises dopamine D1 and D2 receptor (D1R and D2R)-expressing spiny projection neurons. Although the involvement of D1R and D2R in dopamine-dependent learning has been suggested, the exact cellular bases for detecting transient dopamine signaling remain unclear. This review discusses recent cellular studies on the novel synaptic mechanisms for detecting transient dopamine signals associated with learning. Analyses of behavior based on these mechanisms have further revealed new behavioral aspects that are closely associated with these synaptic mechanisms. Thus, it is gradually becoming possible to mechanistically explain behavioral learning via its synaptic and cellular bases in rodents.
Affiliation(s)
- Sho Yagishita
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan.
43
Gao Y. A computational model of learning flexible navigation in a maze by layout-conforming replay of place cells. Front Comput Neurosci 2023; 17:1053097. [PMID: 36846726 PMCID: PMC9947252 DOI: 10.3389/fncom.2023.1053097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Accepted: 01/16/2023] [Indexed: 02/11/2023] Open
Abstract
Recent experimental observations have shown that the reactivation of hippocampal place cells (PC) during sleep or wakeful immobility depicts trajectories that can go around barriers and can flexibly adapt to a changing maze layout. However, existing computational models of replay fall short of generating such layout-conforming replay, restricting their usage to simple environments, like linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strength during exploration. Then we use a continuous attractor network (CAN) with feedback inhibition to model the interaction among place cells and hippocampal interneurons. The activity bump of place cells drifts along paths in the maze, which models layout-conforming replay. During replay in sleep, the synaptic strengths from place cells to striatal medium spiny neurons (MSN) are learned by a novel dopamine-modulated three-factor rule to store place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal's location for path planning, and the trajectory leading to a maximal MSN activity is followed by the animal. We have implemented our model into a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments have demonstrated that its superior flexibility during navigation in a maze is due to a continuous re-learning of inter-PC and PC-MSN synaptic strength.
Affiliation(s)
- Yuanxiang Gao
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing, China
- *Correspondence: Yuanxiang Gao
44
The molecular memory code and synaptic plasticity: A synthesis. Biosystems 2023; 224:104825. [PMID: 36610586 DOI: 10.1016/j.biosystems.2022.104825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 12/29/2022] [Accepted: 12/30/2022] [Indexed: 01/06/2023]
Abstract
The most widely accepted view of memory in the brain holds that synapses are the storage sites of memory, and that memories are formed through associative modification of synapses. This view has been challenged on conceptual and empirical grounds. As an alternative, it has been proposed that molecules within the cell body are the storage sites of memory, and that memories are formed through biochemical operations on these molecules. This paper proposes a synthesis of these two views, grounded in a computational model of memory. Synapses are conceived as storage sites for the parameters of an approximate posterior probability distribution over latent causes. Intracellular molecules are conceived as storage sites for the parameters of a generative model. The model stipulates how these two components work together as part of an integrated algorithm for learning and inference.
45
Sheynikhovich D, Otani S, Bai J, Arleo A. Long-term memory, synaptic plasticity and dopamine in rodent medial prefrontal cortex: Role in executive functions. Front Behav Neurosci 2023; 16:1068271. [PMID: 36710953 PMCID: PMC9875091 DOI: 10.3389/fnbeh.2022.1068271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 12/26/2022] [Indexed: 01/12/2023] Open
Abstract
Mnemonic functions supporting rodent behavior in complex tasks include both long-term and (short-term) working memory components. While working memory is thought to rely on persistent activity states in an active neural network, long-term memory and synaptic plasticity contribute to the formation of the underlying synaptic structure, determining the range of possible states. Whereas the implication of working memory in executive functions, mediated by the prefrontal cortex (PFC) in primates and rodents, has been extensively studied, the contribution of the long-term memory component to these tasks has received little attention. This review summarizes available experimental data and theoretical work concerning cellular mechanisms of synaptic plasticity in the medial region of rodent PFC and the link between plasticity, memory, and behavior in PFC-dependent tasks. Special attention is devoted to the unique properties of dopaminergic modulation of prefrontal synaptic plasticity and its contribution to executive functions.
Affiliation(s)
- Denis Sheynikhovich
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- *Correspondence: Denis Sheynikhovich
- Satoru Otani
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Jing Bai
- Institute of Psychiatry and Neuroscience of Paris, INSERM U1266, Paris, France
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
46
Zeng X, Diekmann N, Wiskott L, Cheng S. Modeling the function of episodic memory in spatial learning. Front Psychol 2023; 14:1160648. [PMID: 37138984 PMCID: PMC10149844 DOI: 10.3389/fpsyg.2023.1160648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 03/31/2023] [Indexed: 05/05/2023] Open
Abstract
Episodic memory has been studied extensively in the past few decades, but so far little is understood about how it drives future behavior. Here we propose that episodic memory can facilitate learning in two fundamentally different modes: retrieval and replay, which is the reinstatement of hippocampal activity patterns during later sleep or awake quiescence. We study their properties by comparing three learning paradigms using computational modeling based on visually-driven reinforcement learning. Firstly, episodic memories are retrieved to learn from single experiences (one-shot learning); secondly, episodic memories are replayed to facilitate learning of statistical regularities (replay learning); and, thirdly, learning occurs online as experiences arise with no access to memories of past experiences (online learning). We found that episodic memory benefits spatial learning in a broad range of conditions, but the performance difference is meaningful only when the task is sufficiently complex and the number of learning trials is limited. Furthermore, the two modes of accessing episodic memory affect spatial learning differently. One-shot learning is typically faster than replay learning, but the latter may reach a better asymptotic performance. In the end, we also investigated the benefits of sequential replay and found that replaying stochastic sequences results in faster learning as compared to random replay when the number of replays is limited. Understanding how episodic memory drives future behavior is an important step toward elucidating the nature of episodic memory.
Affiliation(s)
- Xiangshuai Zeng
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Nicolas Diekmann
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Laurenz Wiskott
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sen Cheng
- Department of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- *Correspondence: Sen Cheng
47
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780 PMCID: PMC9700774 DOI: 10.1038/s41386-022-01374-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/09/2022]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity-defined broadly as the adaptive control of plasticity-forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
Affiliation(s)
- Daniel N Scott
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
- Michael J Frank
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA.
- Carney Institute for Brain Science, Brown University, Providence, RI, USA.
48
Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10081-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Artificial intelligence has not achieved defining features of biological intelligence despite models boasting more parameters than neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies, properly conceived as reentrant dynamical flows and not merely as identified groups of neurons, may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
49
Continuous cholinergic-dopaminergic updating in the nucleus accumbens underlies approaches to reward-predicting cues. Nat Commun 2022; 13:7924. [PMID: 36564387 PMCID: PMC9789106 DOI: 10.1038/s41467-022-35601-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 12/13/2022] [Indexed: 12/25/2022] Open
Abstract
The ability to learn Pavlovian associations from environmental cues predicting positive outcomes is critical for survival, motivating adaptive behaviours. This cued-motivated behaviour depends on the nucleus accumbens (NAc). NAc output activity mediated by spiny projection neurons (SPNs) is regulated by dopamine, but also by cholinergic interneurons (CINs), which can release acetylcholine and glutamate via the activity of the vesicular acetylcholine transporter (VAChT) or the vesicular glutamate transporter (VGLUT3), respectively. Here we investigated behavioural and neurochemical changes in mice performing a touchscreen Pavlovian approach task by recording dopamine, acetylcholine, and calcium dynamics from D1- and D2-SPNs using fibre photometry in control, VAChT or VGLUT3 mutant mice to understand how these signals cooperate in the service of approach behaviours toward reward-predicting cues. We reveal that NAc acetylcholine-dopaminergic signalling is continuously updated to regulate striatal output underlying the acquisition of Pavlovian approach learning toward reward-predicting cues.
50
Yu Q, Bi Z, Jiang S, Yan B, Chen H, Wang Y, Miao Y, Li K, Wei Z, Xie Y, Tan X, Liu X, Fu H, Cui L, Xing L, Weng S, Wang X, Yuan Y, Zhou C, Wang G, Li L, Ma L, Mao Y, Chen L, Zhang J. Visual cortex encodes timing information in humans and mice. Neuron 2022; 110:4194-4211.e10. [PMID: 36195097 DOI: 10.1016/j.neuron.2022.09.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 03/15/2022] [Accepted: 09/07/2022] [Indexed: 11/07/2022]
Abstract
Despite the importance of timing in our daily lives, our understanding of how the human brain mediates second-scale time perception is limited. Here, we combined intracranial stereoelectroencephalography (SEEG) recordings in epileptic patients and circuit dissection in mice to show that visual cortex (VC) encodes timing information. We first asked human participants to perform an interval-timing task and found VC to be a key timing brain area. We then conducted optogenetic experiments in mice and showed that VC plays an important role in the interval-timing behavior. We further found that VC neurons fired in a time-keeping sequential manner and exhibited increased excitability in a timed manner. Finally, we used a computational model to illustrate a self-correcting learning process that generates interval-timed activities with scalar-timing property. Our work reveals how localized oscillations in VC occurring in the seconds to deca-seconds range relate timing information from the external world to guide behavior.
Affiliation(s)
- Qingpeng Yu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Zedong Bi
- Lingang Laboratory, Shanghai 200031, China; Institute for Future, School of Automation, Qingdao University, Qingdao 266071, China; Department of Physics, Centre for Nonlinear Studies and Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
- Shize Jiang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Biao Yan
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Heming Chen
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yiting Wang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yizhan Miao
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Kexin Li
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Zixuan Wei
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yuanting Xie
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Xinrong Tan
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Xiaodi Liu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Hang Fu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Liyuan Cui
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Lu Xing
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Shijun Weng
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Xin Wang
- Department of Neurology and Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
| | - Yuanzhi Yuan
- Department of Neurology and Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
| | - Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong; Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
| | - Gang Wang
- Center of Brain Sciences, Beijing Institute of Basic Medical Sciences, Beijing 100850, China
| | - Liang Li
- Center of Brain Sciences, Beijing Institute of Basic Medical Sciences, Beijing 100850, China
| | - Lan Ma
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
| | - Ying Mao
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China.
| | - Liang Chen
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China; Tianqiao and Chrissy Chen Institute Clinical Translational Research Center, Shanghai 200040, China.
| | - Jiayi Zhang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China; Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai 200031, China.
| |