1
Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025;184:107079. PMID: 39756119. DOI: 10.1016/j.neunet.2024.107079.
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
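The ordered-to-chaotic transition this review centers on can be illustrated with a minimal rate-network sketch (illustrative code, not from the paper; the network size and gain values are arbitrary choices): rescaling a random recurrent weight matrix to a spectral radius below or above 1 switches the input-free dynamics between decay to a fixed point and sustained irregular activity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # reservoir size (arbitrary illustrative choice)

def make_reservoir(spectral_radius):
    """Random recurrent weights rescaled to a target spectral radius.

    Below 1 the zero fixed point is stable (ordered regime); well above 1
    the autonomous dynamics become chaotic, the regime chaotic reservoir
    networks operate in.
    """
    W = rng.standard_normal((N, N)) / np.sqrt(N)
    return W * spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_autonomous(W, steps=500):
    """Iterate the input-free rate network x(t+1) = tanh(W x(t))."""
    x = 0.1 * rng.standard_normal(N)
    for _ in range(steps):
        x = np.tanh(W @ x)
    return x

x_ordered = run_autonomous(make_reservoir(0.8))  # activity dies out
x_chaotic = run_autonomous(make_reservoir(1.5))  # activity persists
print(np.linalg.norm(x_ordered), np.linalg.norm(x_chaotic))
```

In the reservoir computing paradigm only a linear readout on top of such a network is trained; the algorithms surveyed in the review go further and adapt the recurrent weights themselves.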
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy.
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2
Lorenzo J, Rico-Gallego JA, Binczak S, Jacquir S. Spiking Neuron-Astrocyte Networks for Image Recognition. Neural Comput 2025;37:635-665. PMID: 40030144. DOI: 10.1162/neco_a_01740.
Abstract
From biological and artificial network perspectives, researchers have started acknowledging astrocytes as computational units mediating neural processes. Here, we propose a novel biologically inspired neuron-astrocyte network model for image recognition, one of the first attempts at implementing astrocytes in spiking neural networks (SNNs) using a standard data set. The architecture for image recognition has three primary units: the preprocessing unit for converting the image pixels into spiking patterns, the neuron-astrocyte network forming bipartite (neural connections) and tripartite synapses (neural and astrocytic connections), and the classifier unit. In the astrocyte-mediated SNNs, an astrocyte integrates neural signals following the simplified Postnov model. It then modulates the integrate-and-fire (IF) neurons via gliotransmission, thereby strengthening the synaptic connections of the neurons within the astrocytic territory. We develop an architecture derived from a baseline SNN model for unsupervised digit classification. The spiking neuron-astrocyte networks (SNANs) display better network performance, with an optimal bias-variance trade-off, than SNNs alone. We demonstrate that astrocytes promote faster learning, support memory formation and recognition, and provide a simplified network architecture. Given its simplified design, our proposed SNAN can serve as a benchmark for future research on astrocyte implementation in artificial networks, particularly in neuromorphic systems.
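The gliotransmission mechanism described in this abstract can be caricatured in a few lines (a toy model with invented names, constants, and threshold rule; it is not the paper's simplified-Postnov implementation): the astrocyte low-pass-filters presynaptic spiking within its territory and, once this slow variable is high enough, multiplicatively strengthens those synapses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn = 10
w = np.full(n_syn, 0.5)       # synaptic weights, capped at 1.0
territory = np.arange(5)      # synapses covered by the astrocyte
astro = 0.0                   # slow astrocytic activity variable
tau_astro = 50.0              # astrocyte time constant, in steps (assumed)

for t in range(200):
    pre_spikes = rng.random(n_syn) < 0.2        # Bernoulli spike trains
    drive = pre_spikes[territory].sum()         # activity in the territory
    astro += (-astro + drive) / tau_astro       # leaky integration
    if astro > 0.5:                             # gliotransmission threshold
        w[territory] = np.minimum(w[territory] * 1.01, 1.0)

print(w)  # synapses inside the territory strengthen; the rest stay put
```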
Affiliation(s)
- Jhunlyn Lorenzo
- Laboratory ImViA EA7535, Université de Bourgogne, 21078 Dijon, France
- College of Engineering and Information Technology, Cavite State University, 4122, Indang, Philippines
- Juan-Antonio Rico-Gallego
- Foundation for Computing and Advanced Technologies of Extremadura, Extremadura Supercomputing Center, 10071, Cáceres
- Stéphane Binczak
- Laboratory ImViA EA7535, Université de Bourgogne, 21078 Dijon, France
- Sabir Jacquir
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, 91190 Gif-sur-Yvette, France
3
Yoshida K, Toyoizumi T. A biological model of nonlinear dimensionality reduction. Sci Adv 2025;11:eadp9048. PMID: 39908371. PMCID: PMC11801247. DOI: 10.1126/sciadv.adp9048.
Abstract
Obtaining appropriate low-dimensional representations from high-dimensional sensory inputs in an unsupervised manner is essential for straightforward downstream processing. Although nonlinear dimensionality reduction methods such as t-distributed stochastic neighbor embedding (t-SNE) have been developed, their implementation in simple biological circuits remains unclear. Here, we develop a biologically plausible dimensionality reduction algorithm compatible with t-SNE, which uses a simple three-layer feedforward network mimicking the Drosophila olfactory circuit. The proposed learning rule, described as three-factor Hebbian plasticity, is effective for datasets such as entangled rings and MNIST, performing comparably to t-SNE. We further show that the algorithm could be at work in Drosophila olfactory circuits by analyzing multiple experimental datasets from previous studies. Lastly, we suggest that the algorithm is also beneficial for association learning between inputs and rewards, allowing the generalization of these associations to other inputs not yet associated with rewards.
Affiliation(s)
- Kensuke Yoshida
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
4
Li X, Wang X, Hu X, Tang P, Chen C, He L, Chen M, Bello ST, Chen T, Wang X, Wong YT, Sun W, Chen X, Qu J, He J. Cortical HFS-Induced Neo-Hebbian Local Plasticity Enhances Efferent Output Signal and Strengthens Afferent Input Connectivity. eNeuro 2025;12:ENEURO.0045-24.2024. PMID: 39809536. PMCID: PMC11810566. DOI: 10.1523/eneuro.0045-24.2024.
Abstract
High-frequency stimulation (HFS)-induced long-term potentiation (LTP) is generally regarded as a homosynaptic Hebbian-type LTP, where synaptic changes are thought to occur at the synapses that project from the stimulation site and terminate onto the neurons at the recording site. In this study, we first investigated HFS-induced LTP in urethane-anesthetized rats and found that cortical HFS enhances neural responses at the recording site through the strengthening of local connectivity with nearby neurons at the stimulation site rather than through synaptic strengthening at the recording site. This enhanced local connectivity at the stimulation site leads to increased output propagation, resulting in signal potentiation at the recording site. Additionally, we discovered that HFS can also nonspecifically strengthen distant afferent synapses at the HFS site, thereby expanding its impact beyond local neural connections. This form of plasticity exhibits a neo-Hebbian characteristic, as it manifests exclusively in the presence of cholecystokinin release induced by HFS. The cortical HFS-induced local LTP was further supported by evidence from a behavioral task. Our results unveil a previously overlooked mechanism underlying cortical plasticity: synaptic plasticity is more likely to occur around the soma of strongly activated cortical neurons than solely at their projection terminals.
Affiliation(s)
- Xiao Li
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xue Wang
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Xiaohan Hu
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Peng Tang
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Center of Regenerative Medicine and Health, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, Shatin, Hong Kong
- Congping Chen
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong
- Ling He
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Center of Regenerative Medicine and Health, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, Shatin, Hong Kong
- Mengying Chen
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Stephen Temitayo Bello
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Tao Chen
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Center of Regenerative Medicine and Health, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, Shatin, Hong Kong
- Xiaoyu Wang
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Yin Ting Wong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Wenjian Sun
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Xi Chen
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- Jianan Qu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong
- Jufang He
- Departments of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Biomedical Science, City University of Hong Kong, Kowloon, Hong Kong
- Research Centre for Treatments of Brain Disorders, City University of Hong Kong, Kowloon, Hong Kong
- CAS Key Laboratory of Brain Connectome and Manipulation, the Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Center of Regenerative Medicine and Health, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences, Shatin, Hong Kong
5
Chen D, Peng P, Huang T, Tian Y. Fully Spiking Actor Network With Intralayer Connections for Reinforcement Learning. IEEE Trans Neural Netw Learn Syst 2025;36:2881-2893. PMID: 38319762. DOI: 10.1109/tnnls.2024.3352653.
Abstract
With the help of special neuromorphic hardware, spiking neural networks (SNNs) are expected to realize artificial intelligence (AI) with less energy consumption. Combining SNNs with deep reinforcement learning (DRL) thus provides a promising, energy-efficient way to tackle realistic control tasks. In this article, we focus on tasks in which the agent must learn multidimensional deterministic control policies, which are very common in real scenarios. Recently, the surrogate gradient method has been utilized for training multilayer SNNs, allowing SNNs to achieve performance comparable to the corresponding deep networks in this task. Most existing spike-based reinforcement learning (RL) methods take the firing rate as the output of SNNs and convert it to represent a continuous action space (i.e., the deterministic policy) through a fully connected (FC) layer. However, the decimal nature of the firing rate brings floating-point matrix operations into the FC layer, preventing the whole SNN from being deployed on neuromorphic hardware directly. To develop a fully spiking actor network (SAN) without any floating-point matrix operations, we draw inspiration from the nonspiking interneurons found in insects and employ the membrane voltage of nonspiking neurons to represent the action. Before the nonspiking neurons, multiple population neurons are introduced to decode the different dimensions of the action. Since each population decodes one action dimension, we argue that the neurons in each population should be connected in both the time and space domains. Hence, intralayer connections are used in the output populations to enhance representation capacity. This mechanism exists extensively in animals and has been demonstrated to be effective. Finally, we propose a fully spiking actor network with intralayer connections (ILC-SAN). Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art methods on continuous control tasks from OpenAI Gym. Moreover, we estimate the theoretical energy consumption when deploying ILC-SAN on neuromorphic chips to illustrate its high energy efficiency.
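The non-spiking readout idea can be sketched as follows (a hypothetical toy, not the ILC-SAN architecture itself; the weights and spike trains are random placeholders): a thresholdless neuron leakily integrates population spikes, and its membrane voltage, rather than a firing rate, directly encodes one continuous action dimension, so no floating-point rate-decoding layer is needed at the output.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pop = 32                                 # population size per action dim
w_out = 0.1 * rng.standard_normal(n_pop)   # weights onto the readout
tau = 10.0                                 # readout membrane time constant

v = 0.0  # membrane voltage of the non-spiking readout neuron
for spikes in rng.random((100, n_pop)) < 0.3:   # placeholder spike trains
    # Leaky integration with no threshold or reset: the neuron never fires,
    # so its voltage can track a graded quantity.
    v += (-v + w_out @ spikes) / tau

action = np.tanh(v)  # squash the voltage into a bounded action range
print(action)
```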
6
Barbier T, Teulière C, Triesch J. A spiking neural network for active efficient coding. Front Robot AI 2025;11:1435197. PMID: 39882552. PMCID: PMC11775837. DOI: 10.3389/frobt.2024.1435197.
Abstract
Biological vision systems simultaneously learn to efficiently encode their visual inputs and to control the movements of their eyes based on the visual input they sample. This autonomous joint learning of visual representations and actions has previously been modeled in the Active Efficient Coding (AEC) framework and implemented using traditional frame-based cameras. However, modern event-based cameras are inspired by the retina and offer advantages in terms of acquisition rate, dynamic range, and power consumption. Here, we propose a first AEC system that is fully implemented as a Spiking Neural Network (SNN) driven by inputs from an event-based camera. This input is efficiently encoded by a two-layer SNN, which in turn feeds into a spiking reinforcement learner that learns motor commands to maximize an intrinsic reward signal. This reward signal is computed directly from the activity levels of the first two layers. We test our approach on two different behaviors: visual tracking of a translating target and stabilizing the orientation of a rotating target. To the best of our knowledge, our work represents the first ever fully spiking AEC model.
Affiliation(s)
- Thomas Barbier
- SIGMA Clermont, Centre National de la Recherche Scientifique, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
- Céline Teulière
- SIGMA Clermont, Centre National de la Recherche Scientifique, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, France
- Jochen Triesch
- Life- and Neurosciences, Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
7
Micheva KD, Simhal AK, Schardt J, Smith SJ, Weinberg RJ, Owen SF. Data-driven synapse classification reveals a logic of glutamate receptor diversity. bioRxiv [Preprint] 2025:2024.12.11.628056. PMID: 39713368. PMCID: PMC11661198. DOI: 10.1101/2024.12.11.628056.
Abstract
The rich diversity of synapses facilitates the capacity of neural circuits to transmit, process and store information. We used multiplex super-resolution proteometric imaging through array tomography to define features of single synapses in mouse neocortex. We find that glutamatergic synapses cluster into subclasses that parallel the distinct biochemical and functional categories of receptor subunits: GluA1/4, GluA2/3 and GluN1/GluN2B. Two of these subclasses align with physiological expectations based on synaptic plasticity: large AMPAR-rich synapses may represent potentiated synapses, whereas small NMDAR-rich synapses suggest "silent" synapses. The NMDA receptor content of large synapses correlates with spine neck diameter, and thus the potential for coupling to the parent dendrite. Overall, ultrastructural features predict receptor content of synapses better than parent neuron identity does, suggesting synapse subclasses act as fundamental elements of neuronal circuits. No barriers prevent future generalization of this approach to other species, or to study of human disorders and therapeutics.
Affiliation(s)
- Kristina D. Micheva
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA 94305
- Anish K. Simhal
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065
- Jenna Schardt
- Allen Institute for Brain Science, Seattle, WA 98109
- Stephen J. Smith
- Allen Institute for Brain Science, Seattle, WA 98109
- Department of Molecular and Cellular Physiology, Stanford University School of Medicine, Stanford, CA 94305
- Richard J. Weinberg
- Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, NC 27514
- Scott F. Owen
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA 94305
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA 94305
- Lead contact
8
Lee C, Park Y, Yoon S, Lee J, Cho Y, Park C. Brain-inspired learning rules for spiking neural network-based control: a tutorial. Biomed Eng Lett 2025;15:37-55. PMID: 39781065. PMCID: PMC11704115. DOI: 10.1007/s13534-024-00436-6.
Abstract
Robotic systems rely on spatio-temporal information to solve control tasks. With advancements in deep neural networks, reinforcement learning has significantly enhanced the performance of control tasks by leveraging deep learning techniques. However, as deep neural networks grow in complexity, they consume more energy and introduce greater latency. This complexity hampers their application in robotic systems that require real-time data processing. To address this issue, spiking neural networks, which emulate the biological brain by transmitting spatio-temporal information through spikes, have been developed alongside neuromorphic hardware that supports their operation. This paper reviews brain-inspired learning rules and examines the application of spiking neural networks to control tasks. We begin by exploring the features and implementations of biologically plausible spike-timing-dependent plasticity. Subsequently, we investigate the integration of a global third factor with spike-timing-dependent plasticity and its use and refinement in both theoretical and applied research. We also discuss a method for locally applying a third factor that modifies each synaptic weight individually through weight-based backpropagation. Finally, we review studies that use these learning rules to solve control tasks with spiking neural networks.
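The pair-based STDP rule that such tutorials start from can be written down directly (textbook form; the amplitudes and time constants below are common illustrative values, not taken from this paper):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates with an
    exponentially decaying window; post-before-pre depresses.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)
    return -a_minus * np.exp(dt / tau_minus)

print(stdp_dw(10.0), stdp_dw(-10.0))  # LTP for dt > 0, LTD for dt < 0
```

A three-factor variant of the kind the tutorial discusses multiplies this pairwise term by a global modulatory signal (e.g., reward) before applying it to the weight.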
Affiliation(s)
- Choongseop Lee
- Department of Computer Engineering, Kwangwoon University, Seoul, 01897 Republic of Korea
- Yuntae Park
- Department of Computer Engineering, Kwangwoon University, Seoul, 01897 Republic of Korea
- Sungmin Yoon
- Department of Computer Engineering, Kwangwoon University, Seoul, 01897 Republic of Korea
- Jiwoon Lee
- Department of Computer Engineering, Kwangwoon University, Seoul, 01897 Republic of Korea
- Youngho Cho
- Department of Electrical and Communication Engineering, Daelim University College, Anyang, 13916 Republic of Korea
- Cheolsoo Park
- Department of Computer Engineering, Kwangwoon University, Seoul, 01897 Republic of Korea
9
Alashram AR. Effects of robotic therapy associated with noninvasive brain stimulation on motor function in individuals with incomplete spinal cord injury: A systematic review of randomized controlled trials. J Spinal Cord Med 2025;48:6-21. PMID: 38265422. PMCID: PMC11749291. DOI: 10.1080/10790268.2024.2304921.
Abstract
CONTEXT Motor deficits are among the most common consequences of incomplete spinal cord injury (SCI). These impairments can affect patients' levels of functioning and quality of life. Combined robotic therapy and non-invasive brain stimulation (NIBS) have been used to improve motor impairments in patients with corticospinal tract lesions. OBJECTIVES To examine the effects of combined robotic therapy and NIBS on motor function after incomplete SCI. METHODS PubMed, SCOPUS, MEDLINE, PEDro, Web of Science, REHABDATA, CINAHL, and EMBASE were searched from inception until July 2023. The Physiotherapy Evidence Database (PEDro) scale was employed to evaluate the quality of the selected studies. RESULTS Of 557 studies, five randomized trials (n = 122), with 25% of participants being female, were included in this review. The PEDro scores ranged from eight to nine, with a median score of nine. Variations in treatment protocols and outcome measures resulted in heterogeneous findings. The findings revealed evidence for an impact of combined robotic therapy and NIBS on motor function in individuals with incomplete SCI. CONCLUSIONS Combined robotic training and NIBS may be safe for individuals with incomplete SCI. The existing evidence concerning its effects on motor outcomes in individuals with SCI is limited. Further experimental studies are needed to understand the effects of combined robotic training and NIBS on motor impairments in SCI populations.
Affiliation(s)
- Anas R. Alashram
- Department of Physiotherapy, Middle East University, Amman, Jordan
- Applied Science Research Center, Applied Science Private University, Amman, Jordan
- Department of Human Sciences and Promotion of the Quality of Life, San Raffaele Roma Open University, Rome, Italy
10
Yang Z, Guo S, Fang Y, Yu Z, Liu JK. Spiking Variational Policy Gradient for Brain Inspired Reinforcement Learning. IEEE Trans Pattern Anal Mach Intell 2024;PP:1975-1990. PMID: 40030447. DOI: 10.1109/tpami.2024.3511936.
Abstract
Recent studies in reinforcement learning have explored brain-inspired function approximators and learning algorithms to simulate brain intelligence and adapt to neuromorphic hardware. Among these approaches, reward-modulated spike-timing-dependent plasticity (R-STDP) is biologically plausible and energy-efficient, but suffers from a gap between its local learning rules and the global learning objective, which limits its performance and applicability. In this paper, we design a recurrent winner-take-all network and propose the spiking variational policy gradient (SVPG), a new R-STDP learning method derived theoretically from the global policy gradient. Specifically, the policy inference is derived from an energy-based policy function using mean-field inference, and the policy optimization is based on a last-step approximation of the global policy gradient. Together, these close the gap between the local learning rules and the global objective. In experiments including a challenging ViZDoom vision-based navigation task and two realistic robot control tasks, SVPG successfully solves all the tasks. In addition, SVPG exhibits better inherent robustness to various kinds of input, network-parameter, and environmental perturbations than the compared methods.
11
Sharbafshaaer M, Cirillo G, Esposito F, Tedeschi G, Trojsi F. Harnessing Brain Plasticity: The Therapeutic Power of Repetitive Transcranial Magnetic Stimulation (rTMS) and Theta Burst Stimulation (TBS) in Neurotransmitter Modulation, Receptor Dynamics, and Neuroimaging for Neurological Innovations. Biomedicines 2024;12:2506. PMID: 39595072. PMCID: PMC11592033. DOI: 10.3390/biomedicines12112506.
Abstract
Transcranial magnetic stimulation (TMS) methods have become exciting techniques for altering brain activity and improving synaptic plasticity, earning recognition as valuable non-pharmacological treatments for a wide range of neurological disorders. Among these methods, repetitive TMS (rTMS) and theta-burst stimulation (TBS) show significant promise in improving outcomes for adults with complex neurological and neurodegenerative conditions, such as Alzheimer's disease, stroke, and Parkinson's disease. However, optimizing their effects remains a challenge due to variability in how patients respond and a limited understanding of how these techniques interact with crucial neurotransmitter systems. This narrative review explores the mechanisms by which rTMS and TBS enhance neuroplasticity and functional improvement. We specifically focus on their effects on GABAergic and glutamatergic pathways and how they interact with key receptors such as the N-methyl-D-aspartate (NMDA) and AMPA receptors, which play essential roles in processes like long-term potentiation (LTP) and long-term depression (LTD). Additionally, we investigate how rTMS and TBS impact neuroplasticity and functional connectivity, particularly concerning brain-derived neurotrophic factor (BDNF) and tropomyosin-related kinase receptor type B (TrkB). We highlight the significant potential of this research to expand our understanding of neuroplasticity and to improve treatment outcomes for patients. By clarifying the neurobiological mechanisms behind rTMS and TBS in light of neuroimaging findings, we aim to develop more effective, personalized treatment plans that address the challenges posed by neurological disorders, enhance the quality of neurorehabilitation services, and provide future directions for patient care.
Affiliation(s)
- Minoo Sharbafshaaer
- First Division of Neurology, Department of Advanced Medical and Surgical Sciences, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy; (F.E.); (G.T.); (F.T.)
- Giovanni Cirillo
- Division of Human Anatomy, Neuronal Networks Morphology & Systems Biology Lab, Department of Mental and Physical Health and Preventive Medicine, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy
- Fabrizio Esposito
- First Division of Neurology, Department of Advanced Medical and Surgical Sciences, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy; (F.E.); (G.T.); (F.T.)
- Gioacchino Tedeschi
- First Division of Neurology, Department of Advanced Medical and Surgical Sciences, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy; (F.E.); (G.T.); (F.T.)
- Francesca Trojsi
- First Division of Neurology, Department of Advanced Medical and Surgical Sciences, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy; (F.E.); (G.T.); (F.T.)
12
Sosis B, Rubin JE. Distinct dopaminergic spike-timing-dependent plasticity rules are suited to different functional roles. bioRxiv [Preprint] 2024:2024.06.24.600372. PMID: 38979377. PMCID: PMC11230239. DOI: 10.1101/2024.06.24.600372.
Abstract
Various mathematical models have been formulated to describe the changes in synaptic strengths resulting from spike-timing-dependent plasticity (STDP). A subset of these models include a third factor, dopamine, which interacts with spike timing to contribute to plasticity at specific synapses, notably those from cortex to striatum at the input layer of the basal ganglia. Theoretical work to analyze these plasticity models has largely focused on abstract issues, such as the conditions under which they may promote synchronization and the weight distributions induced by inputs with simple correlation structures, rather than on scenarios associated with specific tasks, and has generally not considered dopamine-dependent forms of STDP. In this paper we introduce three forms of dopamine-modulated STDP adapted from previously proposed plasticity rules. We then analyze, mathematically and with simulations, their performance in three biologically relevant scenarios. We test the ability of each of the three models to maintain its weights in the face of noise and to complete simple reward prediction and action selection tasks, studying the learned weight distributions and corresponding task performance in each setting. Interestingly, we find that each plasticity rule is well suited to a subset of the scenarios studied but falls short in others. Different tasks may therefore require different forms of synaptic plasticity, yielding the prediction that the precise form of the STDP mechanism present may vary across regions of the striatum, and other brain areas impacted by dopamine, that are involved in distinct computational functions.
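The three-factor logic described in this abstract can be sketched in a minimal form. The kernel shape, time constants, and the multiplicative dopamine gating below are generic illustrative assumptions, not any of the three specific rules analyzed in the paper:

```python
import math

def stdp_kernel(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Pair-based STDP window (dt = t_post - t_pre, in ms): potentiation when
    the presynaptic spike precedes the postsynaptic one, depression for the
    reverse order."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def dopamine_modulated_update(w, dt, dopamine, lr=0.01, w_min=0.0, w_max=1.0):
    """Three-factor update: the timing-dependent eligibility is gated by a
    dopamine factor (e.g., a reward prediction error), so the same spike
    pairing can strengthen or weaken the synapse depending on its sign."""
    eligibility = stdp_kernel(dt)
    w = w + lr * dopamine * eligibility
    return min(max(w, w_min), w_max)  # keep the weight in its allowed range

# Pre-before-post pairing with positive dopamine strengthens the synapse,
# while the identical pairing with negative dopamine weakens it.
w_up = dopamine_modulated_update(0.5, dt=10.0, dopamine=1.0)
w_down = dopamine_modulated_update(0.5, dt=10.0, dopamine=-1.0)
```

With dopamine clamped to zero, no plasticity occurs at all, which is the sense in which dopamine acts as the third factor here.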
Affiliation(s)
- Baram Sosis
- Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, PA 15260, USA
- Jonathan E. Rubin
- Department of Mathematics, University of Pittsburgh, 301 Thackeray Hall, Pittsburgh, PA 15260, USA
- Center for the Neural Basis of Cognition, University of Pittsburgh, 4400 Fifth Ave, Pittsburgh, PA 15213, USA
13
Lu Y, Wu S. Learning sequence attractors in recurrent networks with hidden neurons. Neural Netw 2024; 178:106466. [PMID: 38968778 DOI: 10.1016/j.neunet.2024.106466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Revised: 05/15/2024] [Accepted: 06/13/2024] [Indexed: 07/07/2024]
Abstract
The brain is specialized for processing temporal sequence information, yet it remains largely unclear how it learns to store and retrieve sequence memories. Here, we study how recurrent networks of binary neurons learn sequence attractors to store predefined pattern sequences and retrieve them robustly. We show that to store arbitrary pattern sequences, it is necessary for the network to include hidden neurons, even though their role in displaying sequence memories is indirect. We develop a local learning algorithm to learn sequence attractors in networks with hidden neurons. The algorithm is proven to converge and to lead to sequence attractors. We demonstrate that the network model can store and retrieve sequences robustly on synthetic and real-world datasets. We hope that this study provides new insights into sequence memory and temporal information processing in the brain.
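For contrast with the hidden-neuron networks studied here, the classical way to store a pattern sequence in a fully visible binary network is an asymmetric Hebbian rule that maps each pattern onto its successor. The sketch below illustrates that baseline (it does not reproduce the paper's local learning algorithm, and the network size and sequence length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 5                                     # visible neurons, sequence length
patterns = rng.choice([-1.0, 1.0], size=(T, N))   # predefined +/-1 pattern sequence

# Asymmetric Hebbian rule: each stored transition pushes the network state
# from pattern t toward pattern t + 1.
W = np.zeros((N, N))
for t in range(T - 1):
    W += np.outer(patterns[t + 1], patterns[t]) / N

def step(x):
    """One synchronous binary update (the tiny bias breaks exact ties)."""
    return np.sign(W @ x + 1e-12)

# Retrieval: seed the network with the first pattern and let it replay.
x = patterns[0].copy()
replay = [x]
for _ in range(T - 1):
    x = step(x)
    replay.append(x)

# Overlap of 1.0 means the recalled state matches the stored pattern exactly.
overlaps = [float(replay[t] @ patterns[t]) / N for t in range(T)]
```

With random patterns and N well above T, crosstalk between stored transitions is small and the replay follows the sequence; the paper's point is that this no longer works for arbitrary (e.g., correlated or repeating) sequences unless hidden neurons are added.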
Affiliation(s)
- Yao Lu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Beijing Key Laboratory of Behavior and Mental Health, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, China
14
Li J, Serafin EK, Koorndyk N, Baccei ML. Astrocyte D1/D5 Dopamine Receptors Govern Non-Hebbian Long-Term Potentiation at Sensory Synapses onto Lamina I Spinoparabrachial Neurons. J Neurosci 2024; 44:e0170242024. [PMID: 38955487 PMCID: PMC11308343 DOI: 10.1523/jneurosci.0170-24.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2024] [Revised: 06/20/2024] [Accepted: 06/27/2024] [Indexed: 07/04/2024] Open
Abstract
Recent work demonstrated that activation of spinal D1 and D5 dopamine receptors (D1/D5Rs) facilitates non-Hebbian long-term potentiation (LTP) at primary afferent synapses onto spinal projection neurons. However, the cellular localization of the D1/D5Rs driving non-Hebbian LTP in spinal nociceptive circuits remains unknown, and it is also unclear whether D1/D5R signaling must occur concurrently with sensory input in order to promote non-Hebbian LTP at these synapses. Here we investigate these issues using cell-type-selective knockdown of D1Rs or D5Rs from lamina I spinoparabrachial neurons, dorsal root ganglion (DRG) neurons, or astrocytes in adult mice of either sex using Cre recombinase-based genetic strategies. The LTP evoked by low-frequency stimulation of primary afferents in the presence of the selective D1/D5R agonist SKF82958 persisted following the knockdown of D1R or D5R in spinoparabrachial neurons, suggesting that postsynaptic D1/D5R signaling was dispensable for non-Hebbian plasticity at sensory synapses onto these key output neurons of the superficial dorsal horn (SDH). Similarly, the knockdown of D1Rs or D5Rs in DRG neurons failed to influence SKF82958-enabled LTP in lamina I projection neurons. In contrast, SKF82958-induced LTP was suppressed by the knockdown of D1R or D5R in spinal astrocytes. Furthermore, the data indicate that the activation of D1R/D5Rs in spinal astrocytes can either retroactively or proactively drive non-Hebbian LTP in spinoparabrachial neurons. Collectively, these results suggest that dopaminergic signaling in astrocytes can strongly promote activity-dependent LTP in the SDH, which is predicted to significantly enhance the amplification of ascending nociceptive transmission from the spinal cord to the brain.
Affiliation(s)
- Jie Li
- Department of Anesthesiology, Pain Research Center, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Elizabeth K Serafin
- Department of Anesthesiology, Pain Research Center, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Nathan Koorndyk
- Neuroscience Graduate Program, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Mark L Baccei
- Department of Anesthesiology, Pain Research Center, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
- Neuroscience Graduate Program, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267
15
Goupy G, Tirilly P, Bilasco IM. Paired competing neurons improving STDP supervised local learning in Spiking Neural Networks. Front Neurosci 2024; 18:1401690. [PMID: 39119458 PMCID: PMC11307446 DOI: 10.3389/fnins.2024.1401690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2024] [Accepted: 07/11/2024] [Indexed: 08/10/2024] Open
Abstract
Direct training of Spiking Neural Networks (SNNs) on neuromorphic hardware has the potential to significantly reduce the energy consumption of artificial neural network training. SNNs trained with Spike Timing-Dependent Plasticity (STDP) benefit from gradient-free and unsupervised local learning, which can be easily implemented on ultra-low-power neuromorphic hardware. However, classification tasks cannot be performed solely with unsupervised STDP. In this paper, we propose Stabilized Supervised STDP (S2-STDP), a supervised STDP learning rule to train the classification layer of an SNN equipped with unsupervised STDP for feature extraction. S2-STDP integrates error-modulated weight updates that align neuron spikes with desired timestamps derived from the average firing time within the layer. Then, we introduce a training architecture called Paired Competing Neurons (PCN) to further enhance the learning capabilities of our classification layer trained with S2-STDP. PCN associates each class with paired neurons and encourages neuron specialization toward target or non-target samples through intra-class competition. We evaluate our methods on image recognition datasets, including MNIST, Fashion-MNIST, and CIFAR-10. Results show that our methods outperform state-of-the-art supervised STDP learning rules, for comparable architectures and numbers of neurons. Further analysis demonstrates that the use of PCN enhances the performance of S2-STDP, regardless of the hyperparameter set and without introducing any additional hyperparameters.
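The core idea of error-modulated weight updates driven by firing-time targets can be illustrated with a toy neuron. The integrate-to-threshold model, the constants, and the update form below are simplifying assumptions for illustration, not the S2-STDP rule itself:

```python
def firing_time(w, threshold=1.0):
    """Toy integrate-to-threshold neuron: a constant drive of strength w
    reaches threshold at time threshold / w (stronger weight, earlier spike)."""
    return threshold / w

def error_modulated_update(w, t_fire, t_target, pre_trace=1.0, lr=0.05):
    """Error-modulated STDP sketch: firing later than the desired timestamp
    potentiates the input (to fire earlier next time); firing too early
    depresses it."""
    return w + lr * (t_fire - t_target) * pre_trace

w = 0.5          # initial weight: the neuron fires at t = 2.0, later than desired
t_target = 1.0   # desired timestamp (e.g., derived from the layer's average firing time)
for _ in range(300):
    w = error_modulated_update(w, firing_time(w), t_target)
```

Repeated application drives the firing time onto the target timestamp; in the paper this target is derived from the average firing time within the classification layer, and paired competing neurons then specialize toward target or non-target samples.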
16
Lindsey JW, Litwin-Kumar A. Selective consolidation of learning and memory via recall-gated plasticity. eLife 2024; 12:RP90793. [PMID: 39023518 PMCID: PMC11257680 DOI: 10.7554/elife.90793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/20/2024] Open
Abstract
In a variety of species and behavioral contexts, learning and memory formation recruits two neural systems, with initial plasticity in one system being consolidated into the other over time. Moreover, consolidation is known to be selective; that is, some experiences are more likely to be consolidated into long-term memory than others. Here, we propose and analyze a model that captures common computational principles underlying such phenomena. The key component of this model is a mechanism by which a long-term learning and memory system prioritizes the storage of synaptic changes that are consistent with prior updates to the short-term system. This mechanism, which we refer to as recall-gated consolidation, has the effect of shielding long-term memory from spurious synaptic changes, enabling it to focus on reliable signals in the environment. We describe neural circuit implementations of this model for different types of learning problems, including supervised learning, reinforcement learning, and autoassociative memory storage. These implementations involve synaptic plasticity rules modulated by factors such as prediction accuracy, decision confidence, or familiarity. We then develop an analytical theory of the learning and memory performance of the model, in comparison to alternatives relying only on synapse-local consolidation mechanisms. We find that recall-gated consolidation provides significant advantages, substantially amplifying the signal-to-noise ratio with which memories can be stored in noisy environments. We show that recall-gated consolidation gives rise to a number of phenomena that are present in behavioral learning paradigms, including spaced learning effects, task-dependent rates of consolidation, and differing neural representations in short- and long-term pathways.
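A minimal sketch of the recall-gating idea follows, with a cosine-similarity "recall" standing in for the short-term system's read-out. The threshold, learning rates, and similarity measure are illustrative assumptions, not the paper's circuit implementations:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 200
signal = rng.standard_normal(dim)   # a reliable update direction, seen repeatedly

w_short = np.zeros(dim)             # fast, labile store
w_long = np.zeros(dim)              # slow, consolidated store

def recall_gated_step(update, gate=0.2, fast_lr=0.5, slow_lr=0.1):
    """Consolidate a candidate update into long-term memory only when it is
    consistent with what the short-term system already stores ('recall')."""
    global w_short, w_long
    denom = np.linalg.norm(w_short) * np.linalg.norm(update) + 1e-12
    recall = float(w_short @ update) / denom          # cosine similarity
    w_short = (1 - fast_lr) * w_short + fast_lr * update
    if recall > gate:                                 # spurious updates fail the gate
        w_long += slow_lr * update

# The reliable signal recurs (with noise); spurious updates never repeat.
for _ in range(50):
    recall_gated_step(signal + 0.5 * rng.standard_normal(dim))
    recall_gated_step(rng.standard_normal(dim))

# Long-term memory ends up aligned with the reliable signal, not the noise.
snr_long = float(w_long @ signal) / (
    np.linalg.norm(w_long) * np.linalg.norm(signal) + 1e-12)
```

Because only updates that the short-term store already "recalls" are consolidated, the long-term weights accumulate the repeated signal while one-off spurious changes are shielded out, which is the signal-to-noise amplification the abstract describes.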
Affiliation(s)
- Jack W Lindsey
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
17
Schütt HH, Kim D, Ma WJ. Reward prediction error neurons implement an efficient code for reward. Nat Neurosci 2024; 27:1333-1339. [PMID: 38898182 DOI: 10.1038/s41593-024-01671-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 04/29/2024] [Indexed: 06/21/2024]
Abstract
We use efficient coding principles borrowed from sensory neuroscience to derive the optimal neural population to encode a reward distribution. We show that the responses of dopaminergic reward prediction error neurons in mouse and macaque are similar to those of the efficient code in the following ways: the neurons have a broad distribution of midpoints covering the reward distribution; neurons with higher thresholds have higher gains, more convex tuning functions and lower slopes; and their slope is higher when the reward distribution is narrower. Furthermore, we derive learning rules that converge to the efficient code. The learning rule for the position of the neuron on the reward axis closely resembles distributional reinforcement learning. Thus, reward prediction error neuron responses may be optimized to broadcast an efficient reward signal, forming a connection between efficient coding and reinforcement learning, two of the most successful theories in computational neuroscience.
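The link to distributional reinforcement learning can be made concrete with a quantile-regression-style rule: each unit's position on the reward axis climbs on rewards above it (weighted by its asymmetry tau) and falls otherwise, converging to the tau-quantile of the reward distribution. The reward distribution and constants below are illustrative choices, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
rewards = rng.normal(5.0, 2.0, size=20000)    # environment's reward distribution

# Each unit carries an asymmetry tau; its position on the reward axis
# converges to the tau-quantile, so the population tiles the distribution.
taus = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
positions = np.zeros_like(taus)
lr = 0.01

for r in rewards:
    # +lr * tau when the reward exceeds the position, -lr * (1 - tau) otherwise
    positions += lr * np.where(r > positions, taus, taus - 1.0)
```

After training, the positions are ordered along the reward axis with the tau = 0.5 unit near the median, mirroring the broad distribution of midpoints that the abstract reports for reward prediction error neurons.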
Affiliation(s)
- Heiko H Schütt
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- Department of Behavioural and Cognitive Sciences, Université du Luxembourg, Esch-Belval, Luxembourg
- Dongjae Kim
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- Department of AI-Based Convergence, Dankook University, Yongin, Republic of Korea
- Wei Ji Ma
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
18
Bredenberg C, Savin C. Desiderata for Normative Models of Synaptic Plasticity. Neural Comput 2024; 36:1245-1285. [PMID: 38776950 DOI: 10.1162/neco_a_01671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 02/06/2024] [Indexed: 05/25/2024]
Abstract
Normative models of synaptic plasticity use computational rationales to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work in this realm, but experimental confirmation remains limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata that, when satisfied, are designed to ensure that a given model demonstrates a clear link between plasticity and adaptive behavior, is consistent with known biological evidence about neural plasticity and yields specific testable predictions. As a prototype, we include a detailed analysis of the REINFORCE algorithm. We also discuss how new models have begun to improve on the identified criteria and suggest avenues for further development. Overall, we provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, Montréal, QC H2S 3H1, Canada
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
19
Sayegh FJP, Mouledous L, Macri C, Pi Macedo J, Lejards C, Rampon C, Verret L, Dahan L. Ventral tegmental area dopamine projections to the hippocampus trigger long-term potentiation and contextual learning. Nat Commun 2024; 15:4100. [PMID: 38773091 PMCID: PMC11109191 DOI: 10.1038/s41467-024-47481-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 03/28/2024] [Indexed: 05/23/2024] Open
Abstract
In most models of neuronal plasticity and memory, dopamine is thought to promote the long-term maintenance of Long-Term Potentiation (LTP) underlying memory processes, but not the initiation of plasticity or new information storage. Here, we used optogenetic manipulation of midbrain dopamine neurons in male DAT::Cre mice, and discovered that stimulating the Schaffer collaterals - the glutamatergic axons connecting CA3 and CA1 regions - of the dorsal hippocampus concomitantly with midbrain dopamine terminals within a 200 millisecond time-window triggers LTP at glutamatergic synapses. Moreover, we showed that the stimulation of this dopaminergic pathway facilitates contextual learning in awake behaving mice, while its inhibition hinders it. Thus, activation of midbrain dopamine can operate as a teaching signal that triggers NeoHebbian LTP and promotes supervised learning.
Affiliation(s)
- Fares J P Sayegh
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Lionel Mouledous
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Catherine Macri
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Juliana Pi Macedo
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Camille Lejards
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Claire Rampon
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Laure Verret
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
- Lionel Dahan
- Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, Toulouse, France
20
Daruwalla K, Lipasti M. Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates. Front Comput Neurosci 2024; 18:1240348. [PMID: 38818385 PMCID: PMC11137249 DOI: 10.3389/fncom.2024.1240348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 04/26/2024] [Indexed: 06/01/2024] Open
Abstract
Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware. Still, many applications train SNNs offline, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible. Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains elusive. In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit due to the layers' feedforward connectivity. These rules take the form of a three-factor Hebbian update: a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, whereas the brain only sees a single sample at a time. We propose a new three-factor update rule where the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained a priori, independently of the dataset used with the primary network. We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes, where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit. We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning where each layer balances memory-informed compression against task performance. This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.
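The auxiliary-memory idea (replacing a global statistic that needs many concurrent samples with a running, single-sample estimate) can be sketched with an exponential moving average. The statistic used below is a toy stand-in, not the paper's information-bottleneck objective, and the memory network is reduced to a single running mean:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 'global signal': deviation of the layer's mean activity from a target
# rate. In batch form it needs many samples at once; the memory-based form
# sees one sample at a time.
target = 0.3

def global_signal_batch(activities):
    """Batch version: requires all samples simultaneously."""
    return target - activities.mean()

class RunningSignal:
    """Single-sample version: an exponential moving average stands in for
    the auxiliary memory network that carries information across samples."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.mean = 0.0

    def update(self, activity):
        self.mean = self.decay * self.mean + (1 - self.decay) * activity
        return target - self.mean

samples = rng.uniform(0, 1, size=5000)
mem = RunningSignal()
for a in samples:
    online = mem.update(a)          # one sample at a time
batch = global_signal_batch(samples)  # all samples at once
```

The online estimate converges to the batch statistic, which is the sense in which a memory mechanism lets a single-sample learner recover a multi-sample global modulatory signal.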
Affiliation(s)
- Kyle Daruwalla
- Cold Spring Harbor Laboratory, Long Island, NY, United States
- Mikko Lipasti
- Electrical and Computer Engineering Department, University of Wisconsin-Madison, Madison, WI, United States
21
Vignoud G, Venance L, Touboul JD. Anti-Hebbian plasticity drives sequence learning in striatum. Commun Biol 2024; 7:555. [PMID: 38724614 PMCID: PMC11082161 DOI: 10.1038/s42003-024-06203-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 04/17/2024] [Indexed: 05/12/2024] Open
Abstract
Spatio-temporal activity patterns have been observed in a variety of brain areas in spontaneous activity, prior to or during action, or in response to stimuli. The biological mechanisms endowing neurons with the ability to distinguish between different sequences remain largely unknown. Learning sequences of spikes raises multiple challenges, such as maintaining spike history in memory and discriminating partially overlapping sequences. Here, we show that anti-Hebbian spike-timing dependent plasticity (STDP), as observed at cortico-striatal synapses, can naturally lead to learning spike sequences. We design a spiking model of the striatal output neuron receiving spike patterns defined as sequential input from a fixed set of cortical neurons. We use a simple synaptic plasticity rule that combines anti-Hebbian STDP and non-associative potentiation for a subset of the presented patterns, called rewarded patterns. We study the ability of striatal output neurons to discriminate rewarded from non-rewarded patterns by firing only after the presentation of a rewarded pattern. In particular, we show that two biological properties of striatal networks, spiking latency and collateral inhibition, contribute to an increase in accuracy by allowing better discrimination of partially overlapping sequences. These results suggest that anti-Hebbian STDP may serve as a biological substrate for learning sequences of spikes.
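The sign reversal that defines anti-Hebbian STDP can be written down directly; the amplitudes and the 20 ms time constant below are generic placeholder values, not the measured corticostriatal parameters:

```python
import math

def anti_hebbian_stdp(dt, a=1.0, tau=20.0):
    """Anti-Hebbian STDP window (dt = t_post - t_pre, in ms), as reported at
    cortico-striatal synapses: pre-before-post (dt > 0) leads to depression
    and post-before-pre (dt < 0) to potentiation, the reverse of the classic
    Hebbian kernel."""
    if dt > 0:
        return -a * math.exp(-dt / tau)
    if dt < 0:
        return a * math.exp(dt / tau)
    return 0.0
```

Under this kernel, inputs that reliably precede the output spike are weakened, which is what forces the neuron to fire only at a specific point within a learned sequence rather than to every familiar input.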
Affiliation(s)
- Gaëtan Vignoud
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Laurent Venance
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Jonathan D Touboul
- Department of Mathematics and Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
22
Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. [PMID: 38509348 PMCID: PMC11089004 DOI: 10.1038/s41593-024-01597-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 02/08/2024] [Indexed: 03/22/2024]
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Biozentrum, University of Basel, Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria
23
Parnas M, Manoim JE, Lin AC. Sensory encoding and memory in the mushroom body: signals, noise, and variability. Learn Mem 2024; 31:a053825. [PMID: 38862174 PMCID: PMC11199953 DOI: 10.1101/lm.053825.123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2023] [Accepted: 11/21/2023] [Indexed: 06/13/2024]
Abstract
To survive in changing environments, animals need to learn to associate specific sensory stimuli with positive or negative valence. How do they form stimulus-specific memories to distinguish between positively/negatively associated stimuli and other irrelevant stimuli? Solving this task is one of the functions of the mushroom body, the associative memory center in insect brains. Here we summarize recent work on sensory encoding and memory in the Drosophila mushroom body, highlighting general principles such as pattern separation, sparse coding, noise and variability, coincidence detection, and spatially localized neuromodulation, and placing the mushroom body in comparative perspective with mammalian memory systems.
Affiliation(s)
- Moshe Parnas
- Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
- Julia E Manoim
- Department of Physiology and Pharmacology, Faculty of Medicine, Tel Aviv University, Tel Aviv 69978, Israel
- Andrew C Lin
- School of Biosciences, University of Sheffield, Sheffield S10 2TN, United Kingdom
- Neuroscience Institute, University of Sheffield, Sheffield S10 2TN, United Kingdom
24
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2024; 5:225-247. [PMID: 38645618 PMCID: PMC11025648 DOI: 10.1162/nol_a_00133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Accepted: 12/18/2023] [Indexed: 04/23/2024]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
25
Barry MLLR, Gerstner W. Fast adaptation to rule switching using neuronal surprise. PLoS Comput Biol 2024; 20:e1011839. [PMID: 38377112 PMCID: PMC10906910 DOI: 10.1371/journal.pcbi.1011839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 03/01/2024] [Accepted: 01/18/2024] [Indexed: 02/22/2024] Open
Abstract
In humans and animals, surprise is a physiological reaction to an unexpected event, but how surprise can be linked to plausible models of neuronal activity is an open problem. We propose a self-supervised spiking neural network model where a surprise signal is extracted from an increase in neural activity after an imbalance of excitation and inhibition. The surprise signal modulates synaptic plasticity via a three-factor learning rule which increases plasticity at moments of surprise. The surprise signal remains small when transitions between sensory events follow a previously learned rule but increases immediately after rule switching. In a spiking network with several modules, previously learned rules are protected against overwriting, as long as the number of modules is larger than the total number of rules, taking a step towards solving the stability-plasticity dilemma in neuroscience. Our model relates the subjective notion of surprise to specific predictions on the circuit level.
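A minimal sketch of surprise-gated plasticity: when excitation transiently exceeds the learned inhibition (an unexpected input breaks the E/I balance), the resulting activity surge is read out as surprise and scales up the learning rate. The functional form, gain, and numbers below are illustrative assumptions, not the model's fitted parameters:

```python
def surprise_modulated_lr(excitation, inhibition, base_lr=0.01, gain=5.0):
    """Three-factor gating sketch: surprise is the rectified excess of
    excitation over inhibition, and it multiplicatively boosts plasticity."""
    surprise = max(0.0, excitation - inhibition)
    return base_lr * (1.0 + gain * surprise)

# Expected event: learned inhibition cancels the excitation, surprise ~ 0.
lr_expected = surprise_modulated_lr(excitation=1.0, inhibition=1.0)
# Rule switch: inhibition no longer matches the new input pattern.
lr_surprise = surprise_modulated_lr(excitation=1.8, inhibition=1.0)
```

Because plasticity collapses back to the small baseline once inhibition re-learns the new rule, modules storing old rules are left largely untouched, which is the protection-against-overwriting mechanism the abstract describes.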
Affiliation(s)
- Martin L. L. R. Barry
- School of Computer and Communication Sciences and School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
Collapse
|
26
Vautrelle N, Coizet V, Leriche M, Dahan L, Schulz JM, Zhang YF, Zeghbib A, Overton PG, Bracci E, Redgrave P, Reynolds JN. Sensory Reinforced Corticostriatal Plasticity. Curr Neuropharmacol 2024; 22:1513-1527. PMID: 37533245; PMCID: PMC11097983; DOI: 10.2174/1570159x21666230801110359.
Abstract
BACKGROUND: Regional changes in corticostriatal transmission induced by phasic dopaminergic signals are an essential feature of the neural network responsible for instrumental reinforcement during discovery of an action. However, the timing of signals thought to contribute to the induction of corticostriatal plasticity is difficult to reconcile within the framework of behavioural reinforcement learning, because the reinforcer is normally delayed relative to the selection and execution of causally related actions.
OBJECTIVE: While recent studies have started to address the relevance of delayed reinforcement signals and their impact on corticostriatal processing, our objective was to establish a model in which a sensory reinforcer triggers appropriately delayed reinforcement signals relayed to the striatum via intact neuronal pathways, and to investigate the effects on corticostriatal plasticity.
METHODS: We measured corticostriatal plasticity with electrophysiological recordings, using a light flash as a natural sensory reinforcer; pharmacological manipulations were applied in an in vivo anesthetized rat preparation.
RESULTS: We demonstrate that the spiking of striatal neurons evoked by single-pulse stimulation of the motor cortex can be potentiated by a natural sensory reinforcer, operating through intact afferent pathways, with signal timing approximating that required for behavioural reinforcement. Pharmacological blockade of dopamine receptors attenuated the observed potentiation of corticostriatal neurotransmission.
CONCLUSION: This novel in vivo model of corticostriatal plasticity offers a behaviourally relevant framework to address the physiological, anatomical, cellular, and molecular bases of instrumental reinforcement learning.
Affiliation(s)
- Nicolas Vautrelle
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Véronique Coizet
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Institut des Neurosciences de Grenoble, Université Joseph Fourier, Inserm, U1216, 38706 La Tronche Cedex, France
- Mariana Leriche
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Lionel Dahan
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Centre de Recherches sur la Cognition Animale, Université de Toulouse, UPS, 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France
- Jan M. Schulz
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Biomedicine, University of Basel, CH - 4056 Basel, Switzerland
- Yan-Feng Zhang
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
- Department of Clinical and Biomedical Sciences, University of Exeter Medical School, Hatherly Laboratories, Exeter EX4 4PS, United Kingdom
- Abdelhafid Zeghbib
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Paul G. Overton
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Enrico Bracci
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- Peter Redgrave
- Department of Psychology, University of Sheffield, Sheffield, S10 2TP, UK
- John N.J. Reynolds
- Department of Anatomy, Brain Health Research Centre, University of Otago, Dunedin 9054, New Zealand
27
Piette C, Gervasi N, Venance L. Synaptic plasticity through a naturalistic lens. Front Synaptic Neurosci 2023; 15:1250753. PMID: 38145207; PMCID: PMC10744866; DOI: 10.3389/fnsyn.2023.1250753.
Abstract
From the myriad of studies on neuronal plasticity, spanning its underlying molecular mechanisms up to its behavioral relevance, a very complex landscape has emerged. Recent efforts have moved toward more naturalistic investigations in an attempt to better capture the synaptic plasticity underpinning learning and memory, fostered by the development of in vivo electrophysiological and imaging tools. In this review, we examine these naturalistic investigations, devoting a first part to synaptic plasticity rules derived from naturalistic, in vivo-like activity patterns. We next give an overview of the novel tools that enable increased spatio-temporal specificity for detecting and manipulating plasticity, from individual spines up to the neuronal circuit level, during behavior. Finally, we put particular emphasis on work considering brain-body communication loops and macroscale contributors to synaptic plasticity, such as internal bodily states and brain energy metabolism.
Affiliation(s)
- Charlotte Piette
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
- Laurent Venance
- Center for Interdisciplinary Research in Biology (CIRB), College de France, CNRS, INSERM, Université PSL, Paris, France
28
Lee MJ, DiCarlo JJ. How well do rudimentary plasticity rules predict adult visual object learning? PLoS Comput Biol 2023; 19:e1011713. PMID: 38079444; PMCID: PMC10754461; DOI: 10.1371/journal.pcbi.1011713.
Abstract
A core problem in visual object learning is using a finite number of images of a new object to accurately identify that object in future, novel images. One longstanding, conceptual hypothesis asserts that this core problem is solved by adult brains through two connected mechanisms: 1) the re-representation of incoming retinal images as points in a fixed, multidimensional neural space, and 2) the optimization of linear decision boundaries in that space, via simple plasticity rules applied to a single downstream layer. Though this scheme is biologically plausible, the extent to which it explains learning behavior in humans has been unclear, in part because of a historical lack of image-computable models of the putative neural space, and in part because of a lack of measurements of human learning behavior in difficult, naturalistic settings. Here, we addressed these gaps by 1) drawing from contemporary, image-computable models of the primate ventral visual stream to create a large set of testable learning models (n = 2,408 models), and 2) using online psychophysics to measure human learning trajectories over a varied set of tasks involving novel 3D objects (n = 371,000 trials), which we then used to develop (and publicly release) empirical benchmarks for comparing learning models to humans. We evaluated each learning model on these benchmarks, and found those based on deep, high-level representations from neural networks were surprisingly aligned with human behavior. While no tested model explained the entirety of replicable human behavior, these results establish that rudimentary plasticity rules, when combined with appropriate visual representations, have high explanatory power in predicting human behavior with respect to this core object learning problem.
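The two-mechanism hypothesis (a fixed re-representation followed by simple plasticity in a single downstream layer) can be illustrated in a few lines. This is a toy stand-in, not one of the paper's 2,408 models: a frozen random projection plays the role of the ventral-stream representation, and a perceptron-style rule trains only the readout.

```python
import numpy as np

rng = np.random.default_rng(1)
PHI = rng.normal(size=(10, 2))   # fixed "neural space": frozen, never trained

def represent(x):
    """Mechanism 1: re-represent a stimulus as a point in a fixed space."""
    return np.tanh(PHI @ x)

def train_readout(X, y, epochs=20, eta=0.5):
    """Mechanism 2: simple plasticity applied only to the readout layer."""
    w = np.zeros(10)
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = represent(x)
            pred = 1.0 if w @ h > 0 else -1.0
            if pred != t:            # error-gated (perceptron-style) update
                w += eta * t * h
    return w

# Two toy "objects" encoded as 2-D stimuli
X = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
y = [1.0, -1.0]
w = train_readout(X, y)
acc = np.mean([(1.0 if w @ represent(x) > 0 else -1.0) == t
               for x, t in zip(X, y)])
```

The point of the sketch is the division of labour: all learning happens in `w`, while `PHI` stays fixed, mirroring the paper's separation between a fixed representation and a plastic readout.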
Affiliation(s)
- Michael J. Lee
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds and Machines, MIT, Cambridge, Massachusetts, United States of America
- James J. DiCarlo
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds and Machines, MIT, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
29
Ma G, Yan R, Tang H. Exploiting noise as a resource for computation and learning in spiking neural networks. Patterns (N Y) 2023; 4:100831. PMID: 37876899; PMCID: PMC10591140; DOI: 10.1016/j.patter.2023.100831.
Abstract
Networks of spiking neurons underpin the extraordinary information-processing capabilities of the brain and have become pillar models in neuromorphic artificial intelligence. Despite extensive research on spiking neural networks (SNNs), most studies are built on deterministic models, overlooking the inherently non-deterministic, noisy nature of neural computations. This study introduces the noisy SNN (NSNN) and the noise-driven learning (NDL) rule, incorporating noisy neuronal dynamics to exploit the computational advantages of noisy neural processing. The NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation and learning. We demonstrate that this framework leads to spiking neural models with competitive performance, improved robustness against challenging perturbations compared with deterministic SNNs, and better reproduction of probabilistic computations in neural coding. Overall, this study offers a powerful and easy-to-use tool for machine learning and neuromorphic intelligence practitioners and for computational neuroscience researchers.
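The idea of making spiking stochastic rather than deterministic can be sketched with a leaky integrate-and-fire neuron that fires with an "escape" probability growing as the membrane potential approaches threshold. This is a generic construction for noisy spiking, not the paper's NSNN formulation, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_lif_spikes(current, steps=200, tau=10.0, thresh=1.0, beta=5.0):
    """Leaky integrate-and-fire neuron with escape noise: instead of a
    hard threshold, the neuron fires stochastically with a probability
    that grows as the membrane potential v approaches threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += (current - v) / tau                        # leaky integration
        p = 1.0 / (1.0 + np.exp(-beta * (v - thresh)))  # escape probability
        if rng.random() < p:
            spikes += 1
            v = 0.0                                     # reset after a spike
    return spikes

s_driven = noisy_lif_spikes(2.0)  # strong input: fires often
s_rest = noisy_lif_spikes(0.0)    # no input: only rare noise-driven spikes
```

Because the firing decision is a smooth probability rather than a hard threshold crossing, the spike train carries graded information about sub-threshold voltage, which is one way noise can act as a computational resource.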
Affiliation(s)
- Gehua Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- Rui Yan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, PRC
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, PRC
30
Pan W, Zhao F, Zeng Y, Han B. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep 2023; 13:16924. PMID: 37805632; PMCID: PMC10560283; DOI: 10.1038/s41598-023-43488-x.
Abstract
The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The spiking-neural-network-based Liquid State Machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs focuses on particular perspectives, including high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices; it still draws little in-depth inspiration from the learning and structural evolution mechanisms of the brain. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with respect to its separation property. For brain-inspired learning of the LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experimental results on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and allows flexible adaptation to rule reversal. This work explores how evolution can help design more appropriate network architectures and how multi-scale neuroplasticity principles coordinate to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
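The DA-BCM idea (local BCM plasticity with a sliding threshold, scaled by a global dopamine factor) can be sketched as follows. This is our reading of the rule's general shape with illustrative constants, not the paper's implementation.

```python
def da_bcm_step(w, pre, post, theta, da, eta=0.01, tau=0.9):
    """One dopamine-modulated BCM update (a sketch of the DA-BCM idea).

    BCM: potentiate when postsynaptic activity exceeds the sliding
    threshold theta, depress below it. The dopamine factor `da` scales
    (and can sign-flip) the whole update, acting as the global
    long-term reward signal."""
    dw = eta * da * post * (post - theta) * pre
    theta = tau * theta + (1 - tau) * post ** 2  # sliding modification threshold
    return w + dw, theta

w, theta = 0.5, 0.25
# Rewarded trial with high postsynaptic activity: potentiation
w_up, theta = da_bcm_step(w, pre=1.0, post=1.0, theta=theta, da=1.0)
# Punished trial (negative dopamine) with the same activity: depression
w_down, _ = da_bcm_step(w, pre=1.0, post=1.0, theta=theta, da=-1.0)
```

The sign flip under negative dopamine is what lets the same local coincidence support both acquisition and rule reversal, consistent with the reversal experiments described in the abstract.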
Affiliation(s)
- Wenxuan Pan
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Feifei Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bing Han
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
31
Borland MS, Buell EP, Riley JR, Carroll AM, Moreno NA, Sharma P, Grasse KM, Buell JM, Kilgard MP, Engineer CT. Precise sound characteristics drive plasticity in the primary auditory cortex with VNS-sound pairing. Front Neurosci 2023; 17:1248936. PMID: 37732302; PMCID: PMC10508341; DOI: 10.3389/fnins.2023.1248936.
Abstract
Introduction: Repeatedly pairing a tone with vagus nerve stimulation (VNS) alters frequency tuning across the auditory pathway. Pairing VNS with speech sounds selectively enhances the primary auditory cortex response to the paired sounds. It is not yet known how altering the speech sounds paired with VNS alters responses. In this study, we test the hypothesis that the sounds presented and paired with VNS will influence the neural plasticity observed following VNS-sound pairing.
Methods: To explore the relationship between acoustic experience and neural plasticity, responses were recorded from primary auditory cortex (A1) after VNS was repeatedly paired with the speech sounds 'rad' and 'lad', or paired with only the speech sound 'rad' while 'lad' was an unpaired background sound.
Results: Pairing both sounds with VNS increased the response strength and neural discriminability of the paired sounds in the primary auditory cortex. Surprisingly, pairing only 'rad' with VNS did not alter A1 responses.
Discussion: These results suggest that the specific acoustic contrasts associated with VNS can powerfully shape neural activity in the auditory pathway. Methods to promote plasticity in the central auditory system represent a new therapeutic avenue to treat auditory processing disorders. Understanding how different sound contrasts and neural activity patterns shape plasticity could have important clinical implications.
Affiliation(s)
- Michael S. Borland
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Elizabeth P. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Jonathan R. Riley
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Alan M. Carroll
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Nicole A. Moreno
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Pryanka Sharma
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Katelyn M. Grasse
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
- John M. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Michael P. Kilgard
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Crystal T. Engineer
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
32
Vlasov D, Minnekhanov A, Rybka R, Davydov Y, Sboev A, Serenko A, Ilyasov A, Demin V. Memristor-based spiking neural network with online reinforcement learning. Neural Netw 2023; 166:512-523. PMID: 37579580; DOI: 10.1016/j.neunet.2023.07.031.
Abstract
Neural networks implemented in memristor-based hardware can provide fast and efficient in-memory computation, but traditional learning methods such as error back-propagation are hardly feasible in such hardware. Spiking neural networks (SNNs) are highly promising in this regard, as their weights can be changed locally in a self-organized manner, without demanding high-precision changes calculated from information drawn from almost the entire network. This problem is particularly relevant for solving control tasks with neural-network reinforcement learning methods, as those are highly sensitive to any source of stochasticity in model initialization, training, or the decision-making procedure. This paper presents an online reinforcement learning algorithm in which connection weights are updated after each environment state is processed during interaction-with-environment data generation. A further novel feature of the algorithm is that it is applied to SNNs with memristor-based STDP-like learning rules. The plasticity functions are obtained from real memristors based on poly-p-xylylene and a CoFeB-LiNbO3 nanocomposite, which were experimentally assembled and analyzed. The SNN is comprised of leaky integrate-and-fire neurons. Environmental states are encoded by the timings of input spikes, and the control action is decoded from the first spike. The proposed learning algorithm successfully solves the Cart-Pole benchmark task. This result could be a first step towards implementing a real-time agent learning procedure in a continuous-time environment, runnable on neuromorphic systems with memristive synapses.
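The STDP-like window underlying such memristive rules is commonly modelled as a pair of exponentials: pre-before-post spike pairs potentiate, post-before-pre pairs depress. The sketch below uses textbook constants, not the fitted plasticity functions of the poly-p-xylylene or CoFeB-LiNbO3 devices reported in the paper.

```python
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Pair-based STDP window: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying exponentially
    with the spike-time difference."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

dw_ltp = stdp_dw(5.0)   # causal pair: potentiation
dw_ltd = stdp_dw(-5.0)  # anti-causal pair: depression
```

In a memristive implementation, the analogue of `dw` is a conductance change produced by overlapping voltage pulses, so the measured device response replaces the exponentials above.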
Affiliation(s)
- Danila Vlasov
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation
- Anton Minnekhanov
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation
- Roman Rybka
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation; Russian Technological University "MIREA", Vernadsky av., 78 Moscow, Russian Federation
- Yury Davydov
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation
- Alexander Sboev
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation; Russian Technological University "MIREA", Vernadsky av., 78 Moscow, Russian Federation; NRNU "MEPhI", Kashira Hwy, 31 Moscow, Russian Federation
- Alexey Serenko
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation
- Alexander Ilyasov
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation; Faculty of Physics, Lomonosov Moscow State University, Leninskie gory, 1 Moscow, Russian Federation
- Vyacheslav Demin
- NRC "Kurchatov Institute", Akademika Kurchatova sq., 1 Moscow, Russian Federation
33
Bredenberg C, Savin C. Desiderata for normative models of synaptic plasticity. arXiv [preprint] 2023; arXiv:2308.04988v1. PMID: 37608931; PMCID: PMC10441445.
Abstract
Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models - REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.
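REINFORCE, one of the two prototype models analysed in the review, reduces to a particularly simple rule for a single stochastic unit: the weight change is the reward times the score function (the gradient of the log firing probability), which reads directly as a three-factor rule. Below is a minimal, self-contained sketch; the task, in which reward simply favours firing, is our own toy example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def reinforce_episode(w, x, eta=0.1):
    """One REINFORCE trial for a single Bernoulli unit.

    dw = reward * d(log p(action))/dw: a global reward factor times a
    local pre/post term, i.e. the three-factor structure discussed in
    the review. Constants are illustrative."""
    p = 1.0 / (1.0 + np.exp(-w @ x))      # firing probability
    a = 1.0 if rng.random() < p else 0.0  # stochastic spike / no spike
    r = a                                 # toy reward: firing is rewarded
    return w + eta * r * (a - p) * x, r

w, x = np.zeros(2), np.ones(2)
for _ in range(300):
    w, _ = reinforce_episode(w, x)
p_final = 1.0 / (1.0 + np.exp(-w @ x))   # firing has become far more likely
```

The `(a - p)` term is what makes the rule experimentally distinctive: plasticity depends on the deviation of the actual output from its expected value, one of the testable predictions the review's desiderata are meant to surface.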
Affiliation(s)
- Colin Bredenberg
- Center for Neural Science, New York University, New York, NY 10003, USA
- Mila-Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, QC H2S 3H1
- Cristina Savin
- Center for Neural Science, New York University, New York, NY 10003, USA
- Center for Data Science, New York University, New York, NY 10011, USA
34
Yuan Y, Zhu Y, Wang J, Li R, Xu X, Fang T, Huo H, Wan L, Li Q, Liu N, Yang S. Incorporating structural plasticity into self-organization recurrent networks for sequence learning. Front Neurosci 2023; 17:1224752. PMID: 37592946; PMCID: PMC10427342; DOI: 10.3389/fnins.2023.1224752.
Abstract
Introduction: Spiking neural networks (SNNs), inspired by biological neural networks, have received a surge of interest due to their temporal encoding. Biological neural networks are driven by multiple plasticities, including spike timing-dependent plasticity (STDP), structural plasticity, and homeostatic plasticity, causing network connection patterns and weights to change continuously throughout the lifecycle. However, it is unclear how these plasticities interact to shape neural networks and affect neural signal processing.
Method: Here, we propose a reward-modulated self-organization recurrent network with structural plasticity (RSRN-SP) to investigate this issue. Specifically, the RSRN-SP uses spikes to encode information and incorporates multiple plasticities, including reward-modulated spike timing-dependent plasticity (R-STDP), homeostatic plasticity, and structural plasticity. On the one hand, R-STDP, combined with homeostatic plasticity, guides the updating of synaptic weights. On the other hand, structural plasticity is utilized to simulate the growth and pruning of synaptic connections.
Results and discussion: Extensive experiments on sequential learning tasks are conducted to demonstrate the representational ability of the RSRN-SP, including a counting task, motion prediction, and motion generation. Furthermore, the simulations also indicate that the characteristics arising from the RSRN-SP are consistent with biological observations.
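The R-STDP component can be sketched with an eligibility trace: coincident pre/post activity is tagged in the trace, and a later reward converts the trace into an actual weight change. The sketch abstracts the STDP timing window into a simple coincidence term and uses illustrative constants; it is not the RSRN-SP implementation.

```python
import numpy as np

def r_stdp_step(w, elig, pre, post, reward, eta=0.05, decay=0.9):
    """Reward-modulated STDP sketch (three-factor rule): coincidences
    accumulate in a decaying eligibility trace; the reward signal then
    gates how much of the trace becomes a weight change."""
    elig = decay * elig + np.outer(post, pre)  # tag recent coincidences
    w = w + eta * reward * elig                # reward gates the update
    return w, elig

w = np.zeros((2, 2))
elig = np.zeros((2, 2))
pre, post = np.array([1.0, 0.0]), np.array([0.0, 1.0])

w, elig = r_stdp_step(w, elig, pre, post, reward=0.0)  # no reward: no change
w_r, _ = r_stdp_step(w, elig, pre, post, reward=1.0)   # reward: trace consumed
```

Because the trace decays rather than vanishes, a reward arriving a few steps after the coincidence can still credit the responsible synapse, which is what makes the rule usable for sequential tasks.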
Affiliation(s)
- Ye Yuan
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Yongtong Zhu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Jiaqi Wang
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Ruoshi Li
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Xin Xu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Tao Fang
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Hong Huo
- Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Lihong Wan
- Origin Dynamics Intelligent Robot Co., Ltd., Zhengzhou, China
- Qingdu Li
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Na Liu
- School of Health Science and Engineering, Institute of Machine Intelligence, University of Shanghai for Science and Technology, Shanghai, China
- Shiyan Yang
- Eco-Environmental Protection Institution, Shanghai Academy of Agricultural Sciences, Shanghai, China
35
Gautam A, Kohno T. Adaptive STDP-based on-chip spike pattern detection. Front Neurosci 2023; 17:1203956. PMID: 37521704; PMCID: PMC10374023; DOI: 10.3389/fnins.2023.1203956.
Abstract
A spiking neural network (SNN) is a bottom-up tool used to describe information processing in brain microcircuits, and it is becoming a crucial neuromorphic computational model. Spike-timing-dependent plasticity (STDP) is an unsupervised, brain-like learning rule implemented in many SNNs and neuromorphic chips. However, a significant performance gap exists between ideal model simulation and neuromorphic implementation. The performance of STDP learning in neuromorphic chips deteriorates because the resolution of synaptic efficacy in such chips is generally restricted to 6 bits or less, whereas simulations employ the entire 64-bit floating-point precision available on digital computers. Previously, we introduced a bio-inspired learning rule named adaptive STDP and demonstrated via numerical simulation that adaptive STDP (using only 4-bit fixed-point synaptic efficacy) performs similarly to STDP learning (using 64-bit floating-point precision) in a noisy spike pattern detection model. Herein, we present experimental results demonstrating the performance of adaptive STDP learning. To the best of our knowledge, this is the first study demonstrating that unsupervised detection of noisy spatiotemporal spike patterns can perform well, and maintain simulation-level performance, on a mixed-signal CMOS neuromorphic chip with low-resolution synaptic efficacy. The chip was designed in the Taiwan Semiconductor Manufacturing Company (TSMC) 250 nm CMOS technology node and comprises a soma circuit and 256 synapse circuits along with their learning circuitry.
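The resolution problem that adaptive STDP addresses is easy to demonstrate: on a 4-bit synapse, an analog weight update smaller than one quantization level is silently lost under straightforward rounding. The sketch below (our own constants) shows only this failure mode; the adaptive rule itself is not reproduced here.

```python
import numpy as np

LEVELS = 2 ** 4 - 1  # 4-bit synapse: 16 discrete efficacy levels in [0, 1]

def quantize(w):
    """Clip efficacies to [0, 1] and round to the nearest of the 16
    levels, mimicking a low-resolution on-chip synapse."""
    return np.round(np.clip(w, 0.0, 1.0) * LEVELS) / LEVELS

w = np.array([0.0, 0.5, 0.97])
w_updated = quantize(w + 0.02)  # a sub-resolution analog STDP update
```

The smallest representable step is 1/15 ≈ 0.067, so the 0.02 update vanishes at the low end while saturating weights pin to 1.0; an adaptive rule must effectively rescale updates so that learning still registers on this coarse grid.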
36
Abstract
Flexible behavior requires the creation, updating, and expression of memories to depend on context. While the neural underpinnings of each of these processes have been intensively studied, recent advances in computational modeling revealed a key challenge in context-dependent learning that had been largely ignored previously: Under naturalistic conditions, context is typically uncertain, necessitating contextual inference. We review a theoretical approach to formalizing context-dependent learning in the face of contextual uncertainty and the core computations it requires. We show how this approach begins to organize a large body of disparate experimental observations, from multiple levels of brain organization (including circuits, systems, and behavior) and multiple brain regions (most prominently the prefrontal cortex, the hippocampus, and motor cortices), into a coherent framework. We argue that contextual inference may also be key to understanding continual learning in the brain. This theory-driven perspective places contextual inference as a core component of learning.
Affiliation(s)
- James B Heald
- Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Daniel M Wolpert
- Department of Neuroscience and Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Budapest, Hungary
37
Zajzon B, Duarte R, Morrison A. Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. Front Integr Neurosci 2023; 17:935177. PMID: 37396571; PMCID: PMC10310927; DOI: 10.3389/fnint.2023.935177.
Abstract
To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on this, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We demonstrate a limitation of the model, namely that the sequence order is hard-wired into the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically plausible constraints.
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany

38
Aceituno PV, Farinha MT, Loidl R, Grewe BF. Learning cortical hierarchies with temporal Hebbian updates. Front Comput Neurosci 2023; 17:1136010. [PMID: 37293353 PMCID: PMC10244748 DOI: 10.3389/fncom.2023.1136010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 04/25/2023] [Indexed: 06/10/2023] Open
Abstract
A key driver of mammalian intelligence is the ability to represent incoming sensory information across multiple abstraction levels. For example, in the visual ventral stream, incoming signals are first represented as low-level edge filters and then transformed into high-level object representations. Similar hierarchical structures routinely emerge in artificial neural networks (ANNs) trained for object recognition tasks, suggesting that similar structures may underlie biological neural networks. However, the classical ANN training algorithm, backpropagation, is considered biologically implausible, and thus alternative biologically plausible training methods have been developed, such as Equilibrium Propagation, Deep Feedback Control, Supervised Predictive Coding, and Dendritic Error Backpropagation. Several of these models propose that local errors are calculated for each neuron by comparing apical and somatic activities. However, from a neuroscience perspective, it is not clear how a neuron could compare compartmental signals. Here, we propose a solution to this problem: we let the apical feedback signal change the postsynaptic firing rate and combine this with a differential Hebbian update, a rate-based version of classical spike-timing-dependent plasticity (STDP). We prove that weight updates of this form minimize two alternative loss functions, the inference latency and the amount of top-down feedback required, and show that these are equivalent to the error-based losses used in machine learning. Moreover, we show that differential Hebbian updates work similarly well in other feedback-based deep learning frameworks such as Predictive Coding or Equilibrium Propagation. Finally, our work removes a key requirement of biologically plausible models for deep learning and proposes a learning mechanism that would explain how temporal Hebbian learning rules can implement supervised hierarchical learning.
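The differential Hebbian update named in this abstract (weight change proportional to presynaptic rate times the temporal derivative of the postsynaptic rate) can be sketched in a few lines. The Gaussian rate traces and constants below are illustrative assumptions used only to show how the rule reproduces the two sides of the STDP window:

```python
import numpy as np

# Sketch of a differential Hebbian update, the rate-based STDP analogue the
# abstract describes: dw ~ eta * pre(t) * d(post)/dt, integrated over time.
# Traces and constants are illustrative, not from the paper.

dt = 0.01
t = np.arange(0.0, 1.0, dt)
pre = np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))       # presynaptic rate bump
post = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))      # post bump, 100 ms later
post_rev = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # post bump BEFORE pre

eta = 0.1
dw = eta * np.sum(pre * np.gradient(post, dt)) * dt          # pre leads post
dw_rev = eta * np.sum(pre * np.gradient(post_rev, dt)) * dt  # post leads pre

# pre overlapping the rising phase of post gives potentiation (dw > 0),
# overlapping the falling phase gives depression (dw_rev < 0), mirroring
# the pre-before-post / post-before-pre sides of STDP
```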
Affiliation(s)
- Pau Vilimelis Aceituno
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland
- Reinhard Loidl
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Benjamin F. Grewe
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- ETH AI Center, ETH Zurich, Zurich, Switzerland

39
Schmidgall S, Hays J. Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks. Front Neurosci 2023; 17:1183321. [PMID: 37250397 PMCID: PMC10213417 DOI: 10.3389/fnins.2023.1183321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Accepted: 04/06/2023] [Indexed: 05/31/2023] Open
Abstract
We propose that in order to harness our understanding of neuroscience toward machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as methods in deep learning such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks to both solve online learning tasks and improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in Spiking Neural Networks (SNNs) with gradient descent via a framework of learning-to-learn to address challenging online learning problems. This framework opens a new path toward developing neuroscience inspired online learning algorithms.
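The inner loop that this meta-learning framework optimizes is a three-factor plasticity rule: a local Hebbian eligibility trace gated by a third, neuromodulator-like signal. A minimal rate-based sketch of such an inner-loop rule is below; the task, constants, and the choice of reward signal are illustrative assumptions, and the outer gradient-descent loop of the paper is not shown:

```python
import numpy as np

# Sketch of a three-factor learning rule of the kind the framework trains:
# factor 1 and 2 form a Hebbian eligibility trace (pre x post), factor 3 is a
# reward-like modulator that gates the actual weight change.

rng = np.random.default_rng(1)
n_syn = 5
w = np.zeros(n_syn)
elig = np.zeros(n_syn)
tau_e, eta = 0.9, 0.5

for step in range(20):
    pre = (rng.random(n_syn) < 0.2).astype(float)  # background presynaptic activity
    pre[0] = 1.0 if step % 2 == 0 else 0.0         # synapse 0 is reliably co-active
    post = 1.0 if pre[0] > 0 else 0.0              # postsynaptic spike tracks synapse 0
    elig = tau_e * elig + pre * post               # decaying Hebbian eligibility trace
    reward = post                                  # third factor: reward-like signal
    w += eta * reward * elig                       # three-factor update

# the synapse that reliably predicts the modulatory signal is potentiated most
```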
Affiliation(s)
- Samuel Schmidgall
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joe Hays
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States

40
Gao D, Shenoy R, Yi S, Lee J, Xu M, Rong Z, Deo A, Nathan D, Zheng JG, Williams RS, Chen Y. Synaptic Resistor Circuits Based on Al Oxide and Ti Silicide for Concurrent Learning and Signal Processing in Artificial Intelligence Systems. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2210484. [PMID: 36779432 DOI: 10.1002/adma.202210484] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Revised: 01/10/2023] [Indexed: 06/18/2023]
Abstract
Neurobiological circuits containing synapses can process signals while learning concurrently in real time. Before an artificial neural network (ANN) can execute a signal-processing program, it must first be programmed by humans or trained with respect to a large and defined data set during learning processes, resulting in significant latency, high power consumption, and poor adaptability to unpredictable changing environments. In this work, a crossbar circuit of synaptic resistors (synstors) is reported, each synstor integrating a Si channel with an Al oxide memory layer and Ti silicide Schottky contacts. Individual synstors are characterized and analyzed to understand their concurrent signal-processing and learning abilities. Without any prior training, synstor circuits concurrently execute signal processing and learning in real time to fly drones toward a target position in an aerodynamically changing environment faster than human controllers, and with learning speed, performance, power consumption, and adaptability to the environment significantly superior to an ANN running on computers. The synstor circuit provides a path to establish power-efficient intelligent systems with real-time learning and adaptability in the capriciously mutable real world.
Affiliation(s)
- Dawei Gao
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Rahul Shenoy
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Suin Yi
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, 77843, USA
- Jungmin Lee
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Mingjie Xu
- Irvine Materials Research Institute, University of California, Irvine, Irvine, CA, 92697-2800, USA
- Zixuan Rong
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Atharva Deo
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Dhruva Nathan
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Jian-Guo Zheng
- Irvine Materials Research Institute, University of California, Irvine, Irvine, CA, 92697-2800, USA
- R Stanley Williams
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, 77843, USA
- Yong Chen
- Departments of Mechanical and Aerospace Engineering, Materials Science and Engineering, Electrical and Computer Engineering, California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, 90095, USA

41
Lansdell BJ, Kording KP. Neural spiking for causal inference and learning. PLoS Comput Biol 2023; 19:e1011005. [PMID: 37014913 PMCID: PMC10104331 DOI: 10.1371/journal.pcbi.1011005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 04/14/2023] [Accepted: 03/07/2023] [Indexed: 04/05/2023] Open
Abstract
When a neuron is driven beyond its threshold, it spikes. The fact that it does not communicate its continuous membrane potential is usually seen as a computational liability. Here we show that this spiking mechanism allows neurons to produce an unbiased estimate of their causal influence, and a way of approximating gradient descent-based learning. Importantly, neither activity of upstream neurons, which act as confounders, nor downstream non-linearities bias the results. We show how spiking enables neurons to solve causal estimation problems and that local plasticity can approximate gradient descent using spike discontinuity learning.
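The causal-estimation idea in this abstract is a regression-discontinuity design at the spiking threshold: trials where the summed drive lands just below versus just above threshold are locally comparable, so the jump in downstream outcome at threshold estimates the spike's causal effect, while a naive spike/no-spike comparison is confounded by the drive itself. A synthetic sketch (generative numbers are assumptions, not the paper's simulations):

```python
import numpy as np

# Sketch of spike-discontinuity causal estimation: compare outcomes on
# near-threshold trials only, where crossing threshold is as-good-as-random.

rng = np.random.default_rng(2)
threshold = 1.0
true_effect = 0.5                       # causal contribution of a spike to reward

drive = rng.normal(1.0, 0.3, 20000)     # trial-by-trial input drive (a confounder)
spike = (drive >= threshold).astype(float)
reward = 0.8 * drive + true_effect * spike + rng.normal(0, 0.1, drive.size)

window = 0.05                           # only near-threshold trials are used
below = (drive > threshold - window) & (drive < threshold)
above = (drive >= threshold) & (drive < threshold + window)
rdd_estimate = reward[above].mean() - reward[below].mean()  # ~ true_effect

# naive comparison mixes in the confounding effect of the drive itself
naive = reward[spike == 1].mean() - reward[spike == 0].mean()
```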
Affiliation(s)
- Benjamin James Lansdell
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Konrad Paul Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America

42
Jaskir A, Frank MJ. On the normative advantages of dopamine and striatal opponency for learning and choice. eLife 2023; 12:e85107. [PMID: 36946371 PMCID: PMC10198727 DOI: 10.7554/elife.85107] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 03/14/2023] [Indexed: 03/23/2023] Open
Abstract
The basal ganglia (BG) contribute to reinforcement learning (RL) and decision-making, but unlike artificial RL agents, they rely on complex circuitry and dynamic dopamine modulation of opponent striatal pathways to do so. We develop the OpAL* model to assess the normative advantages of this circuitry. In OpAL*, learning induces opponent pathways to differentially emphasize the history of positive or negative outcomes for each action. Dynamic dopamine modulation then amplifies the pathway most tuned to the task environment. This efficient coding mechanism avoids a vexing explore-exploit tradeoff that plagues traditional RL models in sparse reward environments. OpAL* exhibits robust advantages over alternative models, particularly in environments with sparse reward and large action spaces. These advantages depend on opponent and nonlinear Hebbian plasticity mechanisms previously thought to be pathological. Finally, OpAL* captures risky choice patterns arising from dopamine and environmental manipulations across species, suggesting that they result from a normative biological mechanism.
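The opponent-pathway mechanism this abstract describes can be sketched as a toy actor with separate Go (G) and NoGo (N) weights, updated with the nonlinear Hebbian rule (update scaled by the weight's own value) and combined at choice with dopamine-dependent gains. This is a simplified, deterministic expected-update variant for clarity, not the full OpAL* model; all parameter values are illustrative assumptions:

```python
import numpy as np

# Toy opponent actor in the spirit of OpAL*: G accumulates positive prediction
# errors, N accumulates negative ones, each with a Hebbian (activity-scaled)
# update; dopamine-like gains beta_g/beta_n weight the pathways at choice.

alpha, alpha_c = 0.3, 0.3
G = np.ones(2)                     # Go weights for two actions
N = np.ones(2)                     # NoGo weights
V = np.zeros(2)                    # critic values
probs = np.array([0.8, 0.2])       # expected reward per action

for _ in range(100):
    delta = probs - V              # expected prediction error (no sampling)
    V += alpha_c * delta
    G += alpha * G * delta         # Hebbian: update scaled by the weight itself
    N += alpha * N * (-delta)      # opponent pathway tracks negative outcomes

beta_g = beta_n = 1.0              # dopamine balances Go vs NoGo emphasis
act = beta_g * G - beta_n * N
best = int(np.argmax(act))
```

Raising beta_g relative to beta_n in this sketch would emphasize differences among the G weights, the dopamine-dependent reweighting the model exploits in reward-rich versus sparse environments.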
Affiliation(s)
- Alana Jaskir
- Department of Cognitive, Linguistic and Psychological Sciences, Carney Institute for Brain Science, Brown University, Providence, United States
- Michael J Frank
- Department of Cognitive, Linguistic and Psychological Sciences, Carney Institute for Brain Science, Brown University, Providence, United States

43
Chen S, Yang Q, Lim S. Efficient inference of synaptic plasticity rule with Gaussian process regression. iScience 2023; 26:106182. [PMID: 36879810 PMCID: PMC9985048 DOI: 10.1016/j.isci.2023.106182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 01/24/2023] [Accepted: 02/07/2023] [Indexed: 02/16/2023] Open
Abstract
Finding the form of synaptic plasticity is critical to understanding its functions underlying learning and memory. We investigated an efficient method to infer synaptic plasticity rules in various experimental settings. We considered biologically plausible models fitting a wide range of in vitro studies and examined the recovery of their firing-rate dependence from sparse and noisy data. Among methods that assume low-rankness or smoothness of plasticity rules, Gaussian process regression (GPR), a nonparametric Bayesian approach, performs best. GPR performs well whether changes in synaptic weights are measured directly or inferred from changes in neural activity as indirect observables of synaptic plasticity, two settings that lead to different inference problems. GPR can also recover multiple plasticity rules simultaneously and performs robustly across various plasticity rules and noise levels. Such flexibility and efficiency, particularly in the low-sampling regime, make GPR suitable for recent experimental developments and for inferring a broader class of plasticity models.
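The direct-measurement setting in this abstract reduces to nonparametric regression: treat the plasticity rule as a smooth unknown function of firing rate and recover it from sparse, noisy weight-change measurements via the GP posterior mean. A self-contained numpy sketch follows; the BCM-like "true" rule, the RBF kernel, and all hyperparameters are illustrative choices, not the paper's:

```python
import numpy as np

# GP regression sketch for plasticity-rule inference: posterior mean
# m(r*) = K(r*, r) [K(r, r) + noise^2 I]^{-1} y under an RBF kernel.

def rbf(a, b, ell=5.0, sigma_f=5.0):
    d = a[:, None] - b[None, :]
    return sigma_f ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def true_rule(r):
    return 0.01 * r * (r - 10.0)      # BCM-like: depression at low rates, then LTP

rng = np.random.default_rng(4)
noise_sd = 0.2
r_train = np.linspace(0.0, 30.0, 12)                    # sparse sampled rates
y_train = true_rule(r_train) + rng.normal(0.0, noise_sd, r_train.size)

K = rbf(r_train, r_train) + noise_sd ** 2 * np.eye(r_train.size)
r_test = np.linspace(0.0, 30.0, 61)
mean_post = rbf(r_test, r_train) @ np.linalg.solve(K, y_train)  # GP posterior mean

rmse = float(np.sqrt(np.mean((mean_post - true_rule(r_test)) ** 2)))
```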
Affiliation(s)
- Shirui Chen
- Department of Applied Mathematics, University of Washington, Lewis Hall 201, Box 353925, Seattle, WA 98195-3925, USA
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
- Qixin Yang
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, The Suzanne and Charles Goodman Brain Sciences Building, Edmond J. Safra Campus, Jerusalem, 9190401, Israel
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
- Sukbin Lim
- Neural Science, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China

44
Shirsavar SR, Vahabie AH, Dehaqani MRA. Models Developed for Spiking Neural Networks. MethodsX 2023; 10:102157. [PMID: 37077894 PMCID: PMC10106956 DOI: 10.1016/j.mex.2023.102157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Accepted: 03/23/2023] [Indexed: 03/30/2023] Open
Abstract
The emergence of deep neural networks (DNNs) has drawn renewed attention to artificial neural networks (ANNs). They have become the state-of-the-art models and have won various machine learning challenges. Although these networks are inspired by the brain, they lack biological plausibility and differ structurally from it. Spiking neural networks (SNNs) have been around for a long time and have been investigated to understand the dynamics of the brain. However, their application to real-world, complicated machine learning tasks was limited. Recently, they have shown great potential in solving such tasks, and their energy efficiency and temporal dynamics make their future development promising. In this work, we reviewed the structures and performances of SNNs on image classification tasks. The comparisons illustrate that these networks show great capabilities for more complicated problems. Furthermore, the simple learning rules developed for SNNs, such as STDP and R-STDP, can be a potential alternative to the backpropagation algorithm used in DNNs.
- Different building blocks of spiking neural networks are explained in this work.
- Developed models for SNNs are introduced based on their characteristics and building blocks.
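The STDP and R-STDP rules this review names have a standard pair-based form: the weight change decays exponentially with the pre-post spike-time difference, and in R-STDP the same window is gated by a reward-like third factor. A minimal sketch with standard illustrative time constants and amplitudes (not values from the review):

```python
import numpy as np

# Pair-based STDP window: LTP for causal (pre-before-post) pairs, LTD for
# anti-causal pairs; R-STDP multiplies the same update by a reward signal.

a_plus, a_minus = 0.1, 0.12
tau_plus = tau_minus = 20.0           # ms, illustrative time constants

def stdp(dt_ms):
    """Weight change for one spike pair; dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)    # pre before post: LTP
    return -a_minus * np.exp(dt_ms / tau_minus)      # post before pre: LTD

dw_ltp = stdp(10.0)        # post fires 10 ms after pre
dw_ltd = stdp(-10.0)       # post fires 10 ms before pre

# R-STDP: a punishment signal (reward = -1) inverts the sign of the update
reward = -1.0
dw_rstdp = reward * stdp(10.0)
```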
45
Yi Z, Lian J, Liu Q, Zhu H, Liang D, Liu J. Learning Rules in Spiking Neural Networks: A Survey. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
46
Sheynikhovich D, Otani S, Bai J, Arleo A. Long-term memory, synaptic plasticity and dopamine in rodent medial prefrontal cortex: Role in executive functions. Front Behav Neurosci 2023; 16:1068271. [PMID: 36710953 PMCID: PMC9875091 DOI: 10.3389/fnbeh.2022.1068271] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 12/26/2022] [Indexed: 01/12/2023] Open
Abstract
Mnemonic functions, supporting rodent behavior in complex tasks, include both long-term and (short-term) working memory components. While working memory is thought to rely on persistent activity states in an active neural network, long-term memory and synaptic plasticity contribute to the formation of the underlying synaptic structure, determining the range of possible states. Whereas the implication of working memory in executive functions, mediated by the prefrontal cortex (PFC) in primates and rodents, has been extensively studied, the contribution of the long-term memory component to these tasks has received little attention. This review summarizes available experimental data and theoretical work concerning cellular mechanisms of synaptic plasticity in the medial region of rodent PFC and the link between plasticity, memory, and behavior in PFC-dependent tasks. Special attention is devoted to the unique properties of dopaminergic modulation of prefrontal synaptic plasticity and its contribution to executive functions.
Affiliation(s)
- Denis Sheynikhovich
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Satoru Otani
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Jing Bai
- Institute of Psychiatry and Neuroscience of Paris, INSERM U1266, Paris, France
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France

47
47
|
Scott DN, Frank MJ. Adaptive control of synaptic plasticity integrates micro- and macroscopic network function. Neuropsychopharmacology 2023; 48:121-144. [PMID: 36038780 PMCID: PMC9700774 DOI: 10.1038/s41386-022-01374-6] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 06/23/2022] [Accepted: 06/24/2022] [Indexed: 11/09/2022]
Abstract
Synaptic plasticity configures interactions between neurons and is therefore likely to be a primary driver of behavioral learning and development. How this microscopic-macroscopic interaction occurs is poorly understood, as researchers frequently examine models within particular ranges of abstraction and scale. Computational neuroscience and machine learning models offer theoretically powerful analyses of plasticity in neural networks, but results are often siloed and only coarsely linked to biology. In this review, we examine connections between these areas, asking how network computations change as a function of diverse features of plasticity and vice versa. We review how plasticity can be controlled at synapses by calcium dynamics and neuromodulatory signals, the manifestation of these changes in networks, and their impacts in specialized circuits. We conclude that metaplasticity-defined broadly as the adaptive control of plasticity-forges connections across scales by governing what groups of synapses can and can't learn about, when, and to what ends. The metaplasticity we discuss acts by co-opting Hebbian mechanisms, shifting network properties, and routing activity within and across brain systems. Asking how these operations can go awry should also be useful for understanding pathology, which we address in the context of autism, schizophrenia and Parkinson's disease.
Affiliation(s)
- Daniel N Scott
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Michael J Frank
- Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA

48
48
|
Whelan MT, Jimenez-Rodriguez A, Prescott TJ, Vasilaki E. A robotic model of hippocampal reverse replay for reinforcement learning. BIOINSPIRATION & BIOMIMETICS 2022; 18:015007. [PMID: 36327454 DOI: 10.1088/1748-3190/ac9ffc] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 11/03/2022] [Indexed: 06/16/2023]
Abstract
Hippocampal reverse replay, a phenomenon in which recently active hippocampal cells reactivate in the reverse order, is thought to contribute to learning, particularly reinforcement learning (RL), in animals. Here, we present a novel computational model which exploits reverse replay to improve stability and performance on a homing task. The model takes inspiration from the hippocampal-striatal network, and learning occurs via a three-factor RL rule. To augment this model with hippocampal reverse replay, we derived a policy gradient learning rule that associates place-cell activity with responses in cells representing actions, and a supervised learning rule of the same form that interprets the replay activity as a 'target' frequency. We evaluated the model using a simulated robot spatial navigation task inspired by the Morris water maze. Results suggest that reverse replay can improve performance stability over multiple trials. Our model exploits reverse replay as an additional source for propagating information about desirable synaptic changes, reducing the requirement for long timescales in eligibility traces combined with low learning rates. We conclude that reverse replay can positively contribute to RL, although less stable learning is possible in its absence. Analogously, we postulate that reverse replay may enhance RL in the mammalian hippocampal-striatal system rather than provide its core mechanism.
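Why reverse replay helps credit assignment can be shown with a toy value-learning example (this is an illustrative TD sketch of the general principle, not the paper's hippocampal-striatal model): after one rewarded trajectory, replaying the visited states in reverse order lets a one-step bootstrapping rule propagate value all the way back in a single sweep, whereas a forward sweep updates only the final state.

```python
import numpy as np

# Linear track with reward at the last state: compare one forward TD sweep
# with one reverse-replay sweep over the same trajectory.

n_states, gamma, lr = 6, 0.9, 1.0
trajectory = list(range(n_states))        # states visited 0 -> 5, reward at 5

def td_sweep(order, V):
    V = V.copy()
    for s in order:
        if s == n_states - 1:
            target = 1.0                  # terminal reward
        else:
            target = gamma * V[s + 1]     # one-step bootstrap from successor
        V[s] += lr * (target - V[s])
    return V

V0 = np.zeros(n_states)
V_forward = td_sweep(trajectory, V0)        # forward order: only the goal learns
V_reverse = td_sweep(trajectory[::-1], V0)  # reverse replay: value reaches state 0
```

In the sketch, one reverse sweep already assigns the starting state its discounted value, which is the sense in which replay can substitute for long eligibility traces.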
Affiliation(s)
- Matthew T Whelan
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Alejandro Jimenez-Rodriguez
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Tony J Prescott
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom
- Eleni Vasilaki
- Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sheffield Robotics, Sheffield, United Kingdom

49
49
|
Fisher YE, Marquis M, D'Alessandro I, Wilson RI. Dopamine promotes head direction plasticity during orienting movements. Nature 2022; 612:316-322. [PMID: 36450986 PMCID: PMC9729112 DOI: 10.1038/s41586-022-05485-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Accepted: 10/25/2022] [Indexed: 12/05/2022]
Abstract
In neural networks that store information in their connection weights, there is a tradeoff between sensitivity and stability1,2. Connections must be plastic to incorporate new information, but if they are too plastic, stored information can be corrupted. A potential solution is to allow plasticity only during epochs when task-specific information is rich, on the basis of a 'when-to-learn' signal3. We reasoned that dopamine provides a when-to-learn signal that allows the brain's spatial maps to update when new spatial information is available-that is, when an animal is moving. Here we show that the dopamine neurons innervating the Drosophila head direction network are specifically active when the fly turns to change its head direction. Moreover, their activity scales with moment-to-moment fluctuations in rotational speed. Pairing dopamine release with a visual cue persistently strengthens the cue's influence on head direction cells. Conversely, inhibiting these dopamine neurons decreases the influence of the cue. This mechanism should accelerate learning during moments when orienting movements are providing a rich stream of head direction information, allowing learning rates to be low at other times to protect stored information. Our results show how spatial learning in the brain can be compressed into discrete epochs in which high learning rates are matched to high rates of information intake.
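The 'when-to-learn' mechanism this abstract proposes amounts to multiplying a plasticity rule's learning rate by a dopamine signal that tracks movement: associations update during turns, when head direction information is rich, and are frozen otherwise. A synthetic sketch (the signals and constants are illustrative, not data from the paper):

```python
import numpy as np

# Dopamine as a when-to-learn gate: a cue-to-heading weight follows a simple
# error-correcting rule whose rate is scaled by a movement-locked DA signal.

steps = 200
turning = np.zeros(steps)
turning[50:100] = 1.0                       # the fly turns during this epoch

w = 0.0                                     # cue -> head-direction weight
base_lr = 0.1
target = 1.0                                # association supported by the cue
history = []
for t in range(steps):
    dopamine = turning[t]                   # DA scales with rotational speed
    w += base_lr * dopamine * (target - w)  # plasticity only when DA is high
    history.append(w)

# w converges toward the target only inside the turning epoch and is
# protected (unchanged) outside it, trading sensitivity against stability
```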
Affiliation(s)
- Yvette E Fisher
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Molecular and Cellular Biology, University of California Berkeley, Berkeley, CA, USA
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Michael Marquis
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Rachel I Wilson
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA

50
50
|
Sun Y, He N, Yuan Q, Wang Y, Dong Y, Wen D. Ferroelectric Polarized in Transistor Channel Polarity Modulation for Reward-Modulated Spike-Time-Dependent Plasticity Application. J Phys Chem Lett 2022; 13:10056-10064. [PMID: 36264655 DOI: 10.1021/acs.jpclett.2c03007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Reward signals reflect the developmental tendency of reinforcement learning (RL) agents. Reward-modulated spike-time-dependent plasticity (R-STDP) is an efficient and concise information processing feature in RL. However, the physical construction of R-STDP normally demands complex circuit design engineering, resulting in large power consumption and large area. In this work, we studied the role of ferroelectric polarization in the modulation of carbon nanotube transistor channel polarity. Furthermore, we applied a modulating channel method to construct a 2T synaptic component by spin-coating technology. Based on the nonvolatility of ferroelectric polarization, the synaptic component constructed has the characteristics of reconfigurable polarity. One channel could be modulated to n-type and the other to p-type. One modulated channel was used to perform the STDP function when the reward signal arrived, and the other modulated channel was used to perform the anti-STDP function when the punishment signal arrived. Finally, R-STDP learning rules are implemented on hardware. This work provides a strategy for hardware construction of RL.
Affiliation(s)
- Yanmei Sun
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Nian He
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Qi Yuan
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Yufei Wang
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Yan Dong
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Dianzhong Wen
- School of Electronic Engineering, Heilongjiang University, Harbin 150080, China
- Heilongjiang Provincial Key Laboratory of Micro-nano Sensitive Devices and Systems, Heilongjiang University, Harbin 150080, China
- HLJ Province Key Laboratories of Senior-Education for Electronic Engineering, Heilongjiang University, Harbin 150080, China