1
Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. [PMID: 38684893] [DOI: 10.1038/s41593-024-01630-6]
Abstract
Reliable execution of precise behaviors requires that brain circuits are resilient to variations in neuronal dynamics. Genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, in adult songbirds with stereotypical songs triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular and systems-level restorative mechanisms that ensure behavioral resilience.
Affiliation(s)
- Bo Wang
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Zsofia Torok
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA

2
Mehrzadi A, Rezaee E, Gharaghani S, Fakhar Z, Mirhosseini SM. A Molecular Generative Model of COVID-19 Main Protease Inhibitors Using Long Short-Term Memory-Based Recurrent Neural Network. J Comput Biol 2024; 31:83-98. [PMID: 38054946] [DOI: 10.1089/cmb.2023.0064]
Abstract
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has posed a serious threat to public health and prompted researchers to find anti-coronavirus disease 2019 (COVID-19) compounds. In this study, a long short-term memory-based recurrent neural network was used to generate new inhibitors for the coronavirus. First, the model was trained to generate drug compounds in the form of valid simplified molecular-input line-entry system (SMILES) strings. Then, the structures of known COVID-19 main protease inhibitors were used to fine-tune the model. After fine-tuning, the network could generate new molecular structures as candidate SARS-CoV-2 main protease inhibitors. Molecular docking showed that some generated compounds have suitable affinity for the active site of the protease. Molecular dynamics simulations explored the binding free energies of the compounds over the simulation trajectories. In addition, in silico absorption, distribution, metabolism, and excretion studies showed that some of the novel compounds could be formulated as orally active agents. Based on the molecular docking and molecular dynamics simulation studies, compound AADH possessed significant binding affinity and presumably inhibitory activity against the SARS-CoV-2 main protease enzyme. Therefore, the proposed deep learning-based model is capable of generating promising anti-COVID-19 drug candidates.
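The pipeline this abstract describes, a character-level LSTM that writes SMILES strings one symbol at a time and is later fine-tuned on known inhibitors, can be illustrated with a minimal sketch. The weights here are random (untrained) and the mini-alphabet is invented for the example, so it emits arbitrary token strings rather than chemistry; it only shows the autoregressive sampling loop such a model uses:

```python
import numpy as np

# Character-level LSTM sampler for SMILES-like strings (untrained sketch).
VOCAB = list("^$CNOSc1()=#")   # '^' start and '$' end tokens plus a toy alphabet
IDX = {ch: i for i, ch in enumerate(VOCAB)}
V, H = len(VOCAB), 32          # vocabulary size, hidden size

rng = np.random.default_rng(0)
Wx = rng.normal(0.0, 0.1, (4 * H, V))   # input weights for the 4 LSTM gates
Wh = rng.normal(0.0, 0.1, (4 * H, H))   # recurrent weights
b = np.zeros(4 * H)
Wy = rng.normal(0.0, 0.1, (V, H))       # hidden state -> next-character logits

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = Wx @ x + Wh @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])          # input and forget gates
    g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])  # candidate cell, output gate
    c = f * c + i * g
    return o * np.tanh(c), c

def sample(max_len=40, temperature=1.0):
    """Autoregressively sample one string, one character at a time."""
    h, c = np.zeros(H), np.zeros(H)
    tok, out = IDX["^"], []
    for _ in range(max_len):
        x = np.zeros(V)
        x[tok] = 1.0
        h, c = lstm_step(x, h, c)
        logits = (Wy @ h) / temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        tok = rng.choice(V, p=p)       # sample the next character
        if VOCAB[tok] == "$":          # end-of-molecule token
            break
        out.append(VOCAB[tok])
    return "".join(out)

s = sample()
```

In a real workflow the same sampling loop would run over trained weights: general training on a large corpus of valid SMILES, then continued training (fine-tuning) of Wx, Wh, and Wy on the protease-inhibitor set.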
Affiliation(s)
- Arash Mehrzadi
- Department of Electrical, Computer and IT Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
- Elham Rezaee
- Department of Pharmaceutical Chemistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sajjad Gharaghani
- Department of Bioinformatics, Laboratory of Bioinformatics and Drug Design (LBD), University of Tehran, Tehran, Iran
- Zeynab Fakhar
- Department of Bioinformatics, Laboratory of Bioinformatics and Drug Design (LBD), University of Tehran, Tehran, Iran

3
Zou J, Zhao L, Shi S. Generation of focused drug molecule library using recurrent neural network. J Mol Model 2023; 29:361. [PMID: 37932607] [DOI: 10.1007/s00894-023-05772-5]
Abstract
CONTEXT With the wide application of deep learning in drug research and development, de novo molecular design methods based on recurrent neural networks (RNNs) have strong advantages for drug molecule generation. An RNN model can learn the internal chemical structure of molecules, much like a natural language processing task. Although techniques for generating target-specific molecular libraries based on RNN models are mature, research on drug design and screening continues, and de novo design methods that generate larger quantities of valid compounds are still needed. METHODS In this study, a molecular generation model based on an RNN was designed that abandoned the traditional stacked-RNN architecture and instead introduced a nested long short-term memory network structure. To enrich the library of focused molecules for specific targets, we fine-tuned the model using molecules active against novel coronavirus pneumonia (COVID-19) and screened the generated molecules using machine learning models. Following rigorous screening, the selected molecules underwent molecular docking with the SARS-CoV-2 M-pro receptor using AutoDock2.4 to identify the top 3 potential inhibitors. Subsequently, 100-ns molecular dynamics simulations were conducted using Amber22. Ligands were parameterized with the GAFF2 force field, the proteins were modeled using the ff19SB force field, and solvation used a truncated octahedral TIP3P water box. Upon completion of the molecular dynamics simulations, the stability of the ligand-protein complexes was assessed by analysis of RMSD, hydrogen bonds, and MM-GBSA binding energies. These results indicate that the model can perform de novo drug design and that the generated compounds are promising drug candidates.
Affiliation(s)
- Jinping Zou
- Department of Mathematics, School of Mathematics and Computer Sciences, Nanchang University, Nanchang, 330031, China
- Institute of Mathematics and Interdisciplinary Sciences, Nanchang University, Nanchang, 330031, China
- Long Zhao
- Department of Mathematics, School of Mathematics and Computer Sciences, Nanchang University, Nanchang, 330031, China
- Institute of Mathematics and Interdisciplinary Sciences, Nanchang University, Nanchang, 330031, China
- Shaoping Shi
- Department of Mathematics, School of Mathematics and Computer Sciences, Nanchang University, Nanchang, 330031, China
- Institute of Mathematics and Interdisciplinary Sciences, Nanchang University, Nanchang, 330031, China

4
Yu Q, Bi Z, Jiang S, Yan B, Chen H, Wang Y, Miao Y, Li K, Wei Z, Xie Y, Tan X, Liu X, Fu H, Cui L, Xing L, Weng S, Wang X, Yuan Y, Zhou C, Wang G, Li L, Ma L, Mao Y, Chen L, Zhang J. Visual cortex encodes timing information in humans and mice. Neuron 2022; 110:4194-4211.e10. [PMID: 36195097] [DOI: 10.1016/j.neuron.2022.09.008]
Abstract
Despite the importance of timing in our daily lives, our understanding of how the human brain mediates second-scale time perception is limited. Here, we combined intracranial stereoelectroencephalography (SEEG) recordings in epileptic patients and circuit dissection in mice to show that visual cortex (VC) encodes timing information. We first asked human participants to perform an interval-timing task and found VC to be a key timing brain area. We then conducted optogenetic experiments in mice and showed that VC plays an important role in the interval-timing behavior. We further found that VC neurons fired in a time-keeping sequential manner and exhibited increased excitability in a timed manner. Finally, we used a computational model to illustrate a self-correcting learning process that generates interval-timed activities with scalar-timing property. Our work reveals how localized oscillations in VC occurring in the seconds to deca-seconds range relate timing information from the external world to guide behavior.
Affiliation(s)
- Qingpeng Yu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Zedong Bi
- Lingang Laboratory, Shanghai 200031, China
- Institute for Future, School of Automation, Qingdao University, Qingdao 266071, China
- Department of Physics, Centre for Nonlinear Studies and Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
- Shize Jiang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Biao Yan
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Heming Chen
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yiting Wang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yizhan Miao
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Kexin Li
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Zixuan Wei
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Yuanting Xie
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Xinrong Tan
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Xiaodi Liu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Hang Fu
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Liyuan Cui
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Lu Xing
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Shijun Weng
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Xin Wang
- Department of Neurology and Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Yuanzhi Yuan
- Department of Neurology and Ophthalmology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Research Centre, HKBU Institute of Research and Continuing Education, Shenzhen, China
- Gang Wang
- Center of Brain Sciences, Beijing Institute of Basic Medical Sciences, Beijing 100850, China
- Liang Li
- Center of Brain Sciences, Beijing Institute of Basic Medical Sciences, Beijing 100850, China
- Lan Ma
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Ying Mao
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Liang Chen
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Tianqiao and Chrissy Chen Institute Clinical Translational Research Center, Shanghai 200040, China
- Jiayi Zhang
- State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science and Institutes of Brain Science, Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200032, China
- Institute for Medical and Engineering Innovation, Eye & ENT Hospital, Fudan University, Shanghai 200031, China

5
Calderon CB, Verguts T, Frank MJ. Thunderstruck: The ACDC model of flexible sequences and rhythms in recurrent neural circuits. PLoS Comput Biol 2022; 18:e1009854. [PMID: 35108283] [PMCID: PMC8843237] [DOI: 10.1371/journal.pcbi.1009854]
Abstract
Adaptive sequential behavior is a hallmark of human cognition. In particular, humans can learn to produce precise spatiotemporal sequences given a certain context. For instance, musicians can not only reproduce learned action sequences in a context-dependent manner, but can also quickly and flexibly reapply them in any desired tempo or rhythm without overwriting previous learning. Existing neural network models fail to account for these properties. We argue that this limitation emerges from the fact that sequence information (i.e., the position of the action) and timing (i.e., the moment of response execution) are typically stored in the same neural network weights. Here, we augment a biologically plausible recurrent neural network of cortical dynamics to include a basal ganglia-thalamic module which uses reinforcement learning to dynamically modulate action. This “associative cluster-dependent chain” (ACDC) model modularly stores sequence and timing information in distinct loci of the network. This feature increases computational power and allows ACDC to display a wide range of temporal properties (e.g., multiple sequences, temporal shifting, rescaling, and compositionality), while still accounting for several behavioral and neurophysiological empirical observations. Finally, we apply this ACDC network to show how it can learn the famous “Thunderstruck” song intro and then flexibly play it in a “bossa nova” rhythm without further training. How do humans flexibly adapt action sequences? For instance, musicians can learn a song and quickly speed up or slow down the tempo, or even play the song following a completely different rhythm (e.g., a rock song using a bossa nova rhythm). In this work, we build a biologically plausible network of cortico-basal ganglia interactions that explains how this temporal flexibility may emerge in the brain. Crucially, our model factorizes sequence order and action timing, represented in cortical and basal ganglia dynamics, respectively. This factorization allows full temporal flexibility: the timing of a learned action sequence can be recomposed without interfering with the order of the sequence. As such, our model can learn asynchronous action sequences and flexibly shift, rescale, and recompose them, while accounting for biological data.
Affiliation(s)
- Cristian Buc Calderon
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Tom Verguts
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Michael J. Frank
- Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America

6
Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521] [DOI: 10.1016/j.conb.2021.05.005]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
- Nicolas Brunel
- Department of Neurobiology, Duke University, USA
- Department of Physics, Duke University, USA

7
Active Learning and the Potential of Neural Networks Accelerate Molecular Screening for the Design of a New Molecule Effective against SARS-CoV-2. Biomed Res Int 2021; 2021:6696012. [PMID: 34124259] [PMCID: PMC8172298] [DOI: 10.1155/2021/6696012]
Abstract
A global pandemic followed the emergence of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), strongly affecting the health sector as well as the world economy. Despite the existence of a few approved and effective vaccines at the time of writing this study, a sense of urgency has emerged worldwide to discover new technical tools and new drugs against coronavirus disease 2019 (COVID-19) as soon as possible. In this context, many studies are currently underway to develop new tools and therapies against SARS-CoV-2 and other viruses, using different approaches. The 3-chymotrypsin-like (3CL) protease, which is directly involved in the cotranslational and posttranslational modifications of viral polyproteins essential for the existence and replication of the virus in the host, is one of the coronavirus target proteins that has been the subject of these extensive studies. Currently, the majority of these studies aim at repurposing already known and clinically approved drugs against this new virus, but this approach has had limited success. Recently, different studies have demonstrated the effectiveness of artificial intelligence-based techniques for understanding existing chemical spaces and generating new small molecules that are both effective and efficient. In this framework, we combined a generative recurrent neural network model with transfer learning methods and active learning-based algorithms to design novel small molecules capable of effectively inhibiting the 3CL protease in human cells. We then analyzed these small molecules to find the correct binding site matching the structure of the 3CL protease of the target virus, along with other analyses performed in this study. Based on these screening results, some molecules achieved a binding score close to -18 kcal/mol, making them good candidates for further synthesis and testing against SARS-CoV-2.
9
Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. [PMID: 33734085] [PMCID: PMC7972481] [DOI: 10.7554/elife.63751]
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
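The synaptic 'eligibility traces' invoked in this abstract can be illustrated with a minimal rate-based sketch (a generic toy, not the paper's modular spiking network; the time constants and the 20-step feedback delay are invented for the example): a pre/post coincidence sets a slowly decaying trace, and a later instructive signal converts whatever trace remains into a lasting weight change.

```python
import numpy as np

# Eligibility-trace learning across a delay (generic rate-based toy).
tau_e, lr, steps = 20.0, 0.5, 60
pre = np.zeros(steps)
post = np.zeros(steps)
instruct = np.zeros(steps)
pre[10] = post[10] = 1.0   # coincident pre/post activity at t = 10
instruct[30] = 1.0         # instructive signal arrives 20 steps later

e = w = 0.0
w_history = []
for t in range(steps):
    e += -e / tau_e + pre[t] * post[t]  # coincidence sets a slowly decaying trace
    w += lr * instruct[t] * e           # feedback converts the trace into a weight change
    w_history.append(w)
```

The weight stays at zero until the instructive signal arrives, then jumps by an amount set by how much of the trace survived the delay; this is how a trace can bridge the temporal gap between activity and delayed feedback.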
Affiliation(s)
- Ian Cone
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
- Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States

10
Characteristics of sequential activity in networks with temporally asymmetric Hebbian learning. Proc Natl Acad Sci U S A 2020; 117:29948-29958. [PMID: 33177232] [PMCID: PMC7703604] [DOI: 10.1073/pnas.1918674117]
Abstract
Sequential activity is a prominent feature of many neural systems, in multiple behavioral contexts. Here, we investigate how Hebbian rules lead to storage and recall of random sequences of inputs in both rate and spiking recurrent networks. In the case of the simplest (bilinear) rule, we characterize extensively the regions in parameter space that allow sequence retrieval and compute analytically the storage capacity of the network. We show that nonlinearities in the learning rule can lead to sparse sequences and find that sequences maintain robust decoding but display highly labile dynamics under continuous changes in the connectivity matrix, similar to recent observations in hippocampus and parietal cortex. Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
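The simplest (bilinear) temporally asymmetric Hebbian rule described in this abstract can be sketched in a few lines of NumPy (an illustrative rate network; the network size, gain, and pattern count are arbitrary choices): each stored pattern is wired to drive its successor, so cueing the first pattern makes the network's overlap sweep through the sequence.

```python
import numpy as np

# Bilinear temporally asymmetric Hebbian storage of a random sequence.
rng = np.random.default_rng(1)
N, P = 400, 6                                  # neurons, patterns in the sequence
xi = rng.choice([-1.0, 1.0], size=(P, N))      # random binary patterns

# Asymmetric rule: pattern mu projects onto pattern mu + 1.
W = sum(np.outer(xi[mu + 1], xi[mu]) for mu in range(P - 1)) / N

r = xi[0].copy()                               # cue the network with the first pattern
order = []
for _ in range(P - 1):
    r = np.tanh(2.0 * (W @ r))                 # one retrieval step of the rate dynamics
    overlaps = xi @ r / N                      # correlation with each stored pattern
    order.append(int(np.argmax(overlaps)))     # which pattern dominates now
```

Starting from pattern 0, the dominant overlap moves through patterns 1, 2, ..., P-1 in order, i.e. the cue replays the stored sequence; crosstalk between random patterns stays small as long as P is well below the network's storage capacity.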
11
Köksal Ersöz E, Aguilar C, Chossat P, Krupa M, Lavigne F. Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability. PLoS One 2020; 15:e0231165. [PMID: 32298290] [PMCID: PMC7161983] [DOI: 10.1371/journal.pone.0231165]
Abstract
In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular we show that synaptic depression and noise drive the transitions from one memory item to the next and neuronal gain controls the switching between regular and irregular (random) activation.
Affiliation(s)
- Carlos Aguilar
- Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France
- Pascal Chossat
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
- Martin Krupa
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France

12
Pereira U, Brunel N. Unsupervised Learning of Persistent and Sequential Activity. Front Comput Neurosci 2020; 13:97. [PMID: 32009924] [PMCID: PMC6978734] [DOI: 10.3389/fncom.2019.00097]
Abstract
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be “learned” by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that stable learning of PA and SA requires an additional stabilization mechanism. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking those connections during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while fast-changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
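The mechanism described above — a temporally asymmetric Hebbian rule stabilized by multiplicative homeostatic normalization — can be illustrated in a toy rate model (a hypothetical sketch, not the authors' code; the population count, learning rate, and recall threshold are assumptions):

```python
import numpy as np

N = 5                  # number of excitatory populations (assumed)
W = np.zeros((N, N))   # plastic recurrent weights
eta = 0.5              # Hebbian learning rate (assumed)
W_target = 1.0         # homeostatic set point for total input weight per population

# Repeatedly present the stimulus sequence 0 -> 1 -> ... -> N-1.
# Temporally asymmetric Hebbian rule: potentiate W[post, pre] when the
# presynaptic population was active one step before the postsynaptic one.
for _ in range(20):
    for t in range(1, N):
        pre = np.zeros(N); pre[t - 1] = 1.0    # activity at step t-1
        post = np.zeros(N); post[t] = 1.0      # activity at step t
        W += eta * np.outer(post, pre)
        # Multiplicative homeostatic normalization: rescale each row so the
        # summed input weight onto every population returns to W_target.
        row = np.clip(W.sum(axis=1, keepdims=True), 1e-12, None)
        W *= W_target / row

# Recall: seed the first population and let simple threshold dynamics run;
# the learned asymmetric weights replay the sequence.
r = np.zeros(N); r[0] = 1.0
order = [0]
for _ in range(N - 1):
    r = (W @ r > 0.5).astype(float)
    order.append(int(np.argmax(r)))
print(order)   # -> [0, 1, 2, 3, 4]
```

The homeostatic step here is only the crude rescaling form; the paper's generalized rule additionally masks excitatory connections during stimulation and unmasks them during retrieval.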
Collapse
Affiliation(s)
- Ulises Pereira
- Department of Statistics, The University of Chicago, Chicago, IL, United States
| | - Nicolas Brunel
- Department of Statistics, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, The University of Chicago, Chicago, IL, United States
- Department of Neurobiology, Duke University, Durham, NC, United States
- Department of Physics, Duke University, Durham, NC, United States
| |
Collapse
|
13
|
Martinez RH, Lansner A, Herman P. Probabilistic associative learning suffices for learning the temporal structure of multiple sequences. PLoS One 2019; 14:e0220161. [PMID: 31369571 PMCID: PMC6675053 DOI: 10.1371/journal.pone.0220161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Accepted: 07/08/2019] [Indexed: 11/19/2022] Open
Abstract
From memorizing a musical tune to navigating a well-known route, many of our behaviors have a strong temporal component. While the mechanisms behind the sequential nature of the underlying brain activity are likely multifarious and multi-scale, in this work we attempt to characterize to what degree some of these properties can be explained as a consequence of simple associative learning. To this end, we employ a parsimonious firing-rate attractor network equipped with the Hebbian-like Bayesian Confidence Propagation Neural Network (BCPNN) learning rule, which relies on synaptic traces with asymmetric temporal characteristics. The proposed network model is able to encode and reproduce temporal aspects of the input, and offers internal control of the recall dynamics by gain modulation. We provide an analytical characterisation of the relationship between the structure of the weight matrix, the dynamical network parameters and the temporal aspects of sequence recall. We also present a computational study of the performance of the system under the effects of noise for an extensive region of the parameter space. Finally, we show how the inclusion of modularity in our network structure facilitates the learning and recall of multiple overlapping sequences even in a noisy regime.
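The key ingredient — synaptic traces with asymmetric time constants feeding a log-odds (BCPNN-style) weight — can be sketched as follows (a minimal stand-in, not the authors' implementation; the time constants and the one-hot toy sequence are assumptions):

```python
import numpy as np

N = 4                          # one unit per sequence element
T = 40                         # total time steps
tau_pre, tau_post = 5.0, 1.0   # asymmetric trace time constants (assumed)

# One-hot stimulus sequence 0 -> 1 -> 2 -> 3, each element shown for 10 steps.
s = np.zeros((T, N))
for k in range(N):
    s[k * 10:(k + 1) * 10, k] = 1.0

# Low-pass "z" traces: the slower presynaptic trace keeps unit k visible
# after its offset, so it overlaps with unit k+1's onset.
z_pre = np.zeros(N)
z_post = np.zeros(N)
p_pre = np.zeros(N)
p_post = np.zeros(N)
p_joint = np.zeros((N, N))
for t in range(T):
    z_pre += (s[t] - z_pre) / tau_pre
    z_post += (s[t] - z_post) / tau_post
    p_pre += z_pre / T
    p_post += z_post / T
    p_joint += np.outer(z_post, z_pre) / T

# BCPNN-style weight: log odds of joint activation vs. independence.
eps = 1e-6
W = np.log((p_joint + eps) / (np.outer(p_post, p_pre) + eps))
# The trace asymmetry makes forward weights (k -> k+1) exceed backward ones,
# which is what lets recall unroll the sequence in the right order.
print(W[1, 0] > W[0, 1])   # -> True
```

Gain modulation of recall speed, noise robustness, and modularity are beyond this sketch; it only shows how asymmetric traces encode temporal order in the weights.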
Collapse
Affiliation(s)
- Ramon H. Martinez
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Anders Lansner
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
- Mathematics Department, Stockholm University, Stockholm, Sweden
| | - Pawel Herman
- Computational Brain Science Lab, KTH Royal Institute of Technology, Stockholm, Sweden
| |
Collapse
|
14
|
Pang R, Fairhall AL. Fast and flexible sequence induction in spiking neural networks via rapid excitability changes. eLife 2019; 8:44324. [PMID: 31081753 PMCID: PMC6538377 DOI: 10.7554/elife.44324] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 05/11/2019] [Indexed: 12/14/2022] Open
Abstract
Cognitive flexibility likely depends on modulation of the dynamics underlying how biological neural networks process information. While dynamics can be reshaped by gradually modifying connectivity, less is known about mechanisms operating on faster timescales. A compelling entry point to this problem is the observation that exploratory behaviors can rapidly cause selective hippocampal sequences to 'replay' during rest. Using a spiking network model, we asked whether simplified replay could arise from three biological components: fixed recurrent connectivity; stochastic 'gating' inputs; and rapid gating input scaling via long-term potentiation of intrinsic excitability (LTP-IE). Indeed, these enabled both forward and reverse replay of recent sensorimotor-evoked sequences, despite unchanged recurrent weights. LTP-IE 'tags' specific neurons with increased spiking probability under gating input, and ordering is reconstructed from recurrent connectivity. We further show how LTP-IE can implement temporary stimulus-response mappings. This elucidates a novel combination of mechanisms that might play a role in rapid cognitive flexibility.
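The three ingredients listed above — fixed recurrent weights, a nonselective stochastic gating input, and an LTP-IE excitability 'tag' on recently active neurons — can be caricatured in a toy binary network (all values are assumptions chosen for the sketch, not parameters from the paper):

```python
import numpy as np

N = 10
# Fixed recurrent chain: neuron i excites neuron i+1; these weights never change.
W = np.zeros((N, N))
for i in range(N - 1):
    W[i + 1, i] = 1.0

# LTP-IE "tags": a recent sensorimotor episode activated neurons 2..6,
# leaving them with elevated intrinsic excitability.
excitability = np.full(N, 0.2)
excitability[2:7] = 1.0

gate = 0.4        # nonselective gating drive during "rest" (assumed)
threshold = 1.0

# Replay: seed the first tagged neuron; spikes propagate along the fixed
# chain, but only tagged (excitable) neurons cross threshold, so the
# replayed order is reconstructed from connectivity among tagged cells.
spikes = np.zeros(N); spikes[2] = 1.0
replay = [2]
for _ in range(6):
    drive = excitability * (gate + W @ spikes)
    spikes = (drive >= threshold).astype(float)
    active = np.flatnonzero(spikes)
    if active.size == 0:
        break          # replay dies out at the edge of the tagged set
    replay.append(int(active[0]))
print(replay)   # -> [2, 3, 4, 5, 6]
```

This sketch only produces forward replay; the paper's spiking model also recovers reverse replay from the same fixed weights.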
Collapse
Affiliation(s)
- Rich Pang
- Neuroscience Graduate Program, University of Washington, Seattle, United States
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Computational Neuroscience Center, University of Washington, Seattle, United States
| | - Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, United States
- Computational Neuroscience Center, University of Washington, Seattle, United States
| |
Collapse
|
15
|
Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G. Generative Recurrent Networks for De Novo Drug Design. Mol Inform 2018; 37:1700111. [PMID: 29095571 PMCID: PMC5836943 DOI: 10.1002/minf.201700111] [Citation(s) in RCA: 224] [Impact Index Per Article: 37.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2017] [Accepted: 10/16/2017] [Indexed: 11/11/2022]
Abstract
Generative artificial intelligence models present a fresh approach to chemogenomics and de novo drug design, as they provide researchers with the ability to narrow down their search of the chemical space and focus on regions of interest. We present a method for molecular de novo design that utilizes generative recurrent neural networks (RNN) containing long short-term memory (LSTM) cells. This computational model captured the syntax of molecular representation in terms of SMILES strings with close to perfect accuracy. The learned pattern probabilities can be used for de novo SMILES generation. This molecular design concept eliminates the need for virtual compound library enumeration. By employing transfer learning, we fine-tuned the RNN's predictions for specific molecular targets. This approach enables virtual compound design without requiring secondary or external activity prediction, which could introduce error or unwanted bias. The results obtained advocate this generative RNN-LSTM system for high-impact use cases, such as low-data drug discovery, fragment-based molecular design, and hit-to-lead optimization for diverse drug targets.
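The character-by-character generation loop at the heart of such a model can be sketched without a deep-learning dependency by swapping the LSTM's learned conditional distribution for bigram counts (a deliberately crude stand-in; the toy SMILES set and the '^'/'$' start/end tokens are assumptions):

```python
import random
from collections import defaultdict

# Toy "training set" of SMILES strings (assumed for this sketch).
smiles = ["CCO", "CCN", "CCC", "CCCl", "c1ccccc1"]

# Next-character counts: a stand-in for the LSTM's learned P(next char | history).
counts = defaultdict(lambda: defaultdict(int))
for s in smiles:
    seq = "^" + s + "$"          # '^' marks start, '$' marks end of a molecule
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample(rng, max_len=20):
    """Generate one string character by character, as the RNN sampler does."""
    out, ch = [], "^"
    for _ in range(max_len):
        nxt = counts[ch]
        chars = list(nxt)
        ch = rng.choices(chars, weights=[nxt[c] for c in chars])[0]
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

rng = random.Random(0)
print([sample(rng) for _ in range(3)])
```

Transfer learning in this setting corresponds to continuing training (here, updating the counts; in the paper, the LSTM weights) on a small set of molecules active against the target of interest.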
Collapse
Affiliation(s)
- Anvita Gupta
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
- Stanford University, Department of Computer Science, 450 Sierra Mall, Stanford, CA 94305, USA
| | - Alex T. Müller
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
| | - Berend J. H. Huisman
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
| | - Jens A. Fuchs
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
| | - Petra Schneider
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
- inSili.com GmbH, 8049 Zurich, Switzerland
| | - Gisbert Schneider
- Swiss Federal Institute of Technology (ETH), Department of Chemistry and Applied Biosciences, Vladimir-Prelog-Weg 4, 8093 Zurich, Switzerland
| |
Collapse
|
16
|
Murray JM, Escola GS. Learning multiple variable-speed sequences in striatum via cortical tutoring. eLife 2017; 6:26084. [PMID: 28481200 PMCID: PMC5446244 DOI: 10.7554/elife.26084] [Citation(s) in RCA: 66] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2017] [Accepted: 05/07/2017] [Indexed: 01/16/2023] Open
Abstract
Sparse, sequential patterns of neural activity have been observed in numerous brain areas during timekeeping and motor sequence tasks. Inspired by such observations, we construct a model of the striatum, an all-inhibitory circuit where sequential activity patterns are prominent, addressing the following key challenges: (i) obtaining control over temporal rescaling of the sequence speed, with the ability to generalize to new speeds; (ii) facilitating flexible expression of distinct sequences via selective activation, concatenation, and recycling of specific subsequences; and (iii) enabling the biologically plausible learning of sequences, consistent with the decoupling of learning and execution suggested by lesion studies showing that cortical circuits are necessary for learning, but that subcortical circuits are sufficient to drive learned behaviors. The same mechanisms that we describe can also be applied to circuits with both excitatory and inhibitory populations, and hence may underlie general features of sequential neural activity pattern generation in the brain.
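Challenge (i), temporal rescaling, can be illustrated with a minimal hand-off chain in which a tonic drive sets how quickly each unit's adaptation forces activity on to the next unit (a conceptual stand-in, not the paper's striatal circuit model; all parameters are assumptions):

```python
def run_chain(tonic, n_units=4, theta=1.0, max_steps=200):
    """Winner-take-all chain with adaptation: the active unit's adaptation
    builds at a rate set by the tonic drive; once it crosses theta, activity
    hands off to the next unit. Larger drive -> faster handoff -> the same
    sequence, temporally compressed."""
    active, adapt = 0, 0.0
    durations = [0] * n_units
    for _ in range(max_steps):
        durations[active] += 1
        adapt += tonic               # adaptation accumulates with the drive
        if adapt >= theta:           # hand off to the next unit in the chain
            active, adapt = active + 1, 0.0
            if active == n_units:
                break
    return durations

# One chain, two speeds, set purely by the scalar drive:
print(run_chain(tonic=0.125))   # slow: [8, 8, 8, 8] steps per unit
print(run_chain(tonic=0.25))    # fast: [4, 4, 4, 4] steps per unit
```

In the paper, the analogous speed knob is an external input to the all-inhibitory circuit, which is what allows generalization to speeds never used during training.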
Collapse
Affiliation(s)
- James M Murray
- Center for Theoretical Neuroscience, Columbia University, New York, United States
| | - G Sean Escola
- Center for Theoretical Neuroscience, Columbia University, New York, United States
| |
Collapse
|
17
|
Kaposvari P, Kumar S, Vogels R. Statistical Learning Signals in Macaque Inferior Temporal Cortex. Cereb Cortex 2016; 28:250-266. [DOI: 10.1093/cercor/bhw374] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2016] [Indexed: 11/14/2022] Open
|