1. Liu YH, Baratin A, Cornford J, Mihalas S, Shea-Brown E, Lajoie G. How connectivity structure shapes rich and lazy learning in neural circuits. arXiv 2024; arXiv:2310.08513v2. [PMID: 37873007; PMCID: PMC10593070]
Abstract
In theoretical neuroscience, recent work leverages deep learning tools to explore how a network's attributes critically influence its learning dynamics. Notably, initial weight distributions with small (resp. large) variance may yield a rich (resp. lazy) regime, where significant (resp. minor) changes to network states and representations are observed over the course of learning. However, in biology, neural circuit connectivity can exhibit a low-rank structure and therefore differ markedly from the random initializations generally used in these studies. As such, here we investigate how the structure of the initial weights, in particular their effective rank, influences the network learning regime. Through both empirical and theoretical analyses, we find that high-rank initializations typically yield smaller network changes indicative of lazier learning, a finding we also confirm with experimentally driven initial connectivity in recurrent neural networks. Conversely, low-rank initializations bias networks towards richer learning. Importantly, however, as an exception to this rule, we find that lazier learning can still occur with a low-rank initialization that aligns with task and data statistics. Our research highlights the pivotal role of initial weight structure in shaping learning regimes, with implications for metabolic costs of plasticity and risks of catastrophic forgetting.
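To connect the abstract's central manipulation to code, the sketch below shows one way to build full-rank versus low-rank random initializations with matched overall weight magnitude and to compute their effective rank via a participation-ratio definition. This is an illustrative NumPy sketch, not the authors' code; the network size, gain, rank, and the particular effective-rank formula are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200      # number of recurrent units (illustrative)
g = 1.0      # overall connectivity gain (illustrative)

def effective_rank(W):
    """Participation-ratio effective rank (one common definition):
    (sum of singular values)^2 / (sum of squared singular values)."""
    s = np.linalg.svd(W, compute_uv=False)
    return (s.sum() ** 2) / (s ** 2).sum()

# Full-rank random initialization (the usual i.i.d. Gaussian choice).
W_full = g * rng.standard_normal((n, n)) / np.sqrt(n)

# Rank-r initialization built from an outer product of thin factors, rescaled
# so that its overall weight magnitude matches the full-rank matrix.
r = 3
W_low = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
W_low *= np.linalg.norm(W_full) / np.linalg.norm(W_low)

print(f"full-rank init: effective rank ~ {effective_rank(W_full):.1f}")
print(f"rank-{r} init:   effective rank ~ {effective_rank(W_low):.1f}")
```

In the paper's setting, matrices like these would serve as initial recurrent weights, and the amount of subsequent change in weights and representations over training is what distinguishes the rich from the lazy regime.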
2. Winding M, Pedigo BD, Barnes CL, Patsolic HG, Park Y, Kazimiers T, Fushiki A, Andrade IV, Khandelwal A, Valdes-Aleman J, Li F, Randel N, Barsotti E, Correia A, Fetter RD, Hartenstein V, Priebe CE, Vogelstein JT, Cardona A, Zlatic M. The connectome of an insect brain. Science 2023; 379:eadd9330. [PMID: 36893230; PMCID: PMC7614541; DOI: 10.1126/science.add9330]
Abstract
Brains contain networks of interconnected neurons, so knowing the network architecture is essential for understanding brain function. We therefore mapped the synaptic-resolution connectome of an entire insect brain, that of the Drosophila larva, an animal with rich behavior including learning, value computation, and action selection; the reconstruction comprises 3016 neurons and 548,000 synapses. We characterized neuron types, hubs, feedforward and feedback pathways, as well as cross-hemisphere and brain-nerve cord interactions. We found pervasive multisensory and interhemispheric integration, highly recurrent architecture, abundant feedback from descending neurons, and multiple novel circuit motifs. The brain's most recurrent circuits comprised the input and output neurons of the learning center. Some structural features, including multilayer shortcuts and nested recurrent loops, resembled state-of-the-art deep learning architectures. The identified brain architecture provides a basis for future experimental and theoretical studies of neural circuits.
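To make the kind of graph-level summaries mentioned above (hubs, recurrence) concrete, the sketch below assumes the connectome is available as a synapse-count adjacency matrix and computes simple in/out-degree hubs and a reciprocity index. The random stand-in matrix, the 5% hub threshold, and the reciprocity measure are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

# Assume A is a synapse-count adjacency matrix: A[i, j] = number of synapses
# from neuron i onto neuron j. Here a sparse random stand-in with the same
# number of neurons is used purely for illustration.
rng = np.random.default_rng(0)
n = 3016
A = (rng.random((n, n)) < 0.002) * rng.integers(1, 20, size=(n, n))

in_degree = (A > 0).sum(axis=0)    # number of presynaptic partners per neuron
out_degree = (A > 0).sum(axis=1)   # number of postsynaptic partners per neuron

# Simple hub definition (illustrative): neurons in the top 5% for both
# incoming and outgoing connections.
thr_in = np.percentile(in_degree, 95)
thr_out = np.percentile(out_degree, 95)
hubs = np.flatnonzero((in_degree >= thr_in) & (out_degree >= thr_out))

# Fraction of directed connections whose reverse connection also exists,
# a crude index of how recurrent the wiring is.
connected = A > 0
reciprocal = connected & connected.T
recurrence_index = reciprocal.sum() / max(connected.sum(), 1)

print(f"{len(hubs)} in/out hub neurons; reciprocal-connection fraction = {recurrence_index:.3f}")
```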
Affiliation(s)
- Michael Winding: University of Cambridge, Department of Zoology, Cambridge, UK; MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Benjamin D. Pedigo: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, USA
- Christopher L. Barnes: MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
- Heather G. Patsolic: Johns Hopkins University, Department of Applied Mathematics and Statistics, Baltimore, MD, USA; Accenture, Arlington, VA, USA
- Youngser Park: Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
- Tom Kazimiers: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; kazmos GmbH, Dresden, Germany
- Akira Fushiki: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ingrid V. Andrade: University of California Los Angeles, Department of Molecular, Cell and Developmental Biology, Los Angeles, CA, USA
- Avinash Khandelwal: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Javier Valdes-Aleman: University of Cambridge, Department of Zoology, Cambridge, UK; Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Feng Li: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Nadine Randel: University of Cambridge, Department of Zoology, Cambridge, UK; MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK
- Elizabeth Barsotti: MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
- Ana Correia: MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
- Richard D. Fetter: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Stanford University, Stanford, CA, USA
- Volker Hartenstein: University of California Los Angeles, Department of Molecular, Cell and Developmental Biology, Los Angeles, CA, USA
- Carey E. Priebe: Johns Hopkins University, Department of Applied Mathematics and Statistics, Baltimore, MD, USA; Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
- Joshua T. Vogelstein: Johns Hopkins University, Department of Biomedical Engineering, Baltimore, MD, USA; Johns Hopkins University, Center for Imaging Science, Baltimore, MD, USA
- Albert Cardona: MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; University of Cambridge, Department of Physiology, Development, and Neuroscience, Cambridge, UK
- Marta Zlatic: University of Cambridge, Department of Zoology, Cambridge, UK; MRC Laboratory of Molecular Biology, Neurobiology Division, Cambridge, UK; Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
3. Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. [PMID: 35544518; PMCID: PMC9094569; DOI: 10.1371/journal.pone.0265808]
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments, where a model must make predictions about future states and adjust its behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis of the circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit motifs (feedforward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically inspired learning rule that significantly enhanced performance while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments. We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance on sensory-motor tasks. In biological networks, these learning mechanisms may complement one another, accelerating the learning capabilities of animals and highlighting the resilience and redundancy of biological systems.
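The reinforcement learning rules described above combine, in broad strokes, a Hebbian eligibility trace with a delayed scalar reward or punishment signal. The sketch below is a generic rate-based, reward-modulated update of that kind; it is not the authors' spiking model or their specific rule, and the population sizes, learning rate, trace decay, and toy task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 2            # e.g. visual features -> move-up / move-down (illustrative)
W = 0.1 * rng.random((n_out, n_in))
elig = np.zeros_like(W)        # Hebbian eligibility traces
lr, tau_e = 0.05, 0.8          # learning rate and trace decay (illustrative values)

def act(x):
    """Choose an action and accumulate a pre*post eligibility trace."""
    global elig
    drive = W @ x + 0.1 * rng.standard_normal(n_out)   # noisy action selection
    action = int(np.argmax(drive))
    post = np.zeros(n_out)
    post[action] = 1.0
    elig = tau_e * elig + np.outer(post, x)
    return action

def deliver_reward(r):
    """Reward (r > 0) or punishment (r < 0) gates the stored eligibility trace."""
    global W
    W += lr * r * elig
    np.clip(W, 0.0, 1.0, out=W)                         # keep weights bounded

# Toy loop: reward when the action matches whichever half of the input is stronger.
hits = 0
for t in range(500):
    x = rng.random(n_in)
    target = int(x[n_in // 2:].sum() > x[:n_in // 2].sum())
    a = act(x)
    deliver_reward(1.0 if a == target else -1.0)
    hits += (a == target)
print(f"fraction correct over run: {hits / 500:.2f}")
```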
Affiliation(s)
- Haroon Anwar: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Simon Caby: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Salvador Dura-Bernal: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America; Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- David D’Onofrio: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Daniel Hasegan: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Matt Deible: University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Sara Grunblatt: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- George L. Chadderdon: Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Cliff C. Kerr: Dept. Physics, University of Sydney, Sydney, Australia; Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
- Peter Lakatos: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America; Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
- William W. Lytton: Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America; Dept. Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
- Hananel Hazan: Dept. of Biology, Tufts University, Medford, Massachusetts, United States of America
- Samuel A. Neymotin: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America; Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
4. Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:e62912. [PMID: 34519270; PMCID: PMC8504970; DOI: 10.7554/eLife.62912]
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
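The headline bound (compensatory plasticity need be no larger than the fluctuations it counteracts) can be illustrated with a toy linear-memory simulation. In the sketch below, weights encoding a fixed input-output mapping drift under random per-step fluctuations, while a gradient-based compensatory step is capped at a chosen multiple of each step's fluctuation magnitude. All sizes, rates, and cap values are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_pat = 100, 20                     # synapses and stored patterns (illustrative sizes)
X = rng.standard_normal((n_pat, n))    # inputs whose mapping constitutes the "memory"
w_star = rng.standard_normal(n)        # weights that implement the memory
y = X @ w_star                         # desired outputs

sigma, lr, steps = 0.02, 0.05, 2000    # fluctuation size, learning rate (illustrative)

def run(comp_ratio):
    """Random drift plus gradient-based compensation whose per-step magnitude
    is capped at comp_ratio times the magnitude of that step's fluctuation."""
    w = w_star.copy()
    for _ in range(steps):
        fluct = sigma * rng.standard_normal(n)       # ongoing, learning-independent change
        w += fluct
        grad = X.T @ (X @ w - y) / n_pat             # error gradient on the stored task
        comp = -lr * grad
        cap = comp_ratio * np.linalg.norm(fluct)
        norm = np.linalg.norm(comp)
        if norm > cap:
            comp *= cap / max(norm, 1e-12)           # enforce the compensation budget
        w += comp
    return np.mean((X @ w - y) ** 2)

for ratio in (0.0, 0.5, 1.0, 2.0):
    print(f"compensation/fluctuation ratio {ratio:.1f}: final memory error {run(ratio):.3f}")
```

In this toy setting, a compensation budget no larger than the fluctuations themselves is already enough to keep the stored mapping intact, in the spirit of the bound derived in the paper.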
Affiliation(s)
- Dhruva V Raman: Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary: Department of Engineering, University of Cambridge, Cambridge, United Kingdom