1. Mastrovito D, Liu YH, Kusmierz L, Shea-Brown E, Koch C, Mihalas S. Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks. bioRxiv 2024:2024.05.15.594236. PMID: 38798582; PMCID: PMC11118502; DOI: 10.1101/2024.05.15.594236.
Abstract
Recurrent neural networks exhibit chaotic dynamics when the variance of their connection strengths exceeds a critical value. Recent work indicates that connection variance also modulates learning strategies: networks learn "rich" representations when initialized with low coupling and "lazier" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in the perturbational complexity index (PCIst), a measure that clinically discriminates levels of consciousness. Networks with high values of PCIst exhibit stable dynamics and rich learning, suggesting that a consciousness prior may promote rich learning. The results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
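The critical coupling in this abstract can be illustrated numerically. A minimal sketch (not from the paper; it uses a dense Gaussian weight matrix rather than the paper's Watts-Strogatz networks): for i.i.d. weights with variance g²/N, the circular law places the eigenvalues in a disk of radius close to g, so g = 1 separates decaying from chaotic linearized dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # network size

def spectral_radius(g):
    # Weights drawn i.i.d. from N(0, g^2/N); by the circular law the
    # eigenvalues fill a disk of radius ~g, so g = 1 marks the
    # transition from ordered to chaotic linearized dynamics.
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    return np.max(np.abs(np.linalg.eigvals(W)))

rho_stable = spectral_radius(0.5)   # subcritical coupling
rho_chaotic = spectral_radius(1.5)  # supercritical coupling
```

With g = 0.5 the radius sits well below 1 (stable regime) and with g = 1.5 well above it (chaotic regime), which is the dividing line the abstract ties to rich versus lazy learning.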
2. Sui H, Dou J, Shi B, Cheng X. The reciprocity of skeletal muscle and bone: an evolving view from mechanical coupling, secretory crosstalk to stem cell exchange. Front Physiol 2024; 15:1349253. PMID: 38505709; PMCID: PMC10949226; DOI: 10.3389/fphys.2024.1349253.
Abstract
Introduction: Muscle and bone constitute the two main parts of the musculoskeletal system and together generate an intricately coordinated motion system. The crosstalk between muscle and bone has been under investigation for years, yielding revolutionary perspectives. Method and results: In this review, the evolving concept of muscle-bone interaction is traced from mechanical coupling through secretory crosstalk to stem cell exchange. The theory of mechanical coupling stems from the observation that the development and maintenance of bone mass depend largely on muscle-derived mechanical loads, an idea later formalized in Wolff's law, the Utah paradigm, and the Mechanostat hypothesis. Bone and muscle were subsequently recognized as endocrine organs that secrete various cytokines to modulate each other's tissue homeostasis and remodeling. The latest view presents muscle-bone interaction in a more direct way: resident mesenchymal stromal cells in skeletal muscle, i.e., fibro-adipogenic progenitors (FAPs), can migrate to a bone injury site and contribute to bone regeneration. Emerging evidence even reveals ectopic sources of FAPs in tissues outside the musculoskeletal system, highlighting their dynamic nature. Conclusion: FAPs have been established as a critical cell type connecting muscle and bone, providing a new modality for studying inter-tissue communication. A comprehensive, integrated perspective on muscle and bone will facilitate in-depth research into the musculoskeletal system and promote novel therapeutic avenues for treating musculoskeletal disorders.
Affiliation(s)
- Xu Cheng
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China School of Stomatology, Sichuan University, Chengdu, China
3. Jones RD. Information Transmission in G Protein-Coupled Receptors. Int J Mol Sci 2024; 25:1621. PMID: 38338905; PMCID: PMC10855935; DOI: 10.3390/ijms25031621.
Abstract
G protein-coupled receptors (GPCRs) are the largest class of receptors in the human genome and constitute about 30% of all drug targets. In this article, intended for a non-mathematical audience, experimental observations and new theoretical results are compared in the context of information transmission across the cell membrane. The amount of information currently used, or projected to be used, in clinical settings is a small fraction of the information transmission capacity of the GPCR, indicating that the number of undiscovered drug targets within GPCRs is much larger than the number currently known. Theoretical studies with some experimental validation indicate that localized heat deposition and dissipation are key to identifying sites and mechanisms of drug action.
Affiliation(s)
- Roger D Jones
- European Centre for Living Technology, University of Venice, 30123 Venice, Italy
4. Benjamin AS, Kording KP. A role for cortical interneurons as adversarial discriminators. PLoS Comput Biol 2023; 19:e1011484. PMID: 37768890; PMCID: PMC10538760; DOI: 10.1371/journal.pcbi.1011484.
Abstract
The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve both this generative model and the inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, for such theories to provide insight into the cellular mechanisms of sensory learning, they must first be linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity during dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
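The sign-switching plasticity rule described above can be caricatured in a few lines (a hypothetical toy, not the authors' model): a single logistic "discriminator" unit whose update has the same pre-times-post form in both states, Hebbian on waking input and anti-Hebbian on dreamed input, learns to score the two apart.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, eta = 8, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_update(w, x, awake):
    d = sigmoid(w @ x)  # "is this waking activity?" score
    if awake:
        return w + eta * (1.0 - d) * x  # Hebbian: potentiate on evoked input
    return w - eta * d * x              # anti-Hebbian: depress on dreamed input

# Hypothetical stand-ins for evoked (waking) and generated (dreaming) activity.
wake_pattern = rng.normal(1.0, 0.1, dim)
dream_pattern = rng.normal(-1.0, 0.1, dim)

w = np.zeros(dim)
for _ in range(200):
    w = discriminator_update(w, wake_pattern, awake=True)
    w = discriminator_update(w, dream_pattern, awake=False)

score_wake = sigmoid(w @ wake_pattern)    # driven toward 1
score_dream = sigmoid(w @ dream_pattern)  # driven toward 0
```

The update is exactly the logistic discriminator gradient of an adversarial objective, so the sign flip between states is what makes the unit a discriminator rather than an ordinary Hebbian cell.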
Affiliation(s)
- Ari S. Benjamin
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Konrad P. Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
5. Richards BA, Kording KP. The study of plasticity has always been about gradients. J Physiol 2023; 601:3141-3149. PMID: 37078235; DOI: 10.1113/jp282747.
Abstract
The experimental study of learning and plasticity has always been driven by an implicit question: how can physiological changes be adaptive and improve performance? For example, in Hebbian plasticity only synapses from presynaptic neurons that were active are changed, avoiding useless changes. Similarly, in dopamine-gated learning, synaptic changes depend on reward or its absence and do not occur when everything is predictable. Within machine learning we can make the question of which changes are adaptive concrete: performance improves when changes correlate with the gradient of an objective function quantifying performance. This result holds for any system that improves through small changes. As such, physiology has always implicitly been seeking mechanisms that allow the brain to approximate gradients. From this perspective, we review the existing literature on plasticity-related mechanisms and show how these mechanisms relate to gradient estimation. We argue that gradients are a unifying idea to explain the many facets of neuronal plasticity.
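The central claim here, that a change is adaptive when it correlates with the gradient, is easy to make concrete with a toy quadratic objective (an illustrative sketch, not from the review): a small step whose direction correlates with the negative gradient reliably lowers the loss, while an equally sized uncorrelated step helps or hurts essentially at random.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(w):
    return 0.5 * np.dot(w, w)  # toy objective; its gradient is simply w

w = rng.normal(size=50)
grad = w
eps = 0.01  # small-change regime, as the "small changes" argument assumes

# A change whose direction correlates with the negative gradient...
aligned_step = -eps * grad / np.linalg.norm(grad)
# ...versus an equally sized change uncorrelated with the gradient.
random_dir = rng.normal(size=50)
random_step = eps * random_dir / np.linalg.norm(random_dir)

drop_aligned = loss(w) - loss(w + aligned_step)  # reliably positive
drop_random = loss(w) - loss(w + random_step)    # near zero, either sign
```

To first order, the drop in loss is the inner product of the step with the negative gradient, which is exactly the correlation argument the abstract makes.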
Affiliation(s)
- Blake Aaron Richards
- Mila, Montreal, Quebec, Canada
- School of Computer Science, McGill University, Montreal, Quebec, Canada
- Department of Neurology & Neurosurgery, McGill University, Montreal, Quebec, Canada
- Montreal Neurological Institute, Montreal, Quebec, Canada
- Learning in Machines and Brains Program, CIFAR, Toronto, Ontario, Canada
- Konrad Paul Kording
- Learning in Machines and Brains Program, CIFAR, Toronto, Ontario, Canada
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, USA
6. Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. PMID: 37449083; PMCID: PMC10336230; DOI: 10.3389/fncom.2023.1092185.
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address the problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build biologically plausible neural networks, either by following neuroscientifically similar strategies of neural network optimization or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches in building a biologically plausible neural network and to offer a map for navigating the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
7. Schmidgall S, Hays J. Meta-SpikePropamine: learning to learn with synaptic plasticity in spiking neural networks. Front Neurosci 2023; 17:1183321. PMID: 37250397; PMCID: PMC10213417; DOI: 10.3389/fnins.2023.1183321.
Abstract
We propose that to harness our understanding of neuroscience for machine learning, we must first have powerful tools for training brain-like models of learning. Although substantial progress has been made toward understanding the dynamics of learning in the brain, neuroscience-derived models of learning have yet to demonstrate the same performance capabilities as deep learning methods such as gradient descent. Inspired by the successes of machine learning using gradient descent, we introduce a bi-level optimization framework that seeks both to solve online learning tasks and to improve the ability to learn online using models of plasticity from neuroscience. We demonstrate that models of three-factor learning with synaptic plasticity taken from the neuroscience literature can be trained in spiking neural networks (SNNs) with gradient descent via a learning-to-learn framework to address challenging online learning problems. This framework opens a new path toward developing neuroscience-inspired online learning algorithms.
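The three-factor rules this framework trains can be sketched schematically (a caricature, not the paper's SpikePropamine model): pre/post coincidences feed a decaying eligibility trace, and a third, modulatory factor gates when that trace is converted into an actual weight change.

```python
import numpy as np

rng = np.random.default_rng(3)
eta, tau = 0.5, 0.9  # learning rate, eligibility-trace decay
T = 100

pre = rng.random(T) < 0.3     # presynaptic spike train (Bernoulli toy)
post = rng.random(T) < 0.3    # postsynaptic spike train
reward = rng.random(T) < 0.1  # sparse third factor (e.g., neuromodulator)

w, elig = 0.0, 0.0
for t in range(T):
    # Factors 1 x 2: coincidences accumulate in a decaying eligibility
    # trace rather than changing the weight directly.
    elig = tau * elig + float(pre[t] and post[t])
    # Factor 3: the modulatory signal gates the actual weight change.
    if reward[t]:
        w += eta * elig
```

In the paper's setting the parameters of rules like this (here eta and tau, both arbitrary choices) are themselves optimized by gradient descent in the outer loop of the bi-level framework.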
Affiliation(s)
- Samuel Schmidgall
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, United States
- Joe Hays
- U.S. Naval Research Laboratory, Spacecraft Engineering Department, Washington, DC, United States
8. Small, correlated changes in synaptic connectivity may facilitate rapid motor learning. Nat Commun 2022; 13:5163. PMID: 36056006; PMCID: PMC9440011; DOI: 10.1038/s41467-022-32646-w.
Abstract
Animals rapidly adapt their movements to external perturbations, a process paralleled by changes in neural activity in the motor cortex. Experimental studies suggest that these changes originate from altered inputs (Hinput) rather than from changes in local connectivity (Hlocal), as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent neural network to qualitatively test this interpretation. As expected, Hinput resulted in small activity changes and largely preserved covariance. Surprisingly, given the presumed dependence of stable covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in activity and covariance, still within the range of experimental recordings. This similarity arises because Hlocal requires only small, correlated connectivity changes for successful adaptation. Simulations of tasks that impose increasingly large behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
9. Smith SJ, von Zastrow M. A Molecular Landscape of Mouse Hippocampal Neuromodulation. Front Neural Circuits 2022; 16:836930. PMID: 35601530; PMCID: PMC9120848; DOI: 10.3389/fncir.2022.836930.
Abstract
Adaptive neuronal circuit function requires a continual adjustment of synaptic network parameters known as “neuromodulation.” This process is now understood to be based primarily on the binding of myriad secreted “modulatory” ligands such as dopamine, serotonin and the neuropeptides to G protein-coupled receptors (GPCRs) that, in turn, regulate the function of the ion channels that establish synaptic weights and membrane excitability. Many of the basic molecular mechanisms of neuromodulation are now known, but the organization of neuromodulation at a network level is still an enigma. New single-cell RNA sequencing data and transcriptomic neurotaxonomies now offer bright new lights to shine on this critical “dark matter” of neuroscience. Here we leverage these advances to explore the cell-type-specific expression of genes encoding GPCRs, modulatory ligands, ion channels and intervening signal transduction molecules in mouse hippocampus area CA1, with the goal of revealing broad outlines of this well-studied brain structure’s neuromodulatory network architecture.
Affiliation(s)
- Stephen J Smith
- Allen Institute for Brain Science, Seattle, WA, United States
- Correspondence: Stephen J Smith
- Mark von Zastrow
- Departments of Psychiatry and Pharmacology, University of California, San Francisco, San Francisco, CA, United States