1. Richardson B, Goedert T, Quraishe S, Deinhardt K, Mudher A. How do neurons age? A focused review on the aging of the microtubular cytoskeleton. Neural Regen Res 2024;19:1899-1907. PMID: 38227514. DOI: 10.4103/1673-5374.390974.
Abstract
Aging is the leading risk factor for Alzheimer's disease and other neurodegenerative diseases. We now understand that a breakdown in the neuronal cytoskeleton, mainly underpinned by protein modifications leading to the destabilization of microtubules, is central to the pathogenesis of Alzheimer's disease. This is accompanied by morphological defects across the somatodendritic compartment, axon, and synapse. However, knowledge of what occurs to the microtubule cytoskeleton and morphology of the neuron during physiological aging is comparatively poor. Several recent studies have suggested that there is an age-related increase in the phosphorylation of the key microtubule-stabilizing protein tau, a modification that is known to destabilize the cytoskeleton in Alzheimer's disease. This indicates that the cytoskeleton, and potentially other neuronal structures reliant on the cytoskeleton, become functionally compromised during normal physiological aging. The current literature shows age-related reductions in synaptic spine density and shifts in synaptic spine conformation, which might explain age-related synaptic functional deficits. However, knowledge of what occurs to the microtubular and actin cytoskeleton with increasing age is extremely limited. When considering the somatodendritic compartment, a regression in dendrites and loss of dendritic length and volume is reported, whilst a reduction in soma volume/size is often seen. However, research into cytoskeletal change is limited to a handful of studies demonstrating reductions in, and mislocalizations of, microtubule-associated proteins, with just one study directly exploring the integrity of the microtubules. In the axon, an increase in axonal diameter and the age-related appearance of swellings are reported but, as in the dendrites, just one study investigates the microtubules directly, with others reporting loss or mislocalization of microtubule-associated proteins.
Though these are the general trends reported, there are clear disparities between model organisms and brain regions that are worthy of further investigation. Additionally, longitudinal studies of neuronal/cytoskeletal aging should investigate whether these age-related changes contribute not just to vulnerability to disease but also to the decline in nervous system function and behavioral output that all organisms experience. This will highlight the utility, if any, of cytoskeletal fortification for the promotion of healthy neuronal aging and potential protection against age-related neurodegenerative disease. This review seeks to summarize what is currently known about the physiological aging of the neuron and microtubular cytoskeleton in the hope of uncovering mechanisms underpinning age-related risk of disease.
Affiliation(s)
- Brad Richardson
- School of Biological Sciences, University of Southampton, Southampton, UK
- Thomas Goedert
- Institute of Developmental and Regenerative Medicine, University of Oxford, Oxford, UK
- Shmma Quraishe
- School of Biological Sciences, University of Southampton, Southampton, UK
- Katrin Deinhardt
- School of Biological Sciences, University of Southampton, Southampton, UK
- Amritpal Mudher
- School of Biological Sciences, University of Southampton, Southampton, UK
2. Wise DL, Escobedo-Lozoya Y, Valakh V, Gao EY, Bhonsle A, Lei QL, Cheng X, Greene SB, Van Hooser SD, Nelson SB. Prolonged Activity Deprivation Causes Pre- and Postsynaptic Compensatory Plasticity at Neocortical Excitatory Synapses. eNeuro 2024;11:ENEURO.0366-23.2024. PMID: 38777611. PMCID: PMC11163391. DOI: 10.1523/eneuro.0366-23.2024.
Abstract
Homeostatic plasticity stabilizes firing rates of neurons, but the pressure to restore low activity rates can significantly alter synaptic and cellular properties. Most previous studies of homeostatic readjustment to complete activity silencing in rodent forebrain have examined changes after 2 d of deprivation, but it is known that longer periods of deprivation can produce adverse effects. To better understand the mechanisms underlying these effects and to address how presynaptic as well as postsynaptic compartments change during homeostatic plasticity, we subjected mouse cortical slice cultures to a more severe 5 d deprivation paradigm. We developed and validated a computational framework to measure the number and morphology of presynaptic and postsynaptic compartments from super-resolution light microscopy images of dense cortical tissue. Using these tools, combined with electrophysiological miniature excitatory postsynaptic current measurements and synaptic imaging at the electron microscopy level, we assessed the functional and morphological results of prolonged deprivation. Excitatory synapses were strengthened both presynaptically and postsynaptically. Surprisingly, we also observed a decrement in the density of excitatory synapses, both as measured from colocalized staining of pre- and postsynaptic proteins in tissue and from the number of dendritic spines. Overall, our results suggest that cortical networks deprived of activity progressively move toward a smaller population of stronger synapses.
Affiliation(s)
- Derek L Wise
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Vera Valakh
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Emma Y Gao
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Aishwarya Bhonsle
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Qian L Lei
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Xinyu Cheng
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Samuel B Greene
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
- Sacha B Nelson
- Department of Biology, Brandeis University, Waltham, Massachusetts 02454-9110
3. Rodriguez Gotor JJ, Mahfooz K, Perez-Otano I, Wesseling JF. Parallel processing of quickly and slowly mobilized reserve vesicles in hippocampal synapses. eLife 2024;12:RP88212. PMID: 38727712. PMCID: PMC11087054. DOI: 10.7554/elife.88212.
Abstract
Vesicles within presynaptic terminals are thought to be segregated into a variety of readily releasable and reserve pools. The nature of the pools and trafficking between them is not well understood, but pools that are slow to mobilize when synapses are active are often assumed to feed pools that are mobilized more quickly, in a series. However, electrophysiological studies of synaptic transmission have suggested instead a parallel organization where vesicles within slowly and quickly mobilized reserve pools would separately feed independent reluctant- and fast-releasing subdivisions of the readily releasable pool. Here, we use FM-dyes to confirm the existence of multiple reserve pools at hippocampal synapses and a parallel organization that prevents intermixing between the pools, even when stimulation is intense enough to drive exocytosis at the maximum rate. The experiments additionally demonstrate extensive heterogeneity among synapses in the relative sizes of the slowly and quickly mobilized reserve pools, which suggests equivalent heterogeneity in the numbers of reluctant and fast-releasing readily releasable vesicles that may be relevant for understanding information processing and storage.
Affiliation(s)
- Kashif Mahfooz
- Department of Pharmacology, University of Oxford, Oxford, United Kingdom
- Isabel Perez-Otano
- Instituto de Neurociencias de Alicante CSIC-UMH, San Juan de Alicante, Spain
- John F Wesseling
- Instituto de Neurociencias de Alicante CSIC-UMH, San Juan de Alicante, Spain
4. Ralowicz AJ, Hokeness S, Hoppa MB. Frequency of Spontaneous Neurotransmission at Individual Boutons Corresponds to the Size of the Readily Releasable Pool of Vesicles. J Neurosci 2024;44:e1253232024. PMID: 38383495. PMCID: PMC11063817. DOI: 10.1523/jneurosci.1253-23.2024.
Abstract
Synapses maintain two forms of neurotransmitter release to support communication in the brain. First, evoked neurotransmitter release is triggered by the invasion of an action potential (AP) across en passant boutons that form along axons. The probability of evoked release (Pr) varies substantially across boutons, even within a single axon. Such heterogeneity is the result of differences in the probability of a single synaptic vesicle (SV) fusing (Pv) and in the number of vesicles available for immediate release, known as the readily releasable pool (RRP). Spontaneous release (also known as a mini) is an important form of neurotransmission that occurs in the absence of APs. Because it cannot be triggered with electrical stimulation, much less is known about potential heterogeneity in the frequency of spontaneous release between boutons. We utilized a photostable and bright fluorescent indicator of glutamate release (iGluSnFR3) to quantify both spontaneous and evoked release at individual glutamatergic boutons. We found that the rate of spontaneous release is quite heterogeneous at the level of individual boutons. Interestingly, when measuring both evoked and spontaneous release at single synapses, we found that boutons with the highest rates of spontaneous release also displayed the largest evoked responses. Using a new optical method to measure the RRP at individual boutons, we found that this heterogeneity in spontaneous release was strongly correlated with the size of the RRP, but not related to Pv. We conclude that the RRP is a critical and dynamic aspect of synaptic strength that contributes to both evoked and spontaneous vesicle release.
Affiliation(s)
- Amelia J Ralowicz
- Department of Biology, Dartmouth College, Hanover, New Hampshire 03755
- Sasipha Hokeness
- Department of Biology, Dartmouth College, Hanover, New Hampshire 03755
- Michael B Hoppa
- Department of Biology, Dartmouth College, Hanover, New Hampshire 03755
5. Uytiepo M, Zhu Y, Bushong E, Polli F, Chou K, Zhao E, Kim C, Luu D, Chang L, Quach T, Haberl M, Patapoutian L, Beutter E, Zhang W, Dong B, McCue E, Ellisman M, Maximov A. Synaptic architecture of a memory engram in the mouse hippocampus. bioRxiv 2024:2024.04.23.590812. PMID: 38712256. PMCID: PMC11071366. DOI: 10.1101/2024.04.23.590812.
Abstract
Memory engrams are formed through experience-dependent remodeling of neural circuits, but their detailed architectures have remained unresolved. Using 3D electron microscopy, we performed nanoscale reconstructions of the hippocampal CA3-CA1 pathway following chemogenetic labeling of cellular ensembles with a remote history of correlated excitation during associative learning. Projection neurons involved in memory acquisition expanded their connectomes via multi-synaptic boutons without altering the numbers and spatial arrangements of individual axonal terminals and dendritic spines. This expansion was driven by presynaptic activity elicited by specific negative valence stimuli, regardless of the co-activation state of postsynaptic partners. The rewiring of initial ensembles representing an engram coincided with local, input-specific changes in the shapes and organelle composition of glutamatergic synapses, reflecting their weights and potential for further modifications. Our findings challenge the view that the connectivity among neuronal substrates of memory traces is governed by Hebbian mechanisms, and offer a structural basis for representational drifts.
6. Samavat M, Bartol TM, Harris KM, Sejnowski TJ. Synaptic Information Storage Capacity Measured With Information Theory. Neural Comput 2024;36:781-802. PMID: 38658027. DOI: 10.1162/neco_a_01659.
Abstract
Variation in the strength of synapses can be quantified by measuring the anatomical properties of synapses. Quantifying precision of synaptic plasticity is fundamental to understanding information storage and retrieval in neural circuits. Synapses from the same axon onto the same dendrite have a common history of coactivation, making them ideal candidates for determining the precision of synaptic plasticity based on the similarity of their physical dimensions. Here, the precision and amount of information stored in synapse dimensions were quantified with Shannon information theory, expanding prior analysis that used signal detection theory (Bartol et al., 2015). The two methods were compared using dendritic spine head volumes in the middle of the stratum radiatum of hippocampal area CA1 as well-defined measures of synaptic strength. Information theory delineated the number of distinguishable synaptic strengths based on nonoverlapping bins of dendritic spine head volumes. Shannon entropy was applied to measure synaptic information storage capacity (SISC) and resulted in a lower bound of 4.1 bits and upper bound of 4.59 bits of information based on 24 distinguishable sizes. We further compared the distribution of distinguishable sizes and a uniform distribution using Kullback-Leibler divergence and discovered that there was a nearly uniform distribution of spine head volumes across the sizes, suggesting optimal use of the distinguishable values. Thus, SISC provides a new analytical measure that can be generalized to probe synaptic strengths and capacity for plasticity in different brain regions of different species and among animals raised in different conditions or during learning. How brain diseases and disorders affect the precision of synaptic plasticity can also be probed.
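The binning-and-entropy calculation described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code: the log-normal sample is synthetic, and equal-width histogram bins stand in for the paper's cluster-derived distinguishable sizes):

```python
import numpy as np

def sisc_bits(spine_head_volumes, n_bins):
    """Shannon entropy (bits) of spine head volumes over distinguishable
    size bins, plus KL divergence from uniform use of the occupied bins."""
    counts, _ = np.histogram(spine_head_volumes, bins=n_bins)
    p = counts[counts > 0] / counts.sum()   # frequencies of occupied bins
    entropy = -np.sum(p * np.log2(p))       # storage capacity in bits
    kl_uniform = np.log2(p.size) - entropy  # 0 => bins used equally often
    return entropy, kl_uniform

# Synthetic log-normal sample standing in for measured spine head volumes
rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=-1.0, sigma=0.5, size=1000)
h, kl = sisc_bits(volumes, n_bins=24)  # 24 sizes => at most log2(24) bits
```

With 24 distinguishable sizes the capacity is capped at log2(24) ≈ 4.59 bits, the paper's upper bound; non-uniform occupancy of the bins lowers the measured entropy toward the 4.1-bit lower bound.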
Affiliation(s)
- Mohammad Samavat
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego
- Computational Neurobiology Laboratory, Salk Institute for Biological Sciences, La Jolla, CA 92037, U.S.A.
- Thomas M Bartol
- Computational Neurobiology Laboratory, Salk Institute for Biological Sciences, La Jolla, CA 92037, U.S.A.
- Kristen M Harris
- Center for Learning and Memory and Department of Neuroscience, University of Texas at Austin, Austin, TX 78712, U.S.A.
- Terrence J Sejnowski
- Computational Neurobiology Laboratory, Salk Institute for Biological Sciences, La Jolla, CA 92037, U.S.A.
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093, U.S.A.
7. Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang (Camb) 2024;5:225-247. PMID: 38645618. PMCID: PMC11025648. DOI: 10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
8. Elliott T. Stability against fluctuations: a two-dimensional study of scaling, bifurcations and spontaneous symmetry breaking in stochastic models of synaptic plasticity. Biol Cybern 2024;118:39-81. PMID: 38583095. DOI: 10.1007/s00422-024-00985-0.
Abstract
Stochastic models of synaptic plasticity must confront the corrosive influence of fluctuations in synaptic strength on patterns of synaptic connectivity. To solve this problem, we have proposed that synapses act as filters, integrating plasticity induction signals and expressing changes in synaptic strength only upon reaching filter threshold. Our earlier analytical study calculated the lifetimes of quasi-stable patterns of synaptic connectivity with synaptic filtering. We showed that the plasticity step size in a stochastic model of spike-timing-dependent plasticity (STDP) acts as a temperature-like parameter, exhibiting a critical value below which neuronal structure formation occurs. The filter threshold scales this temperature-like parameter downwards, cooling the dynamics and enhancing stability. A key step in this calculation was a resetting approximation, essentially reducing the dynamics to one-dimensional processes. Here, we revisit our earlier study to examine this resetting approximation, with the aim of understanding in detail why it works so well by comparing it, and a simpler approximation, to the system's full dynamics consisting of various embedded two-dimensional processes without resetting. Comparing the full system to the simpler approximation, to our original resetting approximation, and to a one-afferent system, we show that their equilibrium distributions of synaptic strengths and critical plasticity step sizes are all qualitatively similar, and increasingly quantitatively similar as the filter threshold increases. This increasing similarity is due to the decorrelation in changes in synaptic strength between different afferents caused by our STDP model, and the amplification of this decorrelation with larger synaptic filters.
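The filter idea above — integrate plasticity induction signals and express a strength change only on reaching threshold — can be illustrated with a toy simulation (a sketch with assumed parameters, not the paper's STDP model, which embeds the filter in full spike-timing dynamics):

```python
import random

def filtered_synapse(n_steps, theta, step_size, p_pot=0.5, seed=0):
    """One synapse that integrates +/-1 induction signals and expresses a
    strength change of +/-step_size only when the integrated count reaches
    the filter threshold +/-theta, then resets the filter."""
    rng = random.Random(seed)
    count, strength = 0, 0.0
    trace = []
    for _ in range(n_steps):
        count += 1 if rng.random() < p_pot else -1
        if abs(count) >= theta:
            strength += step_size if count > 0 else -step_size
            count = 0
        trace.append(strength)
    return trace

# A larger filter threshold expresses far fewer changes for the same
# induction stream, damping fluctuations in synaptic strength.
unfiltered = filtered_synapse(10_000, theta=1, step_size=0.1)
filtered = filtered_synapse(10_000, theta=8, step_size=0.1)
```

For balanced induction (`p_pot=0.5`) the filter fires roughly every theta-squared steps, so raising `theta` from 1 to 8 cuts expressed changes by about 64-fold — the "cooling" of the temperature-like step-size parameter that the abstract describes.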
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK.
9. Cotteret M, Greatorex H, Ziegler M, Chicca E. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput 2024;36:549-595. PMID: 38457766. DOI: 10.1162/neco_a_01638.
Abstract
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors. We show that the model is robust to imprecise and noisy weights, and so a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs could exist as a distributed computational primitive in biological neural networks.
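The attractor substrate of this construction — dense bipolar states stored as fixed points via Hebbian outer-product weights — can be sketched as follows (illustrative only; the paper's FSM construction rules and input-gated state transitions are not shown, and all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_states = 200, 5
states = rng.choice([-1, 1], size=(n_states, n_neurons))  # dense bipolar vectors

# Hebbian outer-product weights with no self-connections
W = states.T @ states / n_neurons
np.fill_diagonal(W, 0.0)

def settle(x, steps=10):
    """Synchronous attractor dynamics: repeatedly apply sign(W @ x)."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Flip 10% of one stored state; at this low loading (5 states, 200 neurons)
# the dynamics fall back into the stored attractor.
cue = states[0].copy()
cue[rng.choice(n_neurons, size=n_neurons // 10, replace=False)] *= -1
recovered = settle(cue)
```

The paper's contribution is to layer FSM machinery on top of such dynamics, with capacity scaling linearly in network size for dense bipolar state vectors like these.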
Affiliation(s)
- Madison Cotteret
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Hugh Greatorex
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Martin Ziegler
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Elisabetta Chicca
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
10. Kuan AT, Bondanelli G, Driscoll LN, Han J, Kim M, Hildebrand DGC, Graham BJ, Wilson DE, Thomas LA, Panzeri S, Harvey CD, Lee WCA. Synaptic wiring motifs in posterior parietal cortex support decision-making. Nature 2024;627:367-373. PMID: 38383788. PMCID: PMC11162200. DOI: 10.1038/s41586-024-07088-7.
Abstract
The posterior parietal cortex exhibits choice-selective activity during perceptual decision-making tasks (refs. 1-10). However, it is not known how this selective activity arises from the underlying synaptic connectivity. Here we combined virtual-reality behaviour, two-photon calcium imaging, high-throughput electron microscopy and circuit modelling to analyse how synaptic connectivity between neurons in the posterior parietal cortex relates to their selective activity. We found that excitatory pyramidal neurons preferentially target inhibitory interneurons with the same selectivity. In turn, inhibitory interneurons preferentially target pyramidal neurons with opposite selectivity, forming an opponent inhibition motif. This motif was present even between neurons with activity peaks in different task epochs. We developed neural-circuit models of the computations performed by these motifs, and found that opponent inhibition between neural populations with opposite selectivity amplifies selective inputs, thereby improving the encoding of trial-type information. The models also predict that opponent inhibition between neurons with activity peaks in different task epochs contributes to creating choice-specific sequential activity. These results provide evidence for how synaptic connectivity in cortical circuits supports a learned decision-making task.
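The amplification effect of the opponent inhibition motif can be reproduced in a minimal linear-threshold rate model (a sketch with assumed weights and units, not the authors' fitted circuit models): each excitatory population drives an inhibitory unit of its own selectivity, which suppresses the excitatory population of opposite selectivity.

```python
def opponent_steady_state(s1, s2, w_ei=1.0, w_ie=0.8, dt=0.1, steps=1000):
    """Euler-integrate a 4-unit rate model with an opponent inhibition
    motif and return the steady-state excitatory rates."""
    relu = lambda x: max(x, 0.0)
    e1 = e2 = i1 = i2 = 0.0
    for _ in range(steps):
        de1 = -e1 + relu(s1 - w_ie * i2)  # inhibited by the opposite pool
        de2 = -e2 + relu(s2 - w_ie * i1)
        di1 = -i1 + w_ei * e1             # driven by same-selectivity pool
        di2 = -i2 + w_ei * e2
        e1, e2 = e1 + dt * de1, e2 + dt * de2
        i1, i2 = i1 + dt * di1, i2 + dt * di2
    return e1, e2

# A 0.2 input difference becomes a ~1.0 rate difference at steady state
e1, e2 = opponent_steady_state(1.1, 0.9)
```

In the linear regime the steady-state rate difference is the input difference scaled by 1/(1 - w_ei*w_ie) = 5 here, illustrating how the motif amplifies selective inputs and improves trial-type encoding.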
Affiliation(s)
- Aaron T Kuan
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Department of Neuroscience, Yale School of Medicine, New Haven, CT, USA
- Giulio Bondanelli
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Excellence for Neural Information Processing, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Laura N Driscoll
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Allen Institute for Neural Dynamics, Allen Institute, Seattle, WA, USA
- Julie Han
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Khoury College of Computer Sciences, Northeastern University, Seattle, WA, USA
- Minsu Kim
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- David G C Hildebrand
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- Brett J Graham
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Space Telescope Science Institute, Baltimore, MD, USA
- Daniel E Wilson
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Logan A Thomas
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Biophysics Graduate Group, University of California Berkeley, Berkeley, CA, USA
- Stefano Panzeri
- Neural Computation Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Excellence for Neural Information Processing, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Wei-Chung Allen Lee
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- FM Kirby Neurobiology Center, Boston Children's Hospital, Boston, MA, USA
11. Bartol TM, Ordyan M, Sejnowski TJ, Rangamani P, Kennedy MB. A spatial model of autophosphorylation of CaMKII in a glutamatergic spine suggests a network-driven kinetic mechanism for bistable changes in synaptic strength. bioRxiv 2024:2024.02.02.578696. PMID: 38352446. PMCID: PMC10862815. DOI: 10.1101/2024.02.02.578696.
Abstract
Activation of N-methyl-D-aspartate-type glutamate receptors (NMDARs) at synapses in the CNS triggers changes in synaptic strength that underlie memory formation in response to strong synaptic stimuli. The primary target of Ca2+ flowing through NMDARs is Ca2+/calmodulin-dependent protein kinase II (CaMKII), which forms dodecameric holoenzymes that are highly concentrated at the postsynaptic site. Activation of CaMKII is necessary to trigger long-term potentiation of synaptic strength (LTP), and is prolonged by autophosphorylation of subunits within the holoenzyme. Here we use MCell4, an agent-based, stochastic modeling platform, to model CaMKII holoenzymes placed within a realistic spine geometry. We show how two mechanisms of regulation of CaMKII, 'Ca2+-calmodulin-trapping (CaM-trapping)' and dephosphorylation by protein phosphatase-1 (PP1), shape the autophosphorylation response during a repeated high-frequency stimulus. Our simulation results suggest that autophosphorylation of CaMKII does not constitute a bistable switch. Instead, prolonged but temporary autophosphorylation of CaMKII may contribute to a biochemical-network-based 'kinetic proof-reading' mechanism that controls induction of synaptic plasticity.
Affiliation(s)
- Mariam Ordyan
- The Salk Institute for Biological Studies, La Jolla, CA
- Padmini Rangamani
- Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA
- Mary B Kennedy
- Department of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA
12. Samavat M, Bartol TM, Bromer C, Hubbard DD, Hanka DC, Kuwajima M, Mendenhall JM, Parker PH, Bowden JB, Abraham WC, Sejnowski TJ, Harris KM. Long-Term Potentiation Produces a Sustained Expansion of Synaptic Information Storage Capacity in Adult Rat Hippocampus. bioRxiv 2024:2024.01.12.574766. PMID: 38260636. PMCID: PMC10802612. DOI: 10.1101/2024.01.12.574766.
Abstract
Long-term potentiation (LTP) has become a standard model for investigating synaptic mechanisms of learning and memory. Increasingly, it is of interest to understand how LTP affects the synaptic information storage capacity of the targeted population of synapses. Here, structural synaptic plasticity during LTP was explored using three-dimensional reconstruction from serial section electron microscopy. Storage capacity was assessed by applying a new analytical approach, Shannon information theory, to delineate the number of functionally distinguishable synaptic strengths. LTP was induced by delta-burst stimulation of perforant pathway inputs to the middle molecular layer of hippocampal dentate granule cells in adult rats. Spine head volumes were measured as predictors of synaptic strength and compared between LTP and control hemispheres at 30 min and 2 hr after the induction of LTP. Synapses from the same axon onto the same dendrite were used to determine the precision of synaptic plasticity based on the similarity of their physical dimensions. Shannon entropy was measured by exploiting the frequency of spine heads in functionally distinguishable sizes to assess the degree to which LTP altered the number of bits of information storage. Outcomes from these analyses reveal that LTP expanded storage capacity; the information in the distribution of spine head volumes increased from 2 bits in controls to 3 bits at 30 min and 2.7 bits at 2 hr after the induction of LTP. Furthermore, the distribution of spine head volumes was more uniform across the increased number of functionally distinguishable sizes following LTP, thus achieving more efficient use of coding space across the population of synapses.
Affiliation(s)
- Mohammad Samavat
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, UC San Diego
- Computational Neurobiology Laboratory, The Salk Institute for Biological Sciences, La Jolla, CA 92037
- Thomas M Bartol
- Computational Neurobiology Laboratory, The Salk Institute for Biological Sciences, La Jolla, CA 92037
- Cailey Bromer
- Computational Neurobiology Laboratory, The Salk Institute for Biological Sciences, La Jolla, CA 92037
- Dusten D Hubbard
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Dakota C Hanka
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Masaaki Kuwajima
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- John M Mendenhall
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Patrick H Parker
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Jared B Bowden
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712
- Wickliffe C Abraham
- Department of Psychology and Brain Health Research Centre, University of Otago, Dunedin, 9016, New Zealand
- Terrence J Sejnowski
- Computational Neurobiology Laboratory, The Salk Institute for Biological Sciences, La Jolla, CA 92037
- Division of Biological Sciences, University of California, San Diego, La Jolla, CA 92093
- Kristen M Harris
- Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712
13
Karbowski J, Urban P. Information encoded in volumes and areas of dendritic spines is nearly maximal across mammalian brains. Sci Rep 2023; 13:22207. [PMID: 38097675 PMCID: PMC10721930 DOI: 10.1038/s41598-023-49321-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Accepted: 12/06/2023] [Indexed: 12/17/2023] Open
Abstract
Many experiments suggest that long-term information associated with neuronal memory resides collectively in dendritic spines. However, spines can have a limited size due to metabolic and neuroanatomical constraints, which should effectively limit the amount of encoded information in excitatory synapses. This study investigates how much information can be stored in the population of sizes of dendritic spines, and whether it is optimal in any sense. It is shown here, using empirical data for several mammalian brains across different regions and physiological conditions, that dendritic spines nearly maximize entropy contained in their volumes and surface areas for a given mean size in cortical and hippocampal regions. Although both short- and heavy-tailed fitting distributions approach [Formula: see text] of maximal entropy in the majority of cases, the best maximization is obtained primarily for the short-tailed gamma distribution. We find that most empirical ratios of standard deviation to mean for spine volumes and areas are in the range [Formula: see text], which is close to the theoretical optimal ratios coming from entropy maximization for gamma and lognormal distributions. On average, the highest entropy is contained in spine length ([Formula: see text] bits per spine), and the lowest in spine volume and area ([Formula: see text] bits), although the latter two are closer to optimality. In contrast, we find that entropy density (entropy per spine size) is always suboptimal. Our results suggest that spine sizes are almost as random as possible given the constraint on their size, and moreover the general principle of entropy maximization is applicable and potentially useful to information and memory storage in the population of cortical and hippocampal excitatory synapses, and to predicting their morphological properties.
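The entropy-maximization argument can be illustrated numerically: among distributions on the positive reals with a fixed mean, the exponential (a gamma distribution with shape k = 1) has maximal differential entropy, and for a gamma distribution the standard-deviation-to-mean ratio is 1/sqrt(k), so a ratio near 1 corresponds to near-optimal entropy. A minimal sketch using the closed-form gamma entropy, with the digamma function approximated numerically (all parameter values are illustrative):

```python
import math

def digamma(x, h=1e-6):
    # Central-difference derivative of log-gamma; accurate enough for illustration.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

def gamma_entropy_bits(k, mean):
    """Differential entropy (in bits) of a gamma distribution with shape k
    and the given mean (scale theta = mean / k), from the closed form
    h = k + ln(theta) + ln(Gamma(k)) + (1 - k) * psi(k)   [in nats]."""
    theta = mean / k
    h_nats = k + math.log(theta) + math.lgamma(k) + (1.0 - k) * digamma(k)
    return h_nats / math.log(2.0)

# For a fixed mean spine size, entropy peaks at k = 1 (the exponential case,
# where the std/mean ratio equals 1) and falls off on either side.
for k in (0.5, 1.0, 2.0, 4.0):
    print(k, round(gamma_entropy_bits(k, mean=1.0), 3))
```

This is a sketch of the general principle only; the paper's analysis also covers lognormal fits and empirical spine data, which are not reproduced here.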
Affiliation(s)
- Jan Karbowski
- Institute of Applied Mathematics and Mechanics, University of Warsaw, Warsaw, Poland.
- Paulina Urban
- Laboratory of Functional and Structural Genomics, Centre of New Technologies, University of Warsaw, Warsaw, Poland
- College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences, University of Warsaw, Warsaw, Poland
- Laboratory of Databases and Business Analytics, National Information Processing Institute, National Research Institute, Warsaw, Poland
14
Thomas CI, Ryan MA, Kamasawa N, Scholl B. Postsynaptic mitochondria are positioned to support functional diversity of dendritic spines. eLife 2023; 12:RP89682. [PMID: 38059805 PMCID: PMC10703439 DOI: 10.7554/elife.89682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/08/2023] Open
Abstract
Postsynaptic mitochondria are critical for the development, plasticity, and maintenance of synaptic inputs. However, their relationship to synaptic structure and functional activity is unknown. We examined a correlative dataset from ferret visual cortex with in vivo two-photon calcium imaging of dendritic spines during visual stimulation and electron microscopy reconstructions of spine ultrastructure, investigating mitochondrial abundance near functionally and structurally characterized spines. Surprisingly, we found no correlation to structural measures of synaptic strength. Instead, we found that mitochondria are positioned near spines with orientation preferences that are dissimilar to the somatic preference. Additionally, we found that mitochondria are positioned near groups of spines with heterogeneous orientation preferences. For a subset of spines with a mitochondrion in the head or neck, synapses were larger and exhibited greater selectivity to visual stimuli than those without a mitochondrion. Our data suggest mitochondria are not necessarily positioned to support the energy needs of strong spines, but rather support the structurally and functionally diverse inputs innervating the basal dendrites of cortical neurons.
Affiliation(s)
- Connon I Thomas
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, Max Planck Way, Jupiter, United States
- Melissa A Ryan
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, Max Planck Way, Jupiter, United States
- Naomi Kamasawa
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, Max Planck Way, Jupiter, United States
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, United States
15
Monteith S, Glenn T, Geddes JR, Achtyes ED, Whybrow PC, Bauer M. Challenges and Ethical Considerations to Successfully Implement Artificial Intelligence in Clinical Medicine and Neuroscience: a Narrative Review. PHARMACOPSYCHIATRY 2023; 56:209-213. [PMID: 37643732 DOI: 10.1055/a-2142-9325] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/31/2023]
Abstract
This narrative review discusses how the safe and effective use of clinical artificial intelligence (AI) prediction tools requires recognition of the importance of human intelligence. Human intelligence, creativity, situational awareness, and professional knowledge are required for successful implementation. The implementation of clinical AI prediction tools may change the workflow in medical practice, resulting in new challenges and safety implications. Human understanding of how a clinical AI prediction tool performs in routine and exceptional situations is fundamental to successful implementation. Physicians must be involved in all aspects of the selection, implementation, and ongoing product monitoring of clinical AI prediction tools.
Affiliation(s)
- Scott Monteith
- Department of Psychiatry, Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI, USA
- Tasha Glenn
- ChronoRecord Association, Fullerton, CA, USA
- John R Geddes
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Eric D Achtyes
- Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, MI, USA
- Peter C Whybrow
- Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Michael Bauer
- Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
16
Modha DS, Akopyan F, Andreopoulos A, Appuswamy R, Arthur JV, Cassidy AS, Datta P, DeBole MV, Esser SK, Otero CO, Sawada J, Taba B, Amir A, Bablani D, Carlson PJ, Flickner MD, Gandhasri R, Garreau GJ, Ito M, Klamo JL, Kusnitz JA, McClatchey NJ, McKinstry JL, Nakamura Y, Nayak TK, Risk WP, Schleupen K, Shaw B, Sivagnaname J, Smith DF, Terrizzano I, Ueda T. Neural inference at the frontier of energy, space, and time. Science 2023; 382:329-335. [PMID: 37856600 DOI: 10.1126/science.adh1174] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 09/01/2023] [Indexed: 10/21/2023]
Abstract
Computing, since its inception, has been processor-centric, with memory separated from compute. Inspired by the organic brain and optimized for inorganic silicon, NorthPole is a neural inference architecture that blurs this boundary by eliminating off-chip memory, intertwining compute with memory on-chip, and appearing externally as an active memory chip. NorthPole is a low-precision, massively parallel, densely interconnected, energy-efficient, and spatial computing architecture with a co-optimized, high-utilization programming model. On the ResNet50 benchmark image classification network, relative to a graphics processing unit (GPU) that uses a comparable 12-nanometer technology process, NorthPole achieves a 25 times higher energy metric of frames per second (FPS) per watt, a 5 times higher space metric of FPS per transistor, and a 22 times lower time metric of latency. Similar results are reported for the Yolo-v4 detection network. NorthPole outperforms all prevalent architectures, even those that use more-advanced technology processes.
17
Thomas CI, Ryan MA, Kamasawa N, Scholl B. Postsynaptic mitochondria are positioned to support functional diversity of dendritic spines. bioRxiv 2023:2023.07.14.549063. [PMID: 37502969 PMCID: PMC10370038 DOI: 10.1101/2023.07.14.549063] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Postsynaptic mitochondria are critical to the development, plasticity, and maintenance of synaptic inputs. However, their relationship to synaptic structure and functional activity is unknown. We examined a correlative dataset from ferret visual cortex with in vivo two-photon calcium imaging of dendritic spines during visual stimulation and electron microscopy (EM) reconstructions of spine ultrastructure, investigating mitochondrial abundance near functionally and structurally characterized spines. Surprisingly, we found no correlation to structural measures of synaptic strength. Instead, we found that mitochondria are positioned near spines with orientation preferences that are dissimilar to the somatic preference. Additionally, we found that mitochondria are positioned near groups of spines with heterogeneous orientation preferences. For a subset of spines with a mitochondrion in the head or neck, synapses were larger and exhibited greater selectivity to visual stimuli than those without a mitochondrion. Our data suggest mitochondria are not necessarily positioned to support the energy needs of strong spines, but rather support the structurally and functionally diverse inputs innervating the basal dendrites of cortical neurons.
Affiliation(s)
- Connon I. Thomas
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL 33458, USA
- Melissa A. Ryan
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL 33458, USA
- Present Address: Department of Neuroscience, Baylor College of Medicine, Houston, TX, 77030, USA
- Naomi Kamasawa
- Electron Microscopy Core Facility, Max Planck Florida Institute for Neuroscience, 1 Max Planck Way, Jupiter, FL 33458, USA
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine at the University of Pennsylvania, 415 Curie Blvd, Philadelphia, PA, 19104, USA
18
Eggl MF, Chater TE, Petkovic J, Goda Y, Tchumatchenko T. Linking spontaneous and stimulated spine dynamics. Commun Biol 2023; 6:930. [PMID: 37696988 PMCID: PMC10495434 DOI: 10.1038/s42003-023-05303-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 08/29/2023] [Indexed: 09/13/2023] Open
Abstract
Our brains continuously acquire and store memories through synaptic plasticity. However, spontaneous synaptic changes can also occur and pose a challenge for maintaining stable memories. Despite fluctuations in synapse size, recent studies have shown that key population-level synaptic properties remain stable over time. This raises the question of how local synaptic plasticity affects the global population-level synaptic size distribution and whether individual synapses undergoing plasticity escape the stable distribution to encode specific memories. To address this question, we (i) studied spontaneously evolving spines and (ii) induced synaptic potentiation at selected sites while observing the spine distribution pre- and post-stimulation. We designed a stochastic model to describe how the current size of a synapse affects its future size under baseline and stimulation conditions and how these local effects give rise to population-level synaptic shifts. Our study offers insights into how seemingly spontaneous synaptic fluctuations and local plasticity both contribute to population-level synaptic dynamics.
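A toy version of the kind of size-dependent stochastic model described above — a multiplicative fluctuation (slightly contracting on average) plus a small additive term, a common minimal "Kesten-like" scheme for spine-size dynamics — might look like this. All parameters and the update rule are assumptions for illustration, not the paper's fitted model:

```python
import random

def step(size, drift=-0.005, noise=0.1, additive=0.01, floor=0.001):
    """One baseline update in which a spine's future size depends on its
    current size: multiplicative fluctuation plus small additive noise,
    clipped at a minimal size."""
    new = size * (1.0 + random.gauss(drift, noise)) + abs(random.gauss(0.0, additive))
    return max(new, floor)

random.seed(42)
spines = [1.0] * 500            # initial population of spine sizes (arbitrary units)
for _ in range(1000):           # evolve the whole population under baseline dynamics
    spines = [step(s) for s in spines]

mean_size = sum(spines) / len(spines)
# Individual spines fluctuate continually, yet population-level statistics
# (e.g. the mean and the right-skewed shape of the distribution) settle down.
```

A stimulation condition could be modeled by transiently biasing `drift` upward for selected spines, then observing whether the population distribution reabsorbs them — the question the study poses.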
Affiliation(s)
- Maximilian F Eggl
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany
- Thomas E Chater
- Laboratory for Synaptic Plasticity and Connectivity, RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
- Department of Physiology, Keio University School of Medicine, Tokyo, Japan
- Janko Petkovic
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany
- Yukiko Goda
- Laboratory for Synaptic Plasticity and Connectivity, RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
- Synapse Biology Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Kunigami-gun, Okinawa, Japan
- Tatjana Tchumatchenko
- University of Mainz Medical Center, Anselm-Franz-von-Bentzel-Weg 3, 55128, Mainz, Germany.
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Venusberg-Campus 1, 53127, Bonn, Germany.
19
Benjamin AS, Kording KP. A role for cortical interneurons as adversarial discriminators. PLoS Comput Biol 2023; 19:e1011484. [PMID: 37768890 PMCID: PMC10538760 DOI: 10.1371/journal.pcbi.1011484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 08/31/2023] [Indexed: 09/30/2023] Open
Abstract
The brain learns representations of sensory information from experience, but the algorithms by which it does so remain unknown. One popular theory formalizes representations as inferred factors in a generative model of sensory stimuli, meaning that learning must improve this generative model and inference procedure. This framework underlies many classic computational theories of sensory learning, such as Boltzmann machines, the Wake/Sleep algorithm, and a more recent proposal that the brain learns with an adversarial algorithm that compares waking and dreaming activity. However, in order for such theories to provide insights into the cellular mechanisms of sensory learning, they must be first linked to the cell types in the brain that mediate them. In this study, we examine whether a subtype of cortical interneurons might mediate sensory learning by serving as discriminators, a crucial component in an adversarial algorithm for representation learning. We describe how such interneurons would be characterized by a plasticity rule that switches from Hebbian plasticity during waking states to anti-Hebbian plasticity in dreaming states. Evaluating the computational advantages and disadvantages of this algorithm, we find that it excels at learning representations in networks with recurrent connections but scales poorly with network size. This limitation can be partially addressed if the network also oscillates between evoked activity and generative samples on faster timescales. Consequently, we propose that an adversarial algorithm with interneurons as discriminators is a plausible and testable strategy for sensory learning in biological systems.
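The plasticity rule the abstract proposes — Hebbian during waking states, anti-Hebbian during dreaming states — amounts to a sign flip on the weight update. A minimal sketch (function name and parameters are illustrative, not the paper's implementation):

```python
def discriminator_update(w, pre, post, lr=0.01, waking=True):
    """Weight update for a putative interneuron acting as a discriminator:
    Hebbian (strengthen co-active connections) while awake, anti-Hebbian
    (the same product with flipped sign) while dreaming."""
    sign = 1.0 if waking else -1.0
    return w + sign * lr * pre * post

# The same co-active pre/post pair strengthens the synapse during evoked
# (waking) activity and weakens it during generative (dreaming) activity.
w_awake = discriminator_update(0.5, pre=1.0, post=1.0, waking=True)
w_asleep = discriminator_update(0.5, pre=1.0, post=1.0, waking=False)
```

The sign flip is what lets the same unit learn to respond differently to evoked versus generated activity, which is the discriminator's job in an adversarial scheme.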
Affiliation(s)
- Ari S. Benjamin
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Konrad P. Kording
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
20
Bai Y, Grier B, Geron E. Anti-Hebbian plasticity in the motor cortex promotes defensive freezing. Curr Biol 2023; 33:3465-3477.e5. [PMID: 37543035 PMCID: PMC10538413 DOI: 10.1016/j.cub.2023.07.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 05/05/2023] [Accepted: 07/12/2023] [Indexed: 08/07/2023]
Abstract
Regional brain activity often decreases from baseline levels in response to external events, but how neurons develop such negative responses is unclear. To study this, we leveraged the negative response that develops in the primary motor cortex (M1) after classical fear learning. We trained mice with a fear conditioning paradigm while imaging their brains with standard two-photon microscopy. This enabled monitoring changes in neuronal responses to the tone with synaptic resolution through learning. We found that M1 layer 5 pyramidal neurons (L5 PNs) developed negative tone responses within an hour after conditioning, which depended on the weakening of their dendritic spines that were active during training. Blocking this form of anti-Hebbian plasticity using an optogenetic manipulation of CaMKII activity disrupted negative tone responses and freezing. Therefore, reducing the strength of spines active at the time of memory encoding leads to negative responses of L5 PNs. In turn, these negative responses curb M1's capacity for promoting movement, thereby aiding freezing. Collectively, this work provides a mechanistic understanding of how area-specific negative responses to behaviorally relevant cues can be achieved.
Affiliation(s)
- Yang Bai
- Neuroscience Institute, New York University, New York, NY 10016, USA
- Bryce Grier
- Neuroscience Institute, New York University, New York, NY 10016, USA
- Erez Geron
- Neuroscience Institute, New York University, New York, NY 10016, USA.
21
Keto L, Manninen T. CellRemorph: A Toolkit for Transforming, Selecting, and Slicing 3D Cell Structures on the Road to Morphologically Detailed Astrocyte Simulations. Neuroinformatics 2023; 21:483-500. [PMID: 37133688 PMCID: PMC10406679 DOI: 10.1007/s12021-023-09627-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/06/2023] [Indexed: 05/04/2023]
Abstract
Understanding the functions of astrocytes can be greatly enhanced by building and simulating computational models that capture their morphological details. Novel computational tools enable utilization of existing morphological data of astrocytes and building models that have an appropriate level of detail for specific simulation purposes. In addition to analyzing existing computational tools for constructing, transforming, and assessing astrocyte morphologies, we present here the CellRemorph toolkit implemented as an add-on for Blender, a 3D modeling platform increasingly recognized for its utility in manipulating 3D biological data. To our knowledge, CellRemorph is the first toolkit for transforming astrocyte morphologies from polygonal surface meshes into adjustable surface point clouds and vice versa, precisely selecting nanoprocesses, and slicing morphologies into segments with equal surface areas or volumes. CellRemorph is an open-source toolkit under the GNU General Public License and easily accessible via an intuitive graphical user interface. CellRemorph will be a valuable addition to other Blender add-ons, providing novel functionality that facilitates the creation of realistic astrocyte morphologies for different types of morphologically detailed simulations elucidating the role of astrocytes both in health and disease.
Affiliation(s)
- Laura Keto
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland.
- Tiina Manninen
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland.
22
Argunsah AÖ, Israely I. The temporal pattern of synaptic activation determines the longevity of structural plasticity at dendritic spines. iScience 2023; 26:106835. [PMID: 37332599 PMCID: PMC10272476 DOI: 10.1016/j.isci.2023.106835] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 01/18/2023] [Accepted: 05/04/2023] [Indexed: 06/20/2023] Open
Abstract
Learning is thought to involve physiological and structural changes at individual synapses. Synaptic plasticity has predominantly been studied using regular stimulation patterns, but neuronal activity in the brain normally follows a Poisson distribution. We used two-photon imaging and glutamate uncaging to investigate the structural plasticity of single dendritic spines using naturalistic activation patterns sampled from a Poisson distribution. We showed that naturalistic activation patterns elicit structural plasticity that is both NMDAR and protein synthesis-dependent. Furthermore, we uncovered that the longevity of structural plasticity is dependent on the temporal structure of the naturalistic pattern. Finally, we found that during the delivery of the naturalistic activity, spines underwent rapid structural growth that predicted the longevity of plasticity. This was not observed with regularly spaced activity. These data reveal that different temporal organizations of the same number of synaptic stimulations can produce rather distinct short- and long-lasting structural plasticity.
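Naturalistic activation patterns of this kind can be generated by drawing inter-stimulus intervals from an exponential distribution (a homogeneous Poisson process), in contrast to the regular spacing of conventional protocols. A small sketch under assumed parameters (pulse count and mean interval are placeholders, not the study's protocol):

```python
import random

def poisson_train(n_pulses, mean_interval, seed=None):
    """Pulse times from a homogeneous Poisson process: inter-pulse
    intervals are exponentially distributed with the given mean."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_pulses):
        t += rng.expovariate(1.0 / mean_interval)  # exponential interval
        times.append(t)
    return times

def regular_train(n_pulses, interval):
    """Conventional regularly spaced pulse times for comparison."""
    return [interval * (i + 1) for i in range(n_pulses)]

naturalistic = poisson_train(30, 2.0, seed=7)   # 30 pulses, 2-unit mean interval
regular = regular_train(30, 2.0)
```

Both trains deliver the same number of stimulations with the same mean rate; only the temporal structure differs, which is the variable the study isolates.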
Affiliation(s)
- Ali Özgür Argunsah
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- Laboratory of Neuronal Circuit Assembly, Brain Research Institute (HiFo), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Neuroscience Center Zurich (ZNZ), Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Inbal Israely
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- Department of Pathology and Cell Biology, Department of Neuroscience, in the Taub Institute for Research on Alzheimer’s Disease and the Aging Brain, Columbia University Medical Center, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY 10032, USA
23
Hafner AS, Triesch J. Synaptic logistics: Competing over shared resources. Mol Cell Neurosci 2023; 125:103858. [PMID: 37172922 DOI: 10.1016/j.mcn.2023.103858] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Revised: 05/05/2023] [Accepted: 05/05/2023] [Indexed: 05/15/2023] Open
Abstract
High turnover rates of synaptic proteins imply that synapses constantly need to replace their constituent building blocks. This requires sophisticated supply chains and potentially exposes synapses to shortages as they compete for limited resources. Interestingly, competition in neurons has been observed at different scales, whether between receptors vying for binding sites inside a single synapse or between synapses fighting for resources to grow. Here we review the implications of such competition for synaptic function and plasticity. We identify multiple mechanisms that synapses use to safeguard themselves against supply shortages and identify a fundamental neurologistic trade-off governing the sizes of reserve pools of essential synaptic building blocks.
Affiliation(s)
- Anne-Sophie Hafner
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands.
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany; Goethe University, Frankfurt am Main, Germany
24
Ziółkowska M, Borczyk M, Cały A, Tomaszewski KF, Nowacka A, Nalberczak-Skóra M, Śliwińska MA, Łukasiewicz K, Skonieczna E, Wójtowicz T, Wlodarczyk J, Bernaś T, Salamian A, Radwanska K. Phosphorylation of PSD-95 at serine 73 in dCA1 is required for extinction of contextual fear. PLoS Biol 2023; 21:e3002106. [PMID: 37155709 DOI: 10.1371/journal.pbio.3002106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 05/18/2023] [Accepted: 04/04/2023] [Indexed: 05/10/2023] Open
Abstract
The updating of contextual memories is essential for survival in a changing environment. Accumulating data indicate that the dorsal CA1 area (dCA1) contributes to this process. However, the cellular and molecular mechanisms of contextual fear memory updating remain poorly understood. Postsynaptic density protein 95 (PSD-95) regulates the structure and function of glutamatergic synapses. Here, using dCA1-targeted genetic manipulations in vivo, combined with ex vivo 3D electron microscopy and electrophysiology, we identify a novel synaptic mechanism that is induced during attenuation of contextual fear memories and involves phosphorylation of PSD-95 at Serine 73 in dCA1. Our data provide proof that PSD-95-dependent synaptic plasticity in dCA1 is required for updating of contextual fear memory.
Affiliation(s)
- Magdalena Ziółkowska
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Malgorzata Borczyk
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Department Molecular Neuropharmacology, Maj Institute of Pharmacology of Polish Academy of Sciences, Krakow, Poland
- Anna Cały
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Kamil F Tomaszewski
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Agata Nowacka
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Maria Nalberczak-Skóra
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Małgorzata Alicja Śliwińska
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Laboratory of Imaging Tissue Structure and Function, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Kacper Łukasiewicz
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Psychiatry Clinic, Medical University of Bialystok, Białystok, Poland
- Edyta Skonieczna
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Tomasz Wójtowicz
- Laboratory of Cell Biophysics, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Jakub Wlodarczyk
- Laboratory of Cell Biophysics, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Tytus Bernaś
- Laboratory of Imaging Tissue Structure and Function, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Department of Anatomy and Neurology, VCU School of Medicine, Richmond, Virginia, United States of America
- Ahmad Salamian
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
- Kasia Radwanska
- Laboratory of Molecular Basis of Behavior, the Nencki Institute of Experimental Biology of Polish Academy of Sciences, Warsaw, Poland
25
Wilson A, Babadi M. SynapseCLR: Uncovering features of synapses in primary visual cortex through contrastive representation learning. Patterns 2023; 4:100693. [PMID: 37123442 PMCID: PMC10140600 DOI: 10.1016/j.patter.2023.100693] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 11/23/2022] [Accepted: 01/25/2023] [Indexed: 03/09/2023]
Abstract
3D electron microscopy (EM) connectomics image volumes are surpassing 1 mm³, providing information-dense, multi-scale visualizations of brain circuitry and necessitating scalable analysis techniques. We present SynapseCLR, a self-supervised contrastive learning method for 3D EM data, and use it to extract features of synapses from mouse visual cortex. SynapseCLR feature representations separate synapses by appearance and functionally important structural annotations. We demonstrate SynapseCLR's utility for valuable downstream tasks, including one-shot identification of defective synapse segmentations, dataset-wide similarity-based querying, and accurate imputation of annotations for unlabeled synapses, using manual annotation of only 0.2% of the dataset's synapses. In particular, excitatory versus inhibitory neuronal types can be assigned with >99.8% accuracy to individual synapses and highly truncated neurites, enabling neurite-enhanced connectomics analysis. Finally, we present a data-driven, unsupervised study of synaptic structural variation on the representation manifold, revealing its intrinsic axes of variation and showing that representations contain inhibitory subtype information.
26
Kasai H. Unraveling the mysteries of dendritic spine dynamics: Five key principles shaping memory and cognition. Proc Jpn Acad Ser B Phys Biol Sci 2023; 99:254-305. [PMID: 37821392 PMCID: PMC10749395 DOI: 10.2183/pjab.99.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2023] [Accepted: 07/11/2023] [Indexed: 10/13/2023]
Abstract
Recent research extends our understanding of brain processes beyond just action potentials and chemical transmissions within neural circuits, emphasizing the mechanical forces generated by excitatory synapses on dendritic spines to modulate presynaptic function. From in vivo and in vitro studies, we outline five central principles of synaptic mechanics in brain function: P1: Stability - Underpinning the integral relationship between the structure and function of the spine synapses. P2: Extrinsic dynamics - Highlighting synapse-selective structural plasticity which plays a crucial role in Hebbian associative learning, distinct from pathway-selective long-term potentiation (LTP) and depression (LTD). P3: Neuromodulation - Analyzing the role of G-protein-coupled receptors, particularly dopamine receptors, in time-sensitive modulation of associative learning frameworks such as Pavlovian classical conditioning and Thorndike's reinforcement learning (RL). P4: Instability - Addressing the intrinsic dynamics crucial to memory management during continual learning, spotlighting their role in "spine dysgenesis" associated with mental disorders. P5: Mechanics - Exploring how synaptic mechanics influence both sides of synapses to establish structural traces of short- and long-term memory, thereby aiding the integration of mental functions. We also delve into the historical background and foresee impending challenges.
Affiliation(s)
- Haruo Kasai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
27
Dorkenwald S, Turner NL, Macrina T, Lee K, Lu R, Wu J, Bodor AL, Bleckert AA, Brittain D, Kemnitz N, Silversmith WM, Ih D, Zung J, Zlateski A, Tartavull I, Yu SC, Popovych S, Wong W, Castro M, Jordan CS, Wilson AM, Froudarakis E, Buchanan J, Takeno MM, Torres R, Mahalingam G, Collman F, Schneider-Mizell CM, Bumbarger DJ, Li Y, Becker L, Suckow S, Reimer J, Tolias AS, Macarico da Costa N, Reid RC, Seung HS. Binary and analog variation of synapses between cortical pyramidal neurons. eLife 2022; 11:e76120. [PMID: 36382887 PMCID: PMC9704804 DOI: 10.7554/elife.76120] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 11/15/2022] [Indexed: 11/17/2022] Open
Abstract
Learning from experience depends at least in part on changes in neuronal connections. We present the largest map of connectivity to date between cortical neurons of a defined type (layer 2/3 [L2/3] pyramidal cells in mouse primary visual cortex), which was enabled by automated analysis of serial section electron microscopy images with improved handling of image defects (250 × 140 × 90 μm³ volume). We used the map to identify constraints on the learning algorithms employed by the cortex. Previous cortical studies modeled a continuum of synapse sizes by a log-normal distribution. A continuum is consistent with most neural network models of learning, in which synaptic strength is a continuously graded analog variable. Here, we show that synapse size, when restricted to synapses between L2/3 pyramidal cells, is well modeled by the sum of a binary variable and an analog variable drawn from a log-normal distribution. Two synapses sharing the same presynaptic and postsynaptic cells are known to be correlated in size. We show that the binary variables of the two synapses are highly correlated, while the analog variables are not. Binary variation could be the outcome of a Hebbian or other synaptic plasticity rule depending on activity signals that are relatively uniform across neuronal arbors, while analog variation may be dominated by other influences such as spontaneous dynamical fluctuations. We discuss the implications for the longstanding hypothesis that activity-dependent plasticity switches synapses between bistable states.
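The descriptive model in this abstract (synapse size = binary component + log-normal analog component) can be sampled directly; all parameter values below are invented for illustration, not fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_synapse_sizes(n, p_on=0.5, step=1.0, mu=0.0, sigma=0.5):
    """Draw synapse sizes as (binary on/off offset) + (log-normal analog part).

    p_on, step, mu, and sigma are hypothetical parameters chosen for the
    sketch; the paper's point is only the *form* of the mixture.
    """
    binary = step * rng.binomial(1, p_on, size=n)          # bistable component
    analog = rng.lognormal(mean=mu, sigma=sigma, size=n)   # graded component
    return binary + analog

sizes = sample_synapse_sizes(10_000)
```

Fitting such a mixture to measured synapse volumes, rather than a plain log-normal, is the comparison the paper draws for same-pair L2/3 synapses.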
Affiliation(s)
- Sven Dorkenwald
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Nicholas L Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Thomas Macrina
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- Kisuk Lee
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, United States
- Ran Lu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Jingpeng Wu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Agnes L Bodor
- Allen Institute for Brain Science, Seattle, United States
- Nico Kemnitz
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Dodam Ih
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Jonathan Zung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Aleksandar Zlateski
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Ignacio Tartavull
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Szi-Chieh Yu
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Sergiy Popovych
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
- William Wong
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Manuel Castro
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Chris S Jordan
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Alyssa M Wilson
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Emmanouil Froudarakis
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Marc M Takeno
- Allen Institute for Brain Science, Seattle, United States
- Russel Torres
- Allen Institute for Brain Science, Seattle, United States
- Yang Li
- Allen Institute for Brain Science, Seattle, United States
- Lynne Becker
- Allen Institute for Brain Science, Seattle, United States
- Shelby Suckow
- Allen Institute for Brain Science, Seattle, United States
- Jacob Reimer
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Andreas S Tolias
- Department of Neuroscience, Baylor College of Medicine, Houston, United States
- Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, United States
- Department of Electrical and Computer Engineering, Rice University, Houston, United States
- R Clay Reid
- Allen Institute for Brain Science, Seattle, United States
- H Sebastian Seung
- Princeton Neuroscience Institute, Princeton University, Princeton, United States
- Computer Science Department, Princeton University, Princeton, United States
28
A Uniform and Isotropic Cytoskeletal Tiling Fills Dendritic Spines. eNeuro 2022; 9:ENEURO.0342-22.2022. [PMID: 36216507 PMCID: PMC9617608 DOI: 10.1523/eneuro.0342-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 09/09/2022] [Indexed: 12/15/2022] Open
Abstract
Dendritic spines are submicron, subcellular compartments whose shape is defined by actin filaments and associated proteins. Accurately mapping the cytoskeleton is a challenge, given the small size of its components. It remains unclear whether the actin-associated structures analyzed in dendritic spines of neurons in vitro apply to dendritic spines of intact, mature neurons in situ. Here, we combined advanced preparative methods with multitilt serial section electron microscopy (EM) tomography and computational analysis to reveal the full three-dimensional (3D) internal architecture of spines in the intact brains of male mice at nanometer resolution. We compared hippocampal (CA1) pyramidal cells and cerebellar Purkinje cells in terms of the length distribution and connectivity of filaments, their branching-angles and absolute orientations, and the elementary loops formed by the network. Despite differences in shape and size across spines and between spine heads and necks, the internal organization was remarkably similar in both neuron types and largely homogeneous throughout the spine volume. In the tortuous mesh of highly branched and interconnected filaments, branches exhibited no preferred orientation except in the immediate vicinity of the cell membrane. We found that new filaments preferentially split off from the convex side of a bending filament, consistent with the behavior of Arp2/3-mediated branching of actin under mechanical deformation. Based on the quantitative analysis, the spine cytoskeleton is likely subject to considerable mechanical force in situ.
29
Dasgupta S, Hattori D, Navlakha S. A neural theory for counting memories. Nat Commun 2022; 13:5961. [PMID: 36217003 PMCID: PMC9551066 DOI: 10.1038/s41467-022-33577-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 09/22/2022] [Indexed: 11/09/2022] Open
Abstract
Keeping track of the number of times different stimuli have been experienced is a critical computation for behavior. Here, we propose a theoretical two-layer neural circuit that stores counts of stimulus occurrence frequencies. This circuit implements a data structure, called a count sketch, that is commonly used in computer science to maintain item frequencies in streaming data. Our first model implements a count sketch using Hebbian synapses and outputs stimulus-specific frequencies. Our second model uses anti-Hebbian plasticity and only tracks frequencies within four count categories ("1-2-3-many"), which trades-off the number of categories that need to be distinguished with the potential ethological value of those categories. We show how both models can robustly track stimulus occurrence frequencies, thus expanding the traditional novelty-familiarity memory axis from binary to discrete with more than two possible values. Finally, we show that an implementation of the "1-2-3-many" count sketch exists in the insect mushroom body.
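The count-sketch mechanism summarized above can be caricatured in a few lines: each stimulus activates a fixed sparse set of "synapses," every observation makes a Hebbian increment, and a minimum readout estimates the count. The sizes, the random sparse-coding scheme, and the "1-2-3-many" cutoff below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

class HebbianCountSketch:
    """Toy count-min-style sketch with Hebbian synaptic increments."""

    def __init__(self, n_synapses=50, k_active=5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n, self.k = n_synapses, k_active
        self.w = np.zeros(n_synapses)   # output neuron's synaptic weights
        self._codes = {}                # stimulus -> fixed sparse code

    def _code(self, stimulus):
        if stimulus not in self._codes:
            self._codes[stimulus] = self.rng.choice(self.n, self.k, replace=False)
        return self._codes[stimulus]

    def observe(self, stimulus):
        self.w[self._code(stimulus)] += 1.0          # Hebbian increment

    def estimate(self, stimulus):
        return self.w[self._code(stimulus)].min()    # count-min readout

def category(count, many=4):
    """Collapse a raw estimate into the '1-2-3-many' scheme."""
    return "many" if count >= many else str(int(count))

sketch = HebbianCountSketch()
for _ in range(3):
    sketch.observe("odor_A")
sketch.observe("odor_B")
```

The minimum over the active set bounds collision error from above, which is why the estimate can overshoot but never undershoot the true count.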
Affiliation(s)
- Sanjoy Dasgupta
- Computer Science and Engineering Department, University of California San Diego, La Jolla, CA, 92037, USA
- Daisuke Hattori
- Department of Physiology, UT Southwestern Medical Center, Dallas, TX, 75390, USA.
- Saket Navlakha
- Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, 11724, USA.
30
Chu H, Yan Y, Gan L, Jia H, Qian L, Huan Y, Zheng L, Zou Z. A Neuromorphic Processing System With Spike-Driven SNN Processor for Wearable ECG Classification. IEEE Trans Biomed Circuits Syst 2022; 16:511-523. [PMID: 35802543 DOI: 10.1109/tbcas.2022.3189364] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
This paper presents a neuromorphic processing system with a spike-driven spiking neural network (SNN) processor design for always-on wearable electrocardiogram (ECG) classification. In the proposed system, the ECG signal is captured by level crossing (LC) sampling, achieving native temporal coding with single-bit data representation, which is directly fed into an SNN in an event-driven manner. A hardware-aware spatio-temporal backpropagation (STBP) is suggested as the training scheme to adapt to the LC-based data representation and to generate lightweight SNN models. Such a training scheme diminishes the firing rate of the network with little loss of classification accuracy, thus reducing the switching activity of the circuits for low-power operation. A specialized SNN processor is designed with the spike-driven processing flow and hierarchical memory access scheme. Validated with field programmable gate arrays (FPGA) and evaluated in 40 nm CMOS technology for application-specific integrated circuit (ASIC) design, the SNN processor can achieve 98.22% classification accuracy on the MIT-BIH database for 5-category classification, with an energy efficiency of 0.75 μJ/classification.
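Level-crossing (LC) sampling, the front end described above, reduces a continuous signal to single-bit polarity events emitted whenever the signal moves one quantization step away from the last recorded level. A minimal sketch, with an invented stand-in waveform and threshold rather than real ECG data:

```python
import numpy as np

def level_crossing_events(signal, delta=0.2):
    """Emit (sample index, polarity) events each time the signal moves
    one step `delta` away from the last recorded level — the single-bit,
    event-driven representation an SNN can consume directly."""
    events, level = [], signal[0]
    for i, x in enumerate(signal):
        while x - level >= delta:
            level += delta
            events.append((i, +1))
        while level - x >= delta:
            level -= delta
            events.append((i, -1))
    return events

# stand-in for an ECG trace: two cycles of a sine wave
t = np.linspace(0.0, 1.0, 500)
events = level_crossing_events(np.sin(2 * np.pi * 2 * t), delta=0.2)
```

Note that the event count tracks the signal's total variation divided by `delta`, not the uniform sample count, which is the source of the scheme's sparsity for slowly varying segments.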
31
Elliott T. The Impact of Sparse Coding on Memory Lifetimes in Simple and Complex Models of Synaptic Plasticity. Biol Cybern 2022; 116:327-362. [PMID: 35286444 PMCID: PMC9170679 DOI: 10.1007/s00422-022-00923-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 02/07/2022] [Indexed: 06/14/2023]
Abstract
Models of associative memory with discrete state synapses learn new memories by forgetting old ones. In the simplest models, memories are forgotten exponentially quickly. Sparse population coding ameliorates this problem, as do complex models of synaptic plasticity that posit internal synaptic states, giving rise to synaptic metaplasticity. We examine memory lifetimes in both simple and complex models of synaptic plasticity with sparse coding. We consider our own integrative, filter-based model of synaptic plasticity, and examine the cascade and serial synapse models for comparison. We explore memory lifetimes at both the single-neuron and the population level, allowing for spontaneous activity. Memory lifetimes are defined using either a signal-to-noise ratio (SNR) approach or a first passage time (FPT) method, although we use the latter only for simple models at the single-neuron level. All studied models exhibit a decrease in the optimal single-neuron SNR memory lifetime, optimised with respect to sparseness, as the probability of synaptic updates decreases or, equivalently, as synaptic complexity increases. This holds regardless of spontaneous activity levels. In contrast, at the population level, even a low but nonzero level of spontaneous activity is critical in facilitating an increase in optimal SNR memory lifetimes with increasing synaptic complexity, but only in filter and serial models. However, SNR memory lifetimes are valid only in an asymptotic regime in which a mean field approximation is valid. By considering FPT memory lifetimes, we find that this asymptotic regime is not satisfied for very sparse coding, violating the conditions for the optimisation of single-perceptron SNR memory lifetimes with respect to sparseness. Similar violations are also expected for complex models of synaptic plasticity.
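The baseline behavior these models build on, exponentially fast forgetting in simple discrete-state synapses, can be reproduced with a toy simulation: one stored memory's signal-to-noise ratio decays geometrically as later memories overwrite the same binary synapses. The population size and update probability below are arbitrary choices, and this is deliberately not the paper's filter-based or cascade model:

```python
import numpy as np

def snr_trace(n_syn=10_000, p_update=0.5, n_memories=30, seed=0):
    """Track the SNR of one stored memory as later memories overwrite
    a population of binary (+/-1) synapses."""
    rng = np.random.default_rng(seed)
    w = rng.choice([-1, 1], n_syn)
    target = rng.choice([-1, 1], n_syn)       # the tracked memory
    mask = rng.random(n_syn) < p_update       # store it
    w[mask] = target[mask]
    snr = []
    for _ in range(n_memories):
        snr.append((w * target).mean() * np.sqrt(n_syn))
        other = rng.choice([-1, 1], n_syn)    # a later, interfering memory
        mask = rng.random(n_syn) < p_update
        w[mask] = other[mask]
    return np.array(snr)

snr = snr_trace()
```

Each interfering memory multiplies the expected overlap by (1 − p_update), giving the exponential decay that sparse coding and metaplastic internal states are introduced to slow.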
Affiliation(s)
- Terry Elliott
- Department of Electronics and Computer Science, University of Southampton, Highfield, Southampton, SO17 1BJ, UK.
32
Chindemi G, Abdellah M, Amsalem O, Benavides-Piccione R, Delattre V, Doron M, Ecker A, Jaquier AT, King J, Kumbhar P, Monney C, Perin R, Rössert C, Tuncel AM, Van Geit W, DeFelipe J, Graupner M, Segev I, Markram H, Muller EB. A calcium-based plasticity model for predicting long-term potentiation and depression in the neocortex. Nat Commun 2022; 13:3038. [PMID: 35650191 PMCID: PMC9160074 DOI: 10.1038/s41467-022-30214-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Accepted: 04/19/2022] [Indexed: 01/14/2023] Open
Abstract
Pyramidal cells (PCs) form the backbone of the layered structure of the neocortex, and plasticity of their synapses is thought to underlie learning in the brain. However, such long-term synaptic changes have been experimentally characterized between only a few types of PCs, posing a significant barrier for studying neocortical learning mechanisms. Here we introduce a model of synaptic plasticity based on data-constrained postsynaptic calcium dynamics, and show in a neocortical microcircuit model that a single parameter set is sufficient to unify the available experimental findings on long-term potentiation (LTP) and long-term depression (LTD) of PC connections. In particular, we find that the diverse plasticity outcomes across the different PC types can be explained by cell-type-specific synaptic physiology, cell morphology and innervation patterns, without requiring type-specific plasticity. Generalizing the model to in vivo extracellular calcium concentrations, we predict qualitatively different plasticity dynamics from those observed in vitro. This work provides a first comprehensive null model for LTP/LTD between neocortical PC types in vivo, and an open framework for further developing models of cortical synaptic plasticity.
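A calcium-based plasticity rule of the general kind used here can be sketched with two calcium thresholds driving synaptic efficacy up or down; the thresholds and rates below are placeholders for illustration, not the paper's data-constrained parameters:

```python
def calcium_plasticity(ca_trace, theta_d=1.0, theta_p=1.3,
                       gamma_p=0.05, gamma_d=0.03, rho0=0.5):
    """Integrate a two-threshold calcium rule over a calcium time series:
    calcium above theta_p potentiates, calcium between theta_d and
    theta_p depresses, and sub-threshold calcium leaves efficacy alone."""
    rho = rho0
    for ca in ca_trace:
        if ca > theta_p:
            rho += gamma_p * (1.0 - rho)   # drift toward potentiated state
        elif ca > theta_d:
            rho -= gamma_d * rho           # drift toward depressed state
    return rho

rho_ltp = calcium_plasticity([1.5] * 100)    # sustained high calcium
rho_ltd = calcium_plasticity([1.1] * 100)    # sustained moderate calcium
rho_rest = calcium_plasticity([0.5] * 100)   # sub-threshold calcium
```

Because the same rule acts on whatever calcium transients a synapse's physiology and morphology produce, cell-type-specific outcomes can emerge without type-specific plasticity parameters, which is the paper's central claim.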
Affiliation(s)
- Giuseppe Chindemi
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Marwan Abdellah
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Oren Amsalem
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel; Division of Endocrinology, Diabetes and Metabolism, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, 02215, USA
- Ruth Benavides-Piccione
- Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain; Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
- Vincent Delattre
- Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Michael Doron
- Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
- András Ecker
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Aurélien T Jaquier
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- James King
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Pramod Kumbhar
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Caitlin Monney
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Rodrigo Perin
- Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Christian Rössert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Anil M Tuncel
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Werner Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Javier DeFelipe
- Instituto Cajal, Consejo Superior de Investigaciones Científicas, Madrid, Spain; Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Madrid, Spain
- Michael Graupner
- Université de Paris, SPPIN - Saints-Pères Paris Institute for the Neurosciences, CNRS, Paris, France
- Idan Segev
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Laboratory of Neural Microcircuitry, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Eilif B Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Department of Neurosciences, Faculty of Medicine, Université de Montréal, Montréal, QC, Canada; CHU Sainte-Justine Research Center, Montréal, QC, Canada; Quebec Artificial Intelligence Institute (Mila), Montréal, Canada
33
Gonzalez KC, Losonczy A, Negrean A. Dendritic Excitability and Synaptic Plasticity In Vitro and In Vivo. Neuroscience 2022; 489:165-175. [PMID: 34998890 PMCID: PMC9392867 DOI: 10.1016/j.neuroscience.2021.12.039] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 12/29/2021] [Accepted: 12/30/2021] [Indexed: 02/06/2023]
Abstract
Much of our understanding of dendritic and synaptic physiology comes from in vitro experimentation, where the afforded mechanical stability and convenience of applying drugs allowed patch-clamping based recording techniques to investigate ion channel distributions, their gating kinetics, and to uncover dendritic integrative and synaptic plasticity rules. However, with current efforts to study these questions in vivo, there is a great need to translate existing knowledge between in vitro and in vivo experimental conditions. In this review, we identify discrepancies between in vitro and in vivo ionic composition of extracellular media and discuss how changes in ionic composition alter dendritic excitability and plasticity induction. Here, we argue that under physiological in vivo ionic conditions, dendrites are expected to be more excitable and the threshold for synaptic plasticity induction to be lowered. Consequently, the plasticity rules described in vitro vary significantly from those implemented in vivo.
Affiliation(s)
- Kevin C Gonzalez
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA.
- Attila Losonczy
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA; Kavli Institute for Brain Science, New York, NY, USA.
- Adrian Negrean
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA.
34
Yan Y, Chu H, Jin Y, Huan Y, Zou Z, Zheng L. Backpropagation With Sparsity Regularization for Spiking Neural Network Learning. Front Neurosci 2022; 16:760298. [PMID: 35495028 PMCID: PMC9047717 DOI: 10.3389/fnins.2022.760298] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 02/22/2022] [Indexed: 11/15/2022] Open
Abstract
The spiking neural network (SNN) is a possible pathway to low-power, energy-efficient processing and computing that exploits the spike-driven and sparsity features of biological systems. This article proposes a sparsity-driven SNN learning algorithm, namely backpropagation with sparsity regularization (BPSR), aiming to achieve improved spiking and synaptic sparsity. Backpropagation incorporating spiking regularization is utilized to minimize the spiking firing rate with guaranteed accuracy. Backpropagation realizes the temporal information capture and extends to the spiking recurrent layer to support brain-like structure learning. A rewiring mechanism with synaptic regularization is suggested to further mitigate the redundancy of the network structure. Rewiring based on weight and gradient regulates the pruning and growth of synapses. Experimental results demonstrate that the network learned by BPSR has synaptic sparsity and is highly similar to the biological system. It not only balances the accuracy and firing rate, but also facilitates SNN learning by suppressing the information redundancy. We evaluate the proposed BPSR on the visual datasets MNIST, N-MNIST, and CIFAR10, and further test it on the MIT-BIH sensor dataset and a gas-sensor dataset. Results show that our algorithm achieves accuracy comparable or superior to related works, with sparse spikes and synapses.
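The prune-and-grow rewiring idea can be sketched as a single step on a weight vector. For brevity the pruning criterion here is magnitude-only, whereas BPSR also uses gradient information; all sizes and scales are invented for the sketch:

```python
import numpy as np

def rewire(w, prune_frac=0.1, init_scale=0.01, seed=0):
    """One synaptic-rewiring step: zero out the smallest-magnitude fraction
    of existing synapses, then grow the same number of new synapses at
    randomly chosen empty positions, keeping overall sparsity constant."""
    rng = np.random.default_rng(seed)
    orig_shape = w.shape
    w = w.copy().ravel()
    nonzero = np.flatnonzero(w)
    k = max(1, int(prune_frac * nonzero.size))
    weakest = nonzero[np.argsort(np.abs(w[nonzero]))[:k]]
    w[weakest] = 0.0                                      # prune
    grown = rng.choice(np.flatnonzero(w == 0.0), size=k, replace=False)
    w[grown] = rng.normal(0.0, init_scale, size=k)        # grow
    return w.reshape(orig_shape)

w0 = np.zeros(100)
w0[:50] = np.arange(1, 51, dtype=float)   # 50 existing synapses
w1 = rewire(w0)
```

Interleaving such rewiring steps with regularized gradient updates is what lets the learned network stay sparse while the topology adapts.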
Affiliation(s)
- Zhuo Zou
- School of Information Science and Technology, Fudan University, Shanghai, China
- Lirong Zheng
- School of Information Science and Technology, Fudan University, Shanghai, China
35
Optimization of hippocampus sparing during whole brain radiation therapy with simultaneous integrated boost-tutorial and efficacy of complete directional hippocampal blocking. Strahlenther Onkol 2022; 198:537-546. [PMID: 35357511 PMCID: PMC9165264 DOI: 10.1007/s00066-022-01916-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Accepted: 02/20/2022] [Indexed: 11/25/2022]
Abstract
Purpose: Hippocampus-avoidance whole brain radiotherapy with simultaneous integrated boost (HA-WBRT+SIB) is a complex treatment option for patients with multiple brain metastases, aiming to prevent neurocognitive decline and simultaneously increase tumor control. Achieving efficient hippocampal dose reduction in this context can be challenging. The aim of the current study is to present and analyze the efficacy of complete directional hippocampal blocking in reducing the hippocampal dose during HA-WBRT+SIB. Methods: A total of 30 patients with multiple metastases having undergone HA-WBRT+SIB were identified. The prescribed dose was 30 Gy in 12 fractions to the whole brain, with 98% of the hippocampus receiving ≤ 9 Gy and 2% ≤ 17 Gy, and with SIB to metastases/resection cavities of 36–51 Gy in 12 fractions. Alternative treatment plans were calculated using complete directional hippocampal blocking and compared to conventional plans regarding target coverage, homogeneity, conformity, dose to hippocampi and organs at risk. Results: All alternative plans reached prescription doses. Hippocampal blocking enabled more successful sparing of the hippocampus, with a mean dose of 8.79 ± 0.99 Gy compared to 10.07 ± 0.96 Gy in 12 fractions with the conventional method (p < 0.0001). The mean dose to the whole brain (excluding metastases and hippocampal avoidance region) was 30.52 ± 0.80 Gy with conventional planning and 30.28 ± 0.11 Gy with hippocampal blocking (p = 0.11). Target coverage, conformity and homogeneity indices for whole brain and metastases, as well as doses to organs at risk, were similar between planning methods (p > 0.003). Conclusion: Complete directional hippocampal blocking is an efficient method for achieving improved hippocampal sparing during HA-WBRT+SIB.
36
Kudithipudi D, Aguilar-Simon M, Babb J, Bazhenov M, Blackiston D, Bongard J, Brna AP, Chakravarthi Raja S, Cheney N, Clune J, Daram A, Fusi S, Helfer P, Kay L, Ketz N, Kira Z, Kolouri S, Krichmar JL, Kriegman S, Levin M, Madireddy S, Manicka S, Marjaninejad A, McNaughton B, Miikkulainen R, Navratilova Z, Pandit T, Parker A, Pilly PK, Risi S, Sejnowski TJ, Soltoggio A, Soures N, Tolias AS, Urbina-Meléndez D, Valero-Cuevas FJ, van de Ven GM, Vogelstein JT, Wang F, Weiss R, Yanguas-Gil A, Zou X, Siegelmann H. Biological underpinnings for lifelong learning machines. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00452-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
37
Feldhoff F, Toepfer H, Harczos T, Klefenz F. Periodicity Pitch Perception Part III: Sensibility and Pachinko Volatility. Front Neurosci 2022; 16:736642. [PMID: 35356050 PMCID: PMC8959216 DOI: 10.3389/fnins.2022.736642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 02/07/2022] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic computer models are used to explain sensory perceptions. Auditory models generate cochleagrams, which resemble the spike distributions in the auditory nerve. Neuron ensembles along the auditory pathway transform sensory inputs step by step, and at the end pitch is represented in auditory categorical spaces. In two previous articles in the series on periodicity pitch perception, an extended auditory model was successfully used to explain periodicity pitch for various tones generated by musical instruments and for sung vowels. In this third part of the series, the focus is on octopus cells, as they are central sensitivity elements in auditory cognition processes. A powerful numerical model was devised, in which auditory nerve fiber (ANF) spike events are the inputs, triggering the impulse responses of the octopus cells. Efficient algorithms are developed and demonstrated to explain the behavior of octopus cells, with a focus on a simple event-based hardware implementation of a layer of octopus neurons. The main finding is that an octopus cell model in a local receptive field fine-tunes to a specific trajectory by a spike-timing-dependent plasticity (STDP) learning rule with synaptic pre-activation and the dendritic back-propagating signal as the post condition. Successful learning explains away the teacher, and there is thus no need for a temporally precise control of plasticity that distinguishes between learning and retrieval phases. Pitch learning is cascaded: at first, octopus cells respond individually by self-adjustment to specific trajectories in their local receptive fields; then unions of octopus cells are collectively learned for pitch discrimination. Pitch estimation by inter-spike intervals is demonstrated using two input scenarios: a simple sine tone and a sung vowel. The model evaluation indicates an improvement in pitch estimation on a fixed time-scale.
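Pitch estimation from inter-spike intervals, as in the final demonstration above, can be sketched by taking the reciprocal of the modal ISI. The spike train below is a synthetic stand-in phase-locked to an assumed 220 Hz tone, not the model's actual ANF output:

```python
import numpy as np

def pitch_from_isi(spike_times, bins=50):
    """Estimate fundamental frequency as the reciprocal of the most
    common inter-spike interval (the mode of the ISI histogram)."""
    isis = np.diff(np.sort(spike_times))
    hist, edges = np.histogram(isis, bins=bins)
    top = np.argmax(hist)
    return 1.0 / (0.5 * (edges[top] + edges[top + 1]))

# spikes phase-locked to a 220 Hz tone, with mild timing jitter
rng = np.random.default_rng(1)
f0 = 220.0
spikes = np.arange(0.0, 0.5, 1.0 / f0) + rng.normal(0.0, 1e-4, size=110)
estimate = pitch_from_isi(spikes)
```

For periodic sounds with energy at harmonics, the ISI histogram typically also shows peaks at multiples of the fundamental period, which is why interval-based readouts recover the periodicity pitch rather than the spectral envelope.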
Affiliation(s)
- Frank Feldhoff
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
- Hannes Toepfer
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
- Tamas Harczos
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
- Frank Klefenz
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
38
Nanoscale Sub-Compartmentalization of the Dendritic Spine Compartment. Biomolecules 2021; 11:biom11111697. [PMID: 34827695 PMCID: PMC8615865 DOI: 10.3390/biom11111697] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 11/10/2021] [Accepted: 11/11/2021] [Indexed: 01/04/2023] Open
Abstract
Compartmentalization of the membrane is essential for cells to perform highly specific tasks and spatially constrained biochemical functions in topographically defined areas. These membrane lateral heterogeneities range from nanoscopic dimensions, often involving only a few molecular constituents, to micron-sized mesoscopic domains resulting from the coalescence of nanodomains. Short-lived domains lasting for a few milliseconds coexist with more stable platforms lasting from minutes to days. This panoply of lateral domains subserves the great variety of demands of cell physiology, particularly high for those implicated in signaling. The dendritic spine, a subcellular structure of neurons at the receiving (postsynaptic) end of central nervous system excitatory synapses, exploits this compartmentalization principle. In its most frequent adult morphology, the mushroom-shaped spine harbors neurotransmitter receptors, enzymes, and scaffolding proteins tightly packed in a volume of a few femtoliters. In addition to constituting a mesoscopic lateral heterogeneity of the dendritic arborization, the dendritic spine postsynaptic membrane is further compartmentalized into spatially delimited nanodomains that execute separate functions in the synapse. This review discusses the functional relevance of compartmentalization and nanodomain organization in synaptic transmission and plasticity and exemplifies the importance of this parcelization in various neurotransmitter signaling systems operating at dendritic spines, using two fast ligand-gated ionotropic receptors, the nicotinic acetylcholine receptor and the glutamatergic receptor, and a second-messenger G-protein coupled receptor, the cannabinoid receptor, as paradigmatic examples.
39
Role of NMDAR plasticity in a computational model of synaptic memory. Sci Rep 2021; 11:21182. [PMID: 34707139 PMCID: PMC8551337 DOI: 10.1038/s41598-021-00516-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Accepted: 10/12/2021] [Indexed: 11/08/2022] Open
Abstract
A largely unexplored question in neuronal plasticity is whether synapses are capable of encoding and learning the timing of synaptic inputs. We address this question in a computational model of synaptic input time difference learning (SITDL), where N-methyl-d-aspartate receptor (NMDAR) isoform expression in silent synapses is affected by time differences between glutamate and voltage signals. We suggest that differences between NMDARs' glutamate and voltage gate conductances induce modifications of the synapse's NMDAR isoform population, consequently changing the timing of synaptic response. NMDAR expression at individual synapses can encode the precise time difference between signals. Thus, SITDL enables the learning and reconstruction of signals across multiple synapses of a single neuron. In addition to plausibly predicting the roles of NMDARs in synaptic plasticity, SITDL can be usefully applied in artificial neural network models.
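The core idea of SITDL, encoding the time difference between glutamate and voltage signals in a synaptic parameter, can be caricatured with a scalar rule. In the paper this role is played by shifts in the NMDAR isoform population; the single "effective delay" variable, learning rate, and timings below are illustrative stand-ins, not the paper's kinetics.

```python
def sitdl_step(delay, t_glutamate, t_voltage, lr=0.2):
    """Nudge the synapse's effective response delay (ms) toward the
    observed glutamate-voltage time difference. A toy stand-in for
    NMDAR-isoform-based timing plasticity."""
    error = (t_voltage - t_glutamate) - delay
    return delay + lr * error

delay = 0.0
for _ in range(50):  # repeated presentations of the same signal pair
    delay = sitdl_step(delay, t_glutamate=5.0, t_voltage=12.0)
```

Over repeated presentations the delay converges to the 7 ms offset between the two signals, so the synapse has encoded the precise time difference.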
40
Raman DV, O'Leary T. Optimal plasticity for memory maintenance during ongoing synaptic change. eLife 2021; 10:62912. [PMID: 34519270 PMCID: PMC8504970 DOI: 10.7554/elife.62912] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic connections in many brain circuits fluctuate, exhibiting substantial turnover and remodelling over hours to days. Surprisingly, experiments show that most of this flux in connectivity persists in the absence of learning or known plasticity signals. How can neural circuits retain learned information despite a large proportion of ongoing and potentially disruptive synaptic changes? We address this question from first principles by analysing how much compensatory plasticity would be required to optimally counteract ongoing fluctuations, regardless of whether fluctuations are random or systematic. Remarkably, we find that the answer is largely independent of plasticity mechanisms and circuit architectures: compensatory plasticity should be at most equal in magnitude to fluctuations, and often less, in direct agreement with previously unexplained experimental observations. Moreover, our analysis shows that a high proportion of learning-independent synaptic change is consistent with plasticity mechanisms that accurately compute error gradients.
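The claim that compensatory plasticity should be at most equal in magnitude to the fluctuations can be illustrated with a toy scalar simulation. This is a sketch under deliberately simplistic assumptions (one synapse, Gaussian drift), not the paper's derivation: overcompensating (gain above the fluctuation-matched value) amplifies rather than suppresses the steady-state error.

```python
import random

def steady_state_mse(gain, sigma=0.1, steps=20000, seed=0):
    """Toy scalar synapse: each step it is pulled toward its target by
    compensatory plasticity (gain), then perturbed by an ongoing
    fluctuation of scale sigma. Returns mean squared error from target."""
    rng = random.Random(seed)
    w, target, se = 0.0, 0.0, 0.0
    for _ in range(steps):
        w += gain * (target - w)        # compensatory plasticity
        w += rng.gauss(0.0, sigma)      # ongoing synaptic fluctuation
        se += (w - target) ** 2
    return se / steps

mse_matched = steady_state_mse(1.0)  # compensation matched to fluctuations
mse_over = steady_state_mse(1.8)     # overcompensation amplifies the error
```

Analytically, the steady-state variance here is sigma² / (gain · (2 − gain)), minimized at gain 1, which mirrors the result that compensation should not exceed the fluctuation magnitude.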
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Timothy O'Leary
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
41
Primavera BA, Shainline JM. Considerations for Neuromorphic Supercomputing in Semiconducting and Superconducting Optoelectronic Hardware. Front Neurosci 2021; 15:732368. [PMID: 34552465 PMCID: PMC8450355 DOI: 10.3389/fnins.2021.732368] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 08/09/2021] [Indexed: 11/24/2022] Open
Abstract
Any large-scale spiking neuromorphic system striving for complexity at the level of the human brain and beyond will need to be co-optimized for communication and computation. Such reasoning leads to the proposal for optoelectronic neuromorphic platforms that leverage the complementary properties of optics and electronics. Starting from the conjecture that future large-scale neuromorphic systems will utilize integrated photonics and fiber optics for communication in conjunction with analog electronics for computation, we consider two possible paths toward achieving this vision. The first is a semiconductor platform based on analog CMOS circuits and waveguide-integrated photodiodes. The second is a superconducting approach that utilizes Josephson junctions and waveguide-integrated superconducting single-photon detectors. We discuss available devices, assess scaling potential, and provide a list of key metrics and demonstrations for each platform. Both platforms hold potential, but their development will diverge in important respects. Semiconductor systems benefit from a robust fabrication ecosystem and can build on extensive progress made in purely electronic neuromorphic computing but will require III-V light source integration with electronics at an unprecedented scale, further advances in ultra-low capacitance photodiodes, and success from emerging memory technologies. Superconducting systems place near theoretically minimum burdens on light sources (a tremendous boon to one of the most speculative aspects of either platform) and provide new opportunities for integrated, high-endurance synaptic memory. However, superconducting optoelectronic systems will also contend with interfacing low-voltage electronic circuits to semiconductor light sources, the serial biasing of superconducting devices on an unprecedented scale, a less mature fabrication ecosystem, and cryogenic infrastructure.
Affiliation(s)
- Bryce A. Primavera
- National Institute of Standards and Technology, Boulder, CO, United States
- Department of Physics, University of Colorado Boulder, Boulder, CO, United States
42
Wittkuhn L, Chien S, Hall-McMaster S, Schuck NW. Replay in minds and machines. Neurosci Biobehav Rev 2021; 129:367-388. [PMID: 34371078 DOI: 10.1016/j.neubiorev.2021.08.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 07/19/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022]
Abstract
Experience-related brain activity patterns reactivate during sleep, wakeful rest, and brief pauses from active behavior. In parallel, machine learning research has found that experience replay can lead to substantial performance improvements in artificial agents. Together, these lines of research suggest replay has a variety of computational benefits for decision-making and learning. Here, we provide an overview of putative computational functions of replay as suggested by machine learning and neuroscientific research. We show that replay can lead to faster learning, less forgetting, reorganization or augmentation of experiences, and support planning and generalization. In addition, we highlight the benefits of reactivating abstracted internal representations rather than veridical memories, and discuss how replay could provide a mechanism to build internal representations that improve learning and decision-making.
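On the machine-learning side of this comparison, replay is usually implemented with an experience replay buffer: experiences are stored and later resampled for further learning. A minimal, generic sketch (not code from the review; capacity and sampling policy are illustrative choices):

```python
import random

class ReplayBuffer:
    """Minimal experience replay buffer of the kind used in RL agents:
    bounded storage with oldest-first eviction and random resampling."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.storage = []
        self.rng = random.Random(seed)

    def add(self, experience):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)              # evict the oldest experience
        self.storage.append(experience)

    def sample(self, batch_size):
        """Draw a random batch for 'reactivation' during learning."""
        return self.rng.sample(self.storage, min(batch_size, len(self.storage)))

buf = ReplayBuffer(capacity=3)
for x in range(5):      # five experiences, only the last three are retained
    buf.add(x)
batch = buf.sample(2)
```

Resampling stored experiences in arbitrary order is what decorrelates training data and reduces forgetting, the computational benefits the review attributes to replay.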
Affiliation(s)
- Lennart Wittkuhn
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany.
- Samson Chien
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Sam Hall-McMaster
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
- Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Lentzeallee 94, D-14195 Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Lentzeallee 94, D-14195 Berlin, Germany
43
Parajuli LK, Koike M. Three-Dimensional Structure of Dendritic Spines Revealed by Volume Electron Microscopy Techniques. Front Neuroanat 2021; 15:627368. [PMID: 34135737 PMCID: PMC8200415 DOI: 10.3389/fnana.2021.627368] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2020] [Accepted: 05/03/2021] [Indexed: 11/13/2022] Open
Abstract
Electron microscopy (EM)-based synaptology is a fundamental discipline for achieving a complex wiring diagram of the brain. A quantitative understanding of synaptic ultrastructure also serves as a basis to estimate the relative magnitude of synaptic transmission across individual circuits in the brain. Although conventional light microscopic techniques have substantially contributed to our ever-increasing understanding of the morphological characteristics of the putative synaptic junctions, EM is the gold standard for systematic visualization of the synaptic morphology. Furthermore, a complete three-dimensional reconstruction of an individual synaptic profile is required for the precise quantitation of different parameters that shape synaptic transmission. While volumetric imaging of synapses can be routinely obtained from the transmission EM (TEM) imaging of ultrathin sections, it requires an unimaginable amount of effort and time to reconstruct very long segments of dendrites and their spines from the serial section TEM images. The challenges of low throughput EM imaging have been addressed to an appreciable degree by the development of automated EM imaging tools that allow imaging and reconstruction of dendritic segments in a realistic time frame. Here, we review studies that have been instrumental in determining the three-dimensional ultrastructure of synapses. With a particular focus on dendritic spine synapses in the rodent brain, we discuss various key studies that have highlighted the structural diversity of spines, the principles of their organization in the dendrites, their presynaptic wiring patterns, and their activity-dependent structural remodeling.
Affiliation(s)
- Laxmi Kumar Parajuli
- Department of Cell Biology and Neuroscience, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Masato Koike
- Department of Cell Biology and Neuroscience, Juntendo University Graduate School of Medicine, Tokyo, Japan; Advanced Research Institute for Health Science, Juntendo University, Tokyo, Japan
44
Ofer N, Berger DR, Kasthuri N, Lichtman JW, Yuste R. Ultrastructural analysis of dendritic spine necks reveals a continuum of spine morphologies. Dev Neurobiol 2021; 81:746-757. [PMID: 33977655 DOI: 10.1002/dneu.22829] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 05/06/2021] [Accepted: 05/07/2021] [Indexed: 01/20/2023]
Abstract
Dendritic spines are membranous protrusions that receive essentially all excitatory inputs in most mammalian neurons. Spines, with a bulbous head connected to the dendrite by a thin neck, have a variety of morphologies that likely impact their functional properties. Nevertheless, the question of whether spines belong to distinct morphological subtypes is still open. Addressing this quantitatively requires clear identification and measurements of spine necks. Recent advances in electron microscopy enable large-scale systematic reconstructions of spines with nanometer precision in 3D. Analyzing ultrastructural reconstructions from mouse neocortical neurons with computer vision algorithms, we demonstrate that the vast majority of spine structures can be rigorously separated into heads and necks, enabling morphological measurements of spine necks. We then used a database of spine morphological parameters to explore the potential existence of different spine classes. Without exception, our analysis revealed unimodal distributions of individual morphological parameters of spine heads and necks, without evidence for subtypes of spines. The postsynaptic density size was strongly correlated with the spine head volume. The spine neck diameter, but not the neck length, was also correlated with the head volume. Spines with larger head volumes often had a spine apparatus and pairs of spines in a post-synaptic cell contacted by the same axon had similar head volumes. Our data reveal a lack of morphological subtypes of spines and indicate that the spine neck length and head volume must be independently regulated. These results have repercussions for our understanding of the function of dendritic spines in neuronal circuits.
Affiliation(s)
- Netanel Ofer
- Neurotechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA
- Daniel R Berger
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA, USA
- Jeff W Lichtman
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA, USA
- Rafael Yuste
- Neurotechnology Center, Department of Biological Sciences, Columbia University, New York, NY, USA; Donostia International Physics Center, DIPC, San Sebastian, Spain
45
Kasai H, Ziv NE, Okazaki H, Yagishita S, Toyoizumi T. Spine dynamics in the brain, mental disorders and artificial neural networks. Nat Rev Neurosci 2021; 22:407-422. [PMID: 34050339 DOI: 10.1038/s41583-021-00467-3] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/14/2021] [Indexed: 12/15/2022]
Abstract
In the brain, most synapses are formed on minute protrusions known as dendritic spines. Unlike their artificial intelligence counterparts, spines are not merely tuneable memory elements: they also embody algorithms that implement the brain's ability to learn from experience and cope with new challenges. Importantly, they exhibit structural dynamics that depend on activity, excitatory input and inhibitory input (synaptic plasticity or 'extrinsic' dynamics) and dynamics independent of activity ('intrinsic' dynamics), both of which are subject to neuromodulatory influences and reinforcers such as dopamine. Here we succinctly review extrinsic and intrinsic dynamics, compare these with parallels in machine learning where they exist, describe the importance of intrinsic dynamics for memory management and adaptation, and speculate on how disruption of extrinsic and intrinsic dynamics may give rise to mental disorders. Throughout, we also highlight algorithmic features of spine dynamics that may be relevant to future artificial intelligence developments.
Affiliation(s)
- Haruo Kasai
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Noam E Ziv
- Technion Faculty of Medicine and Network Biology Research Labs, Technion City, Haifa, Israel
- Hitoshi Okazaki
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Sho Yagishita
- Laboratory of Structural Physiology, Center for Disease Biology and Integrative Medicine, Faculty of Medicine, The University of Tokyo, Tokyo, Japan; International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
46
Moldwin T, Kalmenson M, Segev I. The gradient clusteron: A model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent. PLoS Comput Biol 2021; 17:e1009015. [PMID: 34029309 PMCID: PMC8177649 DOI: 10.1371/journal.pcbi.1009015] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 06/04/2021] [Accepted: 04/28/2021] [Indexed: 02/01/2023] Open
Abstract
Synaptic clustering on neuronal dendrites has been hypothesized to play an important role in implementing pattern recognition. Neighboring synapses on a dendritic branch can interact in a synergistic, cooperative manner via nonlinear voltage-dependent mechanisms, such as NMDA receptors. Inspired by the NMDA receptor, the single-branch clusteron learning algorithm takes advantage of location-dependent multiplicative nonlinearities to solve classification tasks by randomly shuffling the locations of "under-performing" synapses on a model dendrite during learning ("structural plasticity"), eventually resulting in synapses with correlated activity being placed next to each other on the dendrite. We propose an alternative model, the gradient clusteron, or G-clusteron, which uses an analytically-derived gradient descent rule where synapses are "attracted to" or "repelled from" each other in an input- and location-dependent manner. We demonstrate the classification ability of this algorithm by testing it on the MNIST handwritten digit dataset and show that, when using a softmax activation function, the accuracy of the G-clusteron on the all-versus-all MNIST task (~85%) approaches that of logistic regression (~93%). In addition to the location update rule, we also derive a learning rule for the synaptic weights of the G-clusteron ("functional plasticity") and show that a G-clusteron that utilizes the weight update rule can achieve ~89% accuracy on the MNIST task. We also show that a G-clusteron with both the weight and location update rules can learn to solve the XOR problem from arbitrary initial conditions.
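The location-plasticity idea in this abstract, synapses being "attracted to" or "repelled from" each other in an input- and location-dependent way, can be caricatured in a few lines. This sketch is loosely inspired by the G-clusteron rather than a reimplementation of it: the Gaussian proximity kernel, learning rate, and inputs are illustrative assumptions.

```python
import math

def output(x, locs):
    """Sum of multiplicative interactions between synapse pairs,
    weighted by a Gaussian proximity kernel along a 1-D dendrite."""
    total = 0.0
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                total += x[i] * x[j] * math.exp(-(locs[i] - locs[j]) ** 2)
    return total

def location_step(x, locs, lr=0.05):
    """One gradient-ascent step on the output w.r.t. synapse locations:
    co-active synapses are attracted toward one another."""
    new_locs = []
    for i in range(len(locs)):
        g = 0.0
        for j in range(len(locs)):
            if i != j:
                d = locs[i] - locs[j]
                # derivative of both the (i,j) and (j,i) kernel terms
                g += -4.0 * d * x[i] * x[j] * math.exp(-d * d)
        new_locs.append(locs[i] + lr * g)
    return new_locs

x = [1.0, 1.0, 0.0]       # synapses 0 and 1 receive correlated (co-active) input
locs = [0.0, 1.5, 3.0]    # initial positions on the dendrite
for _ in range(200):
    locs = location_step(x, locs)
```

After training, the two co-active synapses sit next to each other while the silent synapse has not moved, so the dendrite's multiplicative output for this input pattern has increased, the clustering effect the model exploits for classification.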
Affiliation(s)
- Toviah Moldwin
- Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
- Menachem Kalmenson
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel
- Idan Segev
- Edmond and Lily Safra Center for Brain Sciences, the Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Neurobiology, the Hebrew University of Jerusalem, Jerusalem, Israel
47
Cellular connectomes as arbiters of local circuit models in the cerebral cortex. Nat Commun 2021; 12:2785. [PMID: 33986261 PMCID: PMC8119988 DOI: 10.1038/s41467-021-22856-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2017] [Accepted: 03/28/2021] [Indexed: 02/03/2023] Open
Abstract
With the availability of cellular-resolution connectivity maps (connectomes) from the mammalian nervous system, it is an open question how informative such massive connectomic data can be for distinguishing local circuit models in the mammalian cerebral cortex. Here, we investigated whether cellular-resolution connectomic data can in principle allow model discrimination for local circuit modules in layer 4 of mouse primary somatosensory cortex. We used approximate Bayesian model selection based on a set of simple connectome statistics to compute the posterior probability over proposed models given a to-be-measured connectome. We find that the distinction of the investigated local cortical models is faithfully possible based on purely structural connectomic data with an accuracy of more than 90%, and that such distinction is stable against substantial errors in the connectome measurement. Furthermore, mapping a fraction of only 10% of the local connectome is sufficient for connectome-based model distinction under realistic experimental constraints. Together, these results show for a concrete local circuit example that connectomic data allows model selection in the cerebral cortex and define the experimental strategy for obtaining such connectomic data.
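The approximate Bayesian model selection scheme described here, accept simulated connectomes whose summary statistics land near the measured one and normalize acceptance counts per model, can be sketched on a toy statistic. The models, connection probabilities, tolerance, and the single edge-count statistic below are invented for illustration; the paper uses a richer set of connectome statistics.

```python
import random

def simulate_edge_count(p_conn, rng, n_pairs=1000):
    """Toy connectome summary statistic: the number of connected neuron
    pairs out of n_pairs, given connection probability p_conn."""
    return sum(1 for _ in range(n_pairs) if rng.random() < p_conn)

def abc_model_posterior(observed, models, n_sims=500, tol=15, seed=1):
    """Approximate Bayesian model selection: simulate the statistic under
    each candidate model, accept simulations within `tol` of the observed
    value, and normalize acceptance counts (flat prior over models)."""
    rng = random.Random(seed)
    accepted = {name: 0 for name in models}
    for name, p in models.items():
        for _ in range(n_sims):
            if abs(simulate_edge_count(p, rng) - observed) <= tol:
                accepted[name] += 1
    total = sum(accepted.values()) or 1
    return {name: k / total for name, k in accepted.items()}

models = {"sparse": 0.1, "dense": 0.3}                  # two candidate circuit models
observed = simulate_edge_count(0.1, random.Random(42))  # 'measured' connectome (truth: sparse)
posterior = abc_model_posterior(observed, models)
```

The posterior mass concentrates on the model that generated the observation, which is the sense in which a measured connectome can act as an arbiter between circuit models.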
48
Herde MK, Bohmbach K, Domingos C, Vana N, Komorowska-Müller JA, Passlick S, Schwarz I, Jackson CJ, Dietrich D, Schwarz MK, Henneberger C. Local Efficacy of Glutamate Uptake Decreases with Synapse Size. Cell Rep 2021; 32:108182. [PMID: 32966786 DOI: 10.1016/j.celrep.2020.108182] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2017] [Revised: 08/12/2020] [Accepted: 09/01/2020] [Indexed: 01/16/2023] Open
Abstract
Synaptically released glutamate is largely cleared by glutamate transporters localized on perisynaptic astrocyte processes. Therefore, the substantial variability of astrocyte coverage of individual hippocampal synapses implies that the efficacy of local glutamate uptake and thus the spatial fidelity of synaptic transmission is synapse dependent. By visualization of sub-diffraction-limit perisynaptic astrocytic processes and adjacent postsynaptic spines, we show that, relative to their size, small spines display a stronger coverage by astroglial transporters than bigger neighboring spines. Similarly, glutamate transients evoked by synaptic stimulation are more sensitive to pharmacological inhibition of glutamate uptake at smaller spines, whose high-affinity N-methyl-D-aspartate receptors (NMDARs) are better shielded from remotely released glutamate. At small spines, glutamate-induced and NMDAR-dependent Ca2+ entry is also more strongly increased by uptake inhibition. These findings indicate that spine size inversely correlates with the efficacy of local glutamate uptake and thereby likely determines the probability of synaptic crosstalk.
Affiliation(s)
- Michel K Herde
- Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany
- Kirsten Bohmbach
- Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany
- Cátia Domingos
- Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany
- Natascha Vana
- Department for Neurosurgery, University Hospital Bonn, Bonn, Germany
- Stefan Passlick
- Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany
- Inna Schwarz
- Institute of Epileptology, Medical Faculty, University of Bonn, Bonn, Germany
- Colin J Jackson
- Research School of Chemistry, Australian National University, Canberra, Australia
- Dirk Dietrich
- Department for Neurosurgery, University Hospital Bonn, Bonn, Germany
- Martin K Schwarz
- Institute of Epileptology, Medical Faculty, University of Bonn, Bonn, Germany
- Christian Henneberger
- Institute of Cellular Neurosciences, Medical Faculty, University of Bonn, Bonn, Germany; Institute of Neurology, University College London, London, UK; German Centre for Neurodegenerative Diseases (DZNE), Bonn, Germany.
49
Cox KJA, Adams PR. A minimal model of the interaction of social and individual learning. J Theor Biol 2021; 527:110712. [PMID: 33933477 DOI: 10.1016/j.jtbi.2021.110712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 03/23/2021] [Accepted: 04/05/2021] [Indexed: 11/24/2022]
Abstract
Learning is thought to be achieved by the selective, activity-dependent adjustment of synaptic connections. Individual learning can also be very hard and/or slow. Social, supervised learning from others might amplify individual, largely unsupervised, learning, and might underlie the development and evolution of culture. We studied a minimal neural network model of the interaction of individual, unsupervised learning and social, supervised learning by communicating "agents". Individual agents attempted to learn to track a hidden fluctuating "source" which, linearly mixed with other masking fluctuations, generated observable input vectors. In this model data are generated linearly, facilitating mathematical analysis. Learning was driven either solely by direct observation of input data (unsupervised, Hebbian) or, in addition, by observation of another agent's output (supervised, Delta rule). To make learning more difficult, and to enhance biological realism, the learning rules were made slightly connection-unspecific, so that incorrect individual learning sometimes occurred. We found that social interaction can foster both correct and incorrect learning. Useful social learning therefore presumably involves additional factors, some of which we outline.
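The two learning rules named in this abstract, an unsupervised Hebbian rule driven only by inputs and a supervised Delta rule driven by another agent's output, can be sketched on linearly generated data. This is a generic illustration, not the paper's model: the fixed teacher vector stands in for the other agent, and the connection-unspecific noise of the paper is omitted.

```python
import random

def delta_step(w, x, y_teacher, lr=0.05):
    """Supervised (Delta-rule) update toward another agent's output."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (y_teacher - y) * xi for wi, xi in zip(w, x)]

def hebb_step(w, x, lr=0.01):
    """Unsupervised Hebbian update with Oja-style normalization,
    driven only by the observed input."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

# A 'social' learner imitates a teacher on linearly generated inputs.
rng = random.Random(0)
teacher = [1.0, 0.0]   # stands in for the other agent's learned mapping
w = [0.0, 0.0]
for _ in range(500):
    x = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
    y_t = sum(ti * xi for ti, xi in zip(teacher, x))
    w = delta_step(w, x, y_t)
```

The Delta-rule learner converges to the teacher's weights; in the paper's setting the interesting behavior arises when this supervised channel interacts with the Hebbian one and both rules are slightly imprecise.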
Affiliation(s)
- Kingsley J A Cox
- Department of Neurobiology, Stony Brook University, Stony Brook, NY 11794, USA.
- Paul R Adams
- Department of Neurobiology, Stony Brook University, Stony Brook, NY 11794, USA.
50
Lim D, Semyanov A, Genazzani A, Verkhratsky A. Calcium signaling in neuroglia. INTERNATIONAL REVIEW OF CELL AND MOLECULAR BIOLOGY 2021; 362:1-53. [PMID: 34253292 DOI: 10.1016/bs.ircmb.2021.01.003] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Glial cells exploit calcium (Ca2+) signals to perceive the information about the activity of the nervous tissue and the tissue environment to translate this information into an array of homeostatic, signaling and defensive reactions. Astrocytes, the best studied glial cells, use several Ca2+ signaling generation pathways that include Ca2+ entry through plasma membrane, release from endoplasmic reticulum (ER) and from mitochondria. Activation of metabotropic receptors on the plasma membrane of glial cells is coupled to an enzymatic cascade in which a second messenger, InsP3 is generated thus activating intracellular Ca2+ release channels in the ER endomembrane. Astrocytes also possess store-operated Ca2+ entry and express several ligand-gated Ca2+ channels. In vivo astrocytes generate heterogeneous Ca2+ signals, which are short and frequent in distal processes, but large and relatively rare in soma. In response to neuronal activity intracellular and inter-cellular astrocytic Ca2+ waves can be produced. Astrocytic Ca2+ signals are involved in secretion, they regulate ion transport across cell membranes, and are contributing to cell morphological plasticity. Therefore, astrocytic Ca2+ signals are linked to fundamental functions of the central nervous system ranging from synaptic transmission to behavior. In oligodendrocytes, Ca2+ signals are generated by plasmalemmal Ca2+ influx, or by release from intracellular stores, or by combination of both. Microglial cells exploit Ca2+ permeable ionotropic purinergic receptors and transient receptor potential channels as well as ER Ca2+ release. In this contribution, basic morphology of glial cells, glial Ca2+ signaling toolkit, intracellular Ca2+ signals and Ca2+-regulated functions are discussed with focus on astrocytes.
Affiliation(s)
- Dmitry Lim
- Department of Pharmaceutical Sciences, Università del Piemonte Orientale, Novara, Italy.
- Alexey Semyanov
- Shemyakin-Ovchinnikov Institute of Bioorganic Chemistry, Russian Academy of Sciences, Moscow, Russia; Faculty of Biology, Moscow State University, Moscow, Russia; Sechenov First Moscow State Medical University, Moscow, Russia
- Armando Genazzani
- Department of Pharmaceutical Sciences, Università del Piemonte Orientale, Novara, Italy
- Alexei Verkhratsky
- Sechenov First Moscow State Medical University, Moscow, Russia; Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Achucarro Centre for Neuroscience, IKERBASQUE, Basque Foundation for Science, Bilbao, Spain.