1. Song Y, Benna MK. Parallel Synapses with Transmission Nonlinearities Enhance Neuronal Classification Capacity. bioRxiv 2024:2024.07.01.601490. PMID: 39005326; PMCID: PMC11244940; DOI: 10.1101/2024.07.01.601490.
Abstract
Cortical neurons often establish multiple synaptic contacts with the same postsynaptic neuron. To avoid functional redundancy of these parallel synapses, it is crucial that each synapse exhibits distinct computational properties. Here we model the current to the soma contributed by each synapse as a sigmoidal transmission function of its presynaptic input, with learnable parameters such as amplitude, slope, and threshold. We evaluate the classification capacity of a neuron equipped with such nonlinear parallel synapses, and show that with a small number of parallel synapses per axon, it substantially exceeds that of the Perceptron. Furthermore, the number of correctly classified data points can increase superlinearly as the number of presynaptic axons grows. When training with an unrestricted number of parallel synapses, our model neuron can effectively implement an arbitrary aggregate transmission function for each axon, constrained only by monotonicity. Nevertheless, successful learning in the model neuron often requires only a small number of parallel synapses. We also apply these parallel synapses in a feedforward neural network trained to classify MNIST images, and show that they can increase the test accuracy. This demonstrates that multiple nonlinear synapses per input axon can substantially enhance a neuron's computational power.
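As a toy illustration of the model described in this abstract, each axon's aggregate transmission can be written as a sum of a few sigmoids, one per parallel synapse, each with its own amplitude, slope, and threshold. The parameter values below are arbitrary illustrations, not taken from the paper; the sketch only shows that a sum of sigmoids with non-negative amplitudes and slopes yields a monotone aggregate transmission function.

```python
import numpy as np

def synapse_current(x, amplitude, slope, threshold):
    """Current contributed by one synapse: a sigmoid of the presynaptic input."""
    return amplitude / (1.0 + np.exp(-slope * (x - threshold)))

def axon_transmission(x, params):
    """Aggregate transmission of one axon: sum over its parallel synapses.

    With non-negative amplitudes and slopes, the sum of sigmoids is
    monotonically increasing in the presynaptic input x.
    """
    return sum(synapse_current(x, a, s, t) for (a, s, t) in params)

# Three parallel synapses on one axon (illustrative values only).
params = [(1.0, 4.0, 0.2), (0.5, 8.0, 0.5), (0.8, 2.0, 0.8)]
x = np.linspace(0.0, 1.0, 101)
y = axon_transmission(x, params)
assert np.all(np.diff(y) > 0)  # monotone aggregate transmission
```

With more parallel synapses per axon, such a sum can approximate an arbitrary monotone transmission function, which is the constraint the abstract mentions.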
Affiliation(s)
- Yuru Song
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
- Marcus K. Benna
- Department of Neurobiology, School of Biological Sciences, University of California San Diego, La Jolla, CA 92093, USA
2. Hall S. Is the Papez circuit the location of the elusive episodic memory engram? IBRO Neurosci Rep 2024; 16:249-259. PMID: 38370006; PMCID: PMC10869290; DOI: 10.1016/j.ibneur.2024.01.016.
Abstract
All of the brain structures and white matter that make up Papez' circuit, as well as the circuit as a whole, are implicated in the literature in episodic memory formation and recall. This paper shows that Papez' circuit has the detailed structure and connectivity that is evidently required to support the episodic memory engram, and that identifying Papez' circuit as the location of the engram answers a number of long-standing questions regarding the role of medial temporal lobe structures in episodic memory. The paper then shows that the process by which the episodic memory engram may be formed is a network-wide Hebbian potentiation termed "racetrack potentiation", whose frequency corresponds to that observed in vivo in humans for memory functions. Further, by considering the microcircuits observed in the medial temporal lobe structures forming Papez' circuit, the paper establishes the neural mechanisms behind the required functions of sensory information storage and recall, pattern completion, pattern separation, and memory consolidation. The paper shows that Papez' circuit has the necessary connectivity to gather the various elements of an episodic memory occurring within Pöppel's experienced time or "quantum of experience". Finally, the paper shows how the memory engram located in Papez' circuit might be central to the formation of a duplicate engram in the cortex enabling consolidation and long-term storage of episodic memories.
Affiliation(s)
- Steven Hall
- Department of Psychology, University of Bolton, Deane Road, Bolton BL3 5AB, UK
3. Granato A, Phillips WA, Schulz JM, Suzuki M, Larkum ME. Dysfunctions of cellular context-sensitivity in neurodevelopmental learning disabilities. Neurosci Biobehav Rev 2024; 161:105688. PMID: 38670298; DOI: 10.1016/j.neubiorev.2024.105688.
Abstract
Pyramidal neurons have a pivotal role in the cognitive capabilities of the neocortex. Though they have been predominantly modeled as integrate-and-fire point processors, many of them have another point of input integration in their apical dendrites that is central to mechanisms endowing them with sensitivity to context, which underlies basic cognitive capabilities. Here we review evidence implicating impairments of those mechanisms in three major neurodevelopmental disabilities: fragile X syndrome, Down syndrome, and fetal alcohol spectrum disorders. Multiple dysfunctions of the mechanisms by which pyramidal cells are sensitive to context are implicated in all three syndromes. Further deciphering of these cellular mechanisms could lead to an understanding of, and therapies for, learning disabilities beyond any currently available.
Affiliation(s)
- Alberto Granato
- Dept. of Veterinary Sciences, University of Turin, Grugliasco, Turin 10095, Italy
- William A Phillips
- Psychology, Faculty of Natural Sciences, University of Stirling, Scotland FK9 4LA, UK
- Jan M Schulz
- Roche Pharma Research & Early Development, Neuroscience & Rare Diseases Discovery, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, Grenzacherstrasse 124, Basel 4070, Switzerland
- Mototaka Suzuki
- Dept. of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam 1098 XH, the Netherlands
- Matthew E Larkum
- Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin 10117, Germany; Institute of Biology, Humboldt University Berlin, Berlin, Germany
4. Choucry A, Nomoto M, Inokuchi K. Engram mechanisms of memory linking and identity. Nat Rev Neurosci 2024; 25:375-392. PMID: 38664582; DOI: 10.1038/s41583-024-00814-0.
Abstract
Memories are thought to be stored in neuronal ensembles referred to as engrams. Studies have suggested that when two memories occur in quick succession, a proportion of their engrams overlap and the memories become linked (in a process known as prospective linking) while maintaining their individual identities. In this Review, we summarize the key principles of memory linking through engram overlap, as revealed by experimental and modelling studies. We describe evidence of the involvement of synaptic memory substrates, spine clustering and non-linear neuronal capacities in prospective linking, and suggest a dynamic somato-synaptic model, in which memories are shared between neurons yet remain separable through distinct dendritic and synaptic allocation patterns. We also bring into focus retrospective linking, in which memories become associated after encoding via offline reactivation, and discuss key temporal and mechanistic differences between prospective and retrospective linking, as well as the potential differences in their cognitive outcomes.
Affiliation(s)
- Ali Choucry
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- Department of Pharmacology and Toxicology, Faculty of Pharmacy, Cairo University, Cairo, Egypt
- Masanori Nomoto
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
- Japan Agency for Medical Research and Development (AMED), Tokyo, Japan
- Kaoru Inokuchi
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
5. Benavides-Piccione R, Blazquez-Llorca L, Kastanauskaite A, Fernaud-Espinosa I, Tapia-González S, DeFelipe J. Key morphological features of human pyramidal neurons. Cereb Cortex 2024; 34:bhae180. PMID: 38745556; PMCID: PMC11094408; DOI: 10.1093/cercor/bhae180.
Abstract
The basic building block of the cerebral cortex, the pyramidal cell, has been shown to be characterized by a markedly different dendritic structure among layers, cortical areas, and species. Functionally, differences in the structure of their dendrites and axons are critical in determining how neurons integrate information. However, within the human cortex, these neurons have not been quantified in detail. In the present work, we performed intracellular injections of Lucifer Yellow and 3D reconstructed over 200 pyramidal neurons, including apical and basal dendritic and local axonal arbors and dendritic spines, from human occipital primary visual area and associative temporal cortex. We found that human pyramidal neurons from temporal cortex were larger, displayed more complex apical and basal structural organization, and had more spines compared to those in primary sensory cortex. Moreover, these human neocortical neurons displayed specific shared and distinct characteristics in comparison to previously published human hippocampal pyramidal neurons. Additionally, we identified distinct morphological features in human neurons that set them apart from mouse neurons. Lastly, we observed certain consistent organizational patterns shared across species. This study emphasizes the existing diversity within pyramidal cell structures across different cortical areas and species, suggesting substantial species-specific variations in their computational properties.
Affiliation(s)
- Ruth Benavides-Piccione
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain
- Lidia Blazquez-Llorca
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain
- Departamento de Tecnología Fotónica y Bioingeniería, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040, Spain
- Asta Kastanauskaite
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Isabel Fernaud-Espinosa
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Silvia Tapia-González
- Laboratorio de Neurofisiología Celular, Facultad de Medicina, Universidad San Pablo-CEU, CEU Universities, Madrid, Spain
- Javier DeFelipe
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain
6. Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. PMID: 38509348; PMCID: PMC11089004; DOI: 10.1038/s41593-024-01597-4.
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
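The gating idea summarized in this abstract, inhibition controlling whether excitatory synapses are allowed to change, can be caricatured with a minimal toy rule. This is a hedged sketch of the concept only; the rule, parameter names, and values below are invented for illustration and are not the co-dependent plasticity model of the paper.

```python
def codependent_update(w_ee, pre, post, inh_current, eta=0.1, gate_threshold=0.5):
    """Toy Hebbian rule in which local inhibition gates excitatory plasticity.

    The excitatory weight w_ee is potentiated by pre/post coincidence only
    when the local inhibitory current is below a gating threshold.
    (Illustrative caricature; the paper's co-dependent rules are richer.)
    """
    gate = 1.0 if inh_current < gate_threshold else 0.0
    return w_ee + eta * gate * pre * post

w = 0.2
# Strong local inhibition: the gate is closed and the weight is unchanged.
w_gated = codependent_update(w, pre=1.0, post=1.0, inh_current=0.9)
# Weak local inhibition: coincident activity potentiates the synapse.
w_potentiated = codependent_update(w, pre=1.0, post=1.0, inh_current=0.1)
assert w_gated == w and w_potentiated > w
```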
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Biozentrum, University of Basel, Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria
7. Moreno-Sanchez A, Vasserman AN, Jang H, Hina BW, von Reyn CR, Ausborn J. Morphology and synapse topography optimize linear encoding of synapse numbers in Drosophila looming responsive descending neurons. bioRxiv 2024:2024.04.24.591016. PMID: 38712267; PMCID: PMC11071487; DOI: 10.1101/2024.04.24.591016.
Abstract
Synapses are often precisely organized on dendritic arbors, yet the role of synaptic topography in dendritic integration remains poorly understood. Utilizing electron microscopy (EM) connectomics, we investigate synaptic topography in Drosophila melanogaster looming circuits, focusing on retinotopically tuned visual projection neurons (VPNs) that synapse onto descending neurons (DNs). Synapses of a given VPN type project to non-overlapping regions on DN dendrites. Within these spatially constrained clusters, synapses are not retinotopically organized, but instead adopt near random distributions. To investigate how this organization strategy impacts DN integration, we developed multicompartment models of DNs fitted to experimental data, using precise EM morphologies and synapse locations. We find that DN dendrite morphologies normalize EPSP amplitudes of individual synaptic inputs and that near random distributions of synapses ensure linear encoding of synapse numbers from individual VPNs. These findings illuminate how synaptic topography influences dendritic integration and suggest that linear encoding of synapse numbers may be a default strategy established through connectivity and passive neuron properties, which active properties and plasticity can then tune as needed.
Affiliation(s)
- Anthony Moreno-Sanchez
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- Alexander N. Vasserman
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- HyoJong Jang
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Bryce W. Hina
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Catherine R. von Reyn
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Jessica Ausborn
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
8. El Srouji L, Abdelghany M, Ambethkar HR, Lee YJ, Berkay On M, Yoo SJB. Perspective: an optoelectronic future for heterogeneous, dendritic computing. Front Neurosci 2024; 18:1394271. PMID: 38699677; PMCID: PMC11064649; DOI: 10.3389/fnins.2024.1394271.
Abstract
With the increasing number of applications reliant on large neural network models, the pursuit of more suitable computing architectures is becoming increasingly relevant. Progress toward co-integrated silicon photonic and CMOS circuits provides new opportunities for computing architectures with high bandwidth optical networks and high-speed computing. In this paper, we discuss trends in neuromorphic computing architecture and outline an optoelectronic future for heterogeneous, dendritic neuromorphic computing.
Affiliation(s)
- S. J. Ben Yoo
- Department of Electrical and Computer Engineering, University of California, Davis, Davis, CA, United States
9. Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang 2024; 5:225-247. PMID: 38645618; PMCID: PMC11025648; DOI: 10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
10. Groden M, Moessinger HM, Schaffran B, DeFelipe J, Benavides-Piccione R, Cuntz H, Jedlicka P. A biologically inspired repair mechanism for neuronal reconstructions with a focus on human dendrites. PLoS Comput Biol 2024; 20:e1011267. PMID: 38394339; PMCID: PMC10917450; DOI: 10.1371/journal.pcbi.1011267.
Abstract
Investigating and modelling the functionality of human neurons remains challenging due to technical limitations, which result in scarce and incomplete 3D anatomical reconstructions. Here we used a morphological modelling approach based on optimal wiring to repair the parts of a dendritic morphology that were lost due to incomplete tissue samples. In Drosophila, where dendritic regrowth has been studied experimentally using laser ablation, we found that modelling the regrowth reproduced a bimodal distribution between regeneration of cut branches and invasion by neighbouring branches. Interestingly, our repair model followed growth rules similar to those for the generation of a new dendritic tree. To generalise the repair algorithm from Drosophila to mammalian neurons, we artificially sectioned reconstructed dendrites from mouse and human hippocampal pyramidal cell morphologies, and showed that the regrown dendrites were morphologically similar to the original ones. Furthermore, we were able to restore their electrophysiological functionality, as evidenced by the recovery of their firing behaviour. Importantly, we show that such repairs also apply to other neuron types, including hippocampal granule cells and cerebellar Purkinje cells. We then extrapolated the repair to incomplete human CA1 pyramidal neurons, where the anatomical boundaries of the particular brain areas innervated by the neurons in question were known. Interestingly, the repair of incomplete human dendrites helped to simulate the recently observed increased synaptic thresholds for dendritic NMDA spikes in human versus mouse dendrites. To make the repair tool available to the neuroscience community, we have developed an intuitive and simple graphical user interface (GUI), which is available in the TREES toolbox (www.treestoolbox.org).
Affiliation(s)
- Moritz Groden
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Hannah M. Moessinger
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Barbara Schaffran
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Javier DeFelipe
- Laboratorio Cajal de Circuitos Corticales (CTB), Universidad Politécnica de Madrid, Spain
- Instituto Cajal (CSIC), Madrid, Spain
- Ruth Benavides-Piccione
- Laboratorio Cajal de Circuitos Corticales (CTB), Universidad Politécnica de Madrid, Spain
- Instituto Cajal (CSIC), Madrid, Spain
- Hermann Cuntz
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Peter Jedlicka
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University, Frankfurt am Main, Germany
11. Hwang GM, Simonian AL. Special Issue-Biosensors and Neuroscience: Is Biosensors Engineering Ready to Embrace Design Principles from Neuroscience? Biosensors 2024; 14:68. PMID: 38391987; PMCID: PMC10886788; DOI: 10.3390/bios14020068.
Abstract
In partnership with the Air Force Office of Scientific Research (AFOSR), the National Science Foundation's (NSF) Emerging Frontiers and Multidisciplinary Activities (EFMA) office of the Directorate for Engineering (ENG) launched an Emerging Frontiers in Research and Innovation (EFRI) topic for the fiscal years FY22 and FY23 entitled "Brain-inspired Dynamics for Engineering Energy-Efficient Circuits and Artificial Intelligence" (BRAID) [...].
Affiliation(s)
- Grace M. Hwang
- Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723, USA
12. Huang S, Wu SJ, Sansone G, Ibrahim LA, Fishell G. Layer 1 neocortex: Gating and integrating multidimensional signals. Neuron 2024; 112:184-200. PMID: 37913772; PMCID: PMC11180419; DOI: 10.1016/j.neuron.2023.09.041.
Abstract
Layer 1 (L1) of the neocortex acts as a nexus for the collection and processing of widespread information. By integrating ascending inputs with extensive top-down activity, this layer likely provides critical information regulating how the perception of sensory inputs is reconciled with expectation. This is accomplished by sorting, directing, and integrating the complex network of excitatory inputs that converge onto L1. These signals are combined with neuromodulatory afferents and gated by the wealth of inhibitory interneurons that either are embedded within L1 or send axons from other cortical layers. Together, these interactions dynamically calibrate information flow throughout the neocortex. This review will primarily focus on L1 within the primary sensory cortex and will use these insights to understand L1 in other cortical areas.
Affiliation(s)
- Shuhan Huang
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Program in Neuroscience, Harvard University, Cambridge, MA 02138, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Sherry Jingjing Wu
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Giulia Sansone
- Biological and Environmental Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia
- Leena Ali Ibrahim
- Biological and Environmental Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia
- Gord Fishell
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
13. Zheng H, Zheng Z, Hu R, Xiao B, Wu Y, Yu F, Liu X, Li G, Deng L. Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics. Nat Commun 2024; 15:277. PMID: 38177124; PMCID: PMC10766638; DOI: 10.1038/s41467-023-44614-z.
Abstract
Brain-inspired spiking neural networks are widely believed to be capable of processing temporal information owing to their dynamic attributes. However, which mechanisms contribute to this learning ability, and how the rich dynamic properties of spiking neural networks can be exploited to satisfactorily solve complex temporal computing tasks in practice, remain to be explored. In this article, we identify the importance of capturing multi-timescale components, based on which we propose a multi-compartment spiking neural model with temporal dendritic heterogeneity. The model enables multi-timescale dynamics by automatically learning heterogeneous timing factors on different dendritic branches. Extensive experiments yield two main results. First, the working mechanism of the proposed model is revealed via an elaborated temporal spiking XOR problem that analyzes temporal feature integration at different levels. Second, the model delivers comprehensive performance benefits over ordinary spiking neural networks on several temporal computing benchmarks for speech recognition, visual recognition, electroencephalogram signal recognition, and robot place recognition, showing the best-reported accuracy and model compactness, promising robustness and generalization, and high execution efficiency on neuromorphic hardware. This work moves neuromorphic computing a significant step toward real-world applications by appropriately exploiting biological observations.
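The key ingredient named in this abstract, heterogeneous timing factors on different dendritic branches, can be sketched as a bank of leaky integrators with branch-specific decay constants. This is a simplified, non-spiking sketch with fixed time constants chosen for illustration; in the paper the timing factors are learned within a full spiking network.

```python
import numpy as np

def run_branch_filters(inputs, taus, dt=1.0):
    """Leaky integration of a shared input on dendritic branches with
    heterogeneous (here fixed; in the paper, learned) time constants."""
    n_steps = len(inputs)
    states = np.zeros((n_steps, len(taus)))
    v = np.zeros(len(taus))
    decays = np.exp(-dt / np.asarray(taus))
    for t, x in enumerate(inputs):
        v = decays * v + x  # each branch decays at its own rate
        states[t] = v
    return states

# An impulse at t=0, filtered by branches with short and long timescales.
inputs = np.zeros(50)
inputs[0] = 1.0
states = run_branch_filters(inputs, taus=[2.0, 10.0, 50.0])
# Slower branches retain the input longer, giving the neuron simultaneous
# access to short- and long-timescale traces of its input history.
assert states[-1, 0] < states[-1, 1] < states[-1, 2]
```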
Collapse
Affiliation(s)
- Hanle Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Zhong Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Rui Hu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Bo Xiao
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Yujie Wu
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Fangwen Yu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Guoqi Li
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Lei Deng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
14
Buxton RB, Wong EC. Metabolic energetics underlying attractors in neural models. J Neurophysiol 2024; 131:88-105. [PMID: 38056422 DOI: 10.1152/jn.00120.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 11/13/2023] [Accepted: 12/04/2023] [Indexed: 12/08/2023] Open
Abstract
Neural population modeling, including the role of neural attractors, is a promising tool for understanding many aspects of brain function. We propose a modeling framework to connect the abstract variables used in modeling to recent cellular-level estimates of the bioenergetic costs of different aspects of neural activity, measured in ATP consumed per second per neuron. Based on recent work, an empirical reference for ATP use in the awake resting brain was estimated as ∼2 × 10⁹ ATP/s per neuron across several mammalian species. The energetics framework was applied to the Wilson-Cowan (WC) model of two interacting populations of neurons, one excitatory (E) and one inhibitory (I). The attractors considered included steady-state and limit-cycle behavior, both of which end when the excitatory stimulus ends, as well as sustained activity that persists after the stimulus ends. The energy cost of limit cycles, whose oscillations are much faster than the average neuronal firing rate of the population, tracks the firing rate more closely than the limit-cycle frequency. Self-sustained firing driven by recurrent excitation, however, involves higher firing rates and a higher energy cost. As an example of a simple network in which each node is a WC model, a combination of three nodes can serve as a flexible circuit element that turns on with an oscillating output when its input passes a threshold and then persists after the input ends (an "on-switch"), with moderate overall ATP use. The proposed framework can serve as a guide for anchoring neural population models to plausible bioenergetic requirements. NEW & NOTEWORTHY This work bridges two approaches for understanding brain function: cellular-level studies of the metabolic energy costs of different aspects of neural activity, and neural population modeling, including the role of neural attractors.
The proposed modeling framework connects energetic costs, in ATP consumed per second per neuron, to the more abstract variables used in neural population modeling. In particular, this work anchors potential neural attractors to physiologically plausible bioenergetics requirements.
Collapse
Affiliation(s)
- Richard B Buxton
- Department of Radiology, University of California, San Diego, California, United States
| | - Eric C Wong
- Department of Radiology, University of California, San Diego, California, United States
- Department of Psychiatry, University of California, San Diego, California, United States
| |
Collapse
|
15
|
Wang C, Zhang T, Chen X, He S, Li S, Wu S. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 2023; 12:e86365. [PMID: 38132087 PMCID: PMC10796146 DOI: 10.7554/elife.86365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Accepted: 12/20/2023] [Indexed: 12/23/2023] Open
Abstract
Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including Central Processing Unit, Graphics Processing Unit, and Tensor Processing Unit, which ensures high-running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
Collapse
Affiliation(s)
- Chaoming Wang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Bejing Key Laboratory of Behavior and Mental Health, Peking UniversityBeijingChina
- Guangdong Institute of Intelligence Science and TechnologyGuangdongChina
| | - Tianqiu Zhang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Bejing Key Laboratory of Behavior and Mental Health, Peking UniversityBeijingChina
| | - Xiaoyu Chen
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Bejing Key Laboratory of Behavior and Mental Health, Peking UniversityBeijingChina
| | - Sichao He
- Beijing Jiaotong UniversityBeijingChina
| | - Shangyang Li
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Bejing Key Laboratory of Behavior and Mental Health, Peking UniversityBeijingChina
| | - Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Bejing Key Laboratory of Behavior and Mental Health, Peking UniversityBeijingChina
- Guangdong Institute of Intelligence Science and TechnologyGuangdongChina
| |
Collapse
|
16
|
Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. [PMID: 38019856 PMCID: PMC10710097 DOI: 10.1073/pnas.2220743120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 10/11/2023] [Indexed: 12/01/2023] Open
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments), and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent store and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the high network produces the contextual signal for the low network.
Collapse
Affiliation(s)
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome00185, Italy
| | - Cosimo Lupo
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome00185, Italy
| | - Paolo Muratore
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, Trieste34136, Italy
| | | |
Collapse
|
17
|
Suzuki M, Pennartz CMA, Aru J. How deep is the brain? The shallow brain hypothesis. Nat Rev Neurosci 2023; 24:778-791. [PMID: 37891398 DOI: 10.1038/s41583-023-00756-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/25/2023] [Indexed: 10/29/2023]
Abstract
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today's dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute. This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks. We argue that the shallow brain architecture provides several critical benefits over deep hierarchical structures and a more complete depiction of how mammalian brains achieve fast and flexible computational capabilities.
Collapse
Affiliation(s)
- Mototaka Suzuki
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands.
| | - Cyriel M A Pennartz
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
| | - Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia.
| |
Collapse
|
18
|
Fitch WT. Cellular computation and cognition. Front Comput Neurosci 2023; 17:1107876. [PMID: 38077750 PMCID: PMC10702520 DOI: 10.3389/fncom.2023.1107876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 10/09/2023] [Indexed: 05/28/2024] Open
Abstract
Contemporary neural network models often overlook a central biological fact about neural processing: that single neurons are themselves complex, semi-autonomous computing systems. Both the information processing and information storage abilities of actual biological neurons vastly exceed the simple weighted sum of synaptic inputs computed by the "units" in standard neural network models. Neurons are eukaryotic cells that store information not only in synapses, but also in their dendritic structure and connectivity, as well as genetic "marking" in the epigenome of each individual cell. Each neuron computes a complex nonlinear function of its inputs, roughly equivalent in processing capacity to an entire 1990s-era neural network model. Furthermore, individual cells provide the biological interface between gene expression, ongoing neural processing, and stored long-term memory traces. Neurons in all organisms have these properties, which are thus relevant to all of neuroscience and cognitive biology. Single-cell computation may also play a particular role in explaining some unusual features of human cognition. The recognition of the centrality of cellular computation to "natural computation" in brains, and of the constraints it imposes upon brain evolution, thus has important implications for the evolution of cognition, and how we study it.
Collapse
Affiliation(s)
- W. Tecumseh Fitch
- Faculty of Life Sciences and Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
| |
Collapse
|
19
|
Kagan BJ, Gyngell C, Lysaght T, Cole VM, Sawai T, Savulescu J. The technology, opportunities, and challenges of Synthetic Biological Intelligence. Biotechnol Adv 2023; 68:108233. [PMID: 37558186 PMCID: PMC7615149 DOI: 10.1016/j.biotechadv.2023.108233] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 07/15/2023] [Accepted: 08/05/2023] [Indexed: 08/11/2023]
Abstract
Integrating neural cultures developed through synthetic biology methods with digital computing has enabled the early development of Synthetic Biological Intelligence (SBI). Recently, key studies have emphasized the advantages of biological neural systems in some information processing tasks. However, neither the technology behind this early development, nor the potential ethical opportunities or challenges, have been explored in detail yet. Here, we review the key aspects that facilitate the development of SBI and explore potential applications. Considering these foreseeable use cases, various ethical implications are proposed. Ultimately, this work aims to provide a robust framework to structure ethical considerations to ensure that SBI technology can be both researched and applied responsibly.
Collapse
Affiliation(s)
| | - Christopher Gyngell
- Murdoch Children's Research Institute, Melbourne, VIC, Australia; University of Melbourne, Melbourne, VIC, Australia
| | - Tamra Lysaght
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Victor M Cole
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tsutomu Sawai
- Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan; Institute for the Advanced Study of Human Biology (ASHBi), Kyoto University, Kyoto, Japan
| | - Julian Savulescu
- Murdoch Children's Research Institute, Melbourne, VIC, Australia; University of Melbourne, Melbourne, VIC, Australia; Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| |
Collapse
|
20
|
Zhang Y, He G, Ma L, Liu X, Hjorth JJJ, Kozlov A, He Y, Zhang S, Kotaleski JH, Tian Y, Grillner S, Du K, Huang T. A GPU-based computational framework that bridges neuron simulation and artificial intelligence. Nat Commun 2023; 14:5798. [PMID: 37723170 PMCID: PMC10507119 DOI: 10.1038/s41467-023-41553-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 09/08/2023] [Indexed: 09/20/2023] Open
Abstract
Biophysically detailed multi-compartment models are powerful tools to explore computational principles of the brain and also serve as a theoretical framework to generate algorithms for artificial intelligence (AI) systems. However, the expensive computational cost severely limits the applications in both the neuroscience and AI fields. The major bottleneck during simulating detailed compartment models is the ability of a simulator to solve large systems of linear equations. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method to markedly accelerate such a process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method performs with 2-3 orders of magnitude higher speed than that of the classic serial Hines method in the conventional CPU platform. We build a DeepDendrite framework, which integrates the DHS method and the GPU computing engine of the NEURON simulator and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we provide a brief discussion on the potential of DeepDendrite for AI, specifically highlighting its ability to enable the efficient training of biophysically detailed models in typical image classification tasks.
Collapse
Affiliation(s)
- Yichen Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Gan He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Lei Ma
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
| | - Xiaofei Liu
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, China
| | - J J Johannes Hjorth
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
| | - Alexander Kozlov
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Yutao He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Shenjian Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Jeanette Hellgren Kotaleski
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Yonghong Tian
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Electrical and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
| | - Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Kai Du
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China.
| | - Tiejun Huang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
| |
Collapse
|
21
|
Dimitrov AG. Resting membrane state as an interplay of electrogenic transporters with various pumps. Pflugers Arch 2023; 475:1113-1128. [PMID: 37468808 DOI: 10.1007/s00424-023-02838-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 06/26/2023] [Accepted: 07/06/2023] [Indexed: 07/21/2023]
Abstract
In this study, a new idea that electrogenic transporters determine cell resting state is presented. The previous assumption was that pumps, especially the sodium one, determine it. The latter meets difficulties, because it violates the law of conservation of energy; also a significant deficit of pump activity is reported. The amount of energy carried by a single ATP molecule reflects the potential of the inner mitochondrial membrane, which is about -200 mV. If pumps enforce a resting membrane potential that is more than twice smaller, then the majority of energy stored in ATP would be dissipated by each pump turning. However, this problem could be solved if control is transferred from pumps to something else, e.g., electrogenic transporters. Then pumps would transfer the energy to the ionic gradient without losses, while the cell surface membrane potential would be associated with the reversal potential of some electrogenic transporters. A minimal scheme of this type would include a sodium-calcium exchanger as well as sodium and calcium pumps. However, note that calcium channels and pumps are positioned along both intracellular organelles and the surface membrane. Therefore, the above-mentioned scheme would involve them as well as possible intercellular communications. Such schemes where various kinds of pumps are assumed to work in parallel may explain, to a great extent, the slow turning rate of the individual members. Interaction of pumps and transporters positioned at distant biological membranes with various forms of energy transfer between them may thus result in hypoxic/reperfusion injury, different kinds of muscle fatigue, and nerve-glia interactions.
Collapse
Affiliation(s)
- A G Dimitrov
- Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Bl. 105, 1113, Sofia, Bulgaria.
| |
Collapse
|
22
|
Petousakis KE, Apostolopoulou AA, Poirazi P. The impact of Hodgkin-Huxley models on dendritic research. J Physiol 2023; 601:3091-3102. [PMID: 36218068 PMCID: PMC10600871 DOI: 10.1113/jp282756] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 09/16/2022] [Indexed: 11/08/2022] Open
Abstract
For the past seven decades, the Hodgkin-Huxley (HH) formalism has been an invaluable tool in the arsenal of neuroscientists, allowing for robust and reproducible modelling of ionic conductances and the electrophysiological phenomena they underlie. Despite its apparent age, its role as a cornerstone of computational neuroscience has not waned. The discovery of dendritic regenerative events mediated by ionic and synaptic conductances has solidified the importance of HH-based models further, yielding new predictions concerning dendritic integration, synaptic plasticity and neuronal computation. These predictions are often validated through in vivo and in vitro experiments, advancing our understanding of the neuron as a biological system and emphasizing the importance of HH-based detailed computational models as an instrument of dendritic research. In this article, we discuss recent studies in which the HH formalism is used to shed new light on dendritic function and its role in neuronal phenomena.
Collapse
Affiliation(s)
- Konstantinos-Evangelos Petousakis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
- Department of Biology, University of Crete, Heraklion, Crete, Greece
| | - Anthi A Apostolopoulou
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
| |
Collapse
|
23
|
Dura-Bernal S, Neymotin SA, Suter BA, Dacre J, Moreira JVS, Urdapilleta E, Schiemann J, Duguid I, Shepherd GMG, Lytton WW. Multiscale model of primary motor cortex circuits predicts in vivo cell-type-specific, behavioral state-dependent dynamics. Cell Rep 2023; 42:112574. [PMID: 37300831 PMCID: PMC10592234 DOI: 10.1016/j.celrep.2023.112574] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 02/27/2023] [Accepted: 05/12/2023] [Indexed: 06/12/2023] Open
Abstract
Understanding cortical function requires studying multiple scales: molecular, cellular, circuit, and behavioral. We develop a multiscale, biophysically detailed model of mouse primary motor cortex (M1) with over 10,000 neurons and 30 million synapses. Neuron types, densities, spatial distributions, morphologies, biophysics, connectivity, and dendritic synapse locations are constrained by experimental data. The model includes long-range inputs from seven thalamic and cortical regions and noradrenergic inputs. Connectivity depends on cell class and cortical depth at sublaminar resolution. The model accurately predicts in vivo layer- and cell-type-specific responses (firing rates and LFP) associated with behavioral states (quiet wakefulness and movement) and experimental manipulations (noradrenaline receptor blockade and thalamus inactivation). We generate mechanistic hypotheses underlying the observed activity and analyzed low-dimensional population latent dynamics. This quantitative theoretical framework can be used to integrate and interpret M1 experimental data and sheds light on the cell-type-specific multiscale dynamics associated with several experimental conditions and behaviors.
Collapse
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
| | - Samuel A Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, Grossman School of Medicine, New York University (NYU), New York, NY, USA
| | - Benjamin A Suter
- Department of Physiology, Northwestern University, Evanston, IL, USA
| | - Joshua Dacre
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
| | - Joao V S Moreira
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
| | - Eugenio Urdapilleta
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
| | - Julia Schiemann
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK; Center for Integrative Physiology and Molecular Medicine, Saarland University, Saarbrücken, Germany
| | - Ian Duguid
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
| | | | - William W Lytton
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, MD, USA; Department of Neurology, Kings County Hospital Center, Brooklyn, NY, USA
| |
Collapse
|
24
|
Vinck M, Uran C, Spyropoulos G, Onorato I, Broggini AC, Schneider M, Canales-Johnson A. Principles of large-scale neural interactions. Neuron 2023; 111:987-1002. [PMID: 37023720 DOI: 10.1016/j.neuron.2023.03.015] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/27/2023] [Accepted: 03/09/2023] [Indexed: 04/08/2023]
Abstract
What mechanisms underlie flexible inter-areal communication in the cortex? We consider four mechanisms for temporal coordination and their contributions to communication: (1) Oscillatory synchronization (communication-through-coherence); (2) communication-through-resonance; (3) non-linear integration; and (4) linear signal transmission (coherence-through-communication). We discuss major challenges for communication-through-coherence based on layer- and cell-type-specific analyses of spike phase-locking, heterogeneity of dynamics across networks and states, and computational models for selective communication. We argue that resonance and non-linear integration are viable alternative mechanisms that facilitate computation and selective communication in recurrent networks. Finally, we consider communication in relation to cortical hierarchy and critically examine the hypothesis that feedforward and feedback communication use fast (gamma) and slow (alpha/beta) frequencies, respectively. Instead, we propose that feedforward propagation of prediction errors relies on the non-linear amplification of aperiodic transients, whereas gamma and beta rhythms represent rhythmic equilibrium states that facilitate sustained and efficient information encoding and amplification of short-range feedback via resonance.
Collapse
Affiliation(s)
- Martin Vinck
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands.
| | - Cem Uran
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Georgios Spyropoulos
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
| | - Irene Onorato
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Ana Clara Broggini
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
| | - Marius Schneider
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Andres Canales-Johnson
- Department of Psychology, University of Cambridge, CB2 3EB Cambridge, UK; Centro de Investigacion en Neuropsicologia y Neurociencias Cognitivas, Facultad de Ciencias de la Salud, Universidad Catolica del Maule, 3480122 Talca, Chile.
| |
Collapse
|
25
|
Gao S, Zhou M, Wang Z, Sugiyama D, Cheng J, Wang J, Todo Y. Fully Complex-Valued Dendritic Neuron Model. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:2105-2118. [PMID: 34487498 DOI: 10.1109/tnnls.2021.3105901] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
A single dendritic neuron model (DNM) that captures the nonlinear information-processing ability of dendrites has been widely used for classification and prediction. Complex-valued neural networks consisting of many multiple/deep-layer McCulloch-Pitts neurons have achieved great success since neural computing was first applied to signal processing. Yet no complex-valued representations appear in single-neuron architectures. In this article, we first extend the DNM from the real-valued domain to a complex-valued one. The performance of the complex-valued DNM (CDNM) is evaluated on a complex XOR problem, a non-minimum-phase equalization problem, and a real-world wind prediction task. A comparative analysis of a set of elementary transcendental functions as activation functions is also carried out, along with preparatory experiments for determining hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, a complex-valued multi-layer perceptron, and other complex-valued neuron models.
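The real-valued DNM composes a sigmoidal synaptic layer, a multiplicative dendritic layer, a summing membrane layer, and a sigmoidal soma; the complex extension replaces real weights and activations with complex-valued counterparts. A minimal NumPy sketch of such a forward pass (the layer sizes, the complex tanh activation, and the magnitude-based soma readout here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def cdnm_forward(x, W, Theta, ks=5.0, ts=0.3):
    """Forward pass of a complex-valued dendritic neuron model (sketch).

    x:     (I,) complex input vector
    W:     (I, J) complex synaptic weights (J dendritic branches)
    Theta: (I, J) complex synaptic thresholds
    """
    # Synaptic layer: elementwise activation (tanh extends to complex inputs)
    Y = np.tanh(W * x[:, None] - Theta)
    # Dendritic layer: multiplicative interaction along each branch
    Z = np.prod(Y, axis=0)                     # (J,) complex
    # Membrane layer: branch signals sum at the soma
    V = np.sum(Z)
    # Soma: real-valued sigmoid of the membrane signal's magnitude
    return 1.0 / (1.0 + np.exp(-ks * (np.abs(V) - ts)))

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
Theta = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
out = cdnm_forward(x, W, Theta)
```

Training such a model would additionally require a complex-differentiable loss treatment (e.g., Wirtinger calculus), which the sketch omits.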
Collapse
|
26
|
Tang Y, Zhang X, An L, Yu Z, Liu JK. Diverse role of NMDA receptors for dendritic integration of neural dynamics. PLoS Comput Biol 2023; 19:e1011019. [PMID: 37036844 PMCID: PMC10085026 DOI: 10.1371/journal.pcbi.1011019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Accepted: 03/09/2023] [Indexed: 04/11/2023] Open
Abstract
Neurons, whose morphology can be represented as a tree structure, have various distinct dendritic branches. Different types of synaptic receptors distributed over the dendrites are responsible for receiving inputs from other neurons. NMDA receptors (NMDARs) are expressed as excitatory units and play a key physiological role in synaptic function. Although NMDARs are widely expressed in most types of neurons, they play a different role in cerebellar Purkinje cells (PCs). Utilizing a computational PC model with detailed dendritic morphology, we explored the role of NMDARs at different parts of dendritic branches and regions. We found that somatic responses can switch from silent to simple spikes and complex spikes, depending on the specific dendritic branches. Detailed examination of the dendrites with regard to their diameters and distance to the soma revealed diverse response patterns that nevertheless explain the two firing modes, simple and complex spikes. Taken together, these results suggest that NMDARs play an important role in controlling excitability in a manner that depends on local dendritic properties. Given that neural morphology varies across cell types, our work suggests that the functional role of NMDARs is not stereotyped but highly interwoven with the local properties of neuronal structure.
Collapse
Affiliation(s)
- Yuanhong Tang
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Xingyu Zhang
- Guangzhou Institute of Technology, Xidian University, Guangzhou, China
| | - Lingling An
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Zhaofei Yu
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Jian K Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
| |
Collapse
|
27
|
Zhang Y, Du K, Huang T. Heuristic Tree-Partition-Based Parallel Method for Biophysically Detailed Neuron Simulation. Neural Comput 2023; 35:627-644. [PMID: 36746142 DOI: 10.1162/neco_a_01565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 10/20/2022] [Indexed: 02/08/2023]
Abstract
Biophysically detailed neuron simulation is a powerful tool to explore the mechanisms behind biological experiments and bridge the gap between various scales in neuroscience research. However, the extremely high computational complexity of detailed neuron simulation restricts the modeling and exploration of detailed network models. The bottleneck is solving the system of linear equations. To accelerate detailed simulation, we propose a heuristic tree-partition-based parallel method (HTP) to parallelize the computation of the Hines algorithm, the kernel for solving the linear equations, and leverage the strong parallel capability of the graphics processing unit (GPU) to achieve further speedup. We formulate the question of how to obtain a fine-grained parallelization as a tree-partition problem. Next, we present a heuristic partition algorithm that obtains an effective partition for efficiently parallelizing the equation-solving process in detailed simulation. With further optimization on the GPU, our HTP method achieves a 2.2- to 8.5-fold speedup compared to the state-of-the-art GPU method and a 36- to 660-fold speedup compared to the typical Hines algorithm.
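The Hines algorithm that HTP parallelizes is a direct solver for the quasi-tridiagonal system produced by a neuron's tree topology: eliminate from leaves to root, then substitute from root to leaves. A toy serial implementation (the tree, coefficients, and array layout are illustrative; the paper's contribution is partitioning this dependency chain for the GPU):

```python
import numpy as np

def hines_solve(parent, d, l, u, r):
    """Solve the quasi-tridiagonal system arising from a neuron tree (sketch).

    Compartments are numbered so that parent[i] < i (root = 0).
    Row i holds d[i] on the diagonal and l[i] in column parent[i];
    u[i] is the entry at row parent[i], column i.
    """
    d, r = d.copy(), r.copy()
    n = len(d)
    # Backward (leaf-to-root) elimination -- the serial loop HTP parallelizes
    for i in range(n - 1, 0, -1):
        p = parent[i]
        f = u[i] / d[i]
        d[p] -= f * l[i]
        r[p] -= f * r[i]
    # Forward (root-to-leaf) substitution
    x = np.empty(n)
    x[0] = r[0] / d[0]
    for i in range(1, n):
        x[i] = (r[i] - l[i] * x[parent[i]]) / d[i]
    return x

# Small branched tree: 0 - 1 - {2, 3}, plus 0 - 4
parent = np.array([-1, 0, 1, 1, 0])
d = np.array([4.0, 5.0, 3.0, 3.5, 2.5])
l = np.array([0.0, -1.0, -0.8, -0.6, -0.9])   # coupling to parent
u = np.array([0.0, -1.0, -0.8, -0.6, -0.9])   # coupling from child
r = np.array([1.0, 2.0, 0.5, -1.0, 0.3])
x = hines_solve(parent, d, l, u, r)
```

Because each elimination step touches only a node and its parent, disjoint subtrees can be eliminated concurrently; choosing those subtrees well is exactly the tree-partition problem the paper formulates.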
Collapse
Affiliation(s)
- Yichen Zhang
- School of Computer Science, Peking University, Beijing 100871, China
| | - Kai Du
- School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| | - Tiejun Huang
- School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| |
Collapse
|
28
|
Dainauskas JJ, Marie H, Migliore M, Saudargiene A. GluN2B-NMDAR subunit contribution on synaptic plasticity: A phenomenological model for CA3-CA1 synapses. Front Synaptic Neurosci 2023; 15:1113957. [PMID: 37008680 PMCID: PMC10050887 DOI: 10.3389/fnsyn.2023.1113957] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/13/2023] [Indexed: 03/17/2023] Open
Abstract
Synaptic plasticity is believed to be a key mechanism underlying learning and memory. We developed a phenomenological N-methyl-D-aspartate (NMDA) receptor-based voltage-dependent synaptic plasticity model for synaptic modifications at hippocampal CA3-CA1 synapses on a hippocampal CA1 pyramidal neuron. The model incorporates the GluN2A-NMDA and GluN2B-NMDA receptor subunit-based functions and accounts for the synaptic strength dependence on the postsynaptic NMDA receptor composition and functioning without explicitly modeling the NMDA receptor-mediated intracellular calcium, a local trigger of synaptic plasticity. We embedded the model into a two-compartmental model of a hippocampal CA1 pyramidal cell and validated it against experimental data on spike-timing-dependent synaptic plasticity (STDP) and high- and low-frequency stimulation. The developed model predicts altered learning rules in synapses formed on the apical dendrites of the detailed compartmental model of a CA1 pyramidal neuron in the presence of GluN2B-NMDA receptor hypofunction and can be used in hippocampal networks to model learning in health and disease.
Collapse
Affiliation(s)
- Justinas J. Dainauskas
- Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
- Department of Informatics, Vytautas Magnus University, Kaunas, Lithuania
| | - Hélène Marie
- Université Côte d'Azur, Centre National de la Recherche Scientifique (CNRS) UMR 7275, Institut de Pharmacologie Moléculaire et Cellulaire (IPMC), Valbonne, France
| | - Michele Migliore
- Institute of Biophysics, National Research Council, Palermo, Italy
| | - Ausra Saudargiene
- Laboratory of Biophysics and Bioinformatics, Neuroscience Institute, Lithuanian University of Health Sciences, Kaunas, Lithuania
- *Correspondence: Ausra Saudargiene
| |
Collapse
|
29
|
Li J, Liu Z, Wang R, Gao S. Dendritic Deep Residual Learning for COVID‐19 Prediction. IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING 2023; 18:297-299. [PMCID: PMC9874713 DOI: 10.1002/tee.23723] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 09/12/2022] [Indexed: 05/25/2023]
Abstract
Deep residual network (ResNet), one of the mainstream deep learning models, has achieved groundbreaking results in various fields. However, all neurons used in ResNet are based on the McCulloch‐Pitts model which has long been criticized for its oversimplified structure. Accordingly, this paper for the first time proposes a novel dendritic residual network by considering the powerful information processing capacity of dendrites in neurons. Experimental results based on the challenging COVID‐19 prediction problem show the superiority of the proposed method in comparison with other state‐of‐the‐art ones. © 2022 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
Collapse
Affiliation(s)
- Jiayi Li
- Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan
| | - Zhipeng Liu
- Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan
| | - Rong-Long Wang
- Faculty of Engineering, University of Fukui, Fukui-shi 910-8507, Japan
| | - Shangce Gao
- Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan
| |
Collapse
|
30
|
Bilash OM, Chavlis S, Johnson CD, Poirazi P, Basu J. Lateral entorhinal cortex inputs modulate hippocampal dendritic excitability by recruiting a local disinhibitory microcircuit. Cell Rep 2023; 42:111962. [PMID: 36640337 DOI: 10.1016/j.celrep.2022.111962] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 10/31/2022] [Accepted: 12/20/2022] [Indexed: 01/06/2023] Open
Abstract
The lateral entorhinal cortex (LEC) provides multisensory information to the hippocampus, directly to the distal dendrites of CA1 pyramidal neurons. LEC neurons perform important functions for episodic memory processing, coding for contextually salient elements of an environment or experience. However, we know little about the functional circuit interactions between the LEC and the hippocampus. We combine functional circuit mapping and computational modeling to examine how long-range glutamatergic LEC projections modulate compartment-specific excitation-inhibition dynamics in hippocampal area CA1. We demonstrate that glutamatergic LEC inputs can drive local dendritic spikes in CA1 pyramidal neurons, aided by the recruitment of a disinhibitory VIP interneuron microcircuit. Our circuit mapping and modeling further reveal that LEC inputs also recruit CCK interneurons that may act as strong suppressors of dendritic spikes. These results highlight a cortically driven GABAergic microcircuit mechanism that gates nonlinear dendritic computations, which may support compartment-specific coding of multisensory contextual features within the hippocampus.
Collapse
Affiliation(s)
- Olesia M Bilash
- Neuroscience Institute, Department of Neuroscience and Physiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY 10016, USA
| | - Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete 70013, Greece
| | - Cara D Johnson
- Neuroscience Institute, Department of Neuroscience and Physiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY 10016, USA
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete 70013, Greece.
| | - Jayeeta Basu
- Neuroscience Institute, Department of Neuroscience and Physiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA; Department of Psychiatry, New York University Grossman School of Medicine, NYU Langone Health, New York, NY 10016, USA.
| |
Collapse
|
31
|
Learning on tree architectures outperforms a convolutional feedforward network. Sci Rep 2023; 13:962. [PMID: 36717568 PMCID: PMC9886946 DOI: 10.1038/s41598-023-27986-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2022] [Accepted: 01/11/2023] [Indexed: 02/01/2023] Open
Abstract
Advanced deep learning architectures, consisting of tens of fully connected and convolutional hidden layers (currently extended to hundreds), are far from their biological realization. Their biologically implausible dynamics rely on changing weights in a non-local manner via the backpropagation technique, as the number of routes between an output unit and a weight is typically large. Here, a 3-layer tree architecture inspired by experimentally observed dendritic tree adaptations is developed and applied to offline and online learning on the CIFAR-10 database. The proposed architecture outperforms the achievable success rates of the 5-layer convolutional LeNet. Moreover, the highly pruned tree backpropagation approach of the proposed architecture, in which a single route connects an output unit and a weight, represents an efficient form of dendritic deep learning.
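The "single route between an output unit and a weight" property can be made concrete in a small sketch (layer sizes and the tanh nonlinearity are illustrative, not the paper's CIFAR-10 network): when input segments are disjoint and each hidden unit feeds one output, every weight's gradient reduces to a local product of quantities along its unique route, with no summation over paths.

```python
import numpy as np

def tree_forward(x, W1, w2):
    """3-layer tree (sketch): the input is split into disjoint segments,
    each segment drives exactly one hidden unit, and all hidden units
    feed a single output, so one route connects every weight to it."""
    B, k = W1.shape                          # B branches, k inputs each
    segs = x.reshape(B, k)
    h = np.tanh(np.sum(W1 * segs, axis=1))   # one hidden unit per branch
    return float(w2 @ h), h, segs

def tree_backward(h, segs, W1, w2):
    """Single-route backprop: each weight's gradient is a local product
    along its unique path -- no accumulation over multiple routes."""
    dw2 = h.copy()                            # dy/dw2[b] = h[b]
    dW1 = (w2 * (1.0 - h ** 2))[:, None] * segs
    return dW1, dw2

rng = np.random.default_rng(1)
B, k = 3, 4
x = rng.standard_normal(B * k)
W1 = rng.standard_normal((B, k))
w2 = rng.standard_normal(B)
y, h, segs = tree_forward(x, W1, w2)
dW1, dw2 = tree_backward(h, segs, W1, w2)
```

The locality is what makes the update biologically more plausible: each synapse needs only its own input and the state of the single unit it feeds.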
Collapse
|
32
|
Pagkalos M, Chavlis S, Poirazi P. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks. Nat Commun 2023; 14:131. [PMID: 36627284 PMCID: PMC9832130 DOI: 10.1038/s41467-022-35747-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 12/22/2022] [Indexed: 01/11/2023] Open
Abstract
Computational modeling has been indispensable for understanding how subcellular neuronal features influence circuit processing. However, the role of dendritic computations in network-level operations remains largely unexplored. This is partly because existing tools do not allow the development of realistic and efficient network models that account for dendrites. Current spiking neural networks, although efficient, are usually quite simplistic, overlooking essential dendritic properties. Conversely, circuit models with morphologically detailed neuron models are computationally costly, thus impractical for large-network simulations. To bridge the gap between these two extremes and facilitate the adoption of dendritic features in spiking neural networks, we introduce Dendrify, an open-source Python package based on Brian 2. Dendrify, through simple commands, automatically generates reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more powerful neuromorphic systems.
Collapse
Affiliation(s)
- Michalis Pagkalos
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
- Department of Biology, University of Crete, Heraklion, 70013, Greece
| | - Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece.
| |
Collapse
|
33
|
Kagan BJ, Kitchen AC, Tran NT, Habibollahi F, Khajehnejad M, Parker BJ, Bhat A, Rollo B, Razi A, Friston KJ. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron 2022; 110:3952-3969.e8. [PMID: 36228614 DOI: 10.1016/j.neuron.2022.09.001] [Citation(s) in RCA: 58] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 06/21/2022] [Accepted: 08/31/2022] [Indexed: 11/06/2022]
Abstract
Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico computing via a high-density multielectrode array. Through electrophysiological stimulation and recording, cultures are embedded in a simulated game-world, mimicking the arcade game "Pong." Applying implications from the theory of active inference via the free energy principle, we find apparent learning within five minutes of real-time gameplay not observed in control conditions. Further experiments demonstrate the importance of closed-loop structured feedback in eliciting learning over time. Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence. Future applications may provide further insights into the cellular correlates of intelligence.
Collapse
Affiliation(s)
| | | | - Nhi T Tran
- The Ritchie Centre, Hudson Institute of Medical Research, Clayton, VIC, Australia
| | - Forough Habibollahi
- Department of Biomedical Engineering, The University of Melbourne, Parkville, Australia
| | - Moein Khajehnejad
- Department of Data Science and AI, Monash University, Melbourne, Australia
| | - Bradyn J Parker
- Department of Materials Science and Engineering, Monash University, Melbourne, VIC, Australia
| | - Anjali Bhat
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
| | - Ben Rollo
- Department of Neuroscience, Central Clinical School, Monash University, Melbourne, Australia
| | - Adeel Razi
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK; Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia; Monash Biomedical Imaging, Monash University, Clayton, VIC, Australia; CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, Canada
| | - Karl J Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
| |
Collapse
|
34
|
Fukumasu K, Nose A, Kohsaka H. Extraction of bouton-like structures from neuropil calcium imaging data. Neural Netw 2022; 156:218-238. [DOI: 10.1016/j.neunet.2022.09.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 09/09/2022] [Accepted: 09/28/2022] [Indexed: 11/11/2022]
|
35
|
Denizot A, Arizono M, Nägerl UV, Berry H, De Schutter E. Control of Ca2+ signals by astrocyte nanoscale morphology at tripartite synapses. Glia 2022; 70:2378-2391. [PMID: 36097958 PMCID: PMC9825906 DOI: 10.1002/glia.24258] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 07/20/2022] [Accepted: 07/28/2022] [Indexed: 01/11/2023]
Abstract
Much of the Ca2+ activity in astrocytes is spatially restricted to microdomains and occurs in fine processes that form a complex anatomical meshwork, the so-called spongiform domain. A growing body of literature indicates that those astrocytic Ca2+ signals can influence the activity of neuronal synapses and thus tune the flow of information through neuronal circuits. Because of technical difficulties in accessing the small spatial scale involved, the role of astrocyte morphology on Ca2+ microdomain activity remains poorly understood. Here, we use computational tools and idealized 3D geometries of fine processes based on recent super-resolution microscopy data to investigate the mechanistic link between astrocytic nanoscale morphology and local Ca2+ activity. Simulations demonstrate that the nano-morphology of astrocytic processes powerfully shapes the spatio-temporal properties of Ca2+ signals and promotes local Ca2+ activity. The model predicts that this effect is attenuated upon astrocytic swelling, a hallmark of brain diseases, which we confirm experimentally in hypo-osmotic conditions. Upon repeated neurotransmitter release events, the model predicts that swelling hinders astrocytic signal propagation. Overall, this study highlights the influence of the complex morphology of astrocytes at the nanoscale and its remodeling in pathological conditions on neuron-astrocyte communication at so-called tripartite synapses, where astrocytic processes come into close contact with pre- and postsynaptic structures.
Collapse
Affiliation(s)
- Audrey Denizot
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Onna-Son, Japan
| | - Misa Arizono
- Interdisciplinary Institute for Neuroscience, Université de Bordeaux, Bordeaux, France; Interdisciplinary Institute for Neuroscience, CNRS UMR 5297, Bordeaux, France; Department of Pharmacology, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - U. Valentin Nägerl
- Interdisciplinary Institute for Neuroscience, Université de Bordeaux, Bordeaux, France; Interdisciplinary Institute for Neuroscience, CNRS UMR 5297, Bordeaux, France
| | - Hugues Berry
- LIRIS, UMR 5205 CNRS, Univ Lyon, Villeurbanne, France; INRIA, Villeurbanne, France
| | - Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Onna-Son, Japan
| |
Collapse
|
36
|
Dewell RB, Zhu Y, Eisenbrandt M, Morse R, Gabbiani F. Contrast polarity-specific mapping improves efficiency of neuronal computation for collision detection. eLife 2022; 11:e79772. [PMID: 36314775 PMCID: PMC9674337 DOI: 10.7554/elife.79772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 10/27/2022] [Indexed: 11/29/2022] Open
Abstract
Neurons receive information through their synaptic inputs, but the functional significance of how those inputs are mapped on to a cell's dendrites remains unclear. We studied this question in a grasshopper visual neuron that tracks approaching objects and triggers escape behavior before an impending collision. In response to black approaching objects, the neuron receives OFF excitatory inputs that form a retinotopic map of the visual field onto compartmentalized, distal dendrites. Subsequent processing of these OFF inputs by active membrane conductances allows the neuron to discriminate the spatial coherence of such stimuli. In contrast, we show that ON excitatory synaptic inputs activated by white approaching objects map in a random manner onto a more proximal dendritic field of the same neuron. The lack of retinotopic synaptic arrangement results in the neuron's inability to discriminate the coherence of white approaching stimuli. Yet, the neuron retains the ability to discriminate stimulus coherence for checkered stimuli of mixed ON/OFF polarity. The coarser mapping and processing of ON stimuli thus has a minimal impact, while reducing the total energetic cost of the circuit. Further, we show that these differences in ON/OFF neuronal processing are behaviorally relevant, being tightly correlated with the animal's escape behavior to light and dark stimuli of variable coherence. Our results show that the synaptic mapping of excitatory inputs affects the fine stimulus discrimination ability of single neurons and document the resulting functional impact on behavior.
Collapse
Affiliation(s)
| | - Ying Zhu
- Department of Neuroscience, Baylor College of MedicineHoustonUnited States
| | | | | | - Fabrizio Gabbiani
- Department of Neuroscience, Baylor College of MedicineHoustonUnited States
| |
Collapse
|
37
|
Colameo D, Schratt G. Synaptic tagging: homeostatic plasticity goes Hebbian. EMBO J 2022; 41:e112383. [PMID: 36097740 PMCID: PMC9574739 DOI: 10.15252/embj.2022112383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 08/29/2022] [Indexed: 11/09/2022] Open
Abstract
Distinct plasticity mechanisms enable neurons to process information effectively even when facing global perturbations in network activity. In this issue of The EMBO Journal, Dubes et al (2022) provide a molecular mechanism whereby individual synapses are "tagged" during periods of chronic inactivity for future strengthening. These results lend further support to the idea that local, non-multiplicative mechanisms play an important role in homeostatic synaptic plasticity, as has been demonstrated for Hebbian-like synaptic plasticity.
Collapse
Affiliation(s)
- David Colameo
- Laboratory of Systems Neuroscience, Institute for Neuroscience, Department of Health Science and Technology, Swiss Federal Institute of Technology ETH, Zürich, Switzerland
| | - Gerhard Schratt
- Laboratory of Systems Neuroscience, Institute for Neuroscience, Department of Health Science and Technology, Swiss Federal Institute of Technology ETH, Zürich, Switzerland
| |
Collapse
|
38
|
Chacron MJ. The role of ADM in brain function. NATURE COMPUTATIONAL SCIENCE 2022; 2:628-629. [PMID: 38177259 DOI: 10.1038/s43588-022-00320-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2024]
Affiliation(s)
- Maurice J Chacron
- Department of Physiology, McGill University, Montréal, Quebec, Canada.
| |
Collapse
|
39
|
A realistic morpho-anatomical connection strategy for modelling full-scale point-neuron microcircuits. Sci Rep 2022; 12:13864. [PMID: 35974119 PMCID: PMC9381785 DOI: 10.1038/s41598-022-18024-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 08/03/2022] [Indexed: 01/03/2023] Open
Abstract
The modeling of extended microcircuits is emerging as an effective tool to simulate the neurophysiological correlates of brain activity and to investigate brain dysfunctions. However, for specific networks, a realistic modeling approach based on the combination of available physiological, morphological and anatomical data is still an open issue. One of the main problems in the generation of realistic networks lies in the strategy adopted to build network connectivity. Here we propose a method to implement a neuronal network at single cell resolution by using the geometrical probability volumes associated with pre- and postsynaptic neurites. This allows us to build a network with plausible connectivity properties without the explicit use of computationally intensive touch detection algorithms using full 3D neuron reconstructions. The method has been benchmarked for the mouse hippocampus CA1 area, and the results show that this approach is able to generate full-scale brain networks at single cell resolution that are in good agreement with experimental findings. This geometric reconstruction of axonal and dendritic occupancy, by effectively reflecting morphological and anatomical constraints, could be integrated into structured simulators generating entire circuits of different brain areas facilitating the simulation of different brain regions with realistic models.
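The core idea, connecting point neurons by the overlap of pre- and postsynaptic occupancy volumes instead of explicit touch detection, can be sketched with isotropic Gaussian clouds (the Gaussian form, the variances, and the `p_peak` calibration are illustrative assumptions; the paper derives its geometrical probability volumes from reconstructed morphologies of the mouse CA1):

```python
import numpy as np

def overlap(mu_a, mu_d, var_a, var_d):
    """Closed-form overlap integral of two isotropic 3D Gaussian
    occupancy clouds (axonal vs. dendritic)."""
    s2 = var_a + var_d
    dist2 = float(np.sum((mu_a - mu_d) ** 2))
    return np.exp(-dist2 / (2.0 * s2)) / (2.0 * np.pi * s2) ** 1.5

def sample_connectivity(pre, post, var_a=50.0 ** 2, var_d=30.0 ** 2,
                        p_peak=0.5, rng=None):
    """Bernoulli connectivity from pairwise cloud overlap (sketch)."""
    rng = rng or np.random.default_rng(0)
    P = np.array([[overlap(a, d, var_a, var_d) for d in post] for a in pre])
    P = p_peak * P / P.max()          # illustrative calibration to [0, p_peak]
    return rng.random(P.shape) < P, P

rng = np.random.default_rng(7)
pre = rng.uniform(0, 300, size=(20, 3))    # axonal cloud centers (um)
post = rng.uniform(0, 300, size=(30, 3))   # dendritic cloud centers (um)
C, P = sample_connectivity(pre, post)
```

Because the overlap has a closed form, the O(N^2) pairwise pass replaces the much costlier mesh-level touch detection over full 3D reconstructions.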
Collapse
|
40
|
Xue X, Buccino AP, Saseendran Kumar S, Hierlemann A, Bartram J. Inferring monosynaptic connections from paired dendritic spine Ca2+ imaging and large-scale recording of extracellular spiking. J Neural Eng 2022; 19. [PMID: 35931040 DOI: 10.1088/1741-2552/ac8765] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 08/05/2022] [Indexed: 11/12/2022]
Abstract
Techniques to identify monosynaptic connections between neurons have been vital for neuroscience research, facilitating important advancements concerning network topology, synaptic plasticity, and synaptic integration, among others. Here, we introduce a novel approach to identify and monitor monosynaptic connections using high-resolution dendritic spine Ca2+ imaging combined with simultaneous large-scale recording of extracellular electrical activity by means of high-density microelectrode arrays (HD-MEAs). We introduce an easily adoptable analysis pipeline that associates the imaged spine with its presynaptic unit and test it on in vitro recordings. The method is further validated and optimized by simulating synaptically evoked spine Ca2+ transients based on measured spike trains in order to obtain simulated ground-truth connections. The proposed approach offers unique advantages: (i) it can be used to identify monosynaptic connections with an accurate localization of the synapse within the dendritic tree; (ii) it provides precise information on presynaptic spiking and (iii) on postsynaptic spine Ca2+ signals; and, finally, (iv) its non-invasive nature allows for long-term measurements. The analysis toolkit, together with the rich data sets that were acquired, is made publicly available for further exploration by the research community.
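The association step, picking which extracellularly recorded unit drives an imaged spine, can be caricatured as a spike-triggered comparison of the Ca2+ signal before and after each candidate unit's spikes (a simplified stand-in for the paper's pipeline; window lengths, sampling rate, and the scoring rule are illustrative assumptions):

```python
import numpy as np

def spike_triggered_score(ca, spike_times, fs, win=0.2):
    """Mean increase of the spine Ca signal after a unit's spikes,
    relative to the immediately preceding baseline (sketch)."""
    w = int(win * fs)
    idx = (np.asarray(spike_times) * fs).astype(int)
    idx = idx[(idx >= w) & (idx + w < len(ca))]
    post = np.array([ca[i:i + w].mean() for i in idx])
    pre = np.array([ca[i - w:i].mean() for i in idx])
    return float(np.mean(post - pre))

def match_presynaptic_unit(ca, unit_spike_times, fs, win=0.2):
    """Associate the imaged spine with the candidate unit whose spikes
    best predict its Ca transients."""
    scores = [spike_triggered_score(ca, st, fs, win)
              for st in unit_spike_times]
    return int(np.argmax(scores)), scores
```

In practice the paper validates such an association against simulated ground-truth connections; a similar simulation (exponential transients triggered by one unit's measured spike train plus noise) is the natural way to exercise this sketch.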
Collapse
Affiliation(s)
- Xiaohan Xue
- D-BSSE, ETH Zürich, Mattenstrasse 26, Basel, 4058, Switzerland
| | | | | | | | - Julian Bartram
- D-BSSE, ETH Zürich, Mattenstrasse 26, Basel, 4058, Switzerland
| |
Collapse
|
41
|
Kay JW, Schulz JM, Phillips WA. A Comparison of Partial Information Decompositions Using Data from Real and Simulated Layer 5b Pyramidal Cells. ENTROPY 2022; 24:e24081021. [PMID: 35893001 PMCID: PMC9394329 DOI: 10.3390/e24081021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 07/18/2022] [Accepted: 07/20/2022] [Indexed: 02/04/2023]
Abstract
Partial information decomposition allows the joint mutual information between an output and a set of inputs to be divided into components that are synergistic or shared or unique to each input. We consider five different decompositions and compare their results using data from layer 5b pyramidal cells in two different studies. The first study was on the amplification of somatic action potential output by apical dendritic input and its regulation by dendritic inhibition. We find that two of the decompositions produce much larger estimates of synergy and shared information than the others, as well as large levels of unique misinformation. When within-neuron differences in the components are examined, the five methods produce more similar results for all but the shared information component, for which two methods produce a different statistical conclusion from the others. There are some differences in the expression of unique information asymmetry among the methods. It is significantly larger, on average, under dendritic inhibition. Three of the methods support a previous conclusion that apical amplification is reduced by dendritic inhibition. The second study used a detailed compartmental model to produce action potentials for many combinations of the numbers of basal and apical synaptic inputs. Decompositions of the entire data set produce similar differences to those in the first study. Two analyses of decompositions are conducted on subsets of the data. In the first, the decompositions reveal a bifurcation in unique information asymmetry. For three of the methods, this suggests that apical drive switches to basal drive as the strength of the basal input increases, while the other two show changing mixtures of information and misinformation. Decompositions produced using the second set of subsets show that all five decompositions provide support for properties of cooperative context-sensitivity—to varying extents.
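For two sources, any partial information decomposition splits the joint mutual information I((S1,S2);T) into shared, unique, and synergistic parts that are fixed once a redundancy measure is chosen. A minimal sketch using the minimal-mutual-information (MMI) redundancy, min(I(S1;T), I(S2;T)), which may differ from the five decompositions the study compares, illustrates why an XOR-like target is purely synergistic:

```python
import numpy as np

def _mi(pxy):
    """Mutual information (bits) from a 2D joint distribution."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def pid_mmi(p):
    """Two-source PID of p[s1, s2, t] with the MMI redundancy measure
    (other redundancy measures give other 'shared' components)."""
    i1 = _mi(p.sum(axis=1))                   # I(S1;T)
    i2 = _mi(p.sum(axis=0))                   # I(S2;T)
    i12 = _mi(p.reshape(-1, p.shape[2]))      # I((S1,S2);T)
    shared = min(i1, i2)
    return {"shared": shared,
            "unique1": i1 - shared,
            "unique2": i2 - shared,
            "synergy": i12 - i1 - i2 + shared}

# XOR target with uniform binary sources: neither source alone is
# informative, but together they determine T exactly.
p = np.zeros((2, 2, 2))
for s1 in (0, 1):
    for s2 in (0, 1):
        p[s1, s2, s1 ^ s2] = 0.25
out = pid_mmi(p)
```

Here I(S1;T) = I(S2;T) = 0 while I((S1,S2);T) = 1 bit, so the entire bit is assigned to synergy, the same qualitative signature the paper probes with basal and apical inputs as the two sources.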
Collapse

Affiliation(s)
- Jim W. Kay
- School of Mathematics and Statistics, University of Glasgow, Glasgow G12 8QQ, UK
- Jan M. Schulz
- Department of Biomedicine, University of Basel, 4001 Basel, Switzerland
42
Ward M, Rhodes O. Beyond LIF Neurons on Neuromorphic Hardware. Front Neurosci 2022; 16:881598. [PMID: 35864984 PMCID: PMC9294628 DOI: 10.3389/fnins.2022.881598] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 06/10/2022] [Indexed: 11/30/2022] Open
Abstract
Neuromorphic systems aim to provide accelerated low-power simulation of Spiking Neural Networks (SNNs), typically featuring simple and efficient neuron models such as the Leaky Integrate-and-Fire (LIF) model. Biologically plausible neuron models developed by neuroscientists are largely ignored in neuromorphic computing due to their increased computational costs. This work bridges this gap through implementation and evaluation of a single-compartment Hodgkin-Huxley (HH) neuron and a multi-compartment neuron incorporating dendritic computation on the SpiNNaker and prototype SpiNNaker2 neuromorphic systems. Numerical accuracy of the model implementations is benchmarked against reference models in the NEURON simulation environment, with excellent agreement achieved by both the fixed- and floating-point SpiNNaker implementations. The computational cost is evaluated in terms of timing measurements profiling neural state updates. While the additional model complexity understandably increases computation times relative to LIF models, a wall-clock time increase of only 8× was observed for the HH neuron (11× for the multi-compartment model), demonstrating the potential of hardware accelerators in next-generation neuromorphic systems to optimize the implementation of complex neuron models. The benefits of models directly corresponding to biophysiological data are demonstrated: HH neurons are able to express a range of output behaviors not captured by LIF neurons, and the dendritic compartment provides the first implementation of a spiking multi-compartment neuron model with XOR-solving capabilities on neuromorphic hardware. The work paves the way for the inclusion of more biologically representative neuron models in neuromorphic systems, and showcases the benefits of the hardware accelerators included in the next-generation SpiNNaker2 architecture.
43
Kriener B, Hu H, Vervaeke K. Parvalbumin interneuron dendrites enhance gamma oscillations. Cell Rep 2022; 39:110948. [PMID: 35705055 DOI: 10.1016/j.celrep.2022.110948] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 02/07/2022] [Accepted: 05/21/2022] [Indexed: 11/24/2022] Open
Abstract
Dendrites are essential determinants of the input-output relationship of single neurons, but their role in network computations is not well understood. Here, we use a combination of dendritic patch-clamp recordings and in silico modeling to determine how dendrites of parvalbumin (PV)-expressing basket cells contribute to network oscillations in the gamma frequency band. Simultaneous soma-dendrite recordings from PV basket cells in the dentate gyrus reveal that the slope, or gain, of the dendritic input-output relationship is exceptionally low, thereby reducing the cell's sensitivity to changes in its input. By simulating gamma oscillations in detailed network models, we demonstrate that the low gain is key to increase spike synchrony in PV basket cell assemblies when cells are driven by spatially and temporally heterogeneous synaptic inputs. These results highlight the role of inhibitory neuron dendrites in synchronized network oscillations.
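To make the low-gain intuition concrete, here is a minimal numerical sketch (ours, not the authors' detailed network model): passing heterogeneous synaptic drive through a shallow sigmoidal input-output curve compresses output differences across cells far more than a steep curve does, which is the precondition for the synchrony result. Function and variable names are illustrative:

```python
import math
import statistics

def sigmoid_io(drives, gain, threshold=0.0):
    """Map heterogeneous input drives through a sigmoidal input-output
    curve whose slope at threshold is set by `gain`."""
    return [1.0 / (1.0 + math.exp(-gain * (d - threshold))) for d in drives]

# The same spread of synaptic drive across a cell population...
heterogeneous_drive = [-1.0, -0.5, 0.0, 0.5, 1.0]
low_gain_rates = sigmoid_io(heterogeneous_drive, gain=0.5)
high_gain_rates = sigmoid_io(heterogeneous_drive, gain=4.0)

# ...produces a much narrower spread of outputs under low gain,
# i.e., reduced sensitivity to input heterogeneity.
low_spread = statistics.pstdev(low_gain_rates)
high_spread = statistics.pstdev(high_gain_rates)
```

Under low gain the outputs cluster tightly around the midpoint of the curve, so heterogeneously driven cells respond almost identically, making synchronous firing easier to sustain.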
Collapse

Affiliation(s)
- Birgit Kriener
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Hua Hu
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
- Koen Vervaeke
- Institute of Basic Medical Sciences, Section of Physiology, University of Oslo, Oslo, Norway
44
Application of the mirror technique for block-face scanning electron microscopy. Brain Struct Funct 2022; 227:1933-1947. [PMID: 35643821 PMCID: PMC9232443 DOI: 10.1007/s00429-022-02506-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Accepted: 05/04/2022] [Indexed: 11/29/2022]
Abstract
The mirror technique adapted for electron microscopy allows neuronal structures to be correlated across the cutting plane of adjoining light microscopic sections, which, however, have a limited thickness, typically less than 100 µm (Talapka et al., Front Neuroanat, 2021, 10.3389/fnana.2021.652422). Here, we extend the mirror technique to tissue blocks in the millimeter range and demonstrate its compatibility with serial block-face electron microscopy (SBEM). An essential step of this methodological improvement is the recognition that unbound resin must be removed from the tissue surface to make surface structures visible. To this end, the tissue block was placed on absorbent paper during the curing process. In this way, neuronal cell bodies could be unequivocally identified using epi-illumination and confocal microscopy. Thus, the layout of cell bodies cut by the sectioning plane can be correlated with the layout of their complementary parts in the adjoining section processed for immunohistochemistry. The modified mirror technique thereby removes the spatial limit on investigating the synaptology of neurochemically identified structures such as neuronal processes, dendrites and axons.
45
Müller E, Arnold E, Breitwieser O, Czierlinski M, Emmel A, Kaiser J, Mauch C, Schmitt S, Spilger P, Stock R, Stradmann Y, Weis J, Baumbach A, Billaudelle S, Cramer B, Ebert F, Göltz J, Ilmberger J, Karasenko V, Kleider M, Leibfried A, Pehle C, Schemmel J. A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware. Front Neurosci 2022; 16:884128. [PMID: 35663548 PMCID: PMC9157770 DOI: 10.3389/fnins.2022.884128] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 04/20/2022] [Indexed: 11/29/2022] Open
Abstract
Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.
Collapse

Affiliation(s)
- Eric Müller
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Elias Arnold
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Oliver Breitwieser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Milena Czierlinski
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Arne Emmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Jakob Kaiser
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Mauch
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Sebastian Schmitt
- Third Institute of Physics, University of Göttingen, Göttingen, Germany
- Philipp Spilger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Raphael Stock
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Yannik Stradmann
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Weis
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Andreas Baumbach
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Benjamin Cramer
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Falk Ebert
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Julian Göltz
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Joscha Ilmberger
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Vitali Karasenko
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mitja Kleider
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Aron Leibfried
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Christian Pehle
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Johannes Schemmel
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
46
Bonilla-Quintana M, Rangamani P. Can biophysical models of dendritic spines be used to explore synaptic changes associated with addiction? Phys Biol 2022; 19. [PMID: 35508164 DOI: 10.1088/1478-3975/ac6cbe] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 05/04/2022] [Indexed: 11/11/2022]
Abstract
Effective treatments that prevent or reduce drug relapse vulnerability should be developed to relieve the high burden of drug addiction on society. This will only be possible by enhancing the understanding of the molecular mechanisms underlying the neurobiology of addiction. Recent experimental data have shown that dendritic spines, small protrusions from the dendrites that receive excitatory input, of spiny neurons in the nucleus accumbens exhibit morphological changes during drug exposure and withdrawal. Moreover, these changes relate to the characteristic drug-seeking behavior of addiction. However, due to the complexity of dendritic spines, we do not yet fully understand the processes underlying their structural changes in response to different inputs. We propose that biophysical models can enhance the current understanding of these processes by incorporating different, and sometimes discrepant, experimental data to identify the shared underlying mechanisms and generate experimentally testable hypotheses. This review aims to give an up-to-date report on biophysical models of dendritic spines, focusing on those that describe shape changes, which are well known to relate to learning and memory. Moreover, it examines how these models can enhance our understanding of the effects of drugs and of the synaptic changes during withdrawal, as well as during the progression of neurodegenerative diseases such as Alzheimer's disease.
Collapse

Affiliation(s)
- Mayte Bonilla-Quintana
- Department of Mechanical and Aerospace Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0021, United States
- Padmini Rangamani
- Department of Mechanical and Aerospace Engineering, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0021, United States
47
Introduction. Neuroscience 2022; 489:1-3. [DOI: 10.1016/j.neuroscience.2022.03.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
48
Gonzalez KC, Losonczy A, Negrean A. Dendritic Excitability and Synaptic Plasticity In Vitro and In Vivo. Neuroscience 2022; 489:165-175. [PMID: 34998890 PMCID: PMC9392867 DOI: 10.1016/j.neuroscience.2021.12.039] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 12/29/2021] [Accepted: 12/30/2021] [Indexed: 02/06/2023]
Abstract
Much of our understanding of dendritic and synaptic physiology comes from in vitro experimentation, where the mechanical stability and convenience of drug application allowed patch-clamp recording techniques to investigate ion channel distributions and their gating kinetics, and to uncover rules of dendritic integration and synaptic plasticity. However, with current efforts to study these questions in vivo, there is a great need to translate existing knowledge between in vitro and in vivo experimental conditions. In this review, we identify discrepancies between the in vitro and in vivo ionic composition of extracellular media and discuss how changes in ionic composition alter dendritic excitability and plasticity induction. We argue that under physiological in vivo ionic conditions, dendrites are expected to be more excitable and the threshold for synaptic plasticity induction to be lowered. Consequently, the plasticity rules described in vitro may differ significantly from those implemented in vivo.
Collapse

Affiliation(s)
- Kevin C Gonzalez
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA
- Attila Losonczy
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA; Kavli Institute for Brain Science, New York, NY, USA
- Adrian Negrean
- Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, New York, NY, USA
49
Iyer A, Grewal K, Velu A, Souza LO, Forest J, Ahmad S. Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments. Front Neurorobot 2022; 16:846219. [PMID: 35574225 PMCID: PMC9100780 DOI: 10.3389/fnbot.2022.846219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Accepted: 03/31/2022] [Indexed: 11/13/2022] Open
Abstract
A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows: first, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment in which a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis of both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
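A minimal sketch of the context-dependent gating idea described above, under our own simplifying assumptions (plain Python, a single layer, no learning); the paper's actual architecture is trained end to end and differs in detail:

```python
import math

def active_dendrite_layer(x, context, W, D, k):
    """One layer of units with active dendrites: each unit's feedforward
    drive (row of W) is gated by its best-matching dendritic segment for
    the current context (entries of D), then a k-winner-take-all step
    keeps only the k strongest units, yielding a sparse output."""
    out = []
    for w_row, segments in zip(W, D):
        ff = sum(wi * xi for wi, xi in zip(w_row, x))          # feedforward drive
        d = max(sum(si * ci for si, ci in zip(seg, context))   # strongest segment
                for seg in segments)
        out.append(ff * (1.0 / (1.0 + math.exp(-d))))          # sigmoidal gate
    thresh = sorted(out, reverse=True)[k - 1]                  # k-WTA sparsity
    return [o if o >= thresh else 0.0 for o in out]

# Three units, identical feedforward weights, context-selective dendrites:
W = [[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
D = [[[2.0, -2.0]], [[-2.0, 2.0]], [[0.0, 0.0]]]
task_a = active_dendrite_layer([1.0, 0.0], [1.0, 0.0], W, D, k=1)
task_b = active_dendrite_layer([1.0, 0.0], [0.0, 1.0], W, D, k=1)
```

Because the two contexts activate disjoint winners, the same weights support two tasks through different sparse subnetworks, which is the mechanism the authors credit for reduced catastrophic forgetting.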
Collapse

Affiliation(s)
- Abhiram Iyer
- Numenta, Redwood City, CA, United States
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
- Akash Velu
- Department of Computer Science, Stanford University, Stanford, CA, United States
- Jeremy Forest
- Department of Psychology, Cornell University, Ithaca, NY, United States
50
Hodassman S, Vardi R, Tugendhaft Y, Goldental A, Kanter I. Efficient dendritic learning as an alternative to synaptic plasticity hypothesis. Sci Rep 2022; 12:6571. [PMID: 35484180 PMCID: PMC9051213 DOI: 10.1038/s41598-022-10466-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 04/08/2022] [Indexed: 11/09/2022] Open
Abstract
Synaptic plasticity is a long-standing core hypothesis of brain learning that posits local adaptation between two connected neurons, and it forms the foundation of machine learning. The main complexity of synaptic plasticity is that synapses and dendrites connect neurons in series, so existing experiments cannot pinpoint where the significant adaptation is imprinted. We showed efficient backpropagation and Hebbian learning on dendritic trees, inspired by experimental evidence, for sub-dendritic adaptation and its nonlinear amplification. This scheme was shown to achieve success rates approaching unity for handwritten digit recognition, indicating that deep learning can be realized even by a single dendrite or neuron. Additionally, dendritic amplification generates a number of input crosses (higher-order interactions) that grows exponentially with the number of inputs, which enhances success rates. However, direct implementation of such a large number of cross weights, and their exhaustive independent manipulation, is beyond existing and anticipated computational power. Hence, a new type of nonlinear adaptive dendritic hardware for imitating dendritic learning and estimating the computational capability of the brain must be built.
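The claim that a dendritic nonlinearity implicitly generates input crosses can be illustrated with a worked example (ours, with a quadratic nonlinearity standing in for the paper's amplification): squaring a weighted branch sum is algebraically identical to the sum of all squared terms plus every pairwise product x_i·x_j, and the number of possible interactions of all orders grows exponentially with the number of inputs.

```python
from itertools import combinations

def branch_output(xs, ws):
    """A dendritic branch with a quadratic nonlinearity applied to its
    weighted input sum (a stand-in for nonlinear amplification)."""
    s = sum(w * x for w, x in zip(ws, xs))
    return s * s

def explicit_expansion(xs, ws):
    """The same quantity written out as squared terms plus the implicit
    pairwise cross terms 2 * w_i * w_j * x_i * x_j."""
    squares = sum((w * x) ** 2 for w, x in zip(ws, xs))
    crosses = 2 * sum(ws[i] * ws[j] * xs[i] * xs[j]
                      for i, j in combinations(range(len(xs)), 2))
    return squares + crosses

def interaction_count(n):
    """Number of input subsets (interactions of all orders) for n inputs:
    sum over k of C(n, k), which equals 2**n."""
    return sum(1 for k in range(n + 1) for _ in combinations(range(n), k))

xs, ws = [1.0, 2.0, -1.0], [0.5, -1.0, 2.0]
```

The identity holds for any inputs and weights, which is why explicitly storing and adapting each cross weight independently quickly becomes intractable: ten inputs already admit 2^10 = 1024 interactions.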
Collapse

Affiliation(s)
- Shiri Hodassman
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Roni Vardi
- Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Yael Tugendhaft
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Amir Goldental
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel
- Ido Kanter
- Department of Physics, Bar-Ilan University, 52900, Ramat-Gan, Israel; Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat-Gan, Israel