1
Park J, Ha S, Yu T, Neftci E, Cauwenberghs G. A 22-pJ/spike 73-Mspikes/s 130k-compartment neural array transceiver with conductance-based synaptic and membrane dynamics. Front Neurosci 2023; 17:1198306. [PMID: 37700751 PMCID: PMC10493285 DOI: 10.3389/fnins.2023.1198306] [Received: 04/01/2023] [Accepted: 07/07/2023] [Indexed: 09/14/2023]
Abstract
Neuromorphic cognitive computing offers a bio-inspired means to approach the natural intelligence of biological neural systems in silicon integrated circuits. Typically, such circuits either reproduce biophysical neuronal dynamics in great detail as tools for computational neuroscience, or abstract away the biology by simplifying the functional forms of neural computation in large-scale systems for machine intelligence with high integration density and energy efficiency. Here we report a hybrid that offers biophysical realism in the emulation of multi-compartmental neuronal network dynamics at very large scale with high implementation efficiency, and yet with high flexibility in configuring the functional form and the network topology. The integrate-and-fire array transceiver (IFAT) chip emulates the continuous-time analog membrane dynamics of 65k two-compartment neurons with conductance-based synapses. Fired action potentials are registered as address-event encoded output spikes, while the four types of synapses coupling to each neuron are activated by address-event decoded input spikes for fully reconfigurable synaptic connectivity, facilitating virtual wiring implemented by routing address-event spikes externally through a synaptic routing table. The peak conductance strength of synapse activation specified by the address-event input spans three decades of dynamic range, digitally controlled by pulse width and amplitude modulation (PWAM) of the drive voltage activating the log-domain linear synapse circuit. Two nested levels of micro-pipelining in the IFAT architecture improve both the throughput and the efficiency of synaptic input. This two-tier micro-pipelining results in a measured sustained peak throughput of 73 Mspikes/s and an overall chip-level energy efficiency of 22 pJ/spike. Non-uniformity in digitally encoded synapse strength due to analog mismatch is mitigated through single-point digital offset calibration.
Combined with the flexibly layered and recurrent synaptic connectivity provided by hierarchical address-event routing of registered spike events through external memory, the IFAT lends itself to efficient large-scale emulation of general biophysical spiking neural networks, as well as rate-based mapping of rectified linear unit (ReLU) neural activations.
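The conductance-based synapse dynamics summarized in this abstract can be sketched in a few lines. This is a plain discrete-time model with hypothetical parameters, not the IFAT's log-domain circuit; it only illustrates how a peak conductance spanning three decades maps onto EPSP amplitude.

```python
def epsp_peak(g_peak_nS, dt=0.1, t_max=50.0):
    """Peak EPSP (mV above rest) of a leaky membrane driven by one
    exponentially decaying conductance-based synapse (illustrative values)."""
    C, g_L = 200.0, 10.0           # pF, nS
    E_L, E_syn = -70.0, 0.0        # mV
    tau_syn = 5.0                  # ms
    v, g_syn, peak = E_L, g_peak_nS, E_L
    for _ in range(int(t_max / dt)):
        dv = (-g_L * (v - E_L) - g_syn * (v - E_syn)) / C   # pA/pF = mV/ms
        v += dt * dv
        g_syn -= dt * g_syn / tau_syn
        peak = max(peak, v)
    return peak - E_L

# Three decades of peak conductance, echoing the PWAM dynamic range
for g in (0.1, 1.0, 10.0, 100.0):
    print(g, round(epsp_peak(g), 3))
```

The EPSP grows monotonically with the programmed peak conductance and saturates toward the synaptic driving force, which is the qualitative behavior the digitally controlled conductance range exposes.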
Affiliation(s)
- Jongkil Park
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Sohmyung Ha
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Division of Engineering, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Theodore Yu
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
- Emre Neftci
- Peter Grünberg Institute, Forschungszentrum Jülich, RWTH Aachen, Aachen, Germany
- Gert Cauwenberghs
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, United States
- Department of Bioengineering, Jacobs School of Engineering, University of California, San Diego, La Jolla, CA, United States
2
Wybo WAM, Tsai MC, Tran VAK, Illing B, Jordan J, Morrison A, Senn W. NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways. Proc Natl Acad Sci U S A 2023; 120:e2300558120. [PMID: 37523562 PMCID: PMC10410730 DOI: 10.1073/pnas.2300558120] [Received: 01/13/2023] [Accepted: 06/14/2023] [Indexed: 08/02/2023]
Abstract
While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
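The branch-specific multiplicative gating described in this abstract can be illustrated with a toy two-branch neuron (all numbers hypothetical): the feedforward weights stay fixed, and context only rescales each branch's contribution to the soma.

```python
import numpy as np

# Fixed, context-independent feedforward weights (two dendritic branches)
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])
x = np.array([1.0, 2.0])            # one sensory input

def respond(x, context_gain):
    """Somatic response: rectified branch drives, multiplicatively gated by an
    NMDA-like contextual gain per branch (illustrative, hypothetical numbers)."""
    branch = np.maximum(W @ x, 0.0)  # nonlinear (rectified) branch activations
    return float(context_gain @ branch)

resp_a = respond(x, np.array([1.0, 0.1]))   # context A amplifies branch 0
resp_b = respond(x, np.array([0.1, 1.0]))   # context B amplifies branch 1
print(resp_a, resp_b)                       # same input, different readouts
```

The same stimulus yields different somatic responses in the two contexts even though the feedforward pathway is untouched, which is the transfer-learning structure the paper exploits.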
Affiliation(s)
- Willem A. M. Wybo
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Matthias C. Tsai
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Viet Anh Khoa Tran
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Bernd Illing
- Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Jakob Jordan
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Walter Senn
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
3
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185] [Received: 11/07/2022] [Accepted: 06/12/2023] [Indexed: 07/18/2023]
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcome of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network, and to offer a map to help navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
4
Tamura K, Yamamoto Y, Kobayashi T, Kuriyama R, Yamazaki T. Discrimination and learning of temporal input sequences in a cerebellar Purkinje cell model. Front Cell Neurosci 2023; 17:1075005. [PMID: 36816857 PMCID: PMC9932327 DOI: 10.3389/fncel.2023.1075005] [Received: 10/20/2022] [Accepted: 01/10/2023] [Indexed: 02/05/2023]
Abstract
Introduction: Temporal information processing is essential for the sequential contraction of various muscles with the appropriate timing and amplitude for fast and smooth motor control. These functions depend on the dynamics of neural circuits, which consist of simple neurons that accumulate incoming spikes and emit other spikes. However, recent studies indicate that individual neurons can perform complex information processing through the nonlinear dynamics of dendrites with complex shapes and ion channels. Although we have extensive evidence that cerebellar circuits play a vital role in motor control, studies investigating the computational ability of single Purkinje cells are few. Methods: We found, through computer simulations, that a Purkinje cell can discriminate a series of pulses in two directions (from dendrite tip to soma, and from soma to dendrite), as cortical pyramidal cells do. Such direction sensitivity was observed in all compartment types of dendrites (spiny, smooth, and main), although these have different sets of ion channels. Results: We found that the shortest and longest discriminable sequences lasted for 60 ms (6 pulses with a 10 ms interval) and 4,000 ms (20 pulses with a 200 ms interval), respectively, and that the ratio of discriminable sequences within the region of interest of the parameter space was, on average, 3.3% (spiny), 3.2% (smooth), and 1.0% (main). For the direction sensitivity, a T-type Ca2+ channel was necessary, in contrast with cortical pyramidal cells, which have N-methyl-D-aspartate receptors (NMDARs). Furthermore, we tested whether the stimulus direction can be reversed by learning, specifically by simulated long-term depression, and obtained positive results. Discussion: Our results show that individual Purkinje cells can perform more complex information processing than is conventionally assumed for a single neuron, and suggest that Purkinje cells act as sequence discriminators, a useful role in motor control and learning.
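The direction sensitivity reported here echoes Rall's classic result for dendrites: even a purely passive compartmental chain responds more strongly at the soma to a distal-to-proximal input sweep than to the reverse. A minimal sketch with illustrative parameters and no ion channels (so it does not capture the T-type Ca2+ dependence) reproduces the qualitative effect:

```python
import numpy as np

def soma_peak(order, n=5, dt=0.05, t_max=60.0, isi=4.0):
    """Peak somatic voltage (deviation from rest) of a passive compartmental
    chain; a 1 ms current pulse is applied to each compartment in `order`."""
    tau, g_c = 10.0, 0.5                 # membrane time constant (ms), coupling
    v = np.zeros(n)                      # soma is index 0
    onsets = {comp: k * isi for k, comp in enumerate(order)}
    peak = 0.0
    for s in range(int(t_max / dt)):
        t = s * dt
        i_ext = np.zeros(n)
        for comp, t0 in onsets.items():
            if t0 <= t < t0 + 1.0:       # 1 ms pulse, unit amplitude
                i_ext[comp] = 1.0
        lap = np.zeros(n)                # nearest-neighbour coupling currents
        lap[:-1] += g_c * (v[1:] - v[:-1])
        lap[1:] += g_c * (v[:-1] - v[1:])
        v += dt * (-v / tau + lap + i_ext)
        peak = max(peak, v[0])
    return peak

toward = soma_peak([4, 3, 2, 1])   # distal-to-proximal sweep (toward soma)
away = soma_peak([1, 2, 3, 4])     # proximal-to-distal sweep (away from soma)
```

The toward-soma sweep lets the delayed, attenuated distal charge coincide with the fresh proximal input, producing the larger somatic peak.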
Affiliation(s)
- Kaaya Tamura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Yuki Yamamoto
- Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Taira Kobayashi
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan; Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
- Rin Kuriyama
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
5
Quaresima A, Fitz H, Duarte R, Broek DVD, Hagoort P, Petersson KM. The Tripod neuron: a minimal structural reduction of the dendritic tree. J Physiol 2022. [PMID: 36168736 DOI: 10.1113/jp283399] [Received: 05/31/2022] [Accepted: 09/12/2022] [Indexed: 11/08/2022]
Abstract
KEY POINTS We present a neuron model, called the Tripod, with two segregated dendritic branches that are connected to the axosomatic compartment. Each branch implements inhibitory GABAergic and excitatory glutamatergic synaptic transmission, including voltage-gated NMDA receptors. Dendrites are modeled on relevant geometric and physiological parameters measured in human pyramidal cells. The neuron reproduces classical dendritic computations, such as coincidence detection and pathway selection via shunting inhibition, beyond the scope of point-neuron models. Under some conditions, dendritic NMDA spikes cause plateau potentials and we show that they provide a form of short-term memory which is useful for sequence recognition. The dendritic structure of the Tripod neuron is sufficiently simple to be integrated into efficient network simulations and studied in a broad functional context. ABSTRACT Neuron models with explicit dendritic dynamics have shed light on mechanisms for coincidence detection, pathway selection, and temporal filtering. However, it is still unclear which morphological and physiological features are required to capture these phenomena. In this work, we introduce the Tripod neuron model and propose a minimal structural reduction of the dendritic tree that is able to reproduce these dendritic computations. The Tripod is a three-compartment model consisting of two segregated passive dendrites and a somatic compartment modeled as an adaptive, exponential integrate-and-fire neuron. It incorporates dendritic geometry, membrane physiology, and receptor dynamics as measured in human pyramidal cells. We characterize the response of the Tripod to glutamatergic and GABAergic inputs and identify parameters that support supra-linear integration, coincidence-detection, and pathway-specific gating through shunting inhibition. 
Following NMDA spikes, the Tripod neuron generates plateau potentials whose duration depends on the dendritic length and the strength of synaptic input. When fitted with distal compartments, the Tripod neuron encodes previous activity into a dendritic depolarized state. This dendritic memory allows the neuron to perform temporal binding, and we show that the neuron solves transition and sequence detection tasks on which a single-compartment model fails. Thus, the Tripod neuron can account for dendritic computations previously explained only with more detailed neuron models or neural networks. Due to its simplicity, the Tripod model can be used efficiently in simulations of larger cortical circuits.
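The Tripod's somatic compartment is an adaptive exponential integrate-and-fire (AdEx) neuron. A minimal sketch of the AdEx soma alone, using the standard Brette–Gerstner (2005) parameter set rather than the Tripod's fitted values:

```python
import numpy as np

def adex_spike_times(i_amp_nA, t_max=500.0, dt=0.01):
    """AdEx soma driven by a step current; returns spike times in ms.
    Standard Brette-Gerstner parameters, not the Tripod's fitted ones."""
    C, g_L = 281.0, 30.0                 # pF, nS
    E_L, V_T, D_T = -70.6, -50.4, 2.0    # mV
    tau_w, a, b = 144.0, 4.0, 80.5       # ms, nS, pA
    V_reset, V_peak = -70.6, 0.0         # mV
    v, w, spikes = E_L, 0.0, []
    I = i_amp_nA * 1000.0                # pA
    for s in range(int(t_max / dt)):
        dv = (-g_L * (v - E_L) + g_L * D_T * np.exp((v - V_T) / D_T) - w + I) / C
        dw = (a * (v - E_L) - w) / tau_w
        v += dt * dv
        w += dt * dw
        if v >= V_peak:                  # spike: reset and increment adaptation
            spikes.append(s * dt)
            v, w = V_reset, w + b
    return spikes

spikes = adex_spike_times(1.0)           # a 1 nA step elicits adapting spiking
```

The adaptation variable `w` lengthens successive interspike intervals, giving the spike-frequency adaptation that the full Tripod inherits at the soma.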
Affiliation(s)
- Alessio Quaresima
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, 6565 XD, the Netherlands
- Hartmut Fitz
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, 6565 XD, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, 6500 HE, the Netherlands
- Renato Duarte
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, 6500 HE, the Netherlands
- Dick van den Broek
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, 6565 XD, the Netherlands
- Peter Hagoort
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, 6565 XD, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, 6500 HE, the Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, 6565 XD, the Netherlands; Donders Institute for Brain, Cognition and Behaviour, Nijmegen, 6500 HE, the Netherlands; Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, 8000-810, Portugal
6
Jin L, Behabadi BF, Jadi MP, Ramachandra CA, Mel BW. Classical-Contextual Interactions in V1 May Rely on Dendritic Computations. Neuroscience 2022; 489:234-250. [PMID: 35272004 PMCID: PMC9049952 DOI: 10.1016/j.neuroscience.2022.02.033] [Received: 03/03/2021] [Revised: 02/14/2022] [Accepted: 02/27/2022] [Indexed: 12/20/2022]
Abstract
A signature feature of the neocortex is the dense network of horizontal connections (HCs) through which pyramidal neurons (PNs) exchange "contextual" information. In primary visual cortex (V1), HCs are thought to facilitate boundary detection, a crucial operation for object recognition, but how HCs modulate PN responses to boundary cues within their classical receptive fields (CRF) remains unknown. We began by "asking" natural images, through a structured data collection and ground truth labeling process, what function a V1 cell should use to compute boundary probability from aligned edge cues within and outside its CRF. The "answer" was an asymmetric 2-D sigmoidal function, whose nonlinear form provides the first normative account for the "multiplicative" center-flanker interactions previously reported in V1 neurons (Kapadia et al., 1995, 2000; Polat et al., 1998). Using a detailed compartmental model, we then show that this boundary-detecting classical-contextual interaction function can be computed by NMDAR-dependent spatial synaptic interactions within PN dendrites - the site where classical and contextual inputs first converge in the cortex. In additional simulations, we show that local interneuron circuitry activated by HCs can powerfully leverage the nonlinear spatial computing capabilities of PN dendrites, providing the cortex with a highly flexible substrate for integration of classical and contextual information.
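The asymmetric 2-D sigmoidal interaction can be sketched with a hypothetical functional form in which flanker evidence multiplicatively scales the gain of the classical (center) sigmoid. All parameters here are illustrative, not the surface fitted in the paper:

```python
import numpy as np

def boundary_prob(center, flanker, k_c=8.0, k_f=3.0, gain=2.0, theta=0.5):
    """Illustrative asymmetric 2-D sigmoid: flanker evidence multiplicatively
    boosts the drive contributed by the center (classical) cue, so the flanker
    alone cannot signal a boundary. Hypothetical parameters."""
    modulation = 1.0 + gain / (1.0 + np.exp(-k_f * (flanker - theta)))
    drive = modulation * center
    return 1.0 / (1.0 + np.exp(-k_c * (drive - theta)))
```

The multiplicative structure gives the asymmetry reported for center-flanker interactions: a flanker amplifies the response to a center cue but produces little response by itself.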
Affiliation(s)
- Lei Jin
- USC Neuroscience Graduate Program, United States
- Bartlett W Mel
- USC Neuroscience Graduate Program, United States; Department of Biomedical Engineering, University of Southern California, United States
7
Watanave M, Takahashi N, Hosoi N, Konno A, Yamamoto H, Yasui H, Kawachi M, Horii T, Matsuzaki Y, Hatada I, Hirai H. Protein kinase Cγ in cerebellar Purkinje cells regulates Ca2+-activated large-conductance K+ channels and motor coordination. Proc Natl Acad Sci U S A 2022; 119:e2113336119. [PMID: 35145028 DOI: 10.1073/pnas.2113336119] [Accepted: 11/21/2021] [Indexed: 11/18/2022]
Abstract
The cerebellum, the site where protein kinase C (PKC) was discovered, contains the highest amount of PKCγ in the central nervous system. PKCγ in the cerebellum is exclusively confined to Purkinje cells (PCs), sole outputs from the cerebellar cortex. Systemic PKCγ-knockout mice show impaired motor coordination; however, the cause of motor defects remains unknown. Here we show that activation of PKCγ suppresses the Ca2+-activated large-conductance K+ (BK) channels located along the PC dendrites. A consequential increase in the membrane resistance attenuates electrical signal decay during propagation, resulting in an altered complex spike waveform. Our results suggest that synaptically activated PKCγ in PCs plays a critical role in motor coordination by negative modulation of BK currents. The cerebellum, the site where protein kinase C (PKC) was first discovered, contains the highest amount of PKC in the central nervous system, with PKCγ being the major isoform. Systemic PKCγ-knockout (KO) mice showed impaired motor coordination and deficient pruning of surplus climbing fibers (CFs) from developing cerebellar Purkinje cells (PCs). However, the physiological significance of PKCγ in the mature cerebellum and the cause of motor incoordination remain unknown. Using adeno-associated virus vectors targeting PCs, we showed that impaired motor coordination was restored by re-expression of PKCγ in mature PKCγ-KO mouse PCs in a kinase activity–dependent manner, while normal motor coordination in mature Prkcgfl/fl mice was impaired by the Cre-dependent removal of PKCγ from PCs. Notably, the rescue or removal of PKCγ from mature PKCγ-KO or Prkcgfl/fl mice, respectively, did not affect the CF innervation profile of PCs, suggesting the presence of a mechanism distinct from multiple CF innervation of PCs for the motor defects in PKCγ-deficient mice. 
We found marked potentiation of Ca2+-activated large-conductance K+ (BK) channel currents in PKCγ-deficient mice, as compared to wild-type mice, which decreased the membrane resistance, resulting in attenuation of the electrical signal during the propagation and significant alterations of the complex spike waveform. These changes in PKCγ-deficient mice were restored by the rescue of PKCγ or pharmacological suppression of BK channels. Our results suggest that PKCγ is a critical regulator that negatively modulates BK currents in PCs, which significantly influences PC output from the cerebellar cortex and, eventually, motor coordination.
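The proposed mechanism, potentiated BK conductance lowering membrane resistance and thereby attenuating propagating signals, can be sketched with a single passive compartment. Illustrative values only; the BK conductance is simplified to an extra leak around rest:

```python
def dendritic_epsp(g_bk_nS, dt=0.05, t_max=40.0):
    """Peak depolarisation (mV above rest) of a passive dendritic compartment
    receiving a fixed 2 ms synaptic current pulse; g_bk_nS adds a BK-like K+
    conductance, simplified to extra leak at rest (illustrative parameters)."""
    C, g_leak = 100.0, 5.0                      # pF, nS
    v, peak = 0.0, 0.0
    for s in range(int(t_max / dt)):
        i_syn = 200.0 if s * dt < 2.0 else 0.0  # pA pulse
        v += dt * (i_syn - (g_leak + g_bk_nS) * v) / C
        peak = max(peak, v)
    return peak

control = dendritic_epsp(0.0)    # PKCγ active: BK suppressed
deficient = dendritic_epsp(20.0) # PKCγ-deficient: potentiated BK current
```

With the added BK-like conductance, input resistance drops, so the same synaptic current produces a smaller, faster-decaying depolarisation, the direction of the effect reported for PKCγ-deficient Purkinje cells.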
8
Mikulasch FA, Rudelt L, Priesemann V. Local dendritic balance enables learning of efficient representations in networks of spiking neurons. Proc Natl Acad Sci U S A 2021; 118:e2021925118. [PMID: 34876505 DOI: 10.1073/pnas.2021925118] [Accepted: 10/26/2021] [Indexed: 11/18/2022]
Abstract
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that feedforward weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under unrealistic requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, recurrent connections learn to locally balance feedforward input in individual dendritic compartments and thereby can modulate synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex high-dimensional inputs and with inhibitory transmission delays, where Hebbian-like plasticity fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo and suggest that both are crucial for representation learning.
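The core idea, recurrent inhibition learning to balance feedforward input in a dendritic compartment, can be sketched with a schematic anti-Hebbian rule on a linear rate model. This is not the paper's derived voltage-dependent plasticity rule, only an illustration of dendritic balance emerging from local learning:

```python
import numpy as np

rng = np.random.default_rng(1)
w_ff = rng.normal(size=5)       # fixed feedforward (excitatory) weights
w_inh = np.zeros(5)             # plastic inhibitory weights, initially absent
eta = 0.05

for _ in range(2000):
    x = rng.normal(size=5)
    u = w_ff @ x - w_inh @ x    # dendritic potential: excitation minus inhibition
    w_inh += eta * u * x        # local rule: grow inhibition while u deviates from 0

x_test = rng.normal(size=5)
residual = w_ff @ x_test - w_inh @ x_test   # near zero once balanced
```

Because the update pushes the dendritic potential toward zero, the inhibitory weights converge onto the feedforward weights, leaving only a small residual that, in the paper's scheme, is what gates feedforward plasticity.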
9
Bicknell BA, Häusser M. A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021:S0896-6273(21)00717-0. [PMID: 34715026 DOI: 10.1016/j.neuron.2021.09.044] [Received: 03/23/2021] [Revised: 08/10/2021] [Accepted: 09/23/2021] [Indexed: 11/23/2022]
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
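The flavor of solving a nonlinear feature-binding (XOR-like) problem with dendritic nonlinearities can be sketched with fixed random sigmoidal "subunits" and a learned subunit-to-soma readout. Least squares here is a stand-in for the paper's voltage- and spike-timing-dependent rule, not a reimplementation of it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])          # nonlinear feature binding (XOR)

# 20 sigmoidal "dendritic subunits" with fixed random input weights
W = rng.normal(size=(20, 2))
b = rng.normal(size=20)
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))    # subunit activations, shape (4, 20)

# Learn only the subunit-to-soma weights (least-squares stand-in for the
# biologically grounded learning rule in the paper)
w_soma, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = (H @ w_soma > 0.5).astype(float)
```

A linear readout of the raw inputs cannot solve XOR, but a learned readout of nonlinear subunit activations can, which is the multilayer-network view of a single neuron that the learning rule exploits.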
10
Beniaguev D, Segev I, London M. Single cortical neurons as deep artificial neural networks. Neuron 2021; 109:2727-2739.e3. [PMID: 34380016 DOI: 10.1016/j.neuron.2021.07.002] [Received: 10/07/2020] [Revised: 03/04/2021] [Accepted: 06/30/2021] [Indexed: 11/17/2022]
Abstract
Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons' input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs' weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
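A temporally convolutional building block of the kind used to fit the neuron's I/O mapping can be sketched as a causal convolution of input spike trains with per-synapse temporal kernels. This is an untrained, illustrative layer, not the trained DNN from the study:

```python
import numpy as np

def temporal_conv(spikes, kernels):
    """Causal 1-D convolution of input spike trains with per-synapse temporal
    kernels: out[t] = sum over synapses and lags of kernel[lag] * spike[t - lag]."""
    n_syn, t_len = spikes.shape
    k_len = kernels.shape[1]
    padded = np.pad(spikes, ((0, 0), (k_len - 1, 0)))   # left-pad: causal
    out = np.zeros(t_len)
    for s in range(n_syn):
        out += np.convolve(padded[s], kernels[s], mode="valid")
    return out

rng = np.random.default_rng(0)
spikes = (rng.random((100, 500)) < 0.01).astype(float)  # 100 synapses, 500 time bins
kernels = np.exp(-np.arange(40) / 10.0) * np.ones((100, 1))  # exponential kernels
drive = temporal_conv(spikes, kernels)
```

Stacking several such layers with learned kernels and nonlinearities yields the five-to-eight-layer temporally convolutional network the study fits to the L5PC model.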
Affiliation(s)
- David Beniaguev
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Idan Segev
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel; Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Michael London
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel; Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
11
Abstract
Computations on the dendritic trees of neurons have important constraints. Voltage dependent conductances in dendrites are not similar to arbitrary direct-current generation, they are the basis for dendritic nonlinearities and they do not allow converting positive currents into negative currents. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these biological constraints. Here we simulate models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by these constraints but may benefit from them. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks.
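The sign constraint discussed above, that dendrites cannot convert positive currents into negative ones, can be sketched as projected gradient descent that clamps synaptic weights to stay nonnegative. Toy task and hypothetical setup, only illustrating training under the constraint:

```python
import numpy as np

def logistic_loss(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w - 1.0)))   # fixed threshold of 1.0 (hypothetical)
    return float(np.mean(-y * np.log(p + 1e-12) - (1 - y) * np.log(1 - p + 1e-12)))

rng = np.random.default_rng(2)
X = rng.random((200, 10))
y = (X[:, 0] + X[:, 3] > 1.0).astype(float)    # toy discrimination task

w = np.zeros(10)
loss_start = logistic_loss(w, X, y)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w - 1.0)))
    w -= 0.5 * X.T @ (p - y) / len(y)          # gradient step on the logistic loss
    w = np.maximum(w, 0.0)                     # constraint: excitatory weights only
loss_end = logistic_loss(w, X, y)
```

The projection step enforces the biological constraint at every update, and the model still learns the task, mirroring the abstract's finding that the constraints need not hurt performance.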
Affiliation(s)
- Konrad Paul Kording
- Department of Neuroscience, University of Pennsylvania, United States; Department of Bioengineering, University of Pennsylvania, United States
12
Wybo WA, Jordan J, Ellenberger B, Marti Mengual U, Nevian T, Senn W. Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses. eLife 2021; 10:e60936. [PMID: 33494860 PMCID: PMC7837682 DOI: 10.7554/elife.60936] [Received: 07/10/2020] [Accepted: 01/04/2021] [Indexed: 11/13/2022]
Abstract
Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.
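The least-squares fitting strategy can be illustrated on a toy two-compartment model: given steady-state voltage responses to known current injections, the reduced model's conductance matrix is recovered by linear least squares. Synthetic noiseless data and illustrative conductances, far simpler than the paper's full method:

```python
import numpy as np

rng = np.random.default_rng(0)
g_s, g_d, g_c = 10.0, 4.0, 6.0      # "true" soma, dendrite, coupling conductances (nS)
G_true = np.array([[g_s + g_c, -g_c],
                   [-g_c, g_d + g_c]])

# Synthetic "recordings": steady-state voltages for random current injections,
# from the steady-state relation I = G v
I = rng.normal(size=(50, 2))
V = np.linalg.solve(G_true, I.T).T  # 50 (v_soma, v_dend) pairs

# Least-squares recovery of the reduced model's conductance matrix (I = V @ G.T)
G_fit = np.linalg.lstsq(V, I, rcond=None)[0].T
```

With noiseless data the fit is exact; the paper's contribution is doing this directly from responses (including experimental ones), without a morphological reconstruction.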
Affiliation(s)
- Willem Am Wybo
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
| | | | | | - Thomas Nevian
- Department of Physiology, University of Bern, Bern, Switzerland
| | - Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
| |
Collapse
|
13
|
Callan AR, Heß M, Felmy F, Leibold C. Arrangement of Excitatory Synaptic Inputs on Dendrites of the Medial Superior Olive. J Neurosci 2021; 41:269-83. [PMID: 33208467 DOI: 10.1523/JNEUROSCI.1055-20.2020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 11/05/2020] [Accepted: 11/10/2020] [Indexed: 11/21/2022] Open
Abstract
Neurons in the medial superior olive (MSO) detect 10 µs differences in the arrival times of a sound at the two ears. Such acuity requires exquisitely precise integration of binaural synaptic inputs. There is substantial understanding of how neuronal phase locking of afferent MSO structures and MSO membrane biophysics subserve such high precision. However, we still lack insight into how the entirety of excitatory inputs is integrated along the MSO dendrite under sound stimulation. To understand how the dendrite integrates excitatory inputs as a whole, we combined anatomic quantifications of the afferent innervation in gerbils of both sexes with computational modeling of a single cell. We present anatomic data from confocal and transmission electron microscopy showing that single afferent fibers follow a single dendrite mostly up to the soma and contact it at multiple (median 4) synaptic sites, each containing multiple independent active zones (the overall density of active zones is estimated as 1.375 per μm²). Thus, any presynaptic action potential may elicit temporally highly coordinated synaptic vesicle release at tens of active zones, thereby achieving secure transmission. Computer simulations suggest that such an anatomic arrangement boosts the amplitude and sharpens the time course of excitatory postsynaptic potentials by reducing current sinks and more efficiently recruiting subthreshold potassium channels. Both effects improve binaural coincidence detection compared with single large synapses at the soma. Our anatomic data further allow for estimation of a lower bound of 7 and an upper bound of 70 excitatory fibers per dendrite. SIGNIFICANCE STATEMENT: Passive dendritic propagation attenuates the amplitude of postsynaptic potentials and widens their temporal spread.
Neurons in the medial superior olive, with their large bilateral dendrites, can nevertheless detect coincidence of binaural auditory inputs with submillisecond precision, a computation that is in stark contrast to passive dendritic processing. Here, we show that dendrites can counteract amplitude attenuation and even decrease the temporal spread of postsynaptic potentials if active subthreshold potassium conductances are triggered in temporal coordination along the whole dendrite. Our anatomic finding that axons run in parallel to the dendrites and make multiple synaptic contacts supports such coordination, since incoming action potentials would depolarize the dendrite at multiple sites within a brief time interval.
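The coincidence-detection readout at the heart of this study can be caricatured with a passive EPSP sum: the peak depolarization shrinks as the interaural time difference (ITD) grows. The alpha-function kernel, time constants, and absence of active conductances below are illustrative assumptions, not the authors' biophysical model.

```python
import numpy as np

def epsp(t, t_spike, tau=0.3):
    """Alpha-function EPSP (time in ms); kernel shape and tau are illustrative."""
    s = np.clip(t - t_spike, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)

def peak_depolarization(itd_ms, dt=0.001):
    """Peak summed EPSP for a left/right input pair separated by an ITD."""
    t = np.arange(0.0, 5.0, dt)
    return float(np.max(epsp(t, 1.0) + epsp(t, 1.0 + itd_ms)))

zero_itd = peak_depolarization(0.0)    # coincident binaural input: peaks add
large_itd = peak_depolarization(1.0)   # decorrelated input: smaller peak
```

A spike threshold placed between the two peak values turns this graded difference into the binary coincidence detection the MSO performs.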
14
Hu HY, Kruijssen DLH, Frias CP, Rózsa B, Hoogenraad CC, Wierenga CJ. Endocannabinoid Signaling Mediates Local Dendritic Coordination between Excitatory and Inhibitory Synapses. Cell Rep 2020; 27:666-675.e5. [PMID: 30995465 DOI: 10.1016/j.celrep.2019.03.078] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Revised: 12/21/2018] [Accepted: 03/21/2019] [Indexed: 01/01/2023] Open
Abstract
Dendritic inhibitory synapses are most efficient in modulating excitatory inputs localized on the same dendrite, but it is unknown whether their location is random or regulated. Here, we show that the formation of inhibitory synapses can be directed by excitatory synaptic activity on the same dendrite. We stimulated dendritic spines close to a GABAergic axon crossing by pairing two-photon glutamate uncaging with postsynaptic depolarization in CA1 pyramidal cells. We found that repeated spine stimulation promoted growth of a GABAergic bouton onto the same dendrite. The dendritic feedback signal required postsynaptic activation of DAGL, which produces the endocannabinoid 2-AG, and was mediated by CB1 receptors. We could also induce inhibitory bouton growth by local, brief applications of 2-AG. Our findings reveal a dendritic signaling mechanism to trigger growth of an inhibitory bouton at dendritic locations with strong excitatory synaptic activity, and this mechanism may serve to ensure inhibitory control over clustered excitatory inputs.
Affiliation(s)
- Hai Yin Hu
- Department of Biology, Science for Life, Utrecht University, 3584CH Utrecht, the Netherlands
- Dennis L H Kruijssen
- Department of Biology, Science for Life, Utrecht University, 3584CH Utrecht, the Netherlands
- Cátia P Frias
- Department of Biology, Science for Life, Utrecht University, 3584CH Utrecht, the Netherlands
- Balázs Rózsa
- Laboratory of 3D Functional Network and Dendritic Imaging, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest 1083, Hungary; Faculty of Information Technology, Pázmány Péter Catholic University, Budapest 1083, Hungary
- Casper C Hoogenraad
- Department of Biology, Science for Life, Utrecht University, 3584CH Utrecht, the Netherlands
- Corette J Wierenga
- Department of Biology, Science for Life, Utrecht University, 3584CH Utrecht, the Netherlands
15
Wybo WAM, Torben-Nielsen B, Nevian T, Gewaltig MO. Electrical Compartmentalization in Neurons. Cell Rep 2020; 26:1759-1773.e7. [PMID: 30759388 DOI: 10.1016/j.celrep.2019.01.074] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Revised: 10/03/2018] [Accepted: 01/17/2019] [Indexed: 12/31/2022] Open
Abstract
The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devised a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
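The impedance-based reasoning can be sketched numerically: build the conductance matrix of a toy star morphology, invert it to obtain the steady-state impedance (Green's) matrix, and read off how strongly two sites are electrically coupled. The morphology, conductance values, and the particular normalized coupling measure below are illustrative choices, not the paper's formalism.

```python
import numpy as np

# Toy star morphology: soma (node 0) with two branches of two compartments
# each; leak and axial conductances are illustrative.
edges = [(0, 1), (1, 2), (0, 3), (3, 4)]
g_leak, g_axial = 1.0, 5.0

n = 5
G = np.eye(n) * g_leak
for i, j in edges:
    G[i, i] += g_axial
    G[j, j] += g_axial
    G[i, j] -= g_axial
    G[j, i] -= g_axial

Z = np.linalg.inv(G)   # steady-state impedance (Green's) matrix

def coupling(i, j):
    """Transfer impedance normalized by the local input impedances: near 1
    means one functional unit, near 0 means electrically separated subunits."""
    return float(Z[i, j] / np.sqrt(Z[i, i] * Z[j, j]))

same_branch = coupling(1, 2)       # sites on one branch: strongly coupled
across_branches = coupling(2, 4)   # branch tips across the soma: weaker
```

Shunting inhibition at the soma would increase the diagonal of G there, lowering the cross-branch coupling and carving out more independent subunits, which is the recompartmentalization effect the abstract describes.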
Affiliation(s)
- Willem A M Wybo
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Physiology, University of Bern, Bern, Switzerland
- Benjamin Torben-Nielsen
- Biocomputation Group, University of Hertfordshire, Hertfordshire, UK; Neurolinx Research Institute, La Jolla, CA, USA
- Thomas Nevian
- Department of Physiology, University of Bern, Bern, Switzerland
- Marc-Oliver Gewaltig
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
16
Voigts J, Harnett MT. Somatic and Dendritic Encoding of Spatial Variables in Retrosplenial Cortex Differs during 2D Navigation. Neuron 2020; 105:237-245.e4. [PMID: 31759808 PMCID: PMC6981016 DOI: 10.1016/j.neuron.2019.10.016] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Revised: 08/14/2019] [Accepted: 10/09/2019] [Indexed: 12/21/2022]
Abstract
Active amplification of organized synaptic inputs in dendrites can endow individual neurons with the ability to perform complex computations. However, whether dendrites in behaving animals perform independent local computations is not known. Such activity may be particularly important for complex behavior, where neurons integrate multiple streams of information. Head-restrained imaging has yielded important insights into cellular and circuit function, but this approach limits behavior and the underlying computations. We describe a method for full-featured 2-photon imaging in awake mice during free locomotion with volitional head rotation. We examine head direction and position encoding in simultaneously imaged apical tuft dendrites and their respective cell bodies in retrosplenial cortex, an area that encodes multi-modal navigational information. Activity in dendrites was not determined solely by somatic activity but reflected distinct navigational variables, fulfilling the requirements for dendritic computation. Our approach provides a foundation for studying sub-cellular processes during complex behaviors.
Affiliation(s)
- Jakob Voigts
- Department of Brain & Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mark T Harnett
- Department of Brain & Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
17
Matsumoto A, Briggman KL, Yonehara K. Spatiotemporally Asymmetric Excitation Supports Mammalian Retinal Motion Sensitivity. Curr Biol 2019; 29:3277-3288.e5. [PMID: 31564498 DOI: 10.1016/j.cub.2019.08.048] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2019] [Revised: 08/15/2019] [Accepted: 08/20/2019] [Indexed: 11/20/2022]
Abstract
The detection of visual motion is a fundamental function of the visual system. How motion speed and direction are computed together at the cellular level, however, remains largely unknown. Here, we suggest a circuit mechanism by which excitatory inputs to direction-selective ganglion cells in the mouse retina become sensitive to the speed and direction of image motion. Electrophysiological, imaging, and connectomic analyses provide evidence that the dendrites of ON direction-selective cells receive spatially offset and asymmetrically filtered glutamatergic inputs along the motion-preference axis from asymmetrically wired bipolar and amacrine cell types with distinct release dynamics. A computational model shows that, with this spatiotemporal structure, the input amplitude becomes sensitive to speed and direction through a preferred-direction enhancement mechanism. Our results highlight the role of an excitatory mechanism in retinal motion computation by which feature selectivity emerges from non-selective inputs.
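A toy version of the preferred-direction enhancement idea: two spatially offset release sites, one with slow (low-pass filtered) release and one fast, so that a sweep in the preferred direction re-aligns the two signals at the postsynaptic cell. The filter, offsets, and amplitudes are all illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def lowpass(x, tau, dt=1.0):
    """First-order low-pass filter (forward-Euler exponential smoothing)."""
    y = np.zeros_like(x)
    for k in range(1, len(x)):
        y[k] = y[k - 1] + (dt / tau) * (x[k - 1] - y[k - 1])
    return y

def response(direction, offset=20, tau_slow=30.0):
    """Peak summed input for a stimulus sweeping across two offset release
    sites; in the preferred direction the slow-release site is hit first,
    so its filtered trace overlaps the later fast input."""
    t = np.arange(300)
    slow_t, fast_t = (50, 50 + offset) if direction > 0 else (50 + offset, 50)
    slow = np.where(t == slow_t, 1.0, 0.0)   # slow-release site (filtered)
    fast = np.where(t == fast_t, 1.0, 0.0)   # fast-release site
    return float(np.max(lowpass(slow, tau_slow) + fast))

preferred, null = response(+1), response(-1)   # preferred sweep sums larger
```

The direction selectivity here comes entirely from excitation, echoing the abstract's point that feature selectivity can emerge from non-selective inputs arranged asymmetrically in space and time.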
18
Abstract
Modeling single-neuron dynamics is the first step to quantitatively understanding brain computation. Yet existing point neuron models fail to capture dendritic effects, which are crucial for neuronal information processing. We derive an effective point neuron model which incorporates an additional synaptic integration current arising from the nonlinear interaction between synaptic currents across spatial dendrites. Our model captures the somatic voltage response of a neuron with complex dendrites and is capable of performing rich dendritic computations. Besides its computational efficiency in simulations, our model suggests reexamining previous studies that decompose excitatory and inhibitory synaptic inputs within the existing point neuron framework; for example, inhibition is often underestimated in experiments.
Complex dendrites in general present formidable challenges to understanding neuronal information processing. To circumvent the difficulty, a prevalent viewpoint simplifies the neuronal morphology to a point representing the soma, and the excitatory and inhibitory synaptic currents originating from the dendrites are treated as linearly summed at the soma. Despite its extensive applications, the validity of this synaptic current description remains unclear, and the existing point neuron framework fails to characterize the spatiotemporal aspects of dendritic integration supporting specific computations. Using electrophysiological experiments, realistic neuronal simulations, and theoretical analyses, we demonstrate that the traditional assumption of linear summation of synaptic currents is oversimplified and underestimates the inhibition effect. We then derive a form of synaptic integration current within the point neuron framework to capture dendritic effects. In the derived form, the interaction between each pair of synaptic inputs on the dendrites can be reliably parameterized by a single coefficient, suggesting an inherent low-dimensional structure of dendritic integration. We further generalize the form of synaptic integration current to capture the spatiotemporal interactions among multiple synaptic inputs and show that a point neuron model with the synaptic integration current incorporated possesses the computational abilities of a spatial neuron with dendrites, including direction selectivity, coincidence detection, logical operation, and a bilinear dendritic integration rule discovered in experiments. Our work amends the modeling of synaptic inputs and improves the computational power of a modeled neuron within the point neuron framework.
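The pairwise parameterization can be written in one line. The sketch below assumes a single hypothetical interaction coefficient `k` (the value is not taken from the paper); the resulting bilinear rule shows how plain linear summation overestimates the joint response, i.e., underestimates inhibition.

```python
# Hypothetical pairwise interaction coefficient; in the derived form, one
# such number per synapse pair captures the dendritic interaction.
k = 0.2

def somatic_response(v_e, v_i):
    """Bilinear integration rule: linear sum plus one correction term.
    v_e > 0 is the EPSP amplitude, v_i < 0 the IPSP amplitude (mV)."""
    return v_e + v_i + k * v_e * v_i

linear_sum = 8.0 + (-3.0)             # what a classical point neuron predicts
joint = somatic_response(8.0, -3.0)   # the interaction strengthens inhibition
```

With v_e = 8 mV and v_i = -3 mV, the linear prediction is 5 mV while the bilinear rule gives 0.2 mV: an experimenter assuming linear summation would infer far too weak an inhibitory input.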
19
Barnhart EL, Wang IE, Wei H, Desplan C, Clandinin TR. Sequential Nonlinear Filtering of Local Motion Cues by Global Motion Circuits. Neuron 2018; 100:229-243.e3. [PMID: 30220510 DOI: 10.1016/j.neuron.2018.08.022] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Revised: 07/20/2018] [Accepted: 08/17/2018] [Indexed: 11/16/2022]
Abstract
Many animals guide their movements using optic flow, the displacement of stationary objects across the retina caused by self-motion. How do animals selectively synthesize a global motion pattern from its local motion components? To what extent does this feature selectivity rely on circuit mechanisms versus dendritic processing? Here we used in vivo calcium imaging to identify pre- and postsynaptic mechanisms for processing local motion signals in global motion detection circuits in Drosophila. Lobula plate tangential cells (LPTCs) detect global motion by pooling input from local motion detectors, T4/T5 neurons. We show that T4/T5 neurons suppress responses to adjacent local motion signals whereas LPTC dendrites selectively amplify spatiotemporal sequences of local motion signals consistent with preferred global patterns. We propose that sequential nonlinear suppression and amplification operations allow optic flow circuitry to simultaneously prevent saturating responses to local signals while creating selectivity for global motion patterns critical to behavior.
Affiliation(s)
- Erin L Barnhart
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA; Department of Biology, New York University, New York, NY 10003, USA
- Irving E Wang
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
- Huayi Wei
- Department of Biology, New York University, New York, NY 10003, USA
- Claude Desplan
- Department of Biology, New York University, New York, NY 10003, USA
- Thomas R Clandinin
- Department of Neurobiology, Stanford University, Stanford, CA 94305, USA
20
Abstract
Recent experimental studies suggest that, in cortical microcircuits of the mammalian brain, the majority of neuron-to-neuron connections are realized by multiple synapses. However, it is not known whether such redundant synaptic connections provide any functional benefit. Here, we show that redundant synaptic connections enable near-optimal learning in cooperation with synaptic rewiring. By constructing a simple dendritic neuron model, we demonstrate that with multisynaptic connections synaptic plasticity approximates a sample-based Bayesian filtering algorithm known as particle filtering, and wiring plasticity implements its resampling process. Extending the proposed framework to a detailed single-neuron model of perceptual learning in the primary visual cortex, we show that the model accounts for many experimental observations. In particular, the proposed model reproduces the dendritic position dependence of spike-timing-dependent plasticity and the functional synaptic organization on the dendritic tree based on the stimulus selectivity of presynaptic neurons. Our study provides a conceptual framework for synaptic plasticity and rewiring.
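The particle-filter analogy can be sketched directly: each redundant synaptic contact holds a weight hypothesis (a particle), plasticity assigns importance weights by predictive success, and rewiring resamples. The generative model, noise scales, and particle count below are illustrative stand-ins, not the paper's detailed neuron model.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = 0.7                  # weight the redundant contacts must infer
n_particles = 8               # multisynaptic contacts act as particles
sigma = 0.1                   # observation noise scale (illustrative)
w = rng.uniform(0.0, 1.0, n_particles)

for _ in range(200):
    x = rng.normal()                          # presynaptic activity
    y = w_true * x + sigma * rng.normal()     # noisy postsynaptic outcome
    # Synaptic plasticity step of the analogy: importance-weight each
    # contact by how well it predicts the outcome.
    lik = np.exp(-0.5 * ((y - w * x) / sigma) ** 2) + 1e-12  # floor avoids 0/0
    # Rewiring step of the analogy: resampling replaces poorly predicting
    # contacts with jittered copies of good ones.
    w = rng.choice(w, size=n_particles, p=lik / lik.sum()) \
        + 0.02 * rng.normal(size=n_particles)

estimate = float(w.mean())    # consensus weight across the redundant contacts
```

The population of contacts, not any single synapse, carries the posterior over the weight, which is why redundancy is functionally useful in this framework.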
21
Abstract
Understanding the neural code is to attribute proper meaning to temporal sequences of action potentials. We report a simple neural code based on distinguishing single spikes from spikes in close succession, commonly called “bursts.” By separating these two types of responses, we show that ensembles of neurons can communicate rapidly changing and graded information from two sources simultaneously and with minimal cross-talk. Second, we show that this multiplexing can optimize the information transferred per action potential when bursts are relatively rare. Finally, we show that neurons can demultiplex these two streams of information. We propose that this multiplexing may be particularly important in hierarchical communication where bottom–up and top–down information must be distinguished.
Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets.
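Demultiplexing into the two streams can be sketched with a simple inter-spike-interval rule: spikes closer than a threshold form one burst, and the code is read out as an event rate (singles plus bursts) and a burst fraction. The threshold and spike trains below are illustrative.

```python
# Demultiplexing sketch: classify spikes into single spikes vs. bursts by an
# inter-spike-interval threshold, recovering two parallel signals.
BURST_ISI = 6.0   # ms; spikes closer than this belong to one burst

def demultiplex(spike_times):
    """Return (number of events, burst fraction) for a sorted spike train."""
    events, bursts = [], 0
    i, n = 0, len(spike_times)
    while i < n:
        j = i
        while j + 1 < n and spike_times[j + 1] - spike_times[j] < BURST_ISI:
            j += 1                      # extend the current burst
        events.append(spike_times[i])   # one event per single spike or burst
        if j > i:
            bursts += 1
        i = j + 1
    burst_fraction = bursts / len(events) if events else 0.0
    return len(events), burst_fraction

# Two singles, then a 3-spike burst, then a single:
n_events, frac = demultiplex([10.0, 40.0, 80.0, 82.0, 84.0, 120.0])
```

In the multiplexing picture, the event rate carries one input stream while the burst fraction carries the other, so both can be recovered from a single spike train.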
22
Hiratani N, Fukai T. Detailed Dendritic Excitatory/Inhibitory Balance through Heterosynaptic Spike-Timing-Dependent Plasticity. J Neurosci 2017; 37:12106-22. [PMID: 29089443 DOI: 10.1523/JNEUROSCI.0027-17.2017] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2017] [Revised: 10/10/2017] [Accepted: 10/17/2017] [Indexed: 11/21/2022] Open
Abstract
The balance between excitatory and inhibitory inputs is a key feature of cortical dynamics. Such a balance is arguably preserved in dendritic branches, yet its underlying mechanism and functional roles remain unknown. In this study, we developed computational models of heterosynaptic spike-timing-dependent plasticity (STDP) to show that the excitatory/inhibitory balance in dendritic branches is robustly achieved through heterosynaptic interactions between excitatory and inhibitory synapses. The model reproduces key features of experimental heterosynaptic STDP well and provides analytical insights. Furthermore, heterosynaptic STDP explains how the maturation of inhibitory neurons modulates the selectivity of excitatory neurons for binocular matching in critical-period plasticity. The model also provides an alternative explanation for the potential mechanism underlying the somatic detailed balance that is commonly associated with inhibitory STDP. Our results propose heterosynaptic STDP as a critical factor in synaptic organization and the resultant dendritic computation. SIGNIFICANCE STATEMENT: Recent experimental studies reveal that relative differences in spike timing experienced among neighboring glutamatergic and GABAergic synapses on a dendritic branch significantly influence changes in the efficiency of these synapses. This heterosynaptic form of spike-timing-dependent plasticity (STDP) is potentially important for shaping the synaptic organization and computation of neurons, but its functional role remains elusive. Through computational modeling in the parameter regime where previous experimental results are well reproduced, we show that heterosynaptic plasticity serves to finely balance excitatory and inhibitory inputs on the dendrite. Our results suggest a principle of GABA-driven neural circuit formation.
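The balancing outcome can be sketched with a deliberately simplified stand-in for the heterosynaptic rule (closer in spirit to homeostatic inhibitory plasticity than to the paper's timing-based model): each inhibitory weight is driven toward its branch-local excitatory partner whenever the pair is co-active. All constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_syn = 20
g_exc = rng.uniform(0.5, 1.5, size=n_syn)  # fixed excitatory weights on a branch
g_inh = np.full(n_syn, 0.1)                # plastic inhibitory weights
target = 1.0                               # desired branch-local E/I ratio
eta = 0.05                                 # learning rate (illustrative)

for _ in range(1000):
    active = rng.random(n_syn) < 0.2               # co-active E/I synapse pairs
    mismatch = (g_exc - target * g_inh) * active   # branch-local E-I imbalance
    g_inh += eta * mismatch                        # inhibition tracks excitation

ratio = float(np.mean(g_exc / g_inh))   # converges to the target E/I ratio
```

Because the update is local to each synapse pair, the balance is achieved per dendritic location rather than only at the soma, which is the "detailed" aspect of the balance in the title.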
23
Du K, Wu YW, Lindroos R, Liu Y, Rózsa B, Katona G, Ding JB, Kotaleski JH. Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc Natl Acad Sci U S A 2017; 114:E7612-21. [PMID: 28827326 DOI: 10.1073/pnas.1704893114] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Striatal spiny projection neurons (SPNs) receive convergent excitatory synaptic inputs from the cortex and thalamus. Activation of spatially clustered and temporally synchronized excitatory inputs at the distal dendrites could trigger plateau potentials in SPNs. Such supralinear synaptic integration is crucial for dendritic computation. However, how plateau potentials interact with subsequent excitatory and inhibitory synaptic inputs remains unknown. By combining computational simulation, two-photon imaging, optogenetics, and dual-color uncaging of glutamate and GABA, we demonstrate that plateau potentials can broaden the spatiotemporal window for integrating excitatory inputs and promote spiking. The temporal window of spiking can be delicately controlled by GABAergic inhibition in a cell-type-specific manner. This subtle inhibitory control of plateau potential depends on the location and kinetics of the GABAergic inputs and is achieved by the balance between relief and reestablishment of NMDA receptor Mg2+ block. These findings represent a mechanism for controlling spatiotemporal synaptic integration in SPNs.
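The Mg²⁺-block mechanism at the center of this interaction has a standard quantitative form, the classic Jahr-Stevens voltage dependence, which the sketch below uses; treating -30 mV as the plateau voltage is an illustrative choice.

```python
import math

def nmda_mg_unblock(v_mv, mg_mm=1.0):
    """Fraction of NMDA conductance relieved of Mg2+ block at membrane
    potential v_mv, using the classic Jahr-Stevens voltage dependence."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

rest = nmda_mg_unblock(-70.0)      # near rest: mostly blocked
plateau = nmda_mg_unblock(-30.0)   # during a plateau: substantially unblocked
# A well-timed GABAergic input that repolarizes the dendrite toward rest
# reestablishes the block, closing the spiking window the plateau opened.
```

The steep voltage dependence is what lets the balance between relief and reestablishment of the block act as a delicate, location- and kinetics-sensitive switch on the spiking window.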
24
Abstract
In the mammalian brain, most inputs received by a neuron are formed on the dendritic tree. In the neocortex, the dendrites of pyramidal neurons are covered by thousands of tiny protrusions known as dendritic spines, which are the major recipient sites for excitatory synaptic information in the brain. Their peculiar morphology, with a small head connected to the dendritic shaft by a slender neck, has inspired decades of theoretical and more recently experimental work in an attempt to understand how excitatory synaptic inputs are processed, stored and integrated in pyramidal neurons. Advances in electrophysiological, optical and genetic tools are now enabling us to unravel the biophysical and molecular mechanisms controlling spine function in health and disease. Here I highlight relevant findings, challenges and hypotheses on spine function, with an emphasis on the electrical properties of spines and on how these affect the storage and integration of excitatory synaptic inputs in pyramidal neurons. In an attempt to make sense of the published data, I propose that the raison d'être for dendritic spines lies in their ability to undergo activity-dependent structural and molecular changes that can modify synaptic strength, and hence alter the gain of the linearly integrated sub-threshold depolarizations in pyramidal neuron dendrites before the generation of a dendritic spike.
Affiliation(s)
- Roberto Araya
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
25
Afshar S, George L, Tapson J, van Schaik A, Hamilton TJ. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels. Front Neurosci 2014; 8:377. [PMID: 25505378 PMCID: PMC4243566 DOI: 10.3389/fnins.2014.00377] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2014] [Accepted: 11/05/2014] [Indexed: 11/17/2022] Open
Abstract
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation on a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
Affiliation(s)
- Saeed Afshar
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Libin George
- School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
- Jonathan Tapson
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- André van Schaik
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- Tara J. Hamilton
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia
- School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
26
Zhang D, Li Y, Rasch MJ, Wu S. Nonlinear multiplicative dendritic integration in neuron and network models. Front Comput Neurosci 2013; 7:56. [PMID: 23658543 PMCID: PMC3647120 DOI: 10.3389/fncom.2013.00056] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2013] [Accepted: 04/21/2013] [Indexed: 11/13/2022] Open
Abstract
Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly non-linear in reality and can heavily depend on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still only describe the voltage potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. We here suggest a new biophysically motivated derivation of a single compartment model that integrates the non-linear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or “shunts” the excitatory potential. In particular, our integration of non-linear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime.
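The multiplicative, shunting character of the interaction can already be seen in a steady-state single-compartment conductance model: an inhibitory conductance reversing at rest injects no current of its own, yet divides the excitatory depolarization. The parameter values are illustrative, and this is not the paper's derived rule.

```python
# Steady-state single-compartment sketch of shunting inhibition.
E_L, E_E = -70.0, 0.0   # leak and excitatory reversal potentials (mV)
g_L = 1.0               # leak conductance (illustrative units)

def v_steady(g_e, g_i):
    """Steady-state voltage; inhibition reverses at the resting potential,
    so it only appears in the denominator -- a divisive (shunting) effect."""
    return (g_L * E_L + g_e * E_E + g_i * E_L) / (g_L + g_e + g_i)

depol_alone = v_steady(0.5, 0.0) - E_L     # EPSP-like depolarization
depol_shunted = v_steady(0.5, 2.0) - E_L   # same input, shunted: divided down
silent = v_steady(0.0, 2.0) - E_L          # shunting alone moves nothing
```

The depolarization scales as g_e(E_E - E_L)/(g_L + g_e + g_i), so inhibition acts as a gain factor on excitation rather than a subtractive offset, which is the multiplicative interaction the single-compartment derivation builds in.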
Affiliation(s)
- Danke Zhang
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China; State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
27
Caudron Q, Donnelly SR, Brand SPC, Timofeeva Y. Computational convergence of the path integral for real dendritic morphologies. J Math Neurosci 2012; 2:11. [PMID: 23174188 PMCID: PMC3652791 DOI: 10.1186/2190-8567-2-11] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2012] [Accepted: 09/11/2012] [Indexed: 06/01/2023]
Abstract
Neurons are characterised by a morphological structure unique amongst biological cells, the core of which is the dendritic tree. The vast number of dendritic geometries, combined with heterogeneous properties of the cell membrane, continue to challenge scientists in predicting neuronal input-output relationships, even in the case of sub-threshold dendritic currents. The Green's function obtained for a given dendritic geometry provides this functional relationship for passive or quasi-active dendrites and can be constructed by a sum-over-trips approach based on a path integral formalism. In this paper, we introduce a number of efficient algorithms for realisation of the sum-over-trips framework and investigate the convergence of these algorithms on different dendritic geometries. We demonstrate that the convergence of the trip sampling methods strongly depends on dendritic morphology as well as the biophysical properties of the cell membrane. For real morphologies, the number of trips to guarantee a small convergence error might become very large and strongly affect computational efficiency. As an alternative, we introduce a highly-efficient matrix method which can be applied to arbitrary branching structures.
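The relation between trip expansions and the matrix method can be caricatured with a Neumann series: split the conductance matrix into local and coupling parts, and each extra power of the coupling term plays the role of trips with one more reflection, with convergence speed depending on the structure, mirroring the paper's point about morphology-dependent convergence. The toy matrix is illustrative, not a real reconstructed morphology.

```python
import numpy as np

# Conductance matrix of a toy passive structure (soma plus two daughter
# compartments); values are illustrative.
G = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  2.0,  0.0],
              [-1.0,  0.0,  2.0]])

# Matrix method: the steady-state Green's function by direct inversion.
Z_direct = np.linalg.inv(G)

# 'Sum-over-trips' caricature: split G = D - C into local (diagonal) and
# coupling parts and expand inv(G) = sum_k (D^-1 C)^k D^-1. Each power adds
# the trips with one more reflection off a neighboring node.
D = np.diag(np.diag(G))
C = D - G
D_inv = np.linalg.inv(D)
Z_series = np.zeros_like(G)
term = D_inv
for _ in range(50):
    Z_series += term
    term = D_inv @ C @ term

err = float(np.abs(Z_series - Z_direct).max())   # series matches the inverse
```

When the coupling is strong relative to the local terms the series needs many "trips" to converge, which is the regime where the paper's direct matrix method wins on efficiency.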
Affiliation(s)
- Quentin Caudron
- Centre for Complexity Science, University of Warwick, Coventry, CV4 7AL, UK
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Simon R Donnelly
- Doctoral Training Centre in Neuroinformatics and Computational Neuroscience, University of Edinburgh, Edinburgh, EH8 9AB, UK
- Samuel PC Brand
- Centre for Complexity Science, University of Warwick, Coventry, CV4 7AL, UK
- Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
- Yulia Timofeeva
- Centre for Complexity Science, University of Warwick, Coventry, CV4 7AL, UK
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
28
Schiess M, Urbanczik R, Senn W. Gradient estimation in dendritic reinforcement learning. J Math Neurosci 2012; 2:2. PMID: 22657827; PMCID: PMC3365869; DOI: 10.1186/2190-8567-2-2.
Abstract
We study synaptic plasticity in a complex neuronal cell model where NMDA-spikes can arise in certain dendritic zones. In the context of reinforcement learning, two kinds of plasticity rules are derived, zone reinforcement (ZR) and cell reinforcement (CR), which both optimize the expected reward by stochastic gradient ascent. For ZR, the synaptic plasticity response to the external reward signal is modulated exclusively by quantities which are local to the NMDA-spike initiation zone in which the synapse is situated. CR, in addition, uses nonlocal feedback from the soma of the cell, provided by mechanisms such as the backpropagating action potential. Simulation results show that, compared to ZR, the use of nonlocal feedback in CR can drastically enhance learning performance. We suggest that the availability of nonlocal feedback for learning is a key advantage of complex neurons over networks of simple point neurons, which have previously been found to be largely equivalent with regard to computational capability.
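The two rule classes can be illustrated with a REINFORCE-style toy model: a cell with two stochastic NMDA zones whose soma fires if any zone spikes, trained by a binary reward. A hedged Python sketch (the task, network sizes, and learning rate are our inventions; only the ZR/CR distinction, zone-local eligibility versus an extra somatic feedback factor, follows the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def trial(w, x, target, eta=0.15, somatic_feedback=False):
    """One reward-modulated plasticity step (stochastic gradient ascent).

    w: (n_zones, n_inputs) synaptic weights, one row per NMDA-spike zone.
    ZR (somatic_feedback=False) updates each synapse using only the
    zone-local REINFORCE eligibility (z - p) * x; CR additionally scales
    the update by a nonlocal somatic signal (a toy stand-in here).
    """
    p = sigmoid(w @ x)                           # zone NMDA-spike probabilities
    z = (rng.random(p.shape) < p).astype(float)  # sampled zone spikes
    soma = float(z.any())                        # soma fires if any zone spiked
    reward = 1.0 if soma == target else -1.0
    elig = np.outer(z - p, x)                    # zone-local eligibility
    gain = (0.5 + 0.5 * soma) if somatic_feedback else 1.0
    w += eta * reward * gain * elig
    return reward

# Hypothetical task: fire for pattern A, stay silent for pattern B.
patterns = [(np.array([1.0, 0.0, 1.0]), 1.0), (np.array([0.0, 1.0, 0.0]), 0.0)]
w = np.zeros((2, 3))
rewards = [trial(w, *patterns[t % 2]) for t in range(4000)]
```

Zone-local ZR alone suffices on this separable toy task; the paper's claim is that the nonlocal somatic factor in CR markedly improves learning in harder settings.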
Affiliation(s)
- Mathieu Schiess
- Department of Physiology, University of Bern, Bühlplatz 5, 3012, Bern, Switzerland
- Robert Urbanczik
- Department of Physiology, University of Bern, Bühlplatz 5, 3012, Bern, Switzerland
- Walter Senn
- Department of Physiology, University of Bern, Bühlplatz 5, 3012, Bern, Switzerland
29
Abstract
In this paper, we pursue recent observations that, through selective dendritic filtering, single neurons respond to specific sequences of presynaptic inputs. We try to provide a principled and mechanistic account of this selectivity by applying a recent free-energy principle to a dendrite that is immersed in its neuropil or environment. We assume that neurons self-organize to minimize a variational free-energy bound on the self-information or surprise of presynaptic inputs that are sampled. We model this as a selective pruning of dendritic spines that are expressed on a dendritic branch. This pruning occurs when postsynaptic gain falls below a threshold. Crucially, postsynaptic gain is itself optimized with respect to free energy. Pruning suppresses free energy as the dendrite selects presynaptic signals that conform to its expectations, specified by a generative model implicit in its intracellular kinetics. Not only does this provide a principled account of how neurons organize and selectively sample the myriad of potential presynaptic inputs they are exposed to, but it also connects the optimization of elemental neuronal (dendritic) processing to generic (surprise or evidence-based) schemes in statistics and machine learning, such as Bayesian model selection and automatic relevance determination.
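The pruning mechanism can be caricatured with a gain that is itself optimised against a Gaussian free energy: for prediction error e_i between a spine's input and the dendrite's expectation mu, F = sum_i (g_i <e_i^2> - log g_i) / 2 is minimised by g_i = 1 / <e_i^2>, so spines whose inputs persistently violate the expectation end up with low gain and are pruned. A toy Python sketch (the Gaussian form, the threshold, and all numbers are our assumptions, not the paper's generative model):

```python
import numpy as np

rng = np.random.default_rng(1)

def optimise_gains(x_samples, mu):
    """Free-energy-optimal gain per spine: with Gaussian errors,
    dF/dg_i = (<e_i^2> - 1/g_i) / 2 = 0 gives g_i = 1 / <e_i^2>."""
    mse = np.mean((x_samples - mu) ** 2, axis=0)  # per-spine mean squared error
    return 1.0 / mse

mu = 0.0                                          # dendrite's expectation
conforming = rng.normal(mu, 0.3, size=(500, 6))   # 6 spines matching it
deviant = rng.normal(3.0, 0.3, size=(500, 4))     # 4 spines violating it
x = np.hstack([conforming, deviant])              # (samples, spines)

gains = optimise_gains(x, mu)
pruned = gains < 1.0                              # prune spines below gain threshold
```

Only the deviant spines fall below the threshold, mirroring the abstract's point that pruning selects presynaptic signals conforming to the dendrite's expectations.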
Affiliation(s)
- Stefan J Kiebel
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany
30
Coop AD, Cornelis H, Santamaria F. Dendritic excitability modulates dendritic information processing in a Purkinje cell model. Front Comput Neurosci 2010; 4:6. PMID: 20407613; PMCID: PMC2856590; DOI: 10.3389/fncom.2010.00006.
Abstract
Using an electrophysiological compartmental model of a Purkinje cell we quantified the contribution of individual active dendritic currents to processing of synaptic activity from granule cells. We used mutual information as a measure to quantify the information from the total excitatory input current (IGlu) encoded in each dendritic current. In this context, each active current was considered an information channel. Our analyses showed that most of the information was encoded by the calcium (ICaP) and calcium-activated potassium (IKc) currents. Mutual information between IGlu and ICaP and IKc was sensitive to different levels of excitatory and inhibitory synaptic activity that, at the same time, resulted in the same firing rate at the soma. Since dendritic excitability could be a mechanism to regulate information processing in neurons, we quantified the changes in mutual information between IGlu and all Purkinje cell currents as a function of the density of dendritic Ca (gCaP) and KCa (gKc) conductances. We extended our analysis to determine the window of temporal integration of IGlu by ICaP and IKc as a function of channel density and synaptic activity. The window of information integration has a stronger dependence on increasing values of gKc than on gCaP, but at high levels of synaptic stimulation information integration is reduced to a few milliseconds. Overall, our results show that different dendritic conductances differentially encode synaptic activity and that dendritic excitability and the level of synaptic activity regulate the flow of information in dendrites.
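Treating each active current as an information channel amounts to estimating I(IGlu; Ichan) from paired time series; a standard plug-in histogram estimator reproduces the qualitative effect. A Python sketch (the surrogate signals are our stand-ins for IGlu and the dendritic currents, not outputs of the Purkinje cell model):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Plug-in (2-D histogram) estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint distribution over bins
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # skip empty bins (0 log 0 = 0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
i_glu = rng.normal(size=20000)                          # surrogate input current
i_cap = np.tanh(i_glu) + 0.1 * rng.normal(size=20000)   # current driven by the input
i_other = rng.normal(size=20000)                        # current independent of it

mi_encoding = mutual_information(i_glu, i_cap)
mi_independent = mutual_information(i_glu, i_other)
```

The input-driven channel carries far more information about the surrogate IGlu than the independent one; the small nonzero value for the independent pair is the well-known positive bias of plug-in estimators on finite samples.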
Affiliation(s)
- Allan D Coop
- Department of Epidemiology and Biostatistics, University of Texas Health Science Center at San Antonio San Antonio, TX, USA