1
Fitch WT. Cellular computation and cognition. Front Comput Neurosci 2023; 17:1107876. [PMID: 38077750] [PMCID: PMC10702520] [DOI: 10.3389/fncom.2023.1107876] [Received: 11/25/2022] [Accepted: 10/09/2023]
Abstract
Contemporary neural network models often overlook a central biological fact about neural processing: that single neurons are themselves complex, semi-autonomous computing systems. Both the information processing and information storage abilities of actual biological neurons vastly exceed the simple weighted sum of synaptic inputs computed by the "units" in standard neural network models. Neurons are eukaryotic cells that store information not only in synapses, but also in their dendritic structure and connectivity, as well as genetic "marking" in the epigenome of each individual cell. Each neuron computes a complex nonlinear function of its inputs, roughly equivalent in processing capacity to an entire 1990s-era neural network model. Furthermore, individual cells provide the biological interface between gene expression, ongoing neural processing, and stored long-term memory traces. Neurons in all organisms have these properties, which are thus relevant to all of neuroscience and cognitive biology. Single-cell computation may also play a particular role in explaining some unusual features of human cognition. The recognition of the centrality of cellular computation to "natural computation" in brains, and of the constraints it imposes upon brain evolution, thus has important implications for the evolution of cognition, and how we study it.
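The abstract's claim that a single neuron rivals a small multi-layer network can be illustrated with a toy two-layer abstraction: each dendritic branch applies its own local nonlinearity before the soma combines the results. The sketch below (hand-picked weights, purely illustrative and not from the article) shows such a "neuron" computing XOR, a function no single weighted-sum unit can compute:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def point_unit(x, w, b):
    """Standard network 'unit': one weighted sum through one nonlinearity."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def dendritic_unit(x):
    """Toy two-layer neuron: branch 1 approximates OR, branch 2 approximates
    AND, and the soma combines them into XOR (all weights hand-picked)."""
    b1 = sigmoid(10 * (x[0] + x[1]) - 5)    # OR-like dendritic subunit
    b2 = sigmoid(10 * (x[0] + x[1]) - 15)   # AND-like dendritic subunit
    return sigmoid(10 * b1 - 20 * b2 - 5)   # soma: OR AND (NOT AND) = XOR
```

With these weights the output is high only for inputs (0,1) and (1,0), a nonlinear separation that a single point unit cannot achieve with any one weight vector.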
Affiliation(s)
- W. Tecumseh Fitch
- Faculty of Life Sciences and Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
2
Zhang Y, He G, Ma L, Liu X, Hjorth JJJ, Kozlov A, He Y, Zhang S, Kotaleski JH, Tian Y, Grillner S, Du K, Huang T. A GPU-based computational framework that bridges neuron simulation and artificial intelligence. Nat Commun 2023; 14:5798. [PMID: 37723170] [PMCID: PMC10507119] [DOI: 10.1038/s41467-023-41553-7] [Received: 06/30/2022] [Accepted: 09/08/2023]
Abstract
Biophysically detailed multi-compartment models are powerful tools for exploring the computational principles of the brain, and also serve as a theoretical framework for generating algorithms for artificial intelligence (AI) systems. However, their high computational cost severely limits applications in both neuroscience and AI. The major bottleneck in simulating detailed compartment models is the simulator's ability to solve large systems of linear equations. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method that markedly accelerates this process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method runs 2-3 orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We build the DeepDendrite framework, which integrates the DHS method with the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we briefly discuss the potential of DeepDendrite for AI, specifically highlighting its ability to enable efficient training of biophysically detailed models on typical image classification tasks.
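The linear systems at the heart of compartmental simulation arise from implicit discretization of the cable equation. For an unbranched cable the system is tridiagonal and solvable in O(n) by the Thomas algorithm, which the Hines method generalizes to branched trees and which DHS parallelizes on GPUs. A minimal serial sketch of that baseline solve (for illustration only, not the paper's implementation):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d in O(n), where a is the
    sub-diagonal (a[0] unused), b the diagonal, and c the super-diagonal
    (c[-1] unused). Forward elimination, then back substitution."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The elimination sweep is inherently sequential along each unbranched stretch; the DHS contribution described above is a scheduling of these eliminations across a branched tree so that independent compartments are processed in parallel on the GPU.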
Affiliation(s)
- Yichen Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Gan He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Lei Ma
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
- Xiaofei Liu
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, China
- J J Johannes Hjorth
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Alexander Kozlov
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Yutao He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Shenjian Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Jeanette Hellgren Kotaleski
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Yonghong Tian
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Electrical and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
- Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
- Kai Du
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
- Tiejun Huang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
3
Moldwin T, Kalmenson M, Segev I. Asymmetric Voltage Attenuation in Dendrites Can Enable Hierarchical Heterosynaptic Plasticity. eNeuro 2023; 10:ENEURO.0014-23.2023. [PMID: 37414554] [PMCID: PMC10354808] [DOI: 10.1523/eneuro.0014-23.2023] [Received: 12/17/2022] [Revised: 05/16/2023] [Accepted: 06/14/2023]
Abstract
Long-term synaptic plasticity is mediated via cytosolic calcium concentration ([Ca2+]). Using a synaptic model that implements calcium-based long-term plasticity via two sources of Ca2+, NMDA receptors and voltage-gated calcium channels (VGCCs), we show in dendritic cable simulations that the interplay between these two calcium sources can produce a diverse array of heterosynaptic effects. When spatially clustered synaptic input produces a local NMDA spike, the resulting dendritic depolarization can activate VGCCs at nonactivated spines, resulting in heterosynaptic plasticity. An NMDA spike at a given dendritic location tends to depolarize dendritic regions distal to the input site more than regions proximal to it. This asymmetry can produce a hierarchical effect in branching dendrites, where an NMDA spike at a proximal branch induces heterosynaptic plasticity primarily at branches distal to it. We also explored how simultaneously activated synaptic clusters located at different dendritic locations synergistically affect the plasticity at the active synapses, as well as the heterosynaptic plasticity of an inactive synapse "sandwiched" between them. We conclude that the inherent electrical asymmetry of dendritic trees enables sophisticated schemes for spatially targeted supervision of heterosynaptic plasticity.
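Calcium-based plasticity of the kind used here is commonly modeled with two concentration thresholds: moderate [Ca2+] depresses a synapse, high [Ca2+] potentiates it. A minimal one-step sketch in that spirit (thresholds and rates are arbitrary illustrative values, not the paper's parameters):

```python
def calcium_plasticity_step(w, ca, theta_d=1.0, theta_p=1.3,
                            gamma_d=0.10, gamma_p=0.20):
    """One update of a two-threshold calcium rule: [Ca2+] above theta_p
    potentiates the weight (toward w = 1), [Ca2+] between theta_d and
    theta_p depresses it (toward w = 0), and lower levels leave it unchanged."""
    if ca >= theta_p:
        return w + gamma_p * (1.0 - w)
    if ca >= theta_d:
        return w - gamma_d * w
    return w
```

In the setting of this paper, the calcium at an inactive spine is supplied by VGCCs opened by NMDA-spike depolarization, so spines on branches that receive stronger depolarization land in higher-calcium regimes than those that do not, which is what makes the heterosynaptic outcome depend on dendritic location.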
Affiliation(s)
- Menachem Kalmenson
- Department of Neurobiology, The Hebrew University of Jerusalem, 91904 Jerusalem, Israel
- Idan Segev
- Edmond and Lily Safra Center for Brain Sciences
- Department of Neurobiology, The Hebrew University of Jerusalem, 91904 Jerusalem, Israel
4
Yu H, Shi J, Qian J, Wang S, Li S. Single dendritic neural classification with an effective spherical search-based whale learning algorithm. Math Biosci Eng 2023; 20:7594-7632. [PMID: 37161164] [DOI: 10.3934/mbe.2023328]
Abstract
McCulloch-Pitts neuron-based neural networks have been the mainstream deep learning method, achieving breakthroughs in various real-world applications. However, the McCulloch-Pitts neuron has long been criticized as overly simplistic. To alleviate this issue, the dendritic neuron model (DNM), which exploits the nonlinear information-processing capabilities of dendrites, has been widely used for prediction and classification tasks. In this study, we propose a hybrid approach that co-evolves the DNM, in contrast to backpropagation (BP) techniques, which are sensitive to initial conditions and readily fall into local minima. The whale optimization algorithm is improved by spherical search learning to perform co-evolution through dynamic hybridization. Eleven classification datasets were selected from the well-known UCI Machine Learning Repository. The method's efficiency was verified by statistical analysis of convergence speed and Wilcoxon signed-rank tests, together with receiver operating characteristic curves and calculation of the area under the curve. In terms of classification accuracy, the proposed co-evolution method beats ten existing cutting-edge non-BP methods as well as BP, suggesting that well-learned DNMs are computationally far more potent than conventional McCulloch-Pitts units and can serve as building blocks for next-generation deep learning methods.
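The DNM discussed above replaces the single weighted sum with a synaptic sigmoid per input, a multiplicative dendritic layer, and a summing membrane feeding a somatic sigmoid. A simplified forward pass (per-input rather than per-connection parameters and a fixed soma threshold are both simplifications of mine; all values illustrative):

```python
import math

def dnm_forward(x, k, theta, branches):
    """Sketch of a dendritic neuron model (DNM) forward pass.
    Synaptic layer: per-input sigmoid with slope k[i] and threshold theta[i].
    Dendritic layer: product over each branch's synaptic outputs (AND-like).
    Membrane layer: sum of branch outputs.
    Soma: sigmoid around a fixed threshold of 0.5 (a simplification)."""
    v = 0.0
    for branch in branches:     # each branch lists the input indices it receives
        prod = 1.0
        for i in branch:
            prod *= 1.0 / (1.0 + math.exp(-k[i] * (x[i] - theta[i])))
        v += prod
    return 1.0 / (1.0 + math.exp(-(v - 0.5)))
```

With steep synapses (large k), a single branch behaves like a soft AND over its inputs, which is the nonlinearity the abstract credits with making learned DNMs more potent than McCulloch-Pitts units.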
Affiliation(s)
- Hang Yu
- College of Computer Science and Technology, Taizhou University, Taizhou 225300, China
- Jiarui Shi
- Department of Engineering, Wesoft Company Ltd., Kawasaki-shi 210-0024, Japan
- Jin Qian
- College of Computer Science and Technology, Taizhou University, Taizhou 225300, China
- Shi Wang
- College of Computer Science and Technology, Taizhou University, Taizhou 225300, China
- Sheng Li
- College of Computer Science and Technology, Taizhou University, Taizhou 225300, China
5
Feldhoff F, Toepfer H, Harczos T, Klefenz F. Periodicity Pitch Perception Part III: Sensibility and Pachinko Volatility. Front Neurosci 2022; 16:736642. [PMID: 35356050] [PMCID: PMC8959216] [DOI: 10.3389/fnins.2022.736642] [Received: 07/05/2021] [Accepted: 02/07/2022]
Abstract
Neuromorphic computer models are used to explain sensory perception. Auditory models generate cochleagrams, which resemble the spike distributions in the auditory nerve. Neuron ensembles along the auditory pathway transform sensory inputs step by step, until pitch is finally represented in auditory categorical spaces. In the two previous articles in this series on periodicity pitch perception, an extended auditory model was successfully used to explain periodicity pitch for tones generated by various musical instruments and for sung vowels. This third part focuses on octopus cells, which are central sensitivity elements in auditory cognition processes. A powerful numerical model was devised in which auditory nerve fiber (ANF) spike events are the inputs that trigger the impulse responses of the octopus cells. Efficient algorithms are developed and demonstrated to explain the behavior of octopus cells, with a focus on a simple event-based hardware implementation of a layer of octopus neurons. The main finding is that an octopus cell model in a local receptive field fine-tunes to a specific trajectory via a spike-timing-dependent plasticity (STDP) learning rule, with synaptic pre-activation and the dendritic back-propagating signal as the post condition. Successful learning explains away the teacher, so there is no need for a temporally precise control of plasticity that distinguishes between learning and retrieval phases. Pitch learning is cascaded: first, octopus cells respond individually by self-adjusting to specific trajectories in their local receptive fields; then, unions of octopus cells are learned collectively for pitch discrimination. Pitch estimation from inter-spike intervals is demonstrated for two input scenarios: a simple sine tone and a sung vowel. The model evaluation indicates an improvement in pitch estimation on a fixed time-scale.
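The STDP learning described above, with presynaptic activation paired against a dendritic back-propagating post signal, can be caricatured by the standard pair-based rule, where the sign of the weight change depends on which event comes first. The constants below are conventional illustrative values, not the paper's:

```python
import math

def stdp_dw(delta_t, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair under pair-based STDP.
    delta_t = t_post - t_pre in ms: pre-before-post (delta_t > 0)
    potentiates, post-before-pre depresses, both decaying with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)
```

Because the rule acts on every pre/post pairing, a cell repeatedly exposed to the same spike trajectory strengthens exactly the synapses whose timing predicts its own response, which is the self-tuning behavior the abstract attributes to the octopus-cell model.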
Affiliation(s)
- Frank Feldhoff
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
- Hannes Toepfer
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
- Tamas Harczos
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
- Frank Klefenz
- Fraunhofer-Institut für Digitale Medientechnologie, Ilmenau, Germany
7
Fitch WT. Information and the single cell. Curr Opin Neurobiol 2021; 71:150-157. [PMID: 34844102] [DOI: 10.1016/j.conb.2021.10.004] [Received: 06/16/2021] [Revised: 09/17/2021] [Accepted: 10/20/2021]
Abstract
Understanding the evolution of cognition requires an understanding of the costs and benefits of neural computation. This requires analysis of neuronal circuitry in terms of information-processing efficiency, ultimately cashed out in terms of ATP expenditures relative to adaptive problem-solving abilities. Despite a preoccupation in neuroscience with the synapse as the source of stored neural information, it is clear that, along with synaptic weights and electrochemical dynamics, neurons have multiple mechanisms which store and process information, including 'wetware' (protein phosphorylation, gene transcription, and so on) and cell morphology (dendritic form). Insights into non-synaptic information-processing can be gained by examining the surprisingly complex abilities of single-celled organisms ('cellular cognition') because neurons share many of the same abilities. Cells provide the fundamental level at which information processing interfaces with gene expression, and cell-internal information-processing mechanisms are both powerful and energetically efficient. Understanding cellular computation should be a central goal of research on cognitive evolution.
8
A synaptic learning rule for exploiting nonlinear dendritic computation. Neuron 2021; 109:4001-4017.e10. [PMID: 34715026] [PMCID: PMC8691952] [DOI: 10.1016/j.neuron.2021.09.044] [Received: 03/23/2021] [Revised: 08/10/2021] [Accepted: 09/23/2021]
Abstract
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
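The key ingredient described above, a voltage dependence that drives coactive synapses to engage dendritic nonlinearities, can be caricatured in a single update step. This is a sketch of the idea only, not the paper's derived rule; the NMDA-like gate shape and all constants are my assumptions:

```python
import math

def voltage_gated_update(w, pre_active, v_dend, v_target,
                         lr=0.05, v_half=-55.0, k=0.25):
    """One illustrative step of a voltage-dependent synaptic update:
    a synapse changes only if it is presynaptically active AND its local
    dendritic voltage opens an NMDA-like sigmoidal gate; the change pushes
    the local voltage toward a target. Voltages in mV; constants arbitrary."""
    if not pre_active:
        return w                 # inactive synapses are untouched
    gate = 1.0 / (1.0 + math.exp(-k * (v_dend - v_half)))
    return w + lr * (v_target - v_dend) * gate
```

Because the gate is near zero on hyperpolarized dendrite, only synapses whose branch is already depolarized by coactive neighbors change appreciably, so learning naturally concentrates coactive inputs where they can recruit local dendritic nonlinearities.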