1. George D, Lázaro-Gredilla M, Lehrach W, Dedieu A, Zhou G, Marino J. A detailed theory of thalamic and cortical microcircuits for predictive visual inference. Sci Adv 2025; 11:eadr6698. PMID: 39908384; PMCID: PMC11800772; DOI: 10.1126/sciadv.adr6698.
Abstract
Understanding cortical microcircuitry requires theoretical models that can tease apart its computational logic from biological details. Although Bayesian inference serves as an abstract framework of cortical computation, precisely mapping concrete instantiations of computational models to biology under real-world tasks is necessary to produce falsifiable neural models. Building on recursive cortical networks, a recent generative model that demonstrated excellent performance on vision benchmarks, we derive a theoretical cortical microcircuit by placing the requirements of the computational model within biological constraints. The derived model suggests precise algorithmic roles for the columnar and laminar feed-forward, feedback, and lateral connections, the thalamic pathway, blobs and interblobs, and the innate lineage-specific interlaminar connectivity within cortical columns. The model also explains several visual phenomena, including the subjective contour effect and the neon-color-spreading effect, with circuit-level precision. Our model and methodology provide a path forward in understanding cortical and thalamic computations.
2. Park P, Wong-Campos JD, Itkis DG, Lee BH, Qi Y, Davis HC, Antin B, Pasarkar A, Grimm JB, Plutkis SE, Holland KL, Paninski L, Lavis LD, Cohen AE. Dendritic excitations govern back-propagation via a spike-rate accelerometer. Nat Commun 2025; 16:1333. PMID: 39905023; PMCID: PMC11794848; DOI: 10.1038/s41467-025-55819-9.
Abstract
Dendrites on neurons support electrical excitations, but the computational significance of these events is not well understood. We developed molecular, optical, and computational tools for all-optical electrophysiology in dendrites. We mapped sub-millisecond voltage dynamics throughout the dendritic trees of CA1 pyramidal neurons in acute brain slices under diverse optogenetic and synaptic stimulus patterns. Our data show history-dependent spike back-propagation in distal dendrites, driven by locally generated Na+ spikes (dSpikes). Dendritic depolarization created a transient window for dSpike propagation, opened by A-type KV channel inactivation, and closed by slow NaV inactivation. Collisions of dSpikes with synaptic inputs triggered calcium channel and N-methyl-D-aspartate receptor (NMDAR)-dependent dendritic plateau potentials and accompanying complex spikes at the soma. This hierarchical ion channel network acts as a spike-rate accelerometer, providing an intuitive picture connecting dendritic biophysics to associative plasticity rules.
Affiliation(s)
- Pojeong Park: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA; Department of Brain Sciences, DGIST, Daegu, Republic of Korea
- J David Wong-Campos: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Daniel G Itkis: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Byung Hun Lee: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Yitong Qi: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Hunter C Davis: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Benjamin Antin: Departments of Statistics and Neuroscience, Columbia University, New York, NY, USA
- Amol Pasarkar: Departments of Statistics and Neuroscience, Columbia University, New York, NY, USA
- Jonathan B Grimm: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Sarah E Plutkis: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Katie L Holland: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Liam Paninski: Departments of Statistics and Neuroscience, Columbia University, New York, NY, USA
- Luke D Lavis: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Adam E Cohen: Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA; Department of Physics, Harvard University, Cambridge, MA, USA
3. Kirchner JH, Euler L, Fritz I, Ferreira Castro A, Gjorgjieva J. Dendritic growth and synaptic organization from activity-independent cues and local activity-dependent plasticity. eLife 2025; 12:RP87527. PMID: 39899359; PMCID: PMC11790248; DOI: 10.7554/elife.87527.
Abstract
Dendritic branching and synaptic organization shape single-neuron and network computations. How they emerge simultaneously during brain development as neurons become integrated into functional networks is still not mechanistically understood. Here, we propose a mechanistic model in which dendrite growth and the organization of synapses arise from the interaction of activity-independent cues from potential synaptic partners and local activity-dependent synaptic plasticity. Consistent with experiments, three phases of dendritic growth (overshoot, pruning, and stabilization) emerge naturally in the model. The model generates stellate-like dendritic morphologies that capture several morphological features of biological neurons under normal and perturbed learning rules, reflecting biological variability. Model-generated dendrites have approximately optimal wiring length consistent with experimental measurements. In addition to establishing dendritic morphologies, activity-dependent plasticity rules organize synapses into spatial clusters according to the correlated activity they experience. We demonstrate that a trade-off between activity-dependent and -independent factors influences dendritic growth and synaptic location throughout development, suggesting that early developmental variability can affect mature morphology and synaptic function. Therefore, a single mechanistic model can capture dendritic growth and account for the synaptic organization of correlated inputs during development. Our work suggests concrete mechanistic components underlying the emergence of dendritic morphologies and synaptic formation and removal in function and dysfunction, and provides experimentally testable predictions for the role of individual components.
Affiliation(s)
- Jan H Kirchner: School of Life Sciences, Technical University of Munich, Freising, Germany; Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Lucas Euler: Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Ingo Fritz: School of Life Sciences, Technical University of Munich, Freising, Germany
- Julijana Gjorgjieva: School of Life Sciences, Technical University of Munich, Freising, Germany; Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
4. Griesius S, Richardson A, Kullmann DM. Supralinear dendritic integration in murine dendrite-targeting interneurons. eLife 2025; 13:RP100268. PMID: 39887034; PMCID: PMC11785373; DOI: 10.7554/elife.100268.
Abstract
Non-linear summation of synaptic inputs to the dendrites of pyramidal neurons has been proposed to increase the computational capacity of neurons through coincidence detection, signal amplification, and additional logic operations such as XOR. Supralinear dendritic integration has been documented extensively in principal neurons, mediated by several voltage-dependent conductances. It has also been reported in parvalbumin-positive hippocampal basket cells, in dendrites innervated by feedback excitatory synapses. Whether other interneurons, which support feed-forward or feedback inhibition of principal neuron dendrites, also exhibit local non-linear integration of synaptic excitation is not known. Here, we use patch-clamp electrophysiology, two-photon calcium imaging, and glutamate uncaging to show that supralinear dendritic integration of near-synchronous, spatially clustered, glutamate-receptor-mediated depolarization occurs in NDNF-positive neurogliaform cells and oriens-lacunosum moleculare interneurons in the mouse hippocampus. Supralinear summation was detected via recordings of somatic depolarizations elicited by uncaging of glutamate on dendritic fragments, and, in neurogliaform cells, by concurrent imaging of dendritic calcium transients. Supralinearity was abolished by blocking NMDA receptors (NMDARs) but resisted blockade of voltage-gated sodium channels. Blocking L-type calcium channels abolished supralinear calcium signalling but had only a minor effect on voltage supralinearity. Dendritic boosting of spatially clustered synaptic signals argues for previously unappreciated computational complexity in dendrite-projecting inhibitory cells of the hippocampus.
Affiliation(s)
- Simonas Griesius: Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Amy Richardson: Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
- Dimitri Michael Kullmann: Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
5. Zhao R, Kim SJ, Xu Y, Zhao J, Wang T, Midya R, Ganguli S, Roy AK, Dubey M, Williams RS, Yang JJ. Memristive Ion Dynamics to Enable Biorealistic Computing. Chem Rev 2025; 125:745-785. PMID: 39729346; PMCID: PMC11759055; DOI: 10.1021/acs.chemrev.4c00587.
Abstract
Conventional artificial intelligence (AI) systems are facing bottlenecks due to the fundamental mismatches between AI models, which rely on parallel, in-memory, and dynamic computation, and traditional transistors, which have been designed and optimized for sequential logic operations. This calls for the development of novel computing units beyond transistors. Inspired by the high efficiency and adaptability of biological neural networks, computing systems mimicking the capabilities of biological structures are gaining more attention. Ion-based memristive devices (IMDs), owing to the intrinsic functional similarities to their biological counterparts, hold significant promise for implementing emerging neuromorphic learning and computing algorithms. In this article, we review the fundamental mechanisms of IMDs based on ion drift and diffusion to elucidate the origins of their diverse dynamics. We then examine how these mechanisms operate within different materials to enable IMDs with various types of switching behaviors, leading to a wide range of applications, from emulating biological components to realizing specialized computing requirements. Furthermore, we explore the potential for IMDs to be modified and tuned to achieve customized dynamics, which positions them as one of the most promising hardware candidates for executing bioinspired algorithms with unique specifications. Finally, we identify the challenges currently facing IMDs that hinder their widespread usage and highlight emerging research directions that could significantly benefit from incorporating IMDs.
Affiliation(s)
- Ruoyu Zhao: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- Seung Ju Kim: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- Yichun Xu: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- Jian Zhao: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- Tong Wang: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- Rivu Midya: Sandia National Laboratories, Livermore, California 94550, United States; Department of Electrical & Computer Engineering, Texas A&M University, College Station, Texas 77843, United States
- Sabyasachi Ganguli: Air Force Research Laboratory, Materials and Manufacturing Directorate, Wright-Patterson Air Force Base, Dayton, Ohio 45433, United States
- Ajit K. Roy: Air Force Research Laboratory, Materials and Manufacturing Directorate, Wright-Patterson Air Force Base, Dayton, Ohio 45433, United States
- Madan Dubey: Sensors and Electron Devices Directorate, U.S. Army Research Laboratory, Adelphi, Maryland 20723, United States
- R. Stanley Williams: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
- J. Joshua Yang: Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, California 90089, United States
6. Chavlis S, Poirazi P. Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning. Nat Commun 2025; 16:943. PMID: 39843414; PMCID: PMC11754790; DOI: 10.1038/s41467-025-56297-9.
Abstract
Artificial neural networks (ANNs) are at the core of most Deep Learning (DL) algorithms that successfully tackle complex problems like image recognition, autonomous driving, and natural language processing. However, unlike biological brains, which tackle similar problems very efficiently, DL algorithms require a large number of trainable parameters, making them energy-intensive and prone to overfitting. Here, we show that a new ANN architecture that incorporates the structured connectivity and restricted sampling properties of biological dendrites counteracts these limitations. We find that dendritic ANNs are more robust to overfitting and match or outperform traditional ANNs on several image classification tasks while using significantly fewer trainable parameters. These advantages are likely the result of a different learning strategy, whereby most of the nodes in dendritic ANNs respond to multiple classes, unlike classical ANNs that strive for class-specificity. Our findings suggest that the incorporation of dendritic properties can make learning in ANNs more precise, resilient, and parameter-efficient and shed new light on how biological features can impact the learning strategies of ANNs.
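The parameter saving from restricted dendritic sampling can be illustrated with a toy sketch. This is not the architecture of entry 6: the layer sizes, patch size, random sampling, and tanh nonlinearities below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def dendritic_layer(x, n_soma=8, n_dend=4, patch=16):
    """Each soma pools a few dendritic subunits; each subunit samples
    only a small random patch of the input (restricted sampling),
    instead of the full input as a dense layer would."""
    outs, n_params = [], 0
    for _ in range(n_soma):
        soma_in = 0.0
        for _ in range(n_dend):
            idx = rng.choice(x.size, size=patch, replace=False)  # restricted receptive field
            w = rng.normal(scale=0.1, size=patch)
            soma_in += np.tanh(w @ x[idx])       # dendritic nonlinearity
            n_params += patch
        outs.append(np.tanh(soma_in))            # somatic nonlinearity
    return np.array(outs), n_params

x = rng.normal(size=784)                         # e.g. a flattened 28x28 image
y, p_dendritic = dendritic_layer(x)
p_dense = x.size * 8                             # dense layer with the same 8 outputs
print(p_dendritic, p_dense)                      # 512 vs 6272 trainable weights
```

Each soma here sees only 64 of the 784 inputs, which is the sense in which sampling is "restricted"; a trainable version would learn the weights rather than drawing them at random.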
Affiliation(s)
- Spyridon Chavlis: Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, Crete, Greece
- Panayiota Poirazi: Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, Crete, Greece
7. Oh J, Ahn W, Ham A, Lee H, Lee S, Cha JH, Seo S, Kang K, Choi SY. Highly Reliable Bi2O2Se Dendritic Neuron Enabling Spatial-Temporal Signal Processing for Real-World Image Classification. ACS Nano 2025; 19:638-648. PMID: 39741429; DOI: 10.1021/acsnano.4c11133.
Abstract
Artificial intelligence (AI) has made significant strides by imitating biological neurons and synapses through simplified models, yet incomplete neuron functionalities can limit performance and energy efficiency in handling complex tasks. Biological neurons process input signals nonlinearly, using dendrites to process spatial-temporal information. This study demonstrates a compact artificial dendrite device employing memristors based on bismuth oxyselenide (Bi2O2Se). A transfer-free Bi2O2Se switching medium is grown directly on metal-patterned substrates via a 350 °C selenization process. The layered Bi2O2Se structure, which limits metal injection, yields reliable dynamic resistive switching with excellent cycle uniformity and endurance exceeding 2 million cycles. The highly reliable current response of the dynamic resistive switching is modeled with respect to the spatial-temporal voltage input. With the Bi2O2Se dendrite device, a dendritic neuron model is implemented, and the proposed neural network achieves a high recognition rate of 78.3% on the Street View House Numbers (SVHN) dataset.
Affiliation(s)
- Jungyeop Oh: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Wonbae Ahn: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Ayoung Ham: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Hyeonji Lee: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Sejin Lee: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Jun-Hwe Cha: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Seunghwan Seo: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Kibum Kang: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea; Graduate School of Semiconductor Technology, KAIST, Daejeon 34141, Republic of Korea
- Sung-Yool Choi: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea; Graduate School of Semiconductor Technology, KAIST, Daejeon 34141, Republic of Korea
8. Sinha A, Gleeson P, Marin B, Dura-Bernal S, Panagiotou S, Crook S, Cantarelli M, Cannon RC, Davison AP, Gurnani H, Silver RA. The NeuroML ecosystem for standardized multi-scale modeling in neuroscience. eLife 2025; 13:RP95135. PMID: 39792574; PMCID: PMC11723582; DOI: 10.7554/elife.95135.
Abstract
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, the inherent complexity of these biological processes makes the construction and reuse of biologically detailed models challenging. A wide range of tools have been developed to aid their construction and simulation, but differences in design and internal representation act as technical barriers to those who wish to use data-driven models in their research workflows. NeuroML, a model description language for computational neuroscience, was developed to address this fragmentation in modeling tools. Since its inception, NeuroML has evolved into a mature community standard that encompasses a wide range of model types and approaches in computational neuroscience. It has enabled the development of a large ecosystem of interoperable open-source software tools for the creation, visualization, validation, and simulation of data-driven models. Here, we describe how the NeuroML ecosystem can be incorporated into research workflows to simplify the construction, testing, and analysis of standardized models of neural systems, and how it supports the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles, thus promoting open, transparent, and reproducible science.
Affiliation(s)
- Ankur Sinha: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Padraig Gleeson: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Bóris Marin: Universidade Federal do ABC, São Bernardo do Campo, Brazil
- Salvador Dura-Bernal: SUNY Downstate Medical Center, Brooklyn, United States; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, United States
- Robin Angus Silver: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
9. Phillips WA, Bachmann T, Spratling MW, Muckli L, Petro LS, Zolnik T. Cellular psychology: relating cognition to context-sensitive pyramidal cells. Trends Cogn Sci 2025; 29:28-40. PMID: 39353837; DOI: 10.1016/j.tics.2024.09.002.
Abstract
'Cellular psychology' is a new field of inquiry that studies dendritic mechanisms for adapting mental events to the current context, thus increasing their coherence, flexibility, effectiveness, and comprehensibility. Apical dendrites of neocortical pyramidal cells have a crucial role in cognition: they receive input from diverse sources, including feedback, and can amplify the cell's feedforward transmission when it is relevant in that context. Specialized subsets of inhibitory interneurons regulate this cooperative context-sensitive processing by increasing or decreasing amplification. Apical input has different effects on cellular output depending on whether we are awake, deeply asleep, or dreaming. Furthermore, wakeful thought and imagery may depend on apical input. High-resolution neuroimaging in humans supports and complements evidence on these cellular mechanisms from other mammals.
Affiliation(s)
- William A Phillips: Psychology, Faculty of Natural Sciences, University of Stirling, Stirling, FK9 4LA, UK
- Talis Bachmann: Institute of Psychology, University of Tartu, Tartu, Estonia
- Michael W Spratling: Department of Behavioral and Cognitive Sciences, University of Luxembourg, L-4366 Esch-Belval, Luxembourg
- Lars Muckli: Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, G12 8QB, UK; Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, UK
- Lucy S Petro: Centre for Cognitive Neuroimaging, School of Psychology and Neuroscience, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, G12 8QB, UK; Imaging Centre of Excellence, College of Medical, Veterinary and Life Sciences, University of Glasgow and Queen Elizabeth University Hospital, Glasgow, UK
- Timothy Zolnik: Department of Biochemistry, Charité Universitätsmedizin Berlin, Berlin 10117, Germany; Department of Biology, Humboldt Universität zu Berlin, Berlin 10117, Germany
10. Stringer C, Zhong L, Syeda A, Du F, Kesa M, Pachitariu M. Rastermap: a discovery method for neural population recordings. Nat Neurosci 2025; 28:201-212. PMID: 39414974; PMCID: PMC11706777; DOI: 10.1038/s41593-024-01783-4.
Abstract
Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers listening to spikes in real time and noticing patterns of activity related to ongoing stimuli or behaviors. With the advent of large-scale recordings, such close observation of data has become difficult. To find patterns in large-scale neural data, we developed 'Rastermap', a visualization method that displays neurons as a raster plot after sorting them along a one-dimensional axis based on their activity patterns. We benchmarked Rastermap on realistic simulations and then used it to explore recordings of tens of thousands of neurons from mouse cortex during spontaneous, stimulus-evoked and task-evoked epochs. We also applied Rastermap to whole-brain zebrafish recordings; to wide-field imaging data; to electrophysiological recordings in rat hippocampus, monkey frontal cortex and various cortical and subcortical regions in mice; and to artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.
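The core idea of sorting neurons along one dimension by activity similarity can be sketched on synthetic data. This is not the Rastermap algorithm itself, which uses a more elaborate clustering-and-matching procedure; it is a minimal spectral stand-in for the same sorting goal.

```python
import numpy as np

def sort_neurons_1d(activity):
    """Order neurons (rows) along one axis so that neurons with similar
    activity end up nearby: here, by their position along the leading
    eigenvector of the neuron-by-neuron correlation matrix."""
    z = activity - activity.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-9
    corr = (z @ z.T) / z.shape[1]                # neuron x neuron correlations
    _, vecs = np.linalg.eigh(corr)               # eigh sorts eigenvalues ascending
    return np.argsort(vecs[:, -1])

# Synthetic raster: 20 neurons alternate between two anti-correlated
# signals, so a good 1D sort should make each group contiguous.
rng = np.random.default_rng(1)
shared = rng.normal(size=500)
neurons = np.stack([(1 if i % 2 == 0 else -1) * shared + 0.3 * rng.normal(size=500)
                    for i in range(20)])
order = sort_neurons_1d(neurons)
print(order % 2)                                 # group labels come out in two contiguous blocks
```

Re-plotting `neurons[order]` as a raster would show the two groups as solid bands, which is the kind of structure Rastermap surfaces in large recordings.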
Affiliation(s)
- Carsen Stringer: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
- Lin Zhong: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
- Atika Syeda: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
- Fengtong Du: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
- Maria Kesa: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
- Marius Pachitariu: Howard Hughes Medical Institute Janelia Research Campus, Ashburn, VA, USA
11. Stingl M, Draguhn A, Both M. A dendrite is a dendrite is a dendrite? Dendritic signal integration beyond the "antenna" model. Pflugers Arch 2025; 477:9-16. PMID: 39162833; PMCID: PMC11711151; DOI: 10.1007/s00424-024-03004-0.
Abstract
Neurons in central nervous systems receive multiple synaptic inputs and transform them into a largely standardized output to their target cells: the action potential. A simplified model posits that synaptic signals are integrated by linear summation and passive propagation towards the axon initial segment, where the threshold for spike generation is either crossed or not. However, multiple lines of research during past decades have shown that signal integration in individual neurons is much more complex, with important functional consequences at the cellular, network, and behavioral-cognitive level. The interplay between concomitant excitatory and inhibitory postsynaptic potentials depends strongly on the relative timing and localization of the respective synapses. In addition, dendrites contain multiple voltage-dependent conductances, which allow scaling of postsynaptic potentials, non-linear input processing, and compartmentalization of signals. Together, these features enable a rich variety of single-neuron computations, including non-linear operations and synaptic plasticity. Hence, we have to revise over-simplified messages from textbooks and use simplified computational models like integrate-and-fire neurons with some caution. This concept article summarizes the most important mechanisms of dendritic integration and highlights some recent developments in the field.
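The contrast between the "antenna" model and compartmentalized non-linear integration can be made concrete with a textbook example: with thresholded dendritic subunits, a single model neuron can implement XOR, which no linear-summation-plus-threshold point neuron can. The specific weights and thresholds below are illustrative choices, not fitted to any biological cell.

```python
import numpy as np

def point_neuron(x, w, theta):
    """'Antenna' model: passive linear summation, then a somatic threshold."""
    return float(w @ x > theta)

def two_layer_neuron(x, w_dend, theta_d, w_soma, theta_s):
    """Dendritic subunits threshold their own inputs before the soma sums
    the branch outputs: a minimal compartmentalized model."""
    d = (w_dend @ x > theta_d).astype(float)     # per-branch dendritic spikes
    return float(w_soma @ d > theta_s)

# XOR-like task on two input pathways: respond to either pathway alone,
# but not to both together.
X = [np.array(p, float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
target = [0.0, 1.0, 1.0, 0.0]

# Two branches with opposite tuning solve it...
w_dend = np.array([[1.0, -1.0], [-1.0, 1.0]])
dendritic_out = [two_layer_neuron(x, w_dend, 0.5, np.ones(2), 0.5) for x in X]
print(dendritic_out)                             # [0.0, 1.0, 1.0, 0.0]

# ...while no point neuron on a coarse weight/threshold grid reproduces
# the target, because XOR is not linearly separable.
grid = np.linspace(-2, 2, 9)
solvable = any([point_neuron(x, np.array([w1, w2]), th) for x in X] == target
               for w1 in grid for w2 in grid for th in grid)
print(solvable)                                  # False
```

This is the sense in which dendritic compartmentalization adds logic operations (such as XOR) beyond what the passive antenna picture allows.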
Affiliation(s)
- Moritz Stingl: Institute of Physiology and Pathophysiology, Medical Faculty, Heidelberg University, 69120 Heidelberg, Germany; Department of Physiology, University of California, San Francisco, San Francisco, CA, USA; Neuroscience Graduate Program, University of California, San Francisco, San Francisco, CA, USA
- Andreas Draguhn: Institute of Physiology and Pathophysiology, Medical Faculty, Heidelberg University, 69120 Heidelberg, Germany
- Martin Both: Institute of Physiology and Pathophysiology, Medical Faculty, Heidelberg University, 69120 Heidelberg, Germany
12. Sadegh-Zadeh SA, Hazegh P. Advancing neural computation: experimental validation and optimization of dendritic learning in feedforward tree networks. Am J Neurodegener Dis 2024; 13:49-69. PMID: 39850544; PMCID: PMC11751443; DOI: 10.62347/fiqw7087.
Abstract
OBJECTIVES: This study aims to explore the capabilities of dendritic learning within feedforward tree networks (FFTNs) in comparison to traditional synaptic plasticity models, particularly in the context of digit recognition tasks using the MNIST dataset.
METHODS: We employed FFTNs with nonlinear dendritic segment amplification and Hebbian learning rules to enhance computational efficiency. The MNIST dataset, consisting of 70,000 images of handwritten digits, was used for training and testing. Key performance metrics, including accuracy, precision, recall, and F1-score, were analysed.
RESULTS: The dendritic models significantly outperformed synaptic plasticity-based models across all metrics. Specifically, the dendritic learning framework achieved a test accuracy of 91%, compared to 88% for the synaptic models, demonstrating superior performance in digit classification.
CONCLUSIONS: Dendritic learning offers a more powerful computational framework by closely mimicking biological neural processes, providing enhanced learning efficiency and scalability. These findings have important implications for advancing both artificial intelligence systems and computational neuroscience.
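The two ingredients named in the methods, a Hebbian weight update and nonlinear dendritic segment amplification, can be sketched together. This is a hypothetical toy, not the paper's FFTN: the squaring nonlinearity, weight renormalization, and segment count are assumed for illustration.

```python
import numpy as np

def hebbian_step(w, x, y, lr=0.05):
    """Plain Hebbian rule: weights grow when pre- and postsynaptic
    activity coincide (delta_w = lr * y * x), with renormalization so
    the weights cannot diverge."""
    w = w + lr * y * x
    return w / np.linalg.norm(w)

def dendritic_response(x, W):
    """Hypothetical FFTN-style unit: each row of W is one dendritic
    segment; rectified segment outputs are amplified supralinearly
    (squared here) before being summed at the soma."""
    seg = np.maximum(W @ x, 0.0)
    return float(np.sum(seg ** 2))               # nonlinear segment amplification

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))
W /= np.linalg.norm(W, axis=1, keepdims=True)    # unit-norm segments
pattern = rng.normal(size=16)

before = dendritic_response(pattern, W)
for _ in range(200):                             # repeated pairing with one pattern
    for i in range(len(W)):
        y = max(W[i] @ pattern, 0.0)             # rectified postsynaptic activity
        W[i] = hebbian_step(W[i], pattern, y)
after = dendritic_response(pattern, W)
print(before, after)                             # the trained response does not shrink
```

Segments already aligned with the pattern strengthen their tuning under the Hebbian rule, so the unit's response to the trained pattern typically grows over the pairings.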
Affiliation(s)
- Pooya Hazegh: Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242, USA
13. Senn W, Dold D, Kungl AF, Ellenberger B, Jordan J, Bengio Y, Sacramento J, Petrovici MA. A neuronal least-action principle for real-time learning in cortical circuits. eLife 2024; 12:RP89674. PMID: 39704647; DOI: 10.7554/elife.89674.
Abstract
One of the most fundamental laws of physics is the principle of least action. Motivated by its predictive power, we introduce a neuronal least-action principle for cortical processing of sensory streams to produce appropriate behavioral outputs in real time. The principle postulates that the voltage dynamics of cortical pyramidal neurons prospectively minimizes the local somato-dendritic mismatch error within individual neurons. For output neurons, the principle implies minimizing an instantaneous behavioral error. For deep network neurons, it implies the prospective firing to overcome integration delays and correct for possible output errors right in time. The neuron-specific errors are extracted in the apical dendrites of pyramidal neurons through a cortical microcircuit that tries to explain away the feedback from the periphery, and correct the trajectory on the fly. Any motor output is in a moving equilibrium with the sensory input and the motor feedback during the ongoing sensory-motor transform. Online synaptic plasticity reduces the somatodendritic mismatch error within each cortical neuron and performs gradient descent on the output cost at any moment in time. The neuronal least-action principle offers an axiomatic framework to derive local neuronal and synaptic laws for global real-time computation and learning in the brain.
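The abstract's verbal statement can be condensed into an energy-minimization sketch (notation ours and purely illustrative, not necessarily the paper's):

```latex
% Sketch of a somato-dendritic mismatch objective. Each neuron i compares
% its somatic voltage u_i with the dendritic prediction W_i r(u_pre) of
% presynaptic rates, evaluated on prospective (future-looking) voltages
% \breve{u} = u + \tau \dot{u}; output neurons add a behavioral cost C.
% Both voltage dynamics and synaptic plasticity descend this mismatch:
E(t) \;=\; \tfrac{1}{2}\sum_i \big\| \breve{u}_i(t) - W_i\, r\big(\breve{u}_{\mathrm{pre}(i)}(t)\big) \big\|^2
      \;+\; \beta\, C\big(\breve{u}_{\mathrm{out}}(t)\big),
\qquad
\dot{W}_i \;\propto\; -\,\frac{\partial E}{\partial W_i}.
```

The prospective voltages are what let plasticity perform gradient descent on the output cost "at any moment in time" despite integration delays, as the abstract describes.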
Affiliation(s)
Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
Dominik Dold
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- European Space Research and Technology Centre, European Space Agency, Noordwijk, Netherlands
Akos F Kungl
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
Benjamin Ellenberger
- Department of Physiology, University of Bern, Bern, Switzerland
- Insel Data Science Center, University Hospital Bern, Bern, Switzerland
Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Electrical Engineering, Yale University, New Haven, United States
João Sacramento
- Department of Computer Science, ETH Zurich, Zurich, Switzerland
Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany

14
Lei W, Clark DA, Demb JB. Compartmentalized pooling generates orientation selectivity in wide-field amacrine cells. Proc Natl Acad Sci U S A 2024; 121:e2411130121. [PMID: 39602271 PMCID: PMC11626119 DOI: 10.1073/pnas.2411130121] [Received: 06/03/2024] [Accepted: 10/29/2024] [Indexed: 11/29/2024]
Abstract
Orientation is one of the most salient features in visual scenes. Neurons at multiple levels of the visual system detect orientation, but in many cases, the underlying biophysical mechanisms remain unresolved. Here, we studied mechanisms for orientation detection at the earliest stage in the visual system, in B/K wide-field amacrine cells (B/K WACs), a group of giant, nonspiking interneurons in the mouse retina that coexpress Bhlhe22 (B) and Kappa Opioid Receptor (K). B/K WACs exhibit orientation-tuned calcium signals along their long, straight, unbranching dendrites, which contain both synaptic inputs and outputs. Simultaneous dendritic calcium imaging and somatic voltage recordings reveal that individual B/K dendrites are electrotonically isolated, exhibiting a spatially confined yet extended receptive field along the dendrite, which we term "compartmentalized pooling." Further, the receptive field of a B/K WAC dendrite exhibits center-surround antagonism. Phenomenological receptive field models demonstrate that compartmentalized pooling generates orientation selectivity, and center-surround antagonism shapes band-pass spatial frequency tuning. At the microcircuit level, B/K WACs receive excitation driven by one contrast polarity (e.g., "ON") and glycinergic inhibition driven by the opposite polarity (e.g., "OFF"). However, this "crossover" inhibition is not essential for generating orientation selectivity. A minimal biophysical model reproduced compartmentalized pooling from feedforward excitatory inputs combined with a substantial increase in the specific membrane resistance between somatic and dendritic compartments. Collectively, our results reveal the biophysical mechanism for generating orientation selectivity in dendrites of B/K WACs, enriching our understanding of the diverse strategies employed throughout the visual system to detect orientation.
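The core intuition — that pooling input over a long, thin, electrotonically isolated region yields orientation tuning — can be shown with a toy receptive field (a schematic Gaussian pooling profile, not the paper's fitted phenomenological model):

```python
# A receptive field elongated along one axis (the "dendrite") responds
# more strongly to a bar aligned with that axis than to an orthogonal
# bar. Purely schematic illustration of compartmentalized pooling.
import math

def rf_weight(x, y, sigma_long=6.0, sigma_short=1.0):
    # Gaussian pooling: wide along x (dendrite axis), narrow along y.
    return math.exp(-0.5 * ((x / sigma_long) ** 2 + (y / sigma_short) ** 2))

def bar_response(theta_deg, half_len=8):
    # Thin unit-contrast bar through the origin at angle theta.
    th = math.radians(theta_deg)
    pts = [(t * math.cos(th), t * math.sin(th))
           for t in range(-half_len, half_len + 1)]
    return sum(rf_weight(x, y) for x, y in pts)

aligned = bar_response(0)      # bar along the pooling axis
orthogonal = bar_response(90)  # bar across it

# Orientation selectivity index: 0 = untuned, 1 = perfectly selective.
osi = (aligned - orthogonal) / (aligned + orthogonal)
```

Adding a center-surround term to the pooling profile would further impose the band-pass spatial-frequency tuning described above.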
Affiliation(s)
Wanyu Lei
- Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06511
- Integrated Graduate Program in Physical and Engineering Biology, Yale University, New Haven, CT 06511
Damon A. Clark
- Department of Molecular, Cellular and Developmental Biology, Yale University, New Haven, CT 06511
- Department of Physics, Yale University, New Haven, CT 06511
- Quantitative Biology Institute, Yale University, New Haven, CT 06511
- Department of Neuroscience, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
Jonathan B. Demb
- Department of Neuroscience, Yale University, New Haven, CT 06511
- Wu Tsai Institute, Yale University, New Haven, CT 06511
- Department of Ophthalmology and Visual Science, Yale University, New Haven, CT 06511
- Department of Cellular and Molecular Physiology, Yale University, New Haven, CT 06511

15
Tann JY, Xu F, Kimura M, Wilkes OR, Yoong LF, Skibbe H, Moore AW. Study of Dendrite Differentiation Using Drosophila Dendritic Arborization Neurons. Cold Spring Harb Protoc 2024; 2024:pdb.top108146. [PMID: 38148165 DOI: 10.1101/pdb.top108146] [Indexed: 12/28/2023]
Abstract
Neurons receive, process, and integrate inputs. These operations are organized by dendrite arbor morphology, and the dendritic arborization (da) neurons of the Drosophila peripheral sensory nervous system are an excellent experimental model for examining the differentiation processes that build and shape the dendrite arbor. Studies in da neurons are enabled by a wealth of fly genetic tools that allow targeted neuron manipulation and labeling of the neuron's cytoskeletal or organellar components. Moreover, as da neuron dendrite arbors cover the body wall, they are highly accessible for live imaging analysis of arbor patterning. Here, we outline the structure and function of different da neuron types and give examples of how they are used to elucidate central mechanisms of dendritic arbor formation.
Affiliation(s)
Jason Y Tann
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
Fangke Xu
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
Minami Kimura
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
Oliver R Wilkes
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
- Department of Cellular and Molecular Biology, Institute for Translational Medicine, University of Liverpool, Liverpool L69 3BX, United Kingdom
Li-Foong Yoong
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
Henrik Skibbe
- Brain Image Analysis Unit, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan
Adrian W Moore
- Laboratory for Neurodiversity, RIKEN Center for Brain Science, Wako-shi, 351-0106, Japan

16
Nordentoft MS, Takahashi N, Heltberg MS, Jensen MH, Rasmussen RN, Papoutsi A. Local changes in potassium ions regulate input integration in active dendrites. PLoS Biol 2024; 22:e3002935. [PMID: 39630876 DOI: 10.1371/journal.pbio.3002935] [Received: 03/12/2024] [Revised: 12/16/2024] [Accepted: 11/13/2024] [Indexed: 12/07/2024]
Abstract
During neuronal activity, the extracellular concentration of potassium ions ([K+]o) increases substantially above resting levels, yet it remains unclear what role these [K+]o changes play in the dendritic integration of synaptic inputs. We here used mathematical formulations and biophysical modeling to explore the role of synaptic activity-dependent K+ changes in dendritic segments of a visual cortex pyramidal neuron, receiving inputs tuned to stimulus orientation. We found that the spatial arrangement of inputs dictates the magnitude of [K+]o changes in the dendrites: Dendritic segments receiving similarly tuned inputs can attain substantially higher [K+]o increases than segments receiving diversely tuned inputs. These [K+]o elevations in turn increase dendritic excitability, leading to more robust and prolonged dendritic spikes. Ultimately, these local effects amplify the gain of neuronal input-output transformations, causing higher orientation-tuned somatic firing rates without compromising orientation selectivity. Our results suggest that local, activity-dependent [K+]o changes in dendrites may act as a "volume knob" that determines the impact of synaptic inputs on feature-tuned neuronal firing.
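One route by which [K+]o elevations raise dendritic excitability follows directly from the Nernst equation: more extracellular K+ depolarizes the K+ reversal potential, weakening the hyperpolarizing K+ current. A back-of-envelope sketch with textbook-style concentrations (illustrative values, not the paper's parameters):

```python
# Nernst potential for K+ at body temperature, in mV.
import math

def nernst_K(K_out_mM, K_in_mM=140.0, temp_C=37.0):
    R, F = 8.314, 96485.0              # J/(mol*K), C/mol
    T = temp_C + 273.15                # kelvin
    return 1000.0 * (R * T / F) * math.log(K_out_mM / K_in_mM)

E_rest = nernst_K(3.0)     # resting [K+]o, roughly 3 mM
E_active = nernst_K(10.0)  # elevated [K+]o under strong clustered input

# Positive shift => K+ current becomes less hyperpolarizing,
# favoring more robust and prolonged dendritic spikes.
depolarizing_shift = E_active - E_rest
```

With these numbers the shift is on the order of tens of millivolts, which is why even local [K+]o changes can act as the "volume knob" the abstract describes.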
Affiliation(s)
Naoya Takahashi
- University of Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience (IINS), UMR 5297, Bordeaux, France
Mogens H Jensen
- Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark
Rune N Rasmussen
- Center for Translational Neuromedicine, University of Copenhagen, Copenhagen, Denmark
Athanasia Papoutsi
- Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Crete, Greece

17
Li H, Hu J, Zhang Y, Chen A, Lin L, Chen G, Chen Y, Chai J, He Q, Wang H, Huang S, Zhou J, Xu Y, Yu B. Boolean Computation in Single-Transistor Neuron. Adv Mater 2024; 36:e2409040. [PMID: 39410727 DOI: 10.1002/adma.202409040] [Received: 06/25/2024] [Revised: 09/13/2024] [Indexed: 12/06/2024]
Abstract
Brain neurons exhibit far more sophisticated and powerful information-processing capabilities than the simple integrators commonly modeled in neuromorphic computing. A biological neuron can in fact efficiently perform Boolean algebra, including linear nonseparable operations. Traditional logic circuits require more than a dozen transistors combined as NOT, AND, and OR gates to implement XOR. Lacking biological competency, artificial neural networks require multilayered solutions to exercise XOR operation. Here, it is shown that a single-transistor neuron, harnessing the intrinsic ambipolarity of graphene and ionic filamentary dynamics, can enable in situ reconfigurable multiple Boolean operations from linear separable to linear nonseparable in an ultra-compact design. By leveraging the spatiotemporal integration of inputs, bio-realistic spiking-dependent Boolean computation is fully realized, rivaling the efficiency of a human brain. Furthermore, a soft-XOR-based neural network via algorithm-hardware co-design, showcasing substantial performance improvement, is demonstrated. These results demonstrate how the artificial neuron, in the ultra-compact form of a single transistor, may function as a powerful platform for Boolean operations. These findings are anticipated to be a starting point for implementing more sophisticated computations at the individual transistor neuron level, leading to super-scalable neural networks for resource-efficient brain-inspired information processing.
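The computational point — that a monotone threshold on a weighted sum can never compute XOR, while a single nonmonotonic ("ambipolar-like") response to the same summed input can — takes only a few lines to show (a schematic analogy to the graphene transistor's bell-shaped transfer characteristic, not a device model):

```python
# XOR with one nonmonotonic unit vs. one linear-threshold unit.
def linear_unit(x1, x2, w1=1.0, w2=1.0, theta=0.5):
    # Monotone threshold on a weighted sum (a Perceptron).
    return int(w1 * x1 + w2 * x2 > theta)

def ambipolar_unit(x1, x2, lo=0.5, hi=1.5):
    # Nonmonotonic response: fires only for intermediate total drive,
    # mimicking a bell-shaped ambipolar transfer curve.
    s = x1 + x2
    return int(lo < s < hi)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_truth = [a ^ b for a, b in inputs]

ambipolar_out = [ambipolar_unit(a, b) for a, b in inputs]
linear_out = [linear_unit(a, b) for a, b in inputs]
```

No choice of weights and threshold makes `linear_unit` match XOR, which is why conventional logic needs several gates; the nonmonotonic unit matches it directly.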
Affiliation(s)
Hanxi Li
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Jiayang Hu
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Yishu Zhang
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Anzhe Chen
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Li Lin
- School of Materials Science and Engineering, Peking University, Beijing, 100871, China
- Technology Innovation Center of Graphene Metrology and Standardization for State Market Regulation, Beijing Graphene Institute, Beijing, 100095, China
Ge Chen
- Technology Innovation Center of Graphene Metrology and Standardization for State Market Regulation, Beijing Graphene Institute, Beijing, 100095, China
Yance Chen
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Jian Chai
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Qian He
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Hailiang Wang
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Shiman Huang
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Jiachao Zhou
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
Yang Xu
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China
- Joint Institute of Zhejiang University and the University of Illinois at Urbana-Champaign, Zhejiang University, Haining, Zhejiang, 314400, China
Bin Yu
- College of Integrated Circuits, Zhejiang University, Hangzhou, Zhejiang, 311200, China
- ZJU-Hangzhou Global Scientific and Technological Innovation Center, Hangzhou, Zhejiang, 311200, China

18
Bertalmío M, Durán Vizcaíno A, Malo J, Wichmann FA. Plaid masking explained with input-dependent dendritic nonlinearities. Sci Rep 2024; 14:24856. [PMID: 39438555 PMCID: PMC11496684 DOI: 10.1038/s41598-024-75471-5] [Received: 06/27/2024] [Accepted: 10/07/2024] [Indexed: 10/25/2024]
Abstract
A serious obstacle for understanding early spatial vision comes from the failure of the so-called standard model (SM) to predict the perception of plaid masking. But the SM originated from a major oversimplification of single neuron computations, ignoring fundamental properties of dendrites. Here we show that a spatial vision model including computations mimicking the input-dependent nature of dendritic nonlinearities, i.e. including nonlinear neural summation, has the potential to explain plaid masking data.
Affiliation(s)
Jesús Malo
- Universitat de València, València, Spain

19
Dura-Bernal S, Herrera B, Lupascu C, Marsh BM, Gandolfi D, Marasco A, Neymotin S, Romani A, Solinas S, Bazhenov M, Hay E, Migliore M, Reinmann M, Arkhipov A. Large-Scale Mechanistic Models of Brain Circuits with Biophysically and Morphologically Detailed Neurons. J Neurosci 2024; 44:e1236242024. [PMID: 39358017 PMCID: PMC11450527 DOI: 10.1523/jneurosci.1236-24.2024] [Received: 06/28/2024] [Revised: 07/09/2024] [Accepted: 07/31/2024] [Indexed: 10/04/2024]
Abstract
Understanding the brain requires studying its multiscale interactions from molecules to networks. The increasing availability of large-scale datasets detailing brain circuit composition, connectivity, and activity is transforming neuroscience. However, integrating and interpreting this data remains challenging. Concurrently, advances in supercomputing and sophisticated modeling tools now enable the development of highly detailed, large-scale biophysical circuit models. These mechanistic multiscale models offer a method to systematically integrate experimental data, facilitating investigations into brain structure, function, and disease. This review, based on a Society for Neuroscience 2024 MiniSymposium, aims to disseminate recent advances in large-scale mechanistic modeling to the broader community. It highlights (1) examples of current models for various brain regions developed through experimental data integration; (2) their predictive capabilities regarding cellular and circuit mechanisms underlying experimental recordings (e.g., membrane voltage, spikes, local-field potential, electroencephalography/magnetoencephalography) and brain function; and (3) their use in simulating biomarkers for brain diseases like epilepsy, depression, schizophrenia, and Parkinson's, aiding in understanding their biophysical underpinnings and developing novel treatments. The review showcases state-of-the-art models covering hippocampus, somatosensory, visual, motor, auditory cortical, and thalamic circuits across species. These models predict neural activity at multiple scales and provide insights into the biophysical mechanisms underlying sensation, motor behavior, brain signals, neural coding, disease, pharmacological interventions, and neural stimulation. Collaboration with experimental neuroscientists and clinicians is essential for the development and validation of these models, particularly as datasets grow. Hence, this review aims to foster interest in detailed brain circuit models, leading to cross-disciplinary collaborations that accelerate brain research.
Affiliation(s)
Salvador Dura-Bernal
- State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, New York 11203
- Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York 10962
Carmen Lupascu
- Institute of Biophysics, National Research Council/Human Brain Project, Palermo 90146, Italy
Brianna M Marsh
- University of California San Diego, La Jolla, California 92093
Daniela Gandolfi
- Department of Engineering "Enzo Ferrari", University of Modena and Reggio Emilia, Modena 41125, Italy
Samuel Neymotin
- Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York 10962
- School of Medicine, New York University, New York 10012
Armando Romani
- Swiss Federal Institute of Technology Lausanne (EPFL)/Blue Brain Project, Lausanne 1015, Switzerland
Maxim Bazhenov
- University of California San Diego, La Jolla, California 92093
Etay Hay
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario M5T 1R8, Canada
- University of Toronto, Toronto, Ontario M5S 1A1, Canada
Michele Migliore
- Institute of Biophysics, National Research Council/Human Brain Project, Palermo 90146, Italy
Michael Reinmann
- Swiss Federal Institute of Technology Lausanne (EPFL)/Blue Brain Project, Lausanne 1015, Switzerland

20
Cirtala G, De Schutter E. Branch-specific clustered parallel fiber input controls dendritic computation in Purkinje cells. iScience 2024; 27:110756. [PMID: 39286509 PMCID: PMC11404202 DOI: 10.1016/j.isci.2024.110756] [Received: 01/22/2024] [Revised: 05/17/2024] [Accepted: 08/14/2024] [Indexed: 09/19/2024]
Abstract
Most central neurons have intricately branched dendritic trees that integrate massive numbers of synaptic inputs. Intrinsic active mechanisms in dendrites can be heterogeneous and be modulated in a branch-specific way. However, it remains poorly understood how heterogeneous intrinsic properties contribute to processing of synaptic input. We propose the first computational model of the cerebellar Purkinje cell with dendritic heterogeneity, in which each branch is an individual unit and is characterized by its own set of ion channel conductance densities. When simultaneously activating a cluster of parallel fiber synapses, we measure the peak amplitude of a response and observe how changes in P-type calcium channel conductance density shift the dendritic responses from a linear one to a bimodal one including dendritic calcium spikes and vice-versa. These changes relate to the morphology of each branch. We show how dendritic calcium spikes propagate and how Kv4.3 channels block spreading depolarization to nearby branches.
Affiliation(s)
Gabriela Cirtala
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna 904-0412, Okinawa, Japan
Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology Graduate University, Onna 904-0412, Okinawa, Japan

21
Makarov R, Chavlis S, Poirazi P. DendroTweaks: An interactive approach for unraveling dendritic dynamics. bioRxiv 2024:2024.09.06.611191. [PMID: 39314451 PMCID: PMC11418972 DOI: 10.1101/2024.09.06.611191] [Indexed: 09/25/2024]
Abstract
Neurons rely on the interplay between dendritic morphology and ion channels to transform synaptic inputs into a sequence of somatic spikes. Detailed biophysical models with active dendrites have been instrumental in exploring this interaction. However, such models can be challenging to understand and validate due to the large number of parameters involved. In this work, we introduce DendroTweaks - a toolbox designed to illuminate how morpho-electric properties map to dendritic events and how these dendritic events shape neuronal output. DendroTweaks features a web-based graphical interface, where users can explore single-cell neuronal models and adjust their morphological and biophysical parameters with real-time visual feedback. In particular, DendroTweaks is tailored to interactive fine-tuning of subcellular properties, such as kinetics and distributions of ion channels, as well as the dynamics and allocation of synaptic inputs. It offers an automated approach for standardization and refinement of voltage-gated ion channel models to make them more comprehensible and reusable. The toolbox allows users to run various experimental protocols and record data from multiple dendritic and somatic locations, thereby enhancing model validation. Finally, it aims to deepen our understanding of which dendritic properties are essential for neuronal input-output transformation. Using this knowledge, one can simplify models through a built-in morphology reduction algorithm and export them for further use in faster, more interpretable networks. With DendroTweaks, users can gain better control and understanding of their models, advancing research on dendritic input-output transformations and their role in network computations.
Affiliation(s)
Roman Makarov
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, 70013, Greece
- Department of Biology, University of Crete, Heraklion, 70013, Greece
Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, 70013, Greece
Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, 70013, Greece

22
Fischer LF, Xu L, Murray KT, Harnett MT. Learning to use landmarks for navigation amplifies their representation in retrosplenial cortex. bioRxiv 2024:2024.08.18.607457. [PMID: 39229229 PMCID: PMC11370392 DOI: 10.1101/2024.08.18.607457] [Indexed: 09/05/2024]
Abstract
Visual landmarks provide powerful reference signals for efficient navigation by altering the activity of spatially tuned neurons, such as place cells, head direction cells, and grid cells. To understand the neural mechanism by which landmarks exert such strong influence, it is necessary to identify how these visual features gain spatial meaning. In this study, we characterized visual landmark representations in mouse retrosplenial cortex (RSC) using chronic two-photon imaging of the same neuronal ensembles over the course of spatial learning. We found a pronounced increase in landmark-referenced activity in RSC neurons that, once established, remained stable across days. Changing behavioral context by uncoupling treadmill motion from visual feedback systematically altered neuronal responses associated with the coherence between visual scene flow speed and self-motion. To explore potential underlying mechanisms, we modeled how burst firing, mediated by supralinear somatodendritic interactions, could efficiently mediate context- and coherence-dependent integration of landmark information. Our results show that visual encoding shifts to landmark-referenced and context-dependent codes as these cues take on spatial meaning during learning.
Affiliation(s)
Lukas F. Fischer
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
Liane Xu
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
Keith T. Murray
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
Mark T. Harnett
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA

23
Liao Z, Losonczy A. Learning, Fast and Slow: Single- and Many-Shot Learning in the Hippocampus. Annu Rev Neurosci 2024; 47:187-209. [PMID: 38663090 PMCID: PMC11519319 DOI: 10.1146/annurev-neuro-102423-100258] [Indexed: 08/09/2024]
Abstract
The hippocampus is critical for memory and spatial navigation. The ability to map novel environments, as well as more abstract conceptual relationships, is fundamental to the cognitive flexibility that humans and other animals require to survive in a dynamic world. In this review, we survey recent advances in our understanding of how this flexibility is implemented anatomically and functionally by hippocampal circuitry, during both active exploration (online) and rest (offline). We discuss the advantages and limitations of spike timing-dependent plasticity and the more recently discovered behavioral timescale synaptic plasticity in supporting distinct learning modes in the hippocampus. Finally, we suggest complementary roles for these plasticity types in explaining many-shot and single-shot learning in the hippocampus and discuss how these rules could work together to support the learning of cognitive maps.
Affiliation(s)
Zhenrui Liao
- Department of Neuroscience and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
Attila Losonczy
- Department of Neuroscience and Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA

24
Song Y, Benna MK. Parallel Synapses with Transmission Nonlinearities Enhance Neuronal Classification Capacity. bioRxiv 2024:2024.07.01.601490. [PMID: 39005326 PMCID: PMC11244940 DOI: 10.1101/2024.07.01.601490] [Indexed: 07/16/2024]
Abstract
Cortical neurons often establish multiple synaptic contacts with the same postsynaptic neuron. To avoid functional redundancy of these parallel synapses, it is crucial that each synapse exhibits distinct computational properties. Here we model the current to the soma contributed by each synapse as a sigmoidal transmission function of its presynaptic input, with learnable parameters such as amplitude, slope, and threshold. We evaluate the classification capacity of a neuron equipped with such nonlinear parallel synapses, and show that with a small number of parallel synapses per axon, it substantially exceeds that of the Perceptron. Furthermore, the number of correctly classified data points can increase superlinearly as the number of presynaptic axons grows. When training with an unrestricted number of parallel synapses, our model neuron can effectively implement an arbitrary aggregate transmission function for each axon, constrained only by monotonicity. Nevertheless, successful learning in the model neuron often requires only a small number of parallel synapses. We also apply these parallel synapses in a feedforward neural network trained to classify MNIST images, and show that they can increase the test accuracy. This demonstrates that multiple nonlinear synapses per input axon can substantially enhance a neuron's computational power.
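The model described above — several learnable sigmoids per axon, summed into one aggregate transmission function — can be sketched in a few lines (parameter values illustrative only):

```python
# One axon contacts the postsynaptic neuron through several "parallel
# synapses", each a sigmoid of the same presynaptic input with its own
# learnable amplitude, slope, and threshold. Their sum is the axon's
# aggregate transmission function.
import math

def sigmoid_synapse(x, amplitude, slope, threshold):
    return amplitude / (1.0 + math.exp(-slope * (x - threshold)))

def aggregate_transmission(x, synapses):
    # Total somatic current contributed by all parallel synapses.
    return sum(sigmoid_synapse(x, a, s, t) for a, s, t in synapses)

# Three parallel synapses on one axon: (amplitude, slope, threshold).
parallel = [(0.5, 8.0, 0.2), (1.0, 4.0, 0.5), (0.8, 10.0, 0.8)]

xs = [i / 100.0 for i in range(101)]
ys = [aggregate_transmission(x, parallel) for x in xs]

# With positive amplitudes and slopes, the aggregate is monotone --
# the only constraint the paper places on it.
monotone = all(y2 >= y1 for y1, y2 in zip(ys, ys[1:]))
```

With enough parallel sigmoids per axon, such sums can approximate an arbitrary monotone transmission function, which is the source of the capacity gain over the Perceptron's single fixed weight per input.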
Affiliation(s)
Yuru Song
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA
Marcus K. Benna
- Department of Neurobiology, School of Biological Sciences, University of California San Diego, La Jolla, CA 92093, USA

25
Hall S. Is the Papez circuit the location of the elusive episodic memory engram? IBRO Neurosci Rep 2024; 16:249-259. [PMID: 38370006 PMCID: PMC10869290 DOI: 10.1016/j.ibneur.2024.01.016] [Received: 09/25/2023] [Accepted: 01/31/2024] [Indexed: 02/20/2024]
Abstract
All of the brain structures and white matter that make up Papez' circuit, as well as the circuit as a whole, are implicated in the literature in episodic memory formation and recall. This paper shows that Papez' circuit has the detailed structure and connectivity that is evidently required to support the episodic memory engram, and that identifying Papez' circuit as the location of the engram answers a number of long-standing questions regarding the role of medial temporal lobe structures in episodic memory. The paper then shows that the process by which the episodic memory engram may be formed is a network-wide Hebbian potentiation termed "racetrack potentiation", whose frequency corresponds to that observed in vivo in humans for memory functions. Further, by considering the microcircuits observed in the medial temporal lobe structures forming Papez' circuit, the paper establishes the neural mechanisms behind the required functions of sensory information storage and recall, pattern completion, pattern separation, and memory consolidation. The paper shows that Papez' circuit has the necessary connectivity to gather the various elements of an episodic memory occurring within Pöppel's experienced time or "quantum of experience". Finally, the paper shows how the memory engram located in Papez' circuit might be central to the formation of a duplicate engram in the cortex enabling consolidation and long-term storage of episodic memories.
Affiliation(s)
- Steven Hall
- Department of Psychology, University of Bolton, Deane Road, Bolton BL3 5AB, UK

26
Granato A, Phillips WA, Schulz JM, Suzuki M, Larkum ME. Dysfunctions of cellular context-sensitivity in neurodevelopmental learning disabilities. Neurosci Biobehav Rev 2024; 161:105688. [PMID: 38670298 DOI: 10.1016/j.neubiorev.2024.105688]
Abstract
Pyramidal neurons have a pivotal role in the cognitive capabilities of neocortex. Though they have been predominantly modeled as integrate-and-fire point processors, many of them have another point of input integration in their apical dendrites that is central to mechanisms endowing them with the sensitivity to context that underlies basic cognitive capabilities. Here we review evidence implicating impairments of those mechanisms in three major neurodevelopmental disabilities, fragile X, Down syndrome, and fetal alcohol spectrum disorders. Multiple dysfunctions of the mechanisms by which pyramidal cells are sensitive to context are found to be implicated in all three syndromes. Further deciphering of these cellular mechanisms would lead to the understanding of and therapies for learning disabilities beyond any that are currently available.
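The "second point of input integration" in the apical dendrite described above can be caricatured in a few lines. The sketch below is illustrative only — the gain function, its parameters, and the name `two_point_response` are assumptions, not the authors' model: apical (contextual) input multiplicatively amplifies the response to basal (feedforward) drive but cannot drive output on its own.

```python
import numpy as np

def two_point_response(basal, apical, k=2.0):
    """Toy two-point activation: apical (contextual) input cannot drive
    output on its own, but multiplicatively amplifies the response to
    basal (feedforward) drive."""
    drive = max(basal, 0.0)                      # rectified feedforward drive
    gain = 1.0 + k * np.tanh(max(apical, 0.0))   # context-dependent amplification
    return drive * gain

r_basal = two_point_response(1.0, 0.0)    # feedforward alone
r_both = two_point_response(1.0, 1.0)     # feedforward + context: amplified
r_apical = two_point_response(0.0, 1.0)   # context alone: silent
```

On this reading, a "dysfunction of context-sensitivity" would correspond to a degraded gain term, leaving the feedforward pathway intact but uncalibrated.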
Affiliation(s)
- Alberto Granato
- Dept. of Veterinary Sciences, University of Turin, Grugliasco, Turin 10095, Italy
- William A Phillips
- Psychology, Faculty of Natural Sciences, University of Stirling, Scotland FK9 4LA, UK
- Jan M Schulz
- Roche Pharma Research & Early Development, Neuroscience & Rare Diseases Discovery, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, Grenzacherstrasse 124, Basel 4070, Switzerland
- Mototaka Suzuki
- Dept. of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam 1098 XH, the Netherlands
- Matthew E Larkum
- Neurocure Center for Excellence, Charité Universitätsmedizin Berlin, Berlin 10117, Germany; Institute of Biology, Humboldt University Berlin, Berlin, Germany

27
Choucry A, Nomoto M, Inokuchi K. Engram mechanisms of memory linking and identity. Nat Rev Neurosci 2024; 25:375-392. [PMID: 38664582 DOI: 10.1038/s41583-024-00814-0]
Abstract
Memories are thought to be stored in neuronal ensembles referred to as engrams. Studies have suggested that when two memories occur in quick succession, a proportion of their engrams overlap and the memories become linked (in a process known as prospective linking) while maintaining their individual identities. In this Review, we summarize the key principles of memory linking through engram overlap, as revealed by experimental and modelling studies. We describe evidence of the involvement of synaptic memory substrates, spine clustering and non-linear neuronal capacities in prospective linking, and suggest a dynamic somato-synaptic model, in which memories are shared between neurons yet remain separable through distinct dendritic and synaptic allocation patterns. We also bring into focus retrospective linking, in which memories become associated after encoding via offline reactivation, and discuss key temporal and mechanistic differences between prospective and retrospective linking, as well as the potential differences in their cognitive outcomes.
Affiliation(s)
- Ali Choucry
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- Department of Pharmacology and Toxicology, Faculty of Pharmacy, Cairo University, Cairo, Egypt
- Masanori Nomoto
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan
- Japan Agency for Medical Research and Development (AMED), Tokyo, Japan
- Kaoru Inokuchi
- Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science, University of Toyama, Toyama, Japan
- CREST, Japan Science and Technology Agency (JST), University of Toyama, Toyama, Japan

28
Benavides-Piccione R, Blazquez-Llorca L, Kastanauskaite A, Fernaud-Espinosa I, Tapia-González S, DeFelipe J. Key morphological features of human pyramidal neurons. Cereb Cortex 2024; 34:bhae180. [PMID: 38745556 PMCID: PMC11094408 DOI: 10.1093/cercor/bhae180]
Abstract
The basic building block of the cerebral cortex, the pyramidal cell, has been shown to be characterized by a markedly different dendritic structure among layers, cortical areas, and species. Functionally, differences in the structure of their dendrites and axons are critical in determining how neurons integrate information. However, within the human cortex, these neurons have not been quantified in detail. In the present work, we performed intracellular injections of Lucifer Yellow and 3D reconstructed over 200 pyramidal neurons, including apical and basal dendritic and local axonal arbors and dendritic spines, from human occipital primary visual area and associative temporal cortex. We found that human pyramidal neurons from temporal cortex were larger, displayed more complex apical and basal structural organization, and had more spines compared to those in primary sensory cortex. Moreover, these human neocortical neurons displayed specific shared and distinct characteristics in comparison to previously published human hippocampal pyramidal neurons. Additionally, we identified distinct morphological features in human neurons that set them apart from mouse neurons. Lastly, we observed certain consistent organizational patterns shared across species. This study emphasizes the existing diversity within pyramidal cell structures across different cortical areas and species, suggesting substantial species-specific variations in their computational properties.
Affiliation(s)
- Ruth Benavides-Piccione
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain
- Lidia Blazquez-Llorca
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain
- Departamento de Tecnología Fotónica y Bioingeniería, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid 28040, Spain
- Asta Kastanauskaite
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Isabel Fernaud-Espinosa
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Silvia Tapia-González
- Laboratorio de Neurofisiología Celular, Facultad de Medicina, Universidad San Pablo-CEU, CEU Universities, Madrid, Spain
- Javier DeFelipe
- Laboratorio Cajal de Circuitos Corticales, Centro de Tecnología Biomédica, Universidad Politécnica de Madrid, Pozuelo de Alarcón, Madrid 28223, Spain
- Instituto Cajal, Consejo Superior de Investigaciones Científicas (CSIC), Avda. Doctor Arce 37, Madrid 28002, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), ISCIII, Valderrebollo 5, Madrid 28031, Spain

29
Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. [PMID: 38509348 PMCID: PMC11089004 DOI: 10.1038/s41593-024-01597-4]
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
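The gating of excitatory plasticity by inhibition described above can be illustrated with a toy rule. This is a sketch under assumed parameters (the threshold form, learning rate, and function name are inventions for illustration), not the paper's actual co-dependent model:

```python
def codependent_update(w, pre, post, inh, eta=0.05, theta_inh=0.5):
    """Toy co-dependent rule: Hebbian potentiation (pre * post) of an
    excitatory weight is switched off when local inhibitory input exceeds
    a threshold, so inhibition decides *when* excitatory LTP occurs."""
    gate = 1.0 if inh < theta_inh else 0.0   # inhibition gates plasticity
    return w + eta * gate * pre * post

w0 = 0.5
w_gate_open = codependent_update(w0, pre=1.0, post=1.0, inh=0.1)   # LTP allowed
w_gate_shut = codependent_update(w0, pre=1.0, post=1.0, inh=0.9)   # LTP blocked
```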
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Biozentrum, University of Basel, Basel, Switzerland
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria

30
Moreno-Sanchez A, Vasserman AN, Jang H, Hina BW, von Reyn CR, Ausborn J. Morphology and synapse topography optimize linear encoding of synapse numbers in Drosophila looming responsive descending neurons. bioRxiv [Preprint] 2024:2024.04.24.591016. [PMID: 38712267 PMCID: PMC11071487 DOI: 10.1101/2024.04.24.591016]
Abstract
Synapses are often precisely organized on dendritic arbors, yet the role of synaptic topography in dendritic integration remains poorly understood. Utilizing electron microscopy (EM) connectomics we investigate synaptic topography in Drosophila melanogaster looming circuits, focusing on retinotopically tuned visual projection neurons (VPNs) that synapse onto descending neurons (DNs). Synapses of a given VPN type project to non-overlapping regions on DN dendrites. Within these spatially constrained clusters, synapses are not retinotopically organized, but instead adopt near random distributions. To investigate how this organization strategy impacts DN integration, we developed multicompartment models of DNs fitted to experimental data and using precise EM morphologies and synapse locations. We find that DN dendrite morphologies normalize EPSP amplitudes of individual synaptic inputs and that near random distributions of synapses ensure linear encoding of synapse numbers from individual VPNs. These findings illuminate how synaptic topography influences dendritic integration and suggest that linear encoding of synapse numbers may be a default strategy established through connectivity and passive neuron properties, upon which active properties and plasticity can then tune as needed.
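The linear-encoding claim is easy to see in a toy form: if dendritic normalization equalizes per-synapse EPSP amplitudes, the summed somatic response grows by a fixed increment per added synapse, whereas a saturating (un-normalized) integrator compresses the code. A minimal sketch with illustrative numbers, not fitted to the paper's EM data:

```python
import numpy as np

def somatic_response(n_synapses, unit_epsp=0.2, saturating=False):
    """Sum n equal-amplitude EPSPs. 'saturating=True' is a toy stand-in
    for un-normalized dendritic integration; otherwise the summed
    response encodes synapse number linearly."""
    total = n_synapses * unit_epsp
    return float(np.tanh(total)) if saturating else total

counts = np.arange(1, 11)
linear = np.array([somatic_response(n) for n in counts])
increments = np.diff(linear)   # equal step per added synapse -> linear code
```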
Affiliation(s)
- Anthony Moreno-Sanchez
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- Alexander N. Vasserman
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- HyoJong Jang
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Bryce W. Hina
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Catherine R. von Reyn
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States
- Jessica Ausborn
- Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, United States

31
El Srouji L, Abdelghany M, Ambethkar HR, Lee YJ, Berkay On M, Yoo SJB. Perspective: an optoelectronic future for heterogeneous, dendritic computing. Front Neurosci 2024; 18:1394271. [PMID: 38699677 PMCID: PMC11064649 DOI: 10.3389/fnins.2024.1394271]
Abstract
With the increasing number of applications reliant on large neural network models, the pursuit of more suitable computing architectures is becoming increasingly relevant. Progress toward co-integrated silicon photonic and CMOS circuits provides new opportunities for computing architectures with high bandwidth optical networks and high-speed computing. In this paper, we discuss trends in neuromorphic computing architecture and outline an optoelectronic future for heterogeneous, dendritic neuromorphic computing.
Affiliation(s)
- S. J. Ben Yoo
- Department of Electrical and Computer Engineering, University of California, Davis, Davis, CA, United States

32
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiology of Language 2024; 5:225-247. [PMID: 38645618 PMCID: PMC11025648 DOI: 10.1162/nol_a_00133]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal

33
Groden M, Moessinger HM, Schaffran B, DeFelipe J, Benavides-Piccione R, Cuntz H, Jedlicka P. A biologically inspired repair mechanism for neuronal reconstructions with a focus on human dendrites. PLoS Comput Biol 2024; 20:e1011267. [PMID: 38394339 PMCID: PMC10917450 DOI: 10.1371/journal.pcbi.1011267]
Abstract
Investigating and modelling the functionality of human neurons remains challenging due to the technical limitations, resulting in scarce and incomplete 3D anatomical reconstructions. Here we used a morphological modelling approach based on optimal wiring to repair the parts of a dendritic morphology that were lost due to incomplete tissue samples. In Drosophila, where dendritic regrowth has been studied experimentally using laser ablation, we found that modelling the regrowth reproduced a bimodal distribution between regeneration of cut branches and invasion by neighbouring branches. Interestingly, our repair model followed growth rules similar to those for the generation of a new dendritic tree. To generalise the repair algorithm from Drosophila to mammalian neurons, we artificially sectioned reconstructed dendrites from mouse and human hippocampal pyramidal cell morphologies, and showed that the regrown dendrites were morphologically similar to the original ones. Furthermore, we were able to restore their electrophysiological functionality, as evidenced by the recovery of their firing behaviour. Importantly, we show that such repairs also apply to other neuron types including hippocampal granule cells and cerebellar Purkinje cells. We then extrapolated the repair to incomplete human CA1 pyramidal neurons, where the anatomical boundaries of the particular brain areas innervated by the neurons in question were known. Interestingly, the repair of incomplete human dendrites helped to simulate the recently observed increased synaptic thresholds for dendritic NMDA spikes in human versus mouse dendrites. To make the repair tool available to the neuroscience community, we have developed an intuitive and simple graphical user interface (GUI), which is available in the TREES toolbox (www.treestoolbox.org).
Affiliation(s)
- Moritz Groden
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Hannah M. Moessinger
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Barbara Schaffran
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Javier DeFelipe
- Laboratorio Cajal de Circuitos Corticales (CTB), Universidad Politécnica de Madrid, Spain
- Instituto Cajal (CSIC), Madrid, Spain
- Ruth Benavides-Piccione
- Laboratorio Cajal de Circuitos Corticales (CTB), Universidad Politécnica de Madrid, Spain
- Instituto Cajal (CSIC), Madrid, Spain
- Hermann Cuntz
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in cooperation with the Max Planck Society, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Peter Jedlicka
- 3R Computer-Based Modelling, Faculty of Medicine, ICAR3R, Justus Liebig University Giessen, Giessen, Germany
- Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University, Frankfurt am Main, Germany

34
Hwang GM, Simonian AL. Special Issue-Biosensors and Neuroscience: Is Biosensors Engineering Ready to Embrace Design Principles from Neuroscience? Biosensors 2024; 14:68. [PMID: 38391987 PMCID: PMC10886788 DOI: 10.3390/bios14020068]
Abstract
In partnership with the Air Force Office of Scientific Research (AFOSR), the National Science Foundation's (NSF) Emerging Frontiers and Multidisciplinary Activities (EFMA) office of the Directorate for Engineering (ENG) launched an Emerging Frontiers in Research and Innovation (EFRI) topic for the fiscal years FY22 and FY23 entitled "Brain-inspired Dynamics for Engineering Energy-Efficient Circuits and Artificial Intelligence" (BRAID) [...].
Affiliation(s)
- Grace M. Hwang
- Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723, USA

35
Huang S, Wu SJ, Sansone G, Ibrahim LA, Fishell G. Layer 1 neocortex: Gating and integrating multidimensional signals. Neuron 2024; 112:184-200. [PMID: 37913772 PMCID: PMC11180419 DOI: 10.1016/j.neuron.2023.09.041]
Abstract
Layer 1 (L1) of the neocortex acts as a nexus for the collection and processing of widespread information. By integrating ascending inputs with extensive top-down activity, this layer likely provides critical information regulating how the perception of sensory inputs is reconciled with expectation. This is accomplished by sorting, directing, and integrating the complex network of excitatory inputs that converge onto L1. These signals are combined with neuromodulatory afferents and gated by the wealth of inhibitory interneurons that either are embedded within L1 or send axons from other cortical layers. Together, these interactions dynamically calibrate information flow throughout the neocortex. This review will primarily focus on L1 within the primary sensory cortex and will use these insights to understand L1 in other cortical areas.
Affiliation(s)
- Shuhan Huang
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Program in Neuroscience, Harvard University, Cambridge, MA 02138, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Sherry Jingjing Wu
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Giulia Sansone
- Biological and Environmental Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia
- Leena Ali Ibrahim
- Biological and Environmental Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal 23955-6900, Kingdom of Saudi Arabia
- Gord Fishell
- Harvard Medical School, Blavatnik Institute, Department of Neurobiology, Boston, MA 02115, USA; Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA

36
Zheng H, Zheng Z, Hu R, Xiao B, Wu Y, Yu F, Liu X, Li G, Deng L. Temporal dendritic heterogeneity incorporated with spiking neural networks for learning multi-timescale dynamics. Nat Commun 2024; 15:277. [PMID: 38177124 PMCID: PMC10766638 DOI: 10.1038/s41467-023-44614-z]
Abstract
It is widely believed the brain-inspired spiking neural networks have the capability of processing temporal information owing to their dynamic attributes. However, how to understand what kind of mechanisms contributing to the learning ability and exploit the rich dynamic properties of spiking neural networks to satisfactorily solve complex temporal computing tasks in practice still remains to be explored. In this article, we identify the importance of capturing the multi-timescale components, based on which a multi-compartment spiking neural model with temporal dendritic heterogeneity, is proposed. The model enables multi-timescale dynamics by automatically learning heterogeneous timing factors on different dendritic branches. Two breakthroughs are made through extensive experiments: the working mechanism of the proposed model is revealed via an elaborated temporal spiking XOR problem to analyze the temporal feature integration at different levels; comprehensive performance benefits of the model over ordinary spiking neural networks are achieved on several temporal computing benchmarks for speech recognition, visual recognition, electroencephalogram signal recognition, and robot place recognition, which shows the best-reported accuracy and model compactness, promising robustness and generalization, and high execution efficiency on neuromorphic hardware. This work moves neuromorphic computing a significant step toward real-world applications by appropriately exploiting biological observations.
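The core idea of heterogeneous per-branch timing factors can be sketched as a bank of leaky integrators with different time constants feeding one soma. This is a simplified stand-in for the paper's learnable multi-compartment spiking model (no spiking, no learning); all parameters and names below are assumptions:

```python
import numpy as np

def dendritic_filterbank(inputs, taus, dt=1.0):
    """Pass one input stream through several 'dendritic branches', each a
    leaky integrator with its own time constant, then sum at the soma.
    inputs: (T,) array of input current; taus: per-branch time constants."""
    decay = np.exp(-dt / np.asarray(taus, dtype=float))
    u = np.zeros(len(decay))                  # per-branch membrane state
    soma = np.empty(len(inputs))
    for t, x in enumerate(inputs):
        u = decay * u + (1.0 - decay) * x     # branch-specific filtering
        soma[t] = u.sum()                     # somatic summation
    return soma

pulse = np.zeros(100)
pulse[10] = 1.0                               # single input pulse
out = dendritic_filterbank(pulse, taus=[2.0, 10.0, 50.0])  # fast/medium/slow
```

Because the slow branch retains a trace of the pulse long after the fast branch has decayed, the summed response carries information at several timescales at once, which is the property the learned timing factors exploit.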
Affiliation(s)
- Hanle Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Zhong Zheng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Rui Hu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Bo Xiao
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Yujie Wu
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Fangwen Yu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Xue Liu
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China
- Guoqi Li
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Lei Deng
- Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing, China

37
Buxton RB, Wong EC. Metabolic energetics underlying attractors in neural models. J Neurophysiol 2024; 131:88-105. [PMID: 38056422 DOI: 10.1152/jn.00120.2023]
Abstract
Neural population modeling, including the role of neural attractors, is a promising tool for understanding many aspects of brain function. We propose a modeling framework to connect the abstract variables used in modeling to recent cellular-level estimates of the bioenergetic costs of different aspects of neural activity, measured in ATP consumed per second per neuron. Based on recent work, an empirical reference for brain ATP use for the awake resting brain was estimated as ∼2 × 10⁹ ATP/s per neuron across several mammalian species. The energetics framework was applied to the Wilson-Cowan (WC) model of two interacting populations of neurons, one excitatory (E) and one inhibitory (I). The attractors considered were steady-state behavior and limit cycle behavior, both of which end when the excitatory stimulus ends, and sustained activity that persists after the stimulus ends. The energy cost of limit cycles, with oscillations much faster than the average neuronal firing rate of the population, tracks more closely with the firing rate than with the limit cycle frequency. Self-sustained firing driven by recurrent excitation, though, involves higher firing rates and a higher energy cost. As an example of a simple network in which each node is a WC model, a combination of three nodes can serve as a flexible circuit element that turns on with an oscillating output when input passes a threshold and then persists after the input ends (an "on-switch"), with moderate overall ATP use. The proposed framework can serve as a guide for anchoring neural population models to plausible bioenergetics requirements.
NEW & NOTEWORTHY This work bridges two approaches for understanding brain function: cellular-level studies of the metabolic energy costs of different aspects of neural activity and neural population modeling, including the role of neural attractors. The proposed modeling framework connects energetic costs, in ATP consumed per second per neuron, to the more abstract variables used in neural population modeling. In particular, this work anchors potential neural attractors to physiologically plausible bioenergetics requirements.
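The two-population Wilson-Cowan setup described above can be sketched with a forward-Euler integration. The parameters, sigmoid, and the activity-proportional ATP proxy below are assumptions for illustration, not the paper's calibrated cost model; only the ∼2 × 10⁹ ATP/s per neuron resting figure is taken from the abstract, and here it is used merely as a scale:

```python
import numpy as np

def sigmoid(x, a=1.2, theta=2.8):
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(P=1.5, T=2000, dt=0.01, wee=12.0, wei=10.0,
                 wie=9.0, wii=3.0, tau_e=1.0, tau_i=2.0):
    """Euler-integrate a two-population Wilson-Cowan model:
    E (excitatory) and I (inhibitory) rates under constant drive P."""
    E, I = 0.1, 0.1
    trace = np.empty(T)
    for t in range(T):
        dE = (-E + sigmoid(wee * E - wei * I + P)) / tau_e
        dI = (-I + sigmoid(wie * E - wii * I)) / tau_i
        E, I = E + dt * dE, I + dt * dI
        trace[t] = E
    return trace

E_trace = wilson_cowan()
# Crude energy bookkeeping: scale mean population activity by the
# ~2e9 ATP/s per neuron resting figure (a placeholder proxy, not the
# paper's actual cost model).
atp_proxy = 2e9 * E_trace.mean()
```

Because the sigmoid is bounded, the population rates stay in (0, 1), so any cost proxy tied to them remains bounded as well; the paper's contribution is to replace such a proxy with costs itemized per aspect of neural activity.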
Affiliation(s)
- Richard B Buxton
- Department of Radiology, University of California, San Diego, California, United States
- Eric C Wong
- Department of Radiology, University of California, San Diego, California, United States
- Department of Psychiatry, University of California, San Diego, California, United States

38
Wang C, Zhang T, Chen X, He S, Li S, Wu S. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 2023; 12:e86365. [PMID: 38132087 PMCID: PMC10796146 DOI: 10.7554/elife.86365]
Abstract
Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including Central Processing Unit, Graphics Processing Unit, and Tensor Processing Unit, which ensures high-running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
Collapse
Affiliation(s)
- Chaoming Wang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
| | - Tianqiu Zhang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Xiaoyu Chen
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Sichao He
- Beijing Jiaotong University, Beijing, China
| | - Shangyang Li
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| | - Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
| |
Collapse
|
39
|
Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. [PMID: 38019856 PMCID: PMC10710097 DOI: 10.1073/pnas.2220743120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 10/11/2023] [Indexed: 12/01/2023] Open
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide a natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections, providing support for target-based learning. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the higher-level network produces the contextual signal for the lower-level network.
Collapse
Affiliation(s)
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
| | - Cosimo Lupo
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
| | - Paolo Muratore
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, Trieste 34136, Italy
| | | |
Collapse
|
40
|
Suzuki M, Pennartz CMA, Aru J. How deep is the brain? The shallow brain hypothesis. Nat Rev Neurosci 2023; 24:778-791. [PMID: 37891398 DOI: 10.1038/s41583-023-00756-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/25/2023] [Indexed: 10/29/2023]
Abstract
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today's dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute. This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks. We argue that the shallow brain architecture provides several critical benefits over deep hierarchical structures and a more complete depiction of how mammalian brains achieve fast and flexible computational capabilities.
Collapse
Affiliation(s)
- Mototaka Suzuki
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands.
| | - Cyriel M A Pennartz
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
| | - Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia.
| |
Collapse
|
41
|
Fitch WT. Cellular computation and cognition. Front Comput Neurosci 2023; 17:1107876. [PMID: 38077750 PMCID: PMC10702520 DOI: 10.3389/fncom.2023.1107876] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 10/09/2023] [Indexed: 05/28/2024] Open
Abstract
Contemporary neural network models often overlook a central biological fact about neural processing: that single neurons are themselves complex, semi-autonomous computing systems. Both the information processing and information storage abilities of actual biological neurons vastly exceed the simple weighted sum of synaptic inputs computed by the "units" in standard neural network models. Neurons are eukaryotic cells that store information not only in synapses, but also in their dendritic structure and connectivity, as well as genetic "marking" in the epigenome of each individual cell. Each neuron computes a complex nonlinear function of its inputs, roughly equivalent in processing capacity to an entire 1990s-era neural network model. Furthermore, individual cells provide the biological interface between gene expression, ongoing neural processing, and stored long-term memory traces. Neurons in all organisms have these properties, which are thus relevant to all of neuroscience and cognitive biology. Single-cell computation may also play a particular role in explaining some unusual features of human cognition. The recognition of the centrality of cellular computation to "natural computation" in brains, and of the constraints it imposes upon brain evolution, thus has important implications for the evolution of cognition, and how we study it.
Collapse
Affiliation(s)
- W. Tecumseh Fitch
- Faculty of Life Sciences and Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
| |
Collapse
|
42
|
Kagan BJ, Gyngell C, Lysaght T, Cole VM, Sawai T, Savulescu J. The technology, opportunities, and challenges of Synthetic Biological Intelligence. Biotechnol Adv 2023; 68:108233. [PMID: 37558186 PMCID: PMC7615149 DOI: 10.1016/j.biotechadv.2023.108233] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Revised: 07/15/2023] [Accepted: 08/05/2023] [Indexed: 08/11/2023]
Abstract
Integrating neural cultures developed through synthetic biology methods with digital computing has enabled the early development of Synthetic Biological Intelligence (SBI). Recently, key studies have emphasized the advantages of biological neural systems in some information processing tasks. However, neither the technology behind this early development, nor the potential ethical opportunities or challenges, have been explored in detail yet. Here, we review the key aspects that facilitate the development of SBI and explore potential applications. Considering these foreseeable use cases, various ethical implications are proposed. Ultimately, this work aims to provide a robust framework to structure ethical considerations to ensure that SBI technology can be both researched and applied responsibly.
Collapse
Affiliation(s)
| | - Christopher Gyngell
- Murdoch Children's Research Institute, Melbourne, VIC, Australia; University of Melbourne, Melbourne, VIC, Australia
| | - Tamra Lysaght
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Victor M Cole
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Tsutomu Sawai
- Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan; Institute for the Advanced Study of Human Biology (ASHBi), Kyoto University, Kyoto, Japan
| | - Julian Savulescu
- Murdoch Children's Research Institute, Melbourne, VIC, Australia; University of Melbourne, Melbourne, VIC, Australia; Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| |
Collapse
|
43
|
Zhang Y, He G, Ma L, Liu X, Hjorth JJJ, Kozlov A, He Y, Zhang S, Kotaleski JH, Tian Y, Grillner S, Du K, Huang T. A GPU-based computational framework that bridges neuron simulation and artificial intelligence. Nat Commun 2023; 14:5798. [PMID: 37723170 PMCID: PMC10507119 DOI: 10.1038/s41467-023-41553-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 09/08/2023] [Indexed: 09/20/2023] Open
Abstract
Biophysically detailed multi-compartment models are powerful tools to explore computational principles of the brain and also serve as a theoretical framework to generate algorithms for artificial intelligence (AI) systems. However, their high computational cost severely limits applications in both the neuroscience and AI fields. The major bottleneck in simulating detailed compartment models is solving large systems of linear equations. Here, we present a novel Dendritic Hierarchical Scheduling (DHS) method to markedly accelerate this process. We theoretically prove that the DHS implementation is computationally optimal and accurate. This GPU-based method runs 2-3 orders of magnitude faster than the classic serial Hines method on a conventional CPU platform. We build a DeepDendrite framework, which integrates the DHS method with the GPU computing engine of the NEURON simulator, and demonstrate applications of DeepDendrite in neuroscience tasks. We investigate how spatial patterns of spine inputs affect neuronal excitability in a detailed human pyramidal neuron model with 25,000 spines. Furthermore, we provide a brief discussion of the potential of DeepDendrite for AI, specifically highlighting its ability to enable efficient training of biophysically detailed models in typical image classification tasks.
Collapse
Affiliation(s)
- Yichen Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Gan He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Lei Ma
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
| | - Xiaofei Liu
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Information Science and Engineering, Yunnan University, Kunming, 650500, China
| | - J J Johannes Hjorth
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
| | - Alexander Kozlov
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Yutao He
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Shenjian Zhang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
| | - Jeanette Hellgren Kotaleski
- Science for Life Laboratory, School of Electrical Engineering and Computer Science, Royal Institute of Technology KTH, Stockholm, SE-10044, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Yonghong Tian
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- School of Electrical and Computer Engineering, Shenzhen Graduate School, Peking University, Shenzhen, 518055, China
| | - Sten Grillner
- Department of Neuroscience, Karolinska Institute, Stockholm, SE-17165, Sweden
| | - Kai Du
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China.
| | - Tiejun Huang
- National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, Beijing, 100871, China
- Beijing Academy of Artificial Intelligence (BAAI), Beijing, 100084, China
- Institute for Artificial Intelligence, Peking University, Beijing, 100871, China
| |
Collapse
|
44
|
Dimitrov AG. Resting membrane state as an interplay of electrogenic transporters with various pumps. Pflugers Arch 2023; 475:1113-1128. [PMID: 37468808 DOI: 10.1007/s00424-023-02838-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 06/26/2023] [Accepted: 07/06/2023] [Indexed: 07/21/2023]
Abstract
In this study, a new idea is presented: that electrogenic transporters, rather than pumps, determine the cell resting state. The previous assumption was that pumps, especially the sodium pump, determine it. The latter assumption meets difficulties because it violates the law of conservation of energy; a significant deficit of pump activity has also been reported. The amount of energy carried by a single ATP molecule reflects the potential of the inner mitochondrial membrane, which is about -200 mV. If pumps enforce a resting membrane potential that is more than twice smaller, then the majority of energy stored in ATP would be dissipated by each pump turning. However, this problem could be solved if control is transferred from pumps to something else, e.g., electrogenic transporters. Then pumps would transfer the energy to the ionic gradient without losses, while the cell surface membrane potential would be associated with the reversal potential of some electrogenic transporters. A minimal scheme of this type would include a sodium-calcium exchanger as well as sodium and calcium pumps. However, note that calcium channels and pumps are positioned along both intracellular organelles and the surface membrane. Therefore, the above-mentioned scheme would involve them as well as possible intercellular communications. Such schemes, where various kinds of pumps are assumed to work in parallel, may explain to a great extent the slow turning rate of the individual members. Interaction of pumps and transporters positioned at distant biological membranes, with various forms of energy transfer between them, may thus result in hypoxic/reperfusion injury, different kinds of muscle fatigue, and nerve-glia interactions.
Collapse
Affiliation(s)
- A G Dimitrov
- Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Bl. 105, 1113, Sofia, Bulgaria.
| |
Collapse
|
45
|
Petousakis KE, Apostolopoulou AA, Poirazi P. The impact of Hodgkin-Huxley models on dendritic research. J Physiol 2023; 601:3091-3102. [PMID: 36218068 PMCID: PMC10600871 DOI: 10.1113/jp282756] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Accepted: 09/16/2022] [Indexed: 11/08/2022] Open
Abstract
For the past seven decades, the Hodgkin-Huxley (HH) formalism has been an invaluable tool in the arsenal of neuroscientists, allowing for robust and reproducible modelling of ionic conductances and the electrophysiological phenomena they underlie. Despite its apparent age, its role as a cornerstone of computational neuroscience has not waned. The discovery of dendritic regenerative events mediated by ionic and synaptic conductances has solidified the importance of HH-based models further, yielding new predictions concerning dendritic integration, synaptic plasticity and neuronal computation. These predictions are often validated through in vivo and in vitro experiments, advancing our understanding of the neuron as a biological system and emphasizing the importance of HH-based detailed computational models as an instrument of dendritic research. In this article, we discuss recent studies in which the HH formalism is used to shed new light on dendritic function and its role in neuronal phenomena.
Collapse
Affiliation(s)
- Konstantinos-Evangelos Petousakis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
- Department of Biology, University of Crete, Heraklion, Crete, Greece
| | - Anthi A Apostolopoulou
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
| | - Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas (FORTH), Heraklion, Crete, Greece
| |
Collapse
|
46
|
Dura-Bernal S, Neymotin SA, Suter BA, Dacre J, Moreira JVS, Urdapilleta E, Schiemann J, Duguid I, Shepherd GMG, Lytton WW. Multiscale model of primary motor cortex circuits predicts in vivo cell-type-specific, behavioral state-dependent dynamics. Cell Rep 2023; 42:112574. [PMID: 37300831 PMCID: PMC10592234 DOI: 10.1016/j.celrep.2023.112574] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 02/27/2023] [Accepted: 05/12/2023] [Indexed: 06/12/2023] Open
Abstract
Understanding cortical function requires studying multiple scales: molecular, cellular, circuit, and behavioral. We develop a multiscale, biophysically detailed model of mouse primary motor cortex (M1) with over 10,000 neurons and 30 million synapses. Neuron types, densities, spatial distributions, morphologies, biophysics, connectivity, and dendritic synapse locations are constrained by experimental data. The model includes long-range inputs from seven thalamic and cortical regions and noradrenergic inputs. Connectivity depends on cell class and cortical depth at sublaminar resolution. The model accurately predicts in vivo layer- and cell-type-specific responses (firing rates and LFP) associated with behavioral states (quiet wakefulness and movement) and experimental manipulations (noradrenaline receptor blockade and thalamus inactivation). We generate mechanistic hypotheses underlying the observed activity and analyze low-dimensional population latent dynamics. This quantitative theoretical framework can be used to integrate and interpret M1 experimental data and sheds light on the cell-type-specific multiscale dynamics associated with several experimental conditions and behaviors.
Collapse
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
| | - Samuel A Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, Grossman School of Medicine, New York University (NYU), New York, NY, USA
| | - Benjamin A Suter
- Department of Physiology, Northwestern University, Evanston, IL, USA
| | - Joshua Dacre
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
| | - Joao V S Moreira
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
| | - Eugenio Urdapilleta
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
| | - Julia Schiemann
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK; Center for Integrative Physiology and Molecular Medicine, Saarland University, Saarbrücken, Germany
| | - Ian Duguid
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
| | | | - William W Lytton
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, MD, USA; Department of Neurology, Kings County Hospital Center, Brooklyn, NY, USA
| |
Collapse
|
47
|
Vinck M, Uran C, Spyropoulos G, Onorato I, Broggini AC, Schneider M, Canales-Johnson A. Principles of large-scale neural interactions. Neuron 2023; 111:987-1002. [PMID: 37023720 DOI: 10.1016/j.neuron.2023.03.015] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/27/2023] [Accepted: 03/09/2023] [Indexed: 04/08/2023]
Abstract
What mechanisms underlie flexible inter-areal communication in the cortex? We consider four mechanisms for temporal coordination and their contributions to communication: (1) Oscillatory synchronization (communication-through-coherence); (2) communication-through-resonance; (3) non-linear integration; and (4) linear signal transmission (coherence-through-communication). We discuss major challenges for communication-through-coherence based on layer- and cell-type-specific analyses of spike phase-locking, heterogeneity of dynamics across networks and states, and computational models for selective communication. We argue that resonance and non-linear integration are viable alternative mechanisms that facilitate computation and selective communication in recurrent networks. Finally, we consider communication in relation to cortical hierarchy and critically examine the hypothesis that feedforward and feedback communication use fast (gamma) and slow (alpha/beta) frequencies, respectively. Instead, we propose that feedforward propagation of prediction errors relies on the non-linear amplification of aperiodic transients, whereas gamma and beta rhythms represent rhythmic equilibrium states that facilitate sustained and efficient information encoding and amplification of short-range feedback via resonance.
Collapse
Affiliation(s)
- Martin Vinck
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands.
| | - Cem Uran
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Georgios Spyropoulos
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
| | - Irene Onorato
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Ana Clara Broggini
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany
| | - Marius Schneider
- Ernst Struengmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528 Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neurophysics, Radboud University Nijmegen, 6525 Nijmegen, the Netherlands
| | - Andres Canales-Johnson
- Department of Psychology, University of Cambridge, CB2 3EB Cambridge, UK; Centro de Investigacion en Neuropsicologia y Neurociencias Cognitivas, Facultad de Ciencias de la Salud, Universidad Catolica del Maule, 3480122 Talca, Chile.
| |
Collapse
|
48
|
Gao S, Zhou M, Wang Z, Sugiyama D, Cheng J, Wang J, Todo Y. Fully Complex-Valued Dendritic Neuron Model. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:2105-2118. [PMID: 34487498 DOI: 10.1109/tnnls.2021.3105901] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
A single dendritic neuron model (DNM) that possesses the nonlinear information processing ability of dendrites has been widely used for classification and prediction. Complex-valued neural networks that consist of a number of multiple/deep-layer McCulloch-Pitts neurons have achieved great successes so far since neural computing was utilized for signal processing. Yet complex-valued representations have not appeared in single-neuron architectures. In this article, we first extend the DNM from the real-valued domain to a complex-valued one. Performance of the complex-valued DNM (CDNM) is evaluated through a complex XOR problem, a non-minimum phase equalization problem, and a real-world wind prediction task. Also, a comparative analysis of a set of elementary transcendental functions as activation functions is implemented, and preparatory experiments are carried out for determining hyperparameters. The experimental results indicate that the proposed CDNM significantly outperforms the real-valued DNM, complex-valued multi-layer perceptrons, and other complex-valued neuron models.
Collapse
|
49
|
Tang Y, Zhang X, An L, Yu Z, Liu JK. Diverse role of NMDA receptors for dendritic integration of neural dynamics. PLoS Comput Biol 2023; 19:e1011019. [PMID: 37036844 PMCID: PMC10085026 DOI: 10.1371/journal.pcbi.1011019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2022] [Accepted: 03/09/2023] [Indexed: 04/11/2023] Open
Abstract
Neurons, whose morphology can be represented as a tree structure, have various distinct dendritic branches. Different types of synaptic receptors distributed over the dendrites are responsible for receiving inputs from other neurons. NMDA receptors (NMDARs) are expressed as excitatory units and play a key physiological role in synaptic function. Although NMDARs are widely expressed in most types of neurons, they play a different role in cerebellar Purkinje cells (PCs). Utilizing a computational PC model with detailed dendritic morphology, we explored the role of NMDARs at different parts of dendritic branches and regions. We found that somatic responses can switch from silent to simple spikes and complex spikes, depending on the specific dendritic branch. Detailed examination of the dendrites with respect to their diameters and distance to the soma revealed diverse response patterns that nonetheless reduce to the two firing modes, simple and complex spikes. Taken together, these results suggest that NMDARs play an important role in controlling excitability while taking dendritic properties into account. Given the complexity of neural morphology across cell types, our work suggests that the functional role of NMDARs is not stereotyped but highly interwoven with the local properties of neuronal structure.
Collapse
Affiliation(s)
- Yuanhong Tang
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Xingyu Zhang
- Guangzhou Institute of Technology, Xidian University, Guangzhou, China
| | - Lingling An
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Zhaofei Yu
- Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
| | - Jian K Liu
- School of Computing, University of Leeds, Leeds, United Kingdom
| |
Collapse
|
50
|
Zhang Y, Du K, Huang T. Heuristic Tree-Partition-Based Parallel Method for Biophysically Detailed Neuron Simulation. Neural Comput 2023; 35:627-644. [PMID: 36746142 DOI: 10.1162/neco_a_01565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 10/20/2022] [Indexed: 02/08/2023]
Abstract
Biophysically detailed neuron simulation is a powerful tool to explore the mechanisms behind biological experiments and bridge the gap between various scales in neuroscience research. However, the extremely high computational complexity of detailed neuron simulation restricts the modeling and exploration of detailed network models. The bottleneck is solving the system of linear equations. To accelerate detailed simulation, we propose a heuristic tree-partition-based parallel method (HTP) to parallelize the computation of the Hines algorithm, the kernel for solving linear equations, and leverage the strong parallel capability of the graphic processing unit (GPU) to achieve further speedup. We formulate the problem of how to obtain a fine parallel process as a tree-partition problem. Next, we present a heuristic partition algorithm to obtain an effective partition to efficiently parallelize the equation-solving process in detailed simulation. With further optimization on GPU, our HTP method achieves a 2.2- to 8.5-fold speedup compared to the state-of-the-art GPU method and a 36- to 660-fold speedup compared to the typical Hines algorithm.
Collapse
Affiliation(s)
- Yichen Zhang
- School of Computer Science, Peking University, Beijing 100871, China
| | - Kai Du
- School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| | - Tiejun Huang
- School of Computer Science and Institute for Artificial Intelligence, Peking University, Beijing 100871, China
| |
Collapse
|