1. Pietras B. Pulse Shape and Voltage-Dependent Synchronization in Spiking Neuron Networks. Neural Comput 2024; 36:1476-1540. PMID: 39028958. DOI: 10.1162/neco_a_01680.
Abstract
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the θ-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse coupling in networks of QIF and θ-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.
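The QIF neuron at the core of this work is simple enough to sketch directly. Below is a minimal, illustrative Euler integration of a single QIF neuron (dv/dt = v² + I) using the standard finite-threshold approximation of its blow-up-and-reset dynamics; the threshold, reset, and step size are arbitrary choices, and the paper's pulse-coupling machinery is not included.

```python
import numpy as np

def simulate_qif(I=1.0, v_reset=-10.0, v_peak=10.0, dt=1e-4, t_max=10.0):
    """Euler-integrate a single quadratic integrate-and-fire (QIF) neuron.

    Membrane equation: dv/dt = v**2 + I.  When v reaches v_peak the neuron
    is said to spike and v is reset to v_reset, mimicking the blow-up to
    +infinity and re-entry from -infinity of the exact QIF dynamics.
    """
    v = v_reset
    spike_times = []
    for step in range(int(t_max / dt)):
        v += dt * (v * v + I)
        if v >= v_peak:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

spikes = simulate_qif(I=1.0)
# For I > 0 the QIF neuron fires periodically; with finite bounds the
# inter-spike interval is the integral of dv/(v**2 + I) from v_reset to
# v_peak, i.e. 2*arctan(10) for these illustrative parameters.
isis = np.diff(spikes)
```

The same threshold-and-reset loop is what the exact low-dimensional mean-field descriptions mentioned in the abstract average over when the network is globally coupled.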
Affiliation(s)
- Bastian Pietras
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, 08018, Barcelona, Spain
2. Mobille Z, Sikandar UB, Sponberg S, Choi H. Temporal resolution of spike coding in feedforward networks with signal convergence and divergence. bioRxiv [preprint] 2024:2024.07.08.602598. PMID: 39026834. PMCID: PMC11257569. DOI: 10.1101/2024.07.08.602598.
Abstract
Convergent and divergent structures in the networks that make up biological brains are found universally across many species and brain regions at various scales. Neurons in these networks fire action potentials, or "spikes", whose precise timing is increasingly appreciated as a rich source of information about both sensory input and motor output. While previous theories of coding in convergent and divergent networks have largely neglected the role of precise spike timing, our model and analyses place this aspect at the forefront. For a suite of stimuli with different timescales, we demonstrate that structural bottlenecks (small groups of neurons) post-synaptic to network convergence have a stronger preference for spike timing codes than expansion layers created by structural divergence. Additionally, we found that a simple network model with convergence and divergence ratios similar to those found experimentally can reproduce the relative contribution of spike timing information about motor output in the hawkmoth Manduca sexta. Our simulations and analyses suggest a relationship between the level of convergent/divergent structure present in a feedforward network and the loss of stimulus information encoded by its population spike trains as their temporal resolution decreases, which could be confirmed experimentally across diverse neural systems in future studies. We further show that this relationship generalizes across different models and measures, implying a potentially fundamental link between network structure and coding strategy using spikes.
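The central manipulation here, evaluating a spike code at progressively coarser temporal resolution, amounts to re-binning spike trains. The toy sketch below (hypothetical spike times, not the paper's data) shows two responses that are distinguishable at fine resolution but collapse onto identical count vectors at coarse resolution, i.e. the timing code is lost.

```python
import numpy as np

def bin_spikes(spike_times, t_max, bin_width):
    """Discretize a spike train into counts per time bin."""
    n_bins = int(np.ceil(t_max / bin_width))
    counts = np.zeros(n_bins, dtype=int)
    for t in spike_times:
        counts[min(int(t / bin_width), n_bins - 1)] += 1
    return counts

# Two hypothetical responses with identical spike counts but shifted timing.
resp_a = [0.010, 0.030, 0.050]   # spike times in seconds
resp_b = [0.015, 0.035, 0.055]

# 5 ms bins preserve the timing difference; 20 ms bins erase it.
fine_a, fine_b = (bin_spikes(r, 0.06, 0.005) for r in (resp_a, resp_b))
coarse_a, coarse_b = (bin_spikes(r, 0.06, 0.020) for r in (resp_a, resp_b))
```

Information-theoretic measures computed on such binned responses (as in the paper) then quantify how much stimulus information survives at each resolution.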
Affiliation(s)
- Zach Mobille
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332
- Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332
- Usama Bin Sikandar
- School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Simon Sponberg
- Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332
- School of Physics, Georgia Institute of Technology, Atlanta, GA 30332
- School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA 30332
- Hannah Choi
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332
- Quantitative Biosciences Program, Georgia Institute of Technology, Atlanta, GA 30332
3. Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024; 121:e2311893121. PMID: 38913890. PMCID: PMC11228465. DOI: 10.1073/pnas.2311893121.
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity and its asymmetry, the duration and adaptive nature of feedforward and feedback neuronal filters, the imprecision in spike generation under constant stimulation, and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
4. Paliwal S, Ocker GK, Brinkman BAW. Metastability in networks of nonlinear stochastic integrate-and-fire neurons. arXiv [preprint] 2024:arXiv:2406.07445v1. PMID: 38947936. PMCID: PMC11213153.
Abstract
Neurons in the brain continuously process the barrage of sensory inputs they receive from the environment. A wide array of experimental work has shown that the collective activity of neural populations encodes and processes this constant bombardment of information. How these collective patterns of activity depend on single neuron properties is often unclear. Single-neuron recordings have shown that individual neural responses to inputs are nonlinear, which prevents a straightforward extrapolation from single neuron features to emergent collective states. In this work, we use a field theoretic formulation of a stochastic leaky integrate-and-fire model to study the impact of nonlinear intensity functions on macroscopic network activity. We show that the interplay between nonlinear spike emission and membrane potential resets can (i) give rise to metastable transitions between active firing rate states, and (ii) enhance or suppress mean firing rates and membrane potentials in opposite directions.
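A minimal sketch of the single-neuron building block studied here: a leaky integrate-and-fire neuron with stochastic ("escape noise") spike emission governed by a nonlinear intensity function of the membrane potential, plus a post-spike reset. The exponential intensity and all parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_noisy_lif(I=0.8, tau=0.02, v_reset=0.0, v_th=1.0,
                       gain=50.0, dt=1e-3, t_max=50.0):
    """LIF neuron with stochastic spike emission: instead of a hard
    threshold, spikes occur as an inhomogeneous Poisson process whose
    intensity phi(v) rises steeply as v approaches v_th."""
    def phi(v):
        return gain * np.exp(v - v_th)   # example nonlinear intensity

    v = v_reset
    n_spikes = 0
    for _ in range(int(t_max / dt)):
        v += dt / tau * (-v + I)                       # leaky integration
        if rng.random() < 1 - np.exp(-phi(v) * dt):    # spike prob. in dt
            n_spikes += 1
            v = v_reset                                # post-spike reset
    return n_spikes / t_max                            # mean rate (Hz)

rate = simulate_noisy_lif()
```

It is exactly this interplay between the nonlinear intensity phi(v) and the reset that the paper's field-theoretic analysis shows can produce metastable population states.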
Affiliation(s)
- Siddharth Paliwal
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, 11794, USA
- Gabriel Koch Ocker
- Department of Mathematics and Statistics, Boston University, Boston, MA, 02215, USA
- Braden A W Brinkman
- Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY, 11794, USA
5. Marasco A, Tribuzi C, Lupascu CA, Migliore M. Modeling realistic synaptic inputs of CA1 hippocampal pyramidal neurons and interneurons via Adaptive Generalized Leaky Integrate-and-Fire models. Math Biosci 2024; 372:109192. PMID: 38640998. DOI: 10.1016/j.mbs.2024.109192.
Abstract
Computational models of brain regions are crucial for understanding neuronal network dynamics and the emergence of cognitive functions. However, current supercomputing limitations hinder the implementation of large networks with millions of morphologically and biophysically accurate neurons. Consequently, research has focused on simplified spiking neuron models, ranging from the computationally fast Leaky Integrate and Fire (LIF) linear models to more sophisticated non-linear implementations like Adaptive Exponential (AdEx) and Izhikevich models, through Generalized Leaky Integrate and Fire (GLIF) approaches. However, in almost all cases, these models are tuned (and can be validated) only under constant current injections, and they may not, in general, reproduce experimental findings under variable currents. This study introduces an Adaptive GLIF (A-GLIF) approach that addresses this limitation by incorporating a new set of update rules. The extended A-GLIF model successfully reproduces both constant and variable current inputs, and it was validated against the results obtained using a biophysically accurate model neuron. This enhancement provides researchers with a tool to optimize spiking neuron models using classic experimental traces under constant current injections while reliably predicting responses to synaptic inputs, which can be confidently used for large-scale network implementations.
Affiliation(s)
- A Marasco
- Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy; Institute of Biophysics, National Research Council, Palermo, Italy.
- C Tribuzi
- Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy
- C A Lupascu
- Institute of Biophysics, National Research Council, Palermo, Italy
- M Migliore
- Institute of Biophysics, National Research Council, Palermo, Italy
6. Caillet AH, Phillips ATM, Modenese L, Farina D. NeuroMechanics: Electrophysiological and computational methods to accurately estimate the neural drive to muscles in humans in vivo. J Electromyogr Kinesiol 2024; 76:102873. PMID: 38518426. DOI: 10.1016/j.jelekin.2024.102873.
Abstract
The ultimate neural signal for muscle control is the neural drive sent from the spinal cord to muscles. This neural signal comprises the ensemble of action potentials discharged by the active spinal motoneurons, which is transmitted to the innervated muscle fibres to generate forces. Accurately estimating the neural drive to muscles in humans in vivo is challenging since it requires the identification of the activity of a sample of motor units (MUs) that is representative of the active MU population. Current electrophysiological recordings usually fail in this task by identifying small MU samples with over-representation of higher-threshold MUs with respect to lower-threshold ones. Here, we describe recent advances in electrophysiological methods that allow the identification of more representative samples of greater numbers of MUs than previously possible. This is obtained with large and very dense arrays of electromyographic electrodes. Moreover, recently developed computational methods of data augmentation further extend experimental MU samples to infer the activity of the full MU pool. In conclusion, the combination of new electrode technologies and computational modelling allows for an accurate estimate of the neural drive to muscles and opens new perspectives in the study of the neural control of movement and in neural interfacing.
Affiliation(s)
- Andrew T M Phillips
- Department of Civil and Environmental Engineering, Imperial College London, UK
- Luca Modenese
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia.
- Dario Farina
- Department of Bioengineering, Imperial College London, UK.
7. Ramezani Z, André V, Khizroev S. Modeling the effect of magnetoelectric nanoparticles on neuronal electrical activity: An analog circuit approach. Biointerphases 2024; 19:031001. PMID: 38738941. DOI: 10.1116/5.0199163.
Abstract
This paper introduces a physical neuron model that incorporates magnetoelectric nanoparticles (MENPs) as an essential electrical circuit component to wirelessly control local neural activity. Availability of such a model is important as MENPs, due to their magnetoelectric effect, can wirelessly and noninvasively modulate neural activity, which, in turn, has implications both for finding cures for neurological diseases and for creating a wireless noninvasive high-resolution brain-machine interface. When placed on a neuronal membrane, MENPs act as magnetic-field-controlled finite-size electric dipoles that generate local electric fields across the membrane in response to magnetic fields, thus allowing local ion channels to be controllably activated and an action potential to be locally initiated. Herein, the description of the neuron's electrical characteristics is based on ion channel activation and inhibition mechanisms. A MENP-based memristive Hodgkin-Huxley circuit model is extracted by combining the Hodgkin-Huxley model and an equivalent circuit model for a single MENP. In this model, each MENP becomes an integral part of the neuron, thus enabling wireless local control of the neuron's electric circuit itself. Furthermore, the model is expanded to include multiple MENPs to describe collective effects in neural systems.
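For orientation, the Hodgkin-Huxley backbone of such a circuit model can be sketched as below. The MENP contribution is crudely approximated here as an extra magnetically gated stimulus current; this is a stand-in assumption, not the paper's finite-size dipole coupling, and all values are the standard squid-axon parameters.

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + np.exp(-(V + 35) / 10))

def simulate_hh(I_ext, dt=0.01, t_max=50.0):
    """Euler-integrate the Hodgkin-Huxley equations; I_ext(t) in uA/cm^2.
    Returns the number of spikes (upward crossings of 0 mV)."""
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # approximate resting state
    spikes, above = 0, False
    for step in range(int(t_max / dt)):
        t = step * dt
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
        V += dt / C_m * (I_ext(t) - I_ion)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        if V > 0 and not above:
            spikes += 1
        above = V > 0
    return spikes

# A MENP under an applied magnetic field is treated here, very roughly, as
# a local depolarizing current pulse delivered while the field is on.
quiet = simulate_hh(lambda t: 0.0)
driven = simulate_hh(lambda t: 15.0 if 5.0 <= t < 40.0 else 0.0)
```

The paper's model replaces this lumped stimulus with a memristive equivalent circuit of the MENP embedded in the membrane circuit itself.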
Affiliation(s)
- Zeinab Ramezani
- Department of Electrical and Computer Engineering, College of Engineering, University of Miami, Miami, Florida 33146
- Victoria André
- Department of Biomedical Engineering, College of Engineering, University of Miami, Miami, Florida 33146
- Sakhrat Khizroev
- Department of Electrical and Computer Engineering, College of Engineering, University of Miami, Miami, Florida 33146
8. Marasco A, Tribuzi C, Iuorio A, Migliore M. Mathematical generation of data-driven hippocampal CA1 pyramidal neurons and interneurons copies via A-GLIF models for large-scale networks covering the experimental variability range. Math Biosci 2024; 371:109179. PMID: 38521453. DOI: 10.1016/j.mbs.2024.109179.
Abstract
Efficient and accurate large-scale networks are a fundamental tool in modeling brain areas, to advance our understanding of neuronal dynamics. However, their implementation faces two key issues: computational efficiency and heterogeneity. Computational efficiency is achieved using simplified neurons, whereas there are no practical solutions available to solve the problem of reproducing in a large-scale network the experimentally observed heterogeneity of the intrinsic properties of neurons. This is important, because the use of identical nodes in a network can generate artifacts which can hinder an adequate representation of the properties of a real network. To this aim, we introduce a mathematical procedure to generate an arbitrarily large number of copies of simplified hippocampal CA1 pyramidal neuron and interneuron models, which exhibit the full range of firing dynamics observed in these cells, including adapting, non-adapting, and bursting. For this purpose, we rely on a recently published adaptive generalized leaky integrate-and-fire (A-GLIF) modeling approach, leveraging its ability to reproduce the rich set of electrophysiological behaviors of these types of neurons under a variety of different stimulation currents. The generation procedure is based on a perturbation of the model's parameters related to the initial data, firing block, and internal dynamics, suitably validated against experimental data to ensure that the firing dynamics of any given cell copy remain within the experimental range. A classification procedure confirmed that the firing behavior of most of the pyramidal/interneuron copies was consistent with the experimental data. This approach allows one to obtain heterogeneous copies with mathematically controlled firing properties. A full set of heterogeneous neurons composing the CA1 region of a rat hippocampus (approximately 1.2 million neurons) is provided in a database freely available in the live paper section of the EBRAINS platform.
By adapting the underlying A-GLIF framework, it will be possible to extend the numerical approach presented here to create, in a mathematically controlled manner, an arbitrarily large number of non-identical copies of cell populations with firing properties related to other brain areas.
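The copy-generation idea, perturbing a validated base parameter set while keeping every copy inside an allowed range, can be sketched in a few lines. The parameter names, base values, and uniform multiplicative jitter below are illustrative stand-ins for the paper's experimentally validated perturbation and classification scheme.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_copies(base_params, rel_spread=0.1, n_copies=5):
    """Generate heterogeneous parameter sets by multiplicative jitter,
    keeping each parameter within +/- rel_spread of its base value
    (a toy stand-in for validated experimental ranges)."""
    copies = []
    for _ in range(n_copies):
        copies.append({k: v * (1 + rel_spread * rng.uniform(-1, 1))
                       for k, v in base_params.items()})
    return copies

# Hypothetical base parameters of a simplified neuron model (illustrative).
base = {"tau_m": 20e-3, "v_th": 1.0, "v_reset": 0.0}
population = make_copies(base, rel_spread=0.1, n_copies=100)
```

In the paper this step is followed by a classification pass that rejects any copy whose simulated firing falls outside the experimental range; that validation is omitted here.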
Affiliation(s)
- A Marasco
- Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy; Institute of Biophysics, National Research Council, Palermo, Italy.
- C Tribuzi
- Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy
- A Iuorio
- University of Vienna, Faculty of Mathematics, Vienna, Austria; Department of Engineering, Parthenope University of Naples, Naples, Italy
- M Migliore
- Institute of Biophysics, National Research Council, Palermo, Italy
9. Galván Fraile J, Scherr F, Ramasco JJ, Arkhipov A, Maass W, Mirasso CR. Modeling circuit mechanisms of opposing cortical responses to visual flow perturbations. PLoS Comput Biol 2024; 20:e1011921. PMID: 38452057. PMCID: PMC10950248. DOI: 10.1371/journal.pcbi.1011921.
Abstract
In an ever-changing visual world, animals' survival depends on their ability to perceive and respond to rapidly changing motion cues. The primary visual cortex (V1) is at the forefront of this sensory processing, orchestrating neural responses to perturbations in visual flow. However, the underlying neural mechanisms that lead to distinct cortical responses to such perturbations remain enigmatic. In this study, our objective was to uncover the neural dynamics that govern V1 neurons' responses to visual flow perturbations using a biologically realistic computational model. By subjecting the model to sudden changes in visual input, we observed opposing cortical responses in excitatory layer 2/3 (L2/3) neurons, namely, depolarizing and hyperpolarizing responses. We found that this segregation was primarily driven by the competition between external visual input and recurrent inhibition, particularly within L2/3 and L4. This division was not observed in excitatory L5/6 neurons, suggesting a more prominent role for inhibitory mechanisms in the visual processing of the upper cortical layers. Our findings share similarities with recent experimental studies focusing on the opposing influence of top-down and bottom-up inputs in the mouse primary visual cortex during visual flow perturbations.
Affiliation(s)
- J. Galván Fraile
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
- Franz Scherr
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- José J. Ramasco
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
- Anton Arkhipov
- Allen Institute, Seattle, Washington, United States of America
- Wolfgang Maass
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Claudio R. Mirasso
- Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), UIB-CSIC, Palma de Mallorca, Spain
10. Liu X, Sun C, Ye X, Zhu X, Hu C, Tan H, He S, Shao M, Li RW. Neuromorphic Nanoionics for Human-Machine Interaction: From Materials to Applications. Adv Mater 2024:e2311472. PMID: 38421081. DOI: 10.1002/adma.202311472.
Abstract
Human-machine interaction (HMI) technology has undergone significant advancements in recent years, enabling seamless communication between humans and machines. Its expansion has extended into various emerging domains, including human healthcare, machine perception, and biointerfaces, thereby magnifying the demand for advanced intelligent technologies. Neuromorphic computing, a paradigm rooted in nanoionic devices that emulate the operations and architecture of the human brain, has emerged as a powerful tool for highly efficient information processing. This paper delivers a comprehensive review of recent developments in nanoionic device-based neuromorphic computing technologies and their pivotal role in shaping the next generation of HMI. Through a detailed examination of fundamental mechanisms and behaviors, the paper explores the ability of nanoionic memristors and ion-gated transistors to emulate the intricate functions of neurons and synapses. Crucial performance metrics, such as reliability, energy efficiency, flexibility, and biocompatibility, are rigorously evaluated. Potential applications, challenges, and opportunities for these neuromorphic computing technologies in emerging HMI are discussed, shedding light on the fusion of humans with machines.
Affiliation(s)
- Xuerong Liu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Cui Sun
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Xiaoyu Ye
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Xiaojian Zhu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Cong Hu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Hongwei Tan
- Department of Applied Physics, Aalto University, Aalto, FI-00076, Finland
- Shang He
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Mengjie Shao
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Run-Wei Li
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
- Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315201, China
11. Shirani F, Choi H. On the physiological and structural contributors to the overall balance of excitation and inhibition in local cortical networks. J Comput Neurosci 2024; 52:73-107. PMID: 37837534. DOI: 10.1007/s10827-023-00863-x.
Abstract
Overall balance of excitation and inhibition in cortical networks is central to their functionality and normal operation. Such orchestrated co-evolution of excitation and inhibition is established through convoluted local interactions between neurons, which are organized by specific network connectivity structures and are dynamically controlled by modulating synaptic activities. Therefore, identifying how such structural and physiological factors contribute to the establishment of overall balance of excitation and inhibition is crucial in understanding the homeostatic plasticity mechanisms that regulate the balance. We use biologically plausible mathematical models to extensively study the effects of multiple key factors on the overall balance of a network. We characterize a network's baseline balanced state by certain functional properties, and demonstrate how variations in the physiological and structural parameters of the network shift this balance and, in particular, result in transitions of the network's spontaneous activity to high-amplitude slow oscillatory regimes. We show that deviations from the reference balanced state can be continuously quantified by measuring the ratio of mean excitatory to mean inhibitory synaptic conductances in the network. Our results suggest that the commonly observed ratio of the number of inhibitory to the number of excitatory neurons in local cortical networks is almost optimal for their stability and excitability. Moreover, the values of inhibitory synaptic decay time constants and the density of inhibitory-to-inhibitory network connectivity are critical to the overall balance and stability of cortical networks. However, network stability in our results is sufficiently robust against modulations of synaptic quantal conductances, as required by their role in learning and memory. Our study, based on extensive bifurcation analyses, thus reveals the functional optimality and criticality of structural and physiological parameters in establishing the baseline operating state of local cortical networks.
Affiliation(s)
- Farshad Shirani
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA.
- Hannah Choi
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA
12. Ma G, Yan R, Tang H. Exploiting noise as a resource for computation and learning in spiking neural networks. Patterns (N Y) 2023; 4:100831. PMID: 37876899. PMCID: PMC10591140. DOI: 10.1016/j.patter.2023.100831.
Abstract
Networks of spiking neurons underpin the extraordinary information-processing capabilities of the brain and have become pillar models in neuromorphic artificial intelligence. Despite extensive research on spiking neural networks (SNNs), most studies are established on deterministic models, overlooking the inherent non-deterministic, noisy nature of neural computations. This study introduces the noisy SNN (NSNN) and the noise-driven learning (NDL) rule by incorporating noisy neuronal dynamics to exploit the computational advantages of noisy neural processing. The NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation and learning. We demonstrate that this framework leads to spiking neural models with competitive performance, improved robustness against challenging perturbations compared with deterministic SNNs, and better reproduction of the probabilistic computations observed in neural coding. Generally, this study offers a powerful and easy-to-use tool for machine learning and neuromorphic intelligence practitioners, as well as computational neuroscience researchers.
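The key structural point, that additive membrane noise turns a hard spiking threshold into a smooth firing probability which noise-driven learning can then exploit, can be seen in a few lines. The threshold, noise scale, and sample count below are arbitrary illustrative choices, not the paper's NSNN formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_spike(u, sigma=0.3, n_samples=10000):
    """Noisy neuron: fires iff membrane potential u plus Gaussian noise
    crosses threshold 0. Returns the empirical firing probability."""
    return float(np.mean(u + sigma * rng.standard_normal(n_samples) > 0.0))

# With noise, the deterministic step function becomes a smooth,
# sigmoid-like firing probability in u -- a differentiable quantity
# that NDL-style learning rules can exploit.
p_below, p_at, p_above = noisy_spike(-0.3), noisy_spike(0.0), noisy_spike(0.3)
```

A deterministic SNN would return exactly 0, (undefined edge), and 1 for the same three membrane potentials; the noise replaces that discontinuity with a graded response.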
Affiliation(s)
- Gehua Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- Rui Yan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, PRC
- Huajin Tang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, PRC
- State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, PRC
13
Kim D, Kim IJ, Lee JS. Demonstration of the threshold-switching memory devices using EMIm(AlCl3)Cl and ZnO for neuromorphic applications. NANOTECHNOLOGY 2023; 35:015203. [PMID: 37830748 DOI: 10.1088/1361-6528/acf93d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 09/13/2023] [Indexed: 10/14/2023]
Abstract
The threshold-switching behavior of synapses enables energy-efficient operation in neural computing systems. Here, we demonstrate threshold-switching memory devices by inserting a ZnO layer into ionic synaptic devices. EMIm(AlCl3)Cl is utilized as the electrolyte because its conductance can be tuned by the charge states of the Al-based ions. The redox reactions of the Al ions in the electrolyte give rise to analog resistive switching characteristics, such as excitatory postsynaptic current, paired-pulse facilitation, potentiation, and depression. By inserting the ZnO layer into the EMIm(AlCl3)Cl-based ionic synaptic devices, threshold-switching behavior is demonstrated: using the resistivity difference between ZnO and EMIm(AlCl3)Cl, the analog resistive switching behavior is tuned into threshold-switching behavior, which is achieved by applying spike stimuli to the device. Demonstrating threshold switching in ionic synaptic devices opens a path toward high energy efficiency for ion-based artificial synapses.
Affiliation(s)
- Dongshin Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Ik-Jyae Kim
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
- Jang-Sik Lee
- Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang 37673, Republic of Korea
14
Marasco A, Spera E, De Falco V, Iuorio A, Lupascu CA, Solinas S, Migliore M. An Adaptive Generalized Leaky Integrate-and-Fire Model for Hippocampal CA1 Pyramidal Neurons and Interneurons. Bull Math Biol 2023; 85:109. [PMID: 37792146 PMCID: PMC10550887 DOI: 10.1007/s11538-023-01206-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 08/24/2023] [Indexed: 10/05/2023]
Abstract
Full-scale morphologically and biophysically realistic model networks, aiming at modeling multiple brain areas, provide an invaluable tool to make significant scientific advances from in-silico experiments on cognitive functions to digital twin implementations. Due to the current technical limitations of supercomputer systems in terms of computational power and memory requirements, these networks must be implemented using (at least) simplified neurons. A class of models that achieves a reasonable compromise between accuracy and computational efficiency is given by generalized leaky integrate-and-fire models complemented by suitable initial and update conditions. However, we found that these models cannot reproduce the complex and highly variable firing dynamics exhibited by neurons in several brain regions, such as the hippocampus. In this work, we propose an adaptive generalized leaky integrate-and-fire model for hippocampal CA1 neurons and interneurons, in which the nonlinear nature of the firing dynamics is successfully reproduced by linear ordinary differential equations equipped with nonlinear, more realistic initial and update conditions after each spike event, which strictly depend on the external stimulation current. A mathematical analysis of the stability of the equilibria, as well as of the monotonicity properties of the analytical solution for the membrane potential, allowed us (i) to determine general constraints on model parameters, reducing the computational cost of an optimization procedure based on spike times in response to a set of constant current injections, and (ii) to identify additional constraints to quantitatively reproduce and predict the experimental traces from 85 neurons and interneurons in response to any stimulation protocol using constant and piecewise constant current injections. Finally, this approach makes it easy to implement a procedure to create infinite copies of neurons with mathematically controlled firing properties, statistically indistinguishable from experiments, to better reproduce the full range and variability of the firing scenarios observed in a real network.
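The modeling idea, linear subthreshold dynamics with nonlinear, stimulus-dependent update conditions applied at each spike, can be sketched as follows. This is a deliberately simplified toy in the spirit of the adaptive generalized leaky integrate-and-fire model, not the authors' fitted CA1 model; the parameters and the particular update rule are illustrative assumptions.

```python
def simulate_aglif(I_ext, T=500.0, dt=0.1, tau=20.0, v_th=1.0,
                   beta=0.2, tau_w=100.0):
    """Leaky integrator with a spike-triggered adaptation variable w.
    Between spikes both variables follow linear ODEs; the nonlinearity
    lives entirely in the update conditions applied at each spike
    (here, an increment of w that scales with the input current)."""
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau * (-v + I_ext - w)   # linear subthreshold dynamics
        w += dt / tau_w * (-w)             # adaptation decays between spikes
        if v >= v_th:
            spikes.append(step * dt)       # record the spike time
            v = 0.0                        # reset condition
            w += beta * I_ext              # current-dependent update condition
    return spikes

spikes = simulate_aglif(I_ext=1.5)
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(len(spikes), isis[:2])  # early interspike intervals lengthen as w builds up
```

Because the between-spike equations are linear, the trajectory between spikes has a closed-form solution, which is what makes this model class cheap enough for full-scale network simulations.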
Affiliation(s)
- Addolorata Marasco
- Department of Mathematics and Applications, University of Naples Federico II, Via Cintia ed. 5A, 80126 Naples, Italy
- Institute of Biophysics, National Research Council, Via Ugo La Malfa 153, 90146 Palermo, Italy
- Emiliano Spera
- Institute of Biophysics, National Research Council, Via Ugo La Malfa 153, 90146 Palermo, Italy
- Vittorio De Falco
- Scuola Superiore Meridionale, Largo San Marcellino 10, 80138 Naples, Italy
- Istituto Nazionale di Fisica Nucleare di Napoli, Via Cintia ed. 6, 80126 Naples, Italy
- Annalisa Iuorio
- Faculty of Mathematics, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria
- Department of Engineering, Parthenope University of Naples, Centro Direzionale - Isola C4, 80143 Naples, Italy
- Carmen Alina Lupascu
- Institute of Biophysics, National Research Council, Via Ugo La Malfa 153, 90146 Palermo, Italy
- Sergio Solinas
- Department of Biomedical Science, University of Sassari, Viale San Pietro 23, 07100 Sassari, Italy
- Michele Migliore
- Institute of Biophysics, National Research Council, Via Ugo La Malfa 153, 90146 Palermo, Italy
15
Trinh AT, Girardi-Schappo M, Béïque JC, Longtin A, Maler L. Adaptive spike threshold dynamics associated with sparse spiking of hilar mossy cells are captured by a simple model. J Physiol 2023; 601:4397-4422. [PMID: 37676904 DOI: 10.1113/jp283728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Accepted: 08/17/2023] [Indexed: 09/09/2023] Open
Abstract
Hilar mossy cells (hMCs) in the dentate gyrus (DG) receive inputs from DG granule cells (GCs), CA3 pyramidal cells and inhibitory interneurons, and provide feedback input to GCs. Behavioural and in vivo recording experiments implicate hMCs in pattern separation, navigation and spatial learning. Our experiments link hMC intrinsic excitability to their synaptically evoked in vivo spiking outputs. We performed electrophysiological recordings from DG neurons and found that hMCs displayed an adaptive spike threshold that increased both in proportion to the intensity of injected currents and in response to spiking itself, returning to baseline over a long timescale, thereby instantaneously limiting their firing rate responses. The hMC activity is additionally limited by a prominent medium after-hyperpolarizing potential (AHP) generated by small-conductance K+ channels. We hypothesize that these intrinsic hMC properties are responsible for their low in vivo firing rates. Our findings extend previous studies that compare hMCs, CA3 pyramidal cells and hilar inhibitory cells and provide novel quantitative data that contrast the intrinsic properties of these cell types. We developed a phenomenological exponential integrate-and-fire model that closely reproduces the hMC adaptive threshold nonlinearities with respect to their threshold dependence on input current intensity, evoked spike latency and long-lasting spike-induced increase in spike threshold. Our robust and computationally efficient model is amenable to incorporation into large network models of the DG that will deepen our understanding of the neural bases of pattern separation, spatial navigation and learning. KEY POINTS: Previous studies have shown that hilar mossy cells (hMCs) are implicated in pattern separation and the formation of spatial memory, but how their intrinsic properties relate to their in vivo spiking patterns is still unknown. Here we show that hMCs display electrophysiological properties that distinguish them from the other hilar cell types, including a highly adaptive spike threshold that decays slowly. The spike-dependent increase in threshold, combined with an after-hyperpolarizing potential mediated by a slow K+ conductance, is hypothesized to be responsible for the low firing rate of hMCs observed in vivo. The hMCs' features are well captured by a modified stochastic exponential integrate-and-fire model that has the unique feature of a threshold intrinsically dependent on both the stimulus intensity and the spiking history. This computational model will allow future work to study how hMCs can contribute to spatial memory formation and navigation.
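The key mechanism, a spike threshold that rises with the injected current and after every spike and then relaxes slowly, can be sketched with an exponential integrate-and-fire neuron. The sketch below only mimics the qualitative threshold dynamics described above; it is not the authors' fitted model, and all parameter values are assumptions.

```python
import math

def simulate_adaptive_eif(I_ext, T=1000.0, dt=0.05, tau=20.0, delta_t=2.0,
                          v_t0=10.0, v_reset=0.0, v_cut=30.0,
                          k_i=0.5, d_theta=3.0, tau_theta=300.0):
    """Exponential integrate-and-fire neuron with an adaptive threshold:
    the spike-initiation threshold rises in proportion to the injected
    current (k_i * I_ext) and jumps by d_theta after every spike,
    relaxing back over the slow timescale tau_theta."""
    v, theta, spikes = v_reset, 0.0, []
    for step in range(int(T / dt)):
        v_t = v_t0 + k_i * I_ext + theta          # adaptive threshold
        dv = (-v + delta_t * math.exp((v - v_t) / delta_t) + I_ext) / tau
        v += dt * dv
        theta -= dt * theta / tau_theta           # slow decay of the shift
        if v >= v_cut:                            # spike detected at the cutoff
            spikes.append(step * dt)
            v = v_reset
            theta += d_theta                      # spike-triggered increase
    return spikes

spikes = simulate_adaptive_eif(I_ext=20.0)
print(len(spikes))  # sparse firing despite strong constant drive
```

With these (assumed) parameters the threshold adaptation enforces sparse spiking: each spike lifts the effective threshold beyond what the input can reach, and the neuron fires again only after the threshold shift has decayed.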
Affiliation(s)
- Anh-Tuan Trinh
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Trondheim, Trøndelag, Norway
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Mauricio Girardi-Schappo
- Departamento de Física, Universidade Federal de Santa Catarina, Florianópolis, Santa Catarina, Brazil
- Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Jean-Claude Béïque
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Brain and Mind Institute, University of Ottawa, Ottawa, Ontario, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, Ontario, Canada
- André Longtin
- Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
- Brain and Mind Institute, University of Ottawa, Ottawa, Ontario, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, Ontario, Canada
- Leonard Maler
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Brain and Mind Institute, University of Ottawa, Ottawa, Ontario, Canada
- Center for Neural Dynamics, University of Ottawa, Ottawa, Ontario, Canada
16
Kern FB, Chao ZC. Short-term neuronal and synaptic plasticity act in synergy for deviance detection in spiking networks. PLoS Comput Biol 2023; 19:e1011554. [PMID: 37831721 PMCID: PMC10599548 DOI: 10.1371/journal.pcbi.1011554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 10/25/2023] [Accepted: 09/29/2023] [Indexed: 10/15/2023] Open
Abstract
Sensory areas of cortex respond more strongly to infrequent stimuli when these violate previously established regularities, a phenomenon known as deviance detection (DD). Previous modeling work has mainly attempted to explain DD on the basis of synaptic plasticity. However, a large fraction of cortical neurons also exhibit firing rate adaptation, an underexplored potential mechanism. Here, we investigate DD in a spiking neuronal network model with two types of short-term plasticity, fast synaptic short-term depression (STD) and slower threshold adaptation (TA). We probe the model with an oddball stimulation paradigm and assess DD by evaluating the network responses. We find that TA is sufficient to elicit DD. It achieves this by habituating neurons near the stimulation site that respond earliest to the frequently presented standard stimulus (local fatigue), which diminishes the response and promotes the recovery (global fatigue) of the wider network. Further, we find a synergistic effect between STD and TA, which interact to achieve greater DD than the sum of their individual effects. We show that this synergy is caused by the local fatigue added by STD, which inhibits the global response to the frequently presented stimulus, allowing greater recovery of TA-mediated global fatigue and making the network more responsive to the deviant stimulus. Finally, we show that the magnitude of DD strongly depends on the timescale of stimulation. We conclude that highly predictable information can be encoded in strong local fatigue, which allows greater global recovery and subsequent heightened sensitivity for DD.
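The local-fatigue intuition can be illustrated with a toy oddball simulation in which each input channel carries its own slowly recovering fatigue variable, a crude stand-in for threshold adaptation and synaptic depression rather than the paper's spiking network; all constants are illustrative assumptions.

```python
import numpy as np

def oddball_responses(n_trials=20, deviant_every=5, tau_a=10.0):
    """Two input channels drive a readout; each channel has its own
    slowly recovering fatigue variable. The rarely used (deviant)
    channel recovers between presentations, so it evokes larger
    responses than the frequently used (standard) channel."""
    fatigue = np.zeros(2)
    responses = []
    for trial in range(n_trials):
        ch = 1 if (trial + 1) % deviant_every == 0 else 0    # every 5th trial is deviant
        responses.append((ch, max(0.0, 1.0 - fatigue[ch])))  # fatigued channels respond less
        fatigue *= np.exp(-1.0 / tau_a)  # recovery between trials
        fatigue[ch] += 0.3               # use-dependent fatigue
    return responses

resp = oddball_responses()
std = [r for ch, r in resp if ch == 0]
dev = [r for ch, r in resp if ch == 1]
print(sum(dev) / len(dev) > sum(std) / len(std))  # deviance detection: prints True
```

The same bookkeeping generalizes to many channels; what produces DD here is only the asymmetry in presentation frequency, not any difference in the stimuli themselves.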
Affiliation(s)
- Felix Benjamin Kern
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Zenas C. Chao
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
17
Zdeblick DN, Shea-Brown ET, Witten DM, Buice MA. Modeling functional cell types in spike train data. PLoS Comput Biol 2023; 19:e1011509. [PMID: 37824442 PMCID: PMC10569560 DOI: 10.1371/journal.pcbi.1011509] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Accepted: 09/12/2023] [Indexed: 10/14/2023] Open
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
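The alternation between inferring type assignments and re-estimating shared parameters can be illustrated with expectation-maximization on a toy one-dimensional Gaussian mixture; this is a generic EM sketch, not the paper's hierarchical model of neural responses, and the synthetic data are made up for illustration.

```python
import numpy as np

def em_cell_types(x, k=2, n_iter=50):
    """Tiny 1-D Gaussian-mixture EM: cluster assignments play the role
    of functional cell types, and the component means play the role of
    parameters shared within a type."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    sigma, pi = np.ones(k), np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each type for each cell.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions, means, and spreads.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return mu, resp.argmax(axis=1)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 0.5, 100), rng.normal(3, 0.5, 100)])
mu, labels = em_cell_types(x)
print(np.sort(np.round(mu, 1)))  # component means recovered near -3 and 3
```

In the paper's setting the E-step would assign cells to types and the M-step would refit single-cell model parameters per type; the soft-assignment structure is the same.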
Affiliation(s)
- Daniel N. Zdeblick
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, United States of America
- Eric T. Shea-Brown
- Department of Applied Math, University of Washington, Seattle, Washington, United States of America
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
- Daniela M. Witten
- Department of Statistics and Biostatistics, University of Washington, Seattle, Washington, United States of America
- Michael A. Buice
- Department of Applied Math, University of Washington, Seattle, Washington, United States of America
- MindScope Program, Allen Institute, Seattle, Washington, United States of America
18
Dey S, Dimitrov AG. Sensitivity analysis of point neuron model simulations implemented on neuromorphic hardware. Front Neurosci 2023; 17:1198282. [PMID: 37694108 PMCID: PMC10484528 DOI: 10.3389/fnins.2023.1198282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Accepted: 08/10/2023] [Indexed: 09/12/2023] Open
Abstract
With the ongoing growth in the field of neuro-inspired computing, newly arriving computational architectures demand extensive validation and testing against existing benchmarks to establish their competence and value. In our work, we break down the validation step into two parts: (1) establishing a methodological and numerical groundwork for comparing neuromorphic and conventional platforms, and (2) performing a sensitivity analysis on the obtained model regime to assess its robustness. We study the neuronal dynamics based on the leaky integrate-and-fire (LIF) model, which is built upon data from the mouse visual cortex spanning a set of anatomical and physiological constraints. Intel Corp.'s first neuromorphic chip, "Loihi," serves as our neuromorphic platform, and results on it are validated against the classical simulations. After setting up a model that allows a seamless mapping between Loihi and the classical simulations, we find that Loihi replicates classical simulations very efficiently and with high precision. This model is then subjected to the second phase of validation, through sensitivity analysis, by assessing the impact on the cost function as values of the significant model parameters are varied. This is done in two steps: (1) assessing the impact while changing one parameter at a time, and (2) assessing the impact while changing two parameters at a time. We observe that the model is quite robust for the majority of the parameters, with only slight changes in the cost function. We also identify a subset of model parameters whose changes make the model more sensitive and which thus need to be defined more precisely.
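The one-parameter-at-a-time phase of such a sensitivity analysis can be sketched generically: perturb each parameter around the baseline, record the change in the cost function, and rank parameters by impact. The cost function and parameter names below are invented for illustration and are not the study's actual cost or parameters.

```python
def one_at_a_time_sensitivity(cost_fn, base_params, rel_step=0.1):
    """Perturb each parameter individually by +/- rel_step (relative)
    and record the largest resulting change in the cost function,
    returning parameters ranked from most to least sensitive."""
    base_cost = cost_fn(base_params)
    impact = {}
    for name, value in base_params.items():
        deltas = []
        for factor in (1 - rel_step, 1 + rel_step):
            p = dict(base_params)
            p[name] = value * factor
            deltas.append(abs(cost_fn(p) - base_cost))
        impact[name] = max(deltas)
    return dict(sorted(impact.items(), key=lambda kv: -kv[1]))

# Toy cost: quadratic in two parameters with very different curvatures.
cost = lambda p: 100 * (p["tau"] - 20) ** 2 + (p["v_th"] - 1) ** 2
ranking = one_at_a_time_sensitivity(cost, {"tau": 20.0, "v_th": 1.0})
print(list(ranking))  # prints ['tau', 'v_th']: cost is far more sensitive to tau
```

The two-at-a-time phase is the natural extension: loop over parameter pairs and perturb both, which exposes interaction effects that the one-at-a-time scan misses.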
Affiliation(s)
- Srijanie Dey
- Department of Mathematics, Washington State University, Vancouver, WA, United States
- Alexander G. Dimitrov
- Department of Mathematics, Washington State University, Vancouver, WA, United States
19
Haufler D, Ito S, Koch C, Arkhipov A. Simulations of cortical networks using spatially extended conductance-based neuronal models. J Physiol 2023; 601:3123-3139. [PMID: 36567262 PMCID: PMC10290729 DOI: 10.1113/jp284030] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 12/19/2022] [Indexed: 12/27/2022] Open
Abstract
The Hodgkin-Huxley model of action potential generation and propagation, published in the Journal of Physiology in 1952, initiated the field of biophysically detailed computational modelling in neuroscience, which has expanded to encompass a variety of species and components of the nervous system. Here we review the developments in this area with a focus on efforts in the community towards modelling the mammalian neocortex using spatially extended conductance-based neuronal models. The Hodgkin-Huxley formalism and related foundational contributions, such as Rall's cable theory, remain widely used in these efforts to the current day. We argue that at present the field is undergoing a qualitative change due to new very rich datasets describing the composition, connectivity and functional activity of cortical circuits, which are being integrated systematically into large-scale network models. This trend, combined with the accelerating development of convenient software tools supporting such complex modelling projects, is giving rise to highly detailed models of the cortex that are extensively constrained by the data, enabling computational investigation of a multitude of questions about cortical structure and function.
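For reference, the Hodgkin-Huxley formalism mentioned above can be written down compactly. The sketch below integrates the standard four-dimensional point-neuron model (commonly used modern parameterization, forward Euler) and counts spikes as upward zero crossings; it is a minimal single-compartment illustration, not a spatially extended model.

```python
import math

def simulate_hh(I=10.0, T=50.0, dt=0.01):
    """Classical Hodgkin-Huxley point neuron: membrane equation plus
    gating variables m, h, n, integrated with forward Euler.
    Voltage in mV, time in ms, I in uA/cm^2."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    # Voltage-dependent rate functions for the gating variables
    # (Na activation m, Na inactivation h, K activation n).
    am = lambda v: 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
    bm = lambda v: 4.0 * math.exp(-(v + 65) / 18)
    ah = lambda v: 0.07 * math.exp(-(v + 65) / 20)
    bh = lambda v: 1 / (1 + math.exp(-(v + 35) / 10))
    an = lambda v: 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
    bn = lambda v: 0.125 * math.exp(-(v + 65) / 80)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # rest state
    n_spikes, above = 0, False
    for step in range(int(T / dt)):
        INa = gNa * m ** 3 * h * (v - ENa)
        IK = gK * n ** 4 * (v - EK)
        IL = gL * (v - EL)
        v += dt / C * (I - INa - IK - IL)
        m += dt * (am(v) * (1 - m) - bm(v) * m)
        h += dt * (ah(v) * (1 - h) - bh(v) * h)
        n += dt * (an(v) * (1 - n) - bn(v) * n)
        if v > 0 and not above:   # count upward zero crossings as spikes
            n_spikes += 1
        above = v > 0
    return n_spikes

print(simulate_hh(I=10.0))  # suprathreshold drive produces repetitive firing
```

Spatially extended models replicate these membrane equations in every compartment and couple neighboring compartments through Rall's cable theory, which is exactly the combination the review describes.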
Affiliation(s)
- Shinya Ito
- Mindscope Program, Allen Institute, Seattle, 98109
20
Arthur BJ, Kim CM, Chen S, Preibisch S, Darshan R. A scalable implementation of the recursive least-squares algorithm for training spiking neural networks. Front Neuroinform 2023; 17:1099510. [PMID: 37441157 PMCID: PMC10333503 DOI: 10.3389/fninf.2023.1099510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Accepted: 06/05/2023] [Indexed: 07/15/2023] Open
Abstract
Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also opens the possibility of training models while in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
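The core recursive least-squares update that such implementations optimize can be sketched in a few lines. Here it is applied to a plain regression stream rather than a spiking network, and none of the paper's CPU/GPU engineering is reflected; the toy data are invented for illustration.

```python
import numpy as np

def rls_train(X, targets, alpha=1.0):
    """Recursive least-squares: update readout weights w and the inverse
    correlation matrix P after every sample, as in FORCE-style training
    of recurrent networks (applied here to a plain regression stream)."""
    n = X.shape[1]
    w = np.zeros(n)
    P = np.eye(n) / alpha          # running inverse of the correlation matrix
    for x, y in zip(X, targets):
        Px = P @ x
        k = Px / (1.0 + x @ Px)    # gain vector
        err = x @ w - y            # prediction error before the update
        w -= err * k               # error-driven weight correction
        P -= np.outer(k, Px)       # rank-one update of the inverse
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
w = rls_train(X, X @ w_true)
print(np.round(w, 2))  # close to [1, -2, 0.5]
```

The rank-one `P` update is what makes the method recursive (no matrix inversion per step), and it is also the O(n^2)-per-synapse cost that the paper's optimized implementations attack.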
Affiliation(s)
- Benjamin J. Arthur
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Christopher M. Kim
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, United States
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Stephan Preibisch
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
21
Chialva U, González Boscá V, Rotstein HG. Low-dimensional models of single neurons: a review. BIOLOGICAL CYBERNETICS 2023; 117:163-183. [PMID: 37060453 DOI: 10.1007/s00422-023-00960-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 03/05/2023] [Indexed: 06/13/2023]
Abstract
The classical Hodgkin-Huxley (HH) point-neuron model of action potential generation is four-dimensional. It consists of four ordinary differential equations describing the dynamics of the membrane potential and of three gating variables associated with a transient sodium current and a delayed-rectifier potassium current. Conductance-based models of HH type are higher-dimensional extensions of the classical HH model. They include a number of supplementary state variables associated with other ionic current types, and they are able to describe additional phenomena such as subthreshold oscillations, mixed-mode oscillations (subthreshold oscillations interspersed with spikes), clustering, and bursting. In this manuscript we discuss biophysically plausible and phenomenological reduced models that preserve the biophysical and/or dynamic description of models of HH type and their ability to produce complex phenomena, while having a lower number of effective dimensions (state variables). We describe several representative models, as well as systematic and heuristic methods for deriving reduced models from models of HH type.
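A classic example of the kind of reduction discussed here is the FitzHugh-Nagumo model, a two-variable phenomenological reduction of the four-dimensional HH system. The sketch below uses textbook parameter values; the spike-counting convention (excursions of the fast variable above 1) is an assumption for illustration.

```python
def simulate_fhn(I=0.5, T=200.0, dt=0.05, a=0.7, b=0.8, tau_w=12.5):
    """FitzHugh-Nagumo model: fast voltage-like variable v with cubic
    nonlinearity, slow linear recovery variable w. Two dimensions
    suffice for excitability and relaxation oscillations."""
    v, w = -1.0, 1.0
    n_spikes, above = 0, False
    for step in range(int(T / dt)):
        dv = v - v ** 3 / 3 - w + I
        dw = (v + a - b * w) / tau_w
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:   # count excursions of the fast variable
            n_spikes += 1
        above = v > 1.0
    return n_spikes

print(simulate_fhn(I=0.5))  # periodic spiking in the oscillatory regime
```

The reduction collapses the HH gating variables m, h, n into the single recovery variable w, trading biophysical interpretability of the parameters for a model whose phase plane can be analyzed directly.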
Affiliation(s)
- Ulises Chialva
- Departamento de Matemática, Universidad Nacional del Sur and CONICET, Bahía Blanca, Buenos Aires, Argentina
- Horacio G Rotstein
- Federated Department of Biological Sciences, New Jersey Institute of Technology and Rutgers University, Newark, New Jersey, USA
- Behavioral Neurosciences Program, Rutgers University, Newark, NJ, USA
- Corresponding Investigators Group, CONICET, Buenos Aires, Argentina
22
Yu F, Wu Y, Ma S, Xu M, Li H, Qu H, Song C, Wang T, Zhao R, Shi L. Brain-inspired multimodal hybrid neural network for robot place recognition. Sci Robot 2023; 8:eabm6996. [PMID: 37163608 DOI: 10.1126/scirobotics.abm6996] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Place recognition is an essential spatial intelligence capability for robots to understand and navigate the world. However, recognizing places in natural environments remains a challenging task for robots because of resource limitations and changing environments. In contrast, humans and animals can robustly and efficiently recognize hundreds of thousands of places in different conditions. Here, we report a brain-inspired general place recognition system, dubbed NeuroGPR, that enables robots to recognize places by mimicking the neural mechanism of multimodal sensing, encoding, and computing through a continuum of space and time. Our system consists of a multimodal hybrid neural network (MHNN) that encodes and integrates multimodal cues from both conventional and neuromorphic sensors. Specifically, to encode different sensory cues, we built various neural networks of spatial view cells, place cells, head direction cells, and time cells. To integrate these cues, we designed a multiscale liquid state machine that can process and fuse multimodal information effectively and asynchronously using diverse neuronal dynamics and bioinspired inhibitory circuits. We deployed the MHNN on Tianjic, a hybrid neuromorphic chip, and integrated it into a quadruped robot. Our results show that NeuroGPR achieves better performance compared with conventional and existing biologically inspired approaches, exhibiting robustness to diverse environmental uncertainty, including perceptual aliasing, motion blur, light, or weather changes. Running NeuroGPR as an overall multi-neural network workload on Tianjic showcases its advantages with 10.5 times lower latency and 43.6% lower power consumption than the commonly used mobile robot processor Jetson Xavier NX.
Affiliation(s)
- Fangwen Yu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Yujie Wu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Songchen Ma
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Mingkun Xu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Hongyi Li
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Huanyu Qu
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Chenhang Song
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Taoyi Wang
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Optical Memory National Engineering Research Center, and Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- THU-CET HIK Joint Research Center for Brain-Inspired Computing, Tsinghua University, Beijing 100084, China
23
Shirani F, Choi H. On the physiological and structural contributors to the overall balance of excitation and inhibition in local cortical networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.10.523489. [PMID: 36711468 PMCID: PMC9882012 DOI: 10.1101/2023.01.10.523489] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/18/2023]
Abstract
Overall balance of excitation and inhibition in cortical networks is central to their functionality and normal operation. Such orchestrated co-evolution of excitation and inhibition is established through convoluted local interactions between neurons, which are organized by specific network connectivity structures and are dynamically controlled by modulating synaptic activities. Therefore, identifying how such structural and physiological factors contribute to the establishment of overall balance of excitation and inhibition is crucial for understanding the homeostatic plasticity mechanisms that regulate the balance. We use biologically plausible mathematical models to extensively study the effects of multiple key factors on the overall balance of a network. We characterize a network's baseline balanced state by certain functional properties, and demonstrate how variations in physiological and structural parameters of the network perturb this balance and, in particular, cause transitions in the spontaneous activity of the network to high-amplitude slow oscillatory regimes. We show that deviations from the reference balanced state can be continuously quantified by measuring the ratio of mean excitatory to mean inhibitory synaptic conductances in the network. Our results suggest that the commonly observed ratio of the number of inhibitory to the number of excitatory neurons in local cortical networks is almost optimal for their stability and excitability. Moreover, the values of inhibitory synaptic decay time constants and the density of inhibitory-to-inhibitory network connectivity are critical to the overall balance and stability of cortical networks. However, network stability in our results is robust against modulations of synaptic quantal conductances, as required by their role in learning and memory.
24
Winston CN, Mastrovito D, Shea-Brown E, Mihalas S. Heterogeneity in Neuronal Dynamics Is Learned by Gradient Descent for Temporal Processing Tasks. Neural Comput 2023; 35:555-592. [PMID: 36827598 PMCID: PMC10044000 DOI: 10.1162/neco_a_01571] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 11/02/2022] [Indexed: 02/26/2023]
Abstract
Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance. Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
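The core GLIFR ingredient, a fully differentiable rate neuron with an adapting after-spike current, can be sketched minimally as below. The parameter values and the sigmoid nonlinearity are assumptions for illustration, not the published model:

```python
import numpy as np

def simulate_rate_neuron(I, dt=1.0, tau_v=20.0, tau_a=100.0, k_a=-0.05):
    """Differentiable rate neuron with an adapting after-spike current.

    I     : input trace (arbitrary units)
    tau_v : membrane time constant (ms)
    tau_a : after-spike current time constant (ms)
    k_a   : per-step increment of the (inhibitory) after-spike current,
            scaled by the instantaneous rate

    Every operation is smooth, so gradients of the rates with respect to
    the parameters could be taken with any autodiff library.
    """
    v, a = 0.0, 0.0
    rates = []
    for i_t in I:
        r = 1.0 / (1.0 + np.exp(-(v - 1.0)))   # smooth stand-in for spiking
        a += dt * (-a / tau_a) + k_a * r       # rate-driven adaptation current
        v += dt * (-v + i_t + a) / tau_v       # leaky voltage dynamics
        rates.append(r)
    return np.array(rates)

# Step input: the rate rises, then adapts back down as `a` builds up.
rates = simulate_rate_neuron(np.full(500, 2.0))
```

Because no hard threshold appears anywhere, standard gradient descent (rather than surrogate gradients) can in principle tune `tau_v`, `tau_a`, and `k_a` alongside synaptic weights, which is the property the paper exploits.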
Affiliation(s)
- Chloe N Winston
- Departments of Neuroscience and Computer Science, University of Washington, Seattle, WA 98195, U.S.A.
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Dana Mastrovito
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Eric Shea-Brown
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
- Stefan Mihalas
- University of Washington Computational Neuroscience Center, Seattle, WA 98195, U.S.A.
- Allen Institute for Brain Science, Seattle, WA 98109, U.S.A.
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, U.S.A.
25
Birgiolas J, Haynes V, Gleeson P, Gerkin RC, Dietrich SW, Crook S. NeuroML-DB: Sharing and characterizing data-driven neuroscience models described in NeuroML. PLoS Comput Biol 2023; 19:e1010941. [PMID: 36867658 PMCID: PMC10016719 DOI: 10.1371/journal.pcbi.1010941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Revised: 03/15/2023] [Accepted: 02/12/2023] [Indexed: 03/04/2023] Open
Abstract
As researchers develop computational models of neural systems with increasing sophistication and scale, it is often the case that fully de novo model development is impractical and inefficient. Thus arises a critical need to quickly find, evaluate, re-use, and build upon models and model components developed by other researchers. We introduce the NeuroML Database (NeuroML-DB.org), which has been developed to address this need and to complement other model sharing resources. NeuroML-DB stores over 1,500 previously published models of ion channels, cells, and networks that have been translated to the modular NeuroML model description language. The database also provides reciprocal links to other neuroscience model databases (ModelDB, Open Source Brain) as well as access to the original model publications (PubMed). These links along with Neuroscience Information Framework (NIF) search functionality provide deep integration with other neuroscience community modeling resources and greatly facilitate the task of finding suitable models for reuse. Serving as an intermediate language, NeuroML and its tooling ecosystem enable efficient translation of models to other popular simulator formats. The modular nature also enables efficient analysis of a large number of models and inspection of their properties. Search capabilities of the database, together with web-based, programmable online interfaces, allow the community of researchers to rapidly assess stored model electrophysiology, morphology, and computational complexity properties. We use these capabilities to perform a database-scale analysis of neuron and ion channel models and describe a novel tetrahedral structure formed by cell model clusters in the space of model properties and features. This analysis provides further information about model similarity to enrich database search.
Affiliation(s)
- Justas Birgiolas
- Ronin Institute, Montclair, New Jersey, United States of America
- Vergil Haynes
- School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona, United States of America
- College of Health Solutions, Arizona State University, Phoenix, Arizona, United States of America
- Padraig Gleeson
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, United Kingdom
- Richard C. Gerkin
- School of Life Sciences, Arizona State University, Tempe, Arizona, United States of America
- Suzanne W. Dietrich
- School of Mathematical and Natural Sciences, Arizona State University, Tempe, Arizona, United States of America
- Sharon Crook
- School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona, United States of America
26
Zdeblick DN, Shea-Brown ET, Witten DM, Buice MA. Modeling functional cell types in spike train data. bioRxiv 2023:2023.02.28.530327. [PMID: 36909648 PMCID: PMC10002678 DOI: 10.1101/2023.02.28.530327] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
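The "simultaneous" method's alternating structure, soft-assigning cells to types and then re-estimating type parameters, is that of expectation-maximization. A plain Gaussian-mixture EM over hypothetical single-cell parameter vectors illustrates the loop; this is a generic sketch, not the paper's variational algorithm:

```python
import numpy as np

def em_gmm(X, mu0, n_iter=50):
    """EM for a spherical Gaussian mixture: the E-step soft-assigns cells
    to types, the M-step re-estimates each type's mean, variance, weight."""
    n, d = X.shape
    k = mu0.shape[0]
    mu = mu0.astype(float).copy()
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] proportional to pi_j N(x_i | mu_j, var_j I)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)      # (n, k) squared distances
        logp = np.log(pi) - 0.5 * d * np.log(var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)             # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances from soft counts
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * nk)
        pi = nk / n
    return mu, r

# Two synthetic "cell types" in a 2-D parameter space, initialized from
# the two cells most extreme along the first parameter axis.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
mu0 = X[[X[:, 0].argmin(), X[:, 0].argmax()]]
mu, resp = em_gmm(X, mu0)
```

In the paper's setting the per-type Gaussian over parameters is replaced by a hierarchical model over single-cell dynamics, and the E-step requires variational inference, but the alternation is the same.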
Affiliation(s)
- Michael A. Buice
- Applied Math, University of Washington
- Allen Institute MindScope Program
27
Zhang W, Yin M, Jiang M, Dai Q. Partitioned estimation methodology of biological neuronal networks with topology-based module detection. Comput Biol Med 2023; 154:106552. [PMID: 36738704 DOI: 10.1016/j.compbiomed.2023.106552] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 12/27/2022] [Accepted: 01/11/2023] [Indexed: 02/02/2023]
Abstract
Parameter estimation for neuronal networks is closely related to information-processing mechanisms in neural systems, but estimating synaptic parameters is a time-consuming task. Because of complex interactions between neurons, the computational efficiency and accuracy of estimation methods are relatively low. Meanwhile, inherent topological properties such as core-periphery and modular structures are not fully exploited during estimation. To improve efficiency and accuracy, this study proposes a two-stage PartitionMLE method that introduces detected neuronal modules as topological constraints on the estimation. PartitionMLE first decomposes the system into multiple non-overlapping neuronal modules by performing topology-based module detection. Dynamic parameters, including intra-modular and inter-modular parameters, are then estimated in two stages, using detected hubs to connect the non-overlapping modules. The contributions of the PartitionMLE method are twofold: reducing estimation errors and improving model interpretability. Experiments on neuronal networks consisting of Hodgkin-Huxley (HH) and leaky integrate-and-fire (LIF) neurons validated the effectiveness of PartitionMLE in comparison with the single-stage MLE method.
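The two-stage logic, first detect modules from topology, then estimate parameters module by module, can be sketched on a toy linear-dynamics network. Connected-component module detection and per-module least squares below are simple stand-ins for the paper's actual detection and MLE steps:

```python
import numpy as np

def detect_modules(W, thresh=0.1):
    """Stage 1: modules = connected components of the graph with edges
    where |W| > thresh (a toy stand-in for topology-based detection)."""
    n = W.shape[0]
    A = (np.abs(W) > thresh) | (np.abs(W.T) > thresh)
    labels = -np.ones(n, dtype=int)
    for start in range(n):
        if labels[start] >= 0:
            continue
        stack, labels[start] = [start], start
        while stack:                            # depth-first component sweep
            u = stack.pop()
            for v in np.flatnonzero(A[u]):
                if labels[v] < 0:
                    labels[v] = labels[start]
                    stack.append(v)
    return labels

def fit_module_weights(X, labels):
    """Stage 2: per module, regress next-step activity on current activity
    of that module only (least squares), giving the intra-module block."""
    W_hat = np.zeros((X.shape[1], X.shape[1]))
    for m in np.unique(labels):
        idx = np.flatnonzero(labels == m)
        A, B = X[:-1][:, idx], X[1:][:, idx]
        W_hat[np.ix_(idx, idx)] = np.linalg.lstsq(A, B, rcond=None)[0].T
    return W_hat

# Toy network: two 3-neuron modules, linear dynamics x[t+1] = W x[t] + noise.
W = np.zeros((6, 6))
for idx in ([0, 1, 2], [3, 4, 5]):
    W[np.ix_(idx, idx)] = 0.2   # intra-module coupling
    W[idx, idx] = 0.5           # self-coupling
rng = np.random.default_rng(2)
X = np.zeros((1001, 6))
for t in range(1000):
    X[t + 1] = W @ X[t] + 0.1 * rng.standard_normal(6)

labels = detect_modules(W)
W_hat = fit_module_weights(X, labels)
```

Fitting each small block separately is what buys the efficiency: the per-module regressions involve far fewer parameters than one joint fit over the full network.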
Affiliation(s)
- Wei Zhang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
- Muqi Yin
- Institute of Cyber-Systems and Control, Zhejiang University, Zheda Road 38, Hangzhou, 310027, China
- Mingfeng Jiang
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
- Qi Dai
- Zhejiang Sci-Tech University, Second Street 928, Hangzhou, 310018, China
28
McClure JE, Li Z. Capturing membrane structure and function in lattice Boltzmann models. Phys Rev E 2023; 107:024408. [PMID: 36932594 DOI: 10.1103/physreve.107.024408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 01/16/2023] [Indexed: 06/18/2023]
Abstract
We develop a mesoscopic approach to model the nonequilibrium behavior of membranes at the cellular scale. Relying on lattice Boltzmann methods, we develop a solution procedure to recover the Nernst-Planck equations and Gauss's law. A general closure rule is developed to describe mass transport across the membrane, which is able to account for protein-mediated diffusion based on a coarse-grained representation. We demonstrate that our model is able to recover the Goldman equation from first principles and show that hyperpolarization occurs when membrane charging dynamics are controlled by multiple relaxation timescales. The approach provides a promising way to characterize non-equilibrium behaviors that arise due to the role of membranes in mediating transport based on realistic three-dimensional cell geometries.
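The Goldman equation that the model recovers from first principles can be evaluated directly. The relative permeabilities and ionic concentrations below are typical textbook values for a mammalian neuron, not parameters from the paper:

```python
import math

def goldman_voltage(P_K, P_Na, P_Cl, K_o, K_i, Na_o, Na_i, Cl_o, Cl_i, T=310.0):
    """Goldman-Hodgkin-Katz voltage equation (volts).

    Cations (K+, Na+) enter the log with outside concentrations in the
    numerator; the anion Cl- appears with inside and outside swapped.
    """
    R, F = 8.314, 96485.0                     # J/(mol K), C/mol
    num = P_K * K_o + P_Na * Na_o + P_Cl * Cl_i
    den = P_K * K_i + P_Na * Na_i + P_Cl * Cl_o
    return (R * T / F) * math.log(num / den)

# Typical mammalian values (relative permeabilities; concentrations in mM):
v_rest = goldman_voltage(P_K=1.0, P_Na=0.05, P_Cl=0.45,
                         K_o=5.0, K_i=140.0, Na_o=145.0, Na_i=10.0,
                         Cl_o=110.0, Cl_i=10.0)   # a resting potential near -65 mV
```

Recovering this steady-state relation from the simulated ion fluxes is one consistency check available for mesoscopic membrane models of this kind.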
Affiliation(s)
- James E McClure
- National Security Institute and Center for Soft Matter and Biological Physics, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24060, USA
- Zhe Li
- Research School of Physics, The Australian National University, Canberra, 2601, Australia
29
Harkin EF, Lynn MB, Payeur A, Boucher JF, Caya-Bissonnette L, Cyr D, Stewart C, Longtin A, Naud R, Béïque JC. Temporal derivative computation in the dorsal raphe network revealed by an experimentally driven augmented integrate-and-fire modeling framework. eLife 2023; 12:72951. [PMID: 36655738 PMCID: PMC9977298 DOI: 10.7554/elife.72951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 12/19/2022] [Indexed: 01/20/2023] Open
Abstract
By means of an expansive innervation, the serotonin (5-HT) neurons of the dorsal raphe nucleus (DRN) are positioned to enact coordinated modulation of circuits distributed across the entire brain in order to adaptively regulate behavior. Yet the network computations that emerge from the excitability and connectivity features of the DRN are still poorly understood. To gain insight into these computations, we began by carrying out a detailed electrophysiological characterization of genetically identified mouse 5-HT and somatostatin (SOM) neurons. We next developed a single-neuron modeling framework that combines the realism of Hodgkin-Huxley models with the simplicity and predictive power of generalized integrate-and-fire models. We found that feedforward inhibition of 5-HT neurons by heterogeneous SOM neurons implemented divisive inhibition, while endocannabinoid-mediated modulation of excitatory drive to the DRN increased the gain of 5-HT output. Our most striking finding was that the output of the DRN encodes a mixture of the intensity and temporal derivative of its input, and that the temporal derivative component dominates this mixture precisely when the input is increasing rapidly. This network computation primarily emerged from prominent adaptation mechanisms found in 5-HT neurons, including a previously undescribed dynamic threshold. By applying a bottom-up neural network modeling approach, our results suggest that the DRN is particularly apt to encode input changes over short timescales, reflecting one of the salient emerging computations that dominate its output to regulate behavior.
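The dynamic threshold highlighted above can be sketched with a leaky integrate-and-fire neuron whose threshold jumps at each spike and relaxes back between spikes, producing spike-frequency adaptation. All parameter values are illustrative assumptions, not the paper's fitted augmented GIF model:

```python
import numpy as np

def lif_dynamic_threshold(I, dt=1.0, tau_v=20.0, tau_th=200.0, th0=1.0, dth=0.3):
    """Leaky integrate-and-fire neuron whose threshold jumps by `dth` at
    each spike and relaxes back to `th0` with time constant `tau_th`.
    Returns the spike times (in time steps)."""
    v, th = 0.0, th0
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (-v + i_t) / tau_v          # leaky integration of the input
        th += dt * (th0 - th) / tau_th        # threshold relaxes toward th0
        if v >= th:
            spikes.append(t)
            v, th = 0.0, th + dth             # reset voltage, raise threshold
    return np.array(spikes)

# Constant drive: inter-spike intervals lengthen as the threshold adapts.
spikes = lif_dynamic_threshold(np.full(1000, 1.5))
isis = np.diff(spikes)
```

Because adaptation suppresses the response to sustained input while leaving the response to rising input intact, mechanisms of this kind bias a neuron's output toward the temporal derivative of its drive, which is the network-level effect the paper reports.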
Affiliation(s)
- Emerson F Harkin
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Michael B Lynn
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Alexandre Payeur
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Jean-François Boucher
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Léa Caya-Bissonnette
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Dominic Cyr
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Chloe Stewart
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- André Longtin
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Richard Naud
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- Jean-Claude Béïque
- Brain and Mind Research Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
30
Sun C, Liu X, Jiang Q, Ye X, Zhu X, Li RW. Emerging electrolyte-gated transistors for neuromorphic perception. Sci Technol Adv Mater 2023; 24:2162325. [PMID: 36684849 PMCID: PMC9848240 DOI: 10.1080/14686996.2022.2162325] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 12/18/2022] [Accepted: 12/21/2022] [Indexed: 05/31/2023]
Abstract
With the rapid development of intelligent robotics, the Internet of Things, and smart sensor technologies, great enthusiasm has been devoted to developing next-generation intelligent systems that emulate the advanced perception functions of humans. Neuromorphic devices, capable of emulating the learning, memory, analysis, and recognition functions of biological neural systems, offer solutions for intelligently processing sensory information. As one of the most important neuromorphic devices, electrolyte-gated transistors (EGTs) have shown great promise in implementing various vital neural functions and good compatibility with sensors. This review introduces the materials, operating principles, and performance of EGTs, followed by recent progress in EGT-based synapse and neuron emulation. The integration of EGTs with sensors to faithfully emulate diverse human perception functions, such as tactile and visual perception, is then discussed, and the challenges facing the further development of EGTs are outlined.
Affiliation(s)
- Cui Sun
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xuerong Liu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Qian Jiang
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoyu Ye
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Xiaojian Zhu
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
- Run-Wei Li
- CAS Key Laboratory of Magnetic Materials and Devices, and Zhejiang Province Key Laboratory of Magnetic Materials and Application Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- College of Materials Sciences and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing, China
31
Koren V, Bondanelli G, Panzeri S. Computational methods to study information processing in neural circuits. Comput Struct Biotechnol J 2023; 21:910-922. [PMID: 36698970 PMCID: PMC9851868 DOI: 10.1016/j.csbj.2023.01.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 01/09/2023] [Accepted: 01/09/2023] [Indexed: 01/13/2023] Open
Abstract
The brain is an information processing machine and thus naturally lends itself to be studied using computational tools based on the principles of information theory. For this reason, computational methods based on or inspired by information theory have been a cornerstone of practical and conceptual progress in neuroscience. In this Review, we address how concepts and computational tools related to information theory are spurring the development of principled theories of information processing in neural circuits and the development of influential mathematical methods for the analyses of neural population recordings. We review how these computational approaches reveal mechanisms of essential functions performed by neural circuits. These functions include efficiently encoding sensory information and facilitating the transmission of information to downstream brain areas to inform and guide behavior. Finally, we discuss how further progress and insights can be achieved, in particular by studying how competing requirements of neural encoding and readout may be optimally traded off to optimize neural information processing.
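A basic building block of the analyses reviewed here is the mutual information between a discrete stimulus and a discrete neural response. A plug-in (histogram) estimator is easy to sketch, keeping in mind its well-known upward bias for small samples:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples.
    Biased upward for small samples; real analyses apply bias corrections."""
    x, y = np.asarray(x), np.asarray(y)
    xs, ys = np.unique(x), np.unique(y)
    # Empirical joint distribution over all (stimulus, response) pairs.
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0                               # skip log of empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Perfectly informative vs. independent responses:
stim = np.array([0, 0, 1, 1] * 250)
i_det = mutual_information(stim, stim)               # response copies the stimulus
i_ind = mutual_information(stim, np.tile([0, 1], 500))  # unrelated response
```

For a binary, equiprobable stimulus the deterministic response carries exactly 1 bit, and the unrelated response carries none, which the estimator reproduces.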
Affiliation(s)
- Veronika Koren
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Stefano Panzeri
- Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Falkenried 94, Hamburg 20251, Germany
- Istituto Italiano di Tecnologia, Via Melen 83, Genova 16152, Italy
32
Oláh VJ, Pedersen NP, Rowan MJM. Ultrafast simulation of large-scale neocortical microcircuitry with biophysically realistic neurons. eLife 2022; 11:e79535. [PMID: 36341568 PMCID: PMC9640191 DOI: 10.7554/elife.79535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Accepted: 10/23/2022] [Indexed: 11/09/2022] Open
Abstract
Understanding the activity of the mammalian brain requires an integrative knowledge of circuits at distinct scales, ranging from ion channel gating to circuit connectomics. Computational models are regularly employed to understand how multiple parameters contribute synergistically to circuit behavior. However, traditional models of anatomically and biophysically realistic neurons are computationally demanding, especially when scaled to model local circuits. To overcome this limitation, we trained several artificial neural network (ANN) architectures to model the activity of realistic multicompartmental cortical neurons. We identified an ANN architecture that accurately predicted subthreshold activity and action potential firing. The ANN could correctly generalize to previously unobserved synaptic input, including in models containing nonlinear dendritic properties. When scaled, processing times were orders of magnitude faster compared with traditional approaches, allowing for rapid parameter-space mapping in a circuit model of Rett syndrome. Thus, we present a novel ANN approach allowing for rapid, detailed network experiments using inexpensive and commonly available computational resources.
Affiliation(s)
- Viktor J Oláh
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
- Nigel P Pedersen
- Department of Neurology, Emory University School of Medicine, Atlanta, United States
- Matthew JM Rowan
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
33
Chen G, Scherr F, Maass W. A data-based large-scale model for primary visual cortex enables brain-like robust and versatile visual processing. Sci Adv 2022; 8:eabq7592. [PMID: 36322646 PMCID: PMC9629744 DOI: 10.1126/sciadv.abq7592] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 09/15/2022] [Indexed: 06/16/2023]
Abstract
We analyze the visual processing capabilities of a large-scale model of area V1 that arguably provides the most comprehensive accumulation of anatomical and neurophysiological data to date. We find that this brain-like neural network model can reproduce a number of characteristic visual processing capabilities of the brain, in particular the capability to solve diverse visual processing tasks, including tasks on temporally dispersed visual information, with remarkable robustness to noise. This V1 model, whose architecture and neurons differ markedly from those of the deep neural networks used in current artificial intelligence (AI), such as convolutional neural networks (CNNs), also reproduces a number of characteristic neural coding properties of the brain, which explain its superior noise robustness. Because visual processing is substantially more energy efficient in the brain than in the CNNs used in AI, such brain-like neural networks are likely to influence future technology as blueprints for visual processing in more energy-efficient neuromorphic hardware.
34
Caillet AH, Phillips ATM, Farina D, Modenese L. Estimation of the firing behaviour of a complete motoneuron pool by combining electromyography signal decomposition and realistic motoneuron modelling. PLoS Comput Biol 2022; 18:e1010556. [PMID: 36174126 PMCID: PMC9553065 DOI: 10.1371/journal.pcbi.1010556] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 10/11/2022] [Accepted: 09/08/2022] [Indexed: 11/18/2022] Open
Abstract
Our understanding of the firing behaviour of motoneuron (MN) pools during human voluntary muscle contractions is currently limited to electrophysiological findings from animal experiments extrapolated to humans, mathematical models of MN pools not validated for human data, and experimental results obtained from decomposition of electromyographical (EMG) signals. These approaches are limited in accuracy or provide information on only small partitions of the MN population. Here, we propose a method based on the combination of high-density EMG (HDEMG) data and realistic modelling for predicting the behaviour of entire pools of motoneurons in humans. The method builds on a physiologically realistic model of a MN pool which predicts, from the experimental spike trains of a smaller number of individual MNs identified from decomposed HDEMG signals, the unknown recruitment and firing activity of the remaining unidentified MNs in the complete MN pool. The MN pool model is described as a cohort of single-compartment leaky integrate-and-fire (LIF) models of MNs scaled by a physiologically realistic distribution of MN electrophysiological properties and driven by a spinal synaptic input, both derived from decomposed HDEMG data. The MN spike trains and effective neural drive to muscle, predicted with this method, have been successfully validated experimentally. A representative application of the method in MN-driven neuromuscular modelling is also presented. The proposed approach provides a validated tool for neuroscientists, experimentalists, and modelers to infer the firing activity of MNs that cannot be observed experimentally, investigate the neuromechanics of human MN pools, support future experimental investigations, and advance neuromuscular modelling for investigating the neural strategies controlling human voluntary contractions.
Our experimental understanding of the firing behaviour of motoneuron (MN) pools during human voluntary muscle contractions is currently limited to the observation of small samples of active MNs obtained from EMG decomposition. EMG decomposition therefore provides an important but incomplete description of the role of individual MNs in the firing activity of the complete MN pool, which limits our understanding of the neural strategies of the whole MN pool and of how the firing activity of each MN contributes to the neural drive to muscle. Here, we combine decomposed high-density EMG (HDEMG) data and a physiologically realistic model of MN population to predict the unknown recruitment and firing activity of the remaining unidentified MNs in the complete MN pool. In brief, an experimental estimation of the synaptic current is input to a cohort of MN models, which are calibrated using the available decomposed HDEMG data, and predict the MN spike trains fired by the entire MN population. This novel approach is experimentally validated and applied to muscle force prediction from neuromuscular modelling.
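The recruitment side of such a pool model can be sketched with a toy rheobase distribution: under a common synaptic drive, low-threshold motoneurons are recruited first (Henneman's size principle). The log-spaced thresholds below are an illustrative assumption, not the authors' calibrated distributions:

```python
import numpy as np

def recruited_fraction(drive, thresholds):
    """Fraction of the pool whose rheobase is exceeded by a common drive
    (Henneman's size principle: low-threshold motoneurons fire first)."""
    return float(np.mean(thresholds <= drive))

# A pool of 400 motoneurons with rheobase spread over two orders of
# magnitude (log-spaced, mimicking the skew of real MN pools).
thresholds = np.logspace(0, 2, 400)           # arbitrary units, 1 .. 100
low = recruited_fraction(5.0, thresholds)
high = recruited_fraction(50.0, thresholds)   # more units active at high drive
```

In the paper's framework, each unit above its recruitment threshold would additionally be simulated as a calibrated LIF model, and the summed spike trains form the effective neural drive to muscle.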
Affiliation(s)
- Arnault H. Caillet
- Department of Civil and Environmental Engineering, Imperial College London, United Kingdom
- Andrew T. M. Phillips
- Department of Civil and Environmental Engineering, Imperial College London, United Kingdom
- Dario Farina
- Department of Bioengineering, Imperial College London, United Kingdom
- Luca Modenese
- Department of Civil and Environmental Engineering, Imperial College London, United Kingdom
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
35
The dual action of glioma-derived exosomes on neuronal activity: synchronization and disruption of synchrony. Cell Death Dis 2022; 13:705. [PMID: 35963860 PMCID: PMC9376103 DOI: 10.1038/s41419-022-05144-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 06/28/2022] [Accepted: 07/28/2022] [Indexed: 01/21/2023]
Abstract
Seizures represent a frequent symptom in gliomas and significantly impact patient morbidity and quality of life. Although the pathogenesis of tumor-related seizures is not fully understood, accumulating evidence indicates a key role of the peritumoral microenvironment. Brain cancer cells interact with neurons by forming synapses with them and by releasing exosomes, cytokines, and other small molecules. Strong interactions among neurons often lead to the synchronization of their activity. In this paper, we used an in vitro model to investigate the role of exosomes released by glioma cell lines and by patient-derived glioma stem cells (GSCs). The addition of exosomes released by U87 glioma cells to neuronal cultures at day in vitro (DIV) 4, when neurons are not yet synchronous, induces synchronization. At DIV 7-12 neurons become highly synchronous, and the addition of the same exosomes disrupts synchrony. By combining Ca2+ imaging, electrical recordings from single neurons with patch-clamp electrodes, substrate-integrated microelectrode arrays, and immunohistochemistry, we show that synchronization and de-synchronization are caused by the combined effect of (i) the formation of new neuronal branches, associated with a higher expression of Arp3, (ii) the modification of synaptic efficiency, and (iii) a direct action of exosomes on the electrical properties of neurons, more evident at DIV 7-12 when the threshold for spike initiation is significantly reduced. At DIV 7-12 exosomes also selectively boost glutamatergic signaling by increasing the number of excitatory synapses. Remarkably, de-synchronization was also observed with exosomes released by glioma-associated stem cells (GASCs) from patients with low-grade glioma but not from patients with high-grade glioma, where a more variable outcome was observed. These results show that exosomes released from glioma modify the electrical properties of neuronal networks and that de-synchronization caused by exosomes from low-grade glioma can contribute to the neurological pathologies of patients with brain cancers.
Collapse
|
36
|
Nandi A, Chartrand T, Van Geit W, Buchin A, Yao Z, Lee SY, Wei Y, Kalmbach B, Lee B, Lein E, Berg J, Sümbül U, Koch C, Tasic B, Anastassiou CA. Single-neuron models linking electrophysiology, morphology, and transcriptomics across cortical cell types. Cell Rep 2022; 40:111176. [PMID: 35947954 PMCID: PMC9793758 DOI: 10.1016/j.celrep.2022.111176] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Revised: 01/28/2022] [Accepted: 07/18/2022] [Indexed: 12/30/2022] Open
Abstract
Which cell types constitute brain circuits is a fundamental question, but establishing the correspondence across cellular data modalities is challenging. Bio-realistic models allow probing cause-and-effect and linking seemingly disparate modalities. Here, we introduce a computational optimization workflow to generate 9,200 single-neuron models with active conductances. These models are based on 230 in vitro electrophysiological experiments followed by morphological reconstruction from the mouse visual cortex. We show that, in contrast to current belief, the generated models are robust representations of individual experiments and cortical cell types as defined via cellular electrophysiology or transcriptomics. Next, we show that differences in specific conductances predicted from the models reflect differences in gene expression supported by single-cell transcriptomics. The differences in model conductances, in turn, explain electrophysiological differences observed between the cortical subclasses. Our computational effort reconciles single-cell modalities that define cell types and enables causal relationships to be examined.
Collapse
Affiliation(s)
- Anirban Nandi
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Thomas Chartrand
- Allen Institute for Brain Science, Seattle, WA 98109, USA. These authors contributed equally
| | - Werner Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva 1202, Switzerland. These authors contributed equally
| | - Anatoly Buchin
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Zizhen Yao
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Soo Yeun Lee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Yina Wei
- Allen Institute for Brain Science, Seattle, WA 98109, USA; Zhejiang Lab, Hangzhou City, Zhejiang Province 311121, China
| | - Brian Kalmbach
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Brian Lee
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Ed Lein
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Jim Berg
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Uygar Sümbül
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Christof Koch
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Bosiljka Tasic
- Allen Institute for Brain Science, Seattle, WA 98109, USA
| | - Costas A. Anastassiou
- Allen Institute for Brain Science, Seattle, WA 98109, USA; Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Board of Governors Regenerative Medicine Institute, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA; Center for Neural Science and Medicine, Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA 90048, USA. Lead contact. Correspondence:
| |
Collapse
|
37
|
D'Angelo E, Jirsa V. The quest for multiscale brain modeling. Trends Neurosci 2022; 45:777-790. [PMID: 35906100 DOI: 10.1016/j.tins.2022.06.007] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/20/2022] [Accepted: 06/21/2022] [Indexed: 01/07/2023]
Abstract
Addressing the multiscale organization of the brain, which is fundamental to the dynamic repertoire of the organ, remains challenging. In principle, it should be possible to model neurons and synapses in detail and then connect them into large neuronal assemblies to explain the relationship between microscopic phenomena, large-scale brain functions, and behavior. It is more difficult to infer neuronal functions from ensemble measurements such as those currently obtained with brain activity recordings. In this article we consider theories and strategies for combining bottom-up models, generated from principles of neuronal biophysics, with top-down models based on ensemble representations of network activity and on functional principles. These integrative approaches promise to provide effective multiscale simulations in virtual brains and neurorobots, and to pave the way for future applications in medicine and information technologies.
Collapse
Affiliation(s)
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, and Brain Connectivity Center, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Mondino Foundation, Pavia, Italy.
| | - Viktor Jirsa
- Institut National de la Santé et de la Recherche Médicale (INSERM) Unité 1106, Centre National de la Recherche Scientifique (CNRS), and University of Aix-Marseille, Marseille, France
| |
Collapse
|
38
|
Accelerating spiking neural networks using quantum algorithm with high success probability and high calculation accuracy. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.02.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
39
|
Dey S, Dimitrov A. Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi. Front Neuroinform 2022; 16:883360. [PMID: 35712458 PMCID: PMC9197133 DOI: 10.3389/fnins.2022.883360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 04/25/2022] [Indexed: 11/18/2022] Open
Abstract
Neuromorphic hardware is based on emulating the natural biological structure of the brain. Since its computational model is similar to standard neural models, it could serve as a computational accelerator for research projects in the field of neuroscience and artificial intelligence, including biomedical applications. However, in order to exploit this new generation of computer chips, we ought to perform rigorous simulation and consequent validation of neuromorphic models against their conventional implementations. In this work, we lay out the numeric groundwork to enable a comparison between neuromorphic and conventional platforms. “Loihi”—Intel's fifth generation neuromorphic chip, which is based on the idea of Spiking Neural Networks (SNNs) emulating the activity of neurons in the brain, serves as our neuromorphic platform. The work here focuses on Leaky Integrate and Fire (LIF) models based on neurons in the mouse primary visual cortex and matched to a rich data set of anatomical, physiological and behavioral constraints. Simulations on classical hardware serve as the validation platform for the neuromorphic implementation. We find that Loihi replicates classical simulations very efficiently with high precision. As a by-product, we also investigate Loihi's potential in terms of scalability and performance and find that it scales notably well in terms of run-time performance as the simulated networks become larger.
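The LIF model validated here is simple enough to state in a few lines. As a rough sketch of the dynamics only (a generic forward-Euler discretization with placeholder parameters, not the authors' parameter fits or their Loihi implementation):

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau_m=10.0, v_rest=-70.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    i_ext: input drive in mV; returns the voltage trace (mV) and the
    indices of the time bins in which spikes occurred.
    """
    v = np.full(len(i_ext), v_rest)
    spikes = []
    for t in range(1, len(i_ext)):
        dv = (-(v[t - 1] - v_rest) + i_ext[t - 1]) * dt / tau_m
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:      # threshold crossing: emit a spike...
            spikes.append(t)
            v[t] = v_reset        # ...and reset the membrane potential
    return v, spikes

# A constant suprathreshold drive yields perfectly regular firing.
v, spikes = simulate_lif(np.full(5000, 25.0))
```

Comparing such a floating-point reference trace against the chip's fixed-precision state, spike time by spike time, is essentially the validation exercise the abstract describes.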
Collapse
Affiliation(s)
- Srijanie Dey
- Department of Mathematics, Washington State University, Vancouver, WA, United States
| | - Alexander Dimitrov
- Department of Mathematics, Washington State University, Vancouver, WA, United States
| |
Collapse
|
41
|
Accelerating Allen Brain Institute’s Large-Scale Computational Model of Mice Primary Visual Cortex. Artif Intell 2022. [DOI: 10.1007/978-3-031-20503-3_57] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
42
|
van Albada SJ, Morales-Gregorio A, Dickscheid T, Goulas A, Bakker R, Bludau S, Palm G, Hilgetag CC, Diesmann M. Bringing Anatomical Information into Neuronal Network Models. Adv Exp Med Biol 2022; 1359:201-234. [DOI: 10.1007/978-3-030-89439-9_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
43
|
A User’s Guide to Generalized Integrate-and-Fire Models. Adv Exp Med Biol 2022; 1359:69-86. [DOI: 10.1007/978-3-030-89439-9_3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
44
|
Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network. Proc Natl Acad Sci U S A 2021; 118:2111821118. [PMID: 34916291 PMCID: PMC8713766 DOI: 10.1073/pnas.2111821118] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/28/2021] [Indexed: 12/27/2022] Open
Abstract
Synaptic connectivity provides the foundation for our present understanding of neuronal network function, but static connectivity cannot explain learning and memory. We propose a computational role for the diversity of cortical neuronal types and their associated cell-type–specific neuromodulators in improving the efficiency of synaptic weight adjustments for task learning in neuronal networks. Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type–specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type–specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.
Collapse
|
45
|
Venkadesh S, Van Horn JD. Integrative Models of Brain Structure and Dynamics: Concepts, Challenges, and Methods. Front Neurosci 2021; 15:752332. [PMID: 34776853 PMCID: PMC8585845 DOI: 10.3389/fnins.2021.752332] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 10/13/2021] [Indexed: 11/24/2022] Open
Abstract
The anatomical architecture of the brain constrains the dynamics of interactions between various regions. On a microscopic scale, neural plasticity regulates the connections between individual neurons. This microstructural adaptation facilitates coordinated dynamics of populations of neurons (mesoscopic scale) and brain regions (macroscopic scale). However, the mechanisms acting on multiple timescales that govern the reciprocal relationship between neural network structure and its intrinsic dynamics are not well understood. Studies empirically investigating such relationships on the whole-brain level rely on macroscopic measurements of structural and functional connectivity estimated from various neuroimaging modalities such as Diffusion-weighted Magnetic Resonance Imaging (dMRI), Electroencephalography (EEG), Magnetoencephalography (MEG), and functional Magnetic Resonance Imaging (fMRI). dMRI measures the anisotropy of water diffusion along axonal fibers, from which structural connections are estimated. EEG and MEG signals measure electrical activity and magnetic fields induced by the electrical activity, respectively, from various brain regions with a high temporal resolution (but limited spatial coverage), whereas fMRI measures regional activations indirectly via blood oxygen level-dependent (BOLD) signals with a high spatial resolution (but limited temporal resolution). There are several studies in the neuroimaging literature reporting statistical associations between macroscopic structural and functional connectivity. On the other hand, models of large-scale oscillatory dynamics conditioned on network structure (such as the one estimated from dMRI connectivity) provide a platform to probe into the structure-dynamics relationship at the mesoscopic level. Such investigations promise to uncover the theoretical underpinnings of the interplay between network structure and dynamics and could be complementary to the macroscopic level inquiries. 
In this article, we review theoretical and empirical studies that attempt to elucidate the coupling between brain structure and dynamics. Special attention is given to various clinically relevant dimensions of brain connectivity such as the topological features and neural synchronization, and their applicability for a given modality, spatial or temporal scale of analysis is discussed. Our review provides a summary of the progress made along this line of research and identifies challenges and promising future directions for multi-modal neuroimaging analyses.
Collapse
Affiliation(s)
- Siva Venkadesh
- Department of Psychology, University of Virginia, Charlottesville, VA, United States
| | - John Darrell Van Horn
- Department of Psychology, University of Virginia, Charlottesville, VA, United States; School of Data Science, University of Virginia, Charlottesville, VA, United States
| |
Collapse
|
46
|
|
47
|
Schwalger T. Mapping input noise to escape noise in integrate-and-fire neurons: a level-crossing approach. Biol Cybern 2021; 115:539-562. [PMID: 34668051 PMCID: PMC8551127 DOI: 10.1007/s00422-021-00899-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 09/27/2021] [Indexed: 06/13/2023]
Abstract
Noise in spiking neurons is commonly modeled by a noisy input current or by generating output spikes stochastically with a voltage-dependent hazard rate ("escape noise"). While input noise lends itself to modeling biophysical noise processes, the phenomenological escape noise is mathematically more tractable. Using the level-crossing theory for differentiable Gaussian processes, we derive an approximate mapping between colored input noise and escape noise in leaky integrate-and-fire neurons. This mapping requires the first-passage-time (FPT) density of an overdamped Brownian particle driven by colored noise with respect to an arbitrarily moving boundary. Starting from the Wiener-Rice series for the FPT density, we apply the second-order decoupling approximation of Stratonovich to the case of moving boundaries and derive a simplified hazard-rate representation that is local in time and numerically efficient. This simplification requires the calculation of the non-stationary auto-correlation function of the level-crossing process: For exponentially correlated input noise (Ornstein-Uhlenbeck process), we obtain an exact formula for the zero-lag auto-correlation as a function of noise parameters, mean membrane potential and its speed, as well as an exponential approximation of the full auto-correlation function. The theory well predicts the FPT and interspike interval densities as well as the population activities obtained from simulations with colored input noise and time-dependent stimulus or boundary. The agreement with simulations is strongly enhanced across the sub- and suprathreshold firing regime compared to a first-order decoupling approximation that neglects correlations between level crossings. The second-order approximation also improves upon a previously proposed theory in the subthreshold regime. 
Depending on a simplicity-accuracy trade-off, all considered approximations represent useful mappings from colored input noise to escape noise, enabling progress in the theory of neuronal population dynamics.
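To illustrate the escape-noise idea the abstract contrasts with input noise: spikes are drawn stochastically from a voltage-dependent hazard rate rather than emitted at a deterministic threshold crossing. The sketch below uses a simple exponential hazard, a common textbook choice, not the level-crossing hazard derived in the paper; all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def escape_noise_spikes(v, dt=0.1, v_thresh=-50.0, rate0=1.0, beta=0.5):
    """Draw spikes stochastically from a voltage trace using the
    exponential hazard f(v) = rate0 * exp(beta * (v - v_thresh)):
    spiking becomes likely near threshold but is possible below it."""
    hazard = rate0 * np.exp(beta * (v - v_thresh))
    p_spike = 1.0 - np.exp(-hazard * dt)   # spike probability per time bin
    return rng.random(len(v)) < p_spike

# A slow depolarizing ramp: spikes cluster where the voltage nears threshold.
v = np.linspace(-70.0, -50.0, 1000)
spikes = escape_noise_spikes(v)
```

The mapping problem addressed in the paper is, in effect, choosing the hazard function so that spike trains generated this way match those of a neuron driven by colored input noise.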
Collapse
Affiliation(s)
- Tilo Schwalger
- Institute of Mathematics, Technical University Berlin, 10623, Berlin, Germany.
- Bernstein Center for Computational Neuroscience Berlin, 10115, Berlin, Germany.
| |
Collapse
|
48
|
|
49
|
Ramlow L, Lindner B. Interspike interval correlations in neuron models with adaptation and correlated noise. PLoS Comput Biol 2021; 17:e1009261. [PMID: 34449771 PMCID: PMC8428727 DOI: 10.1371/journal.pcbi.1009261] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 09/09/2021] [Accepted: 07/08/2021] [Indexed: 11/19/2022] Open
Abstract
The generation of neural action potentials (spikes) is random but nevertheless may result in a rich statistical structure of the spike sequence. In particular, contrary to the popular renewal assumption of theoreticians, the intervals between adjacent spikes are often correlated. Experimentally, different patterns of interspike-interval correlations have been observed and computational studies have identified spike-frequency adaptation and correlated noise as the two main mechanisms that can lead to such correlations. Analytical studies have focused on the single cases of either correlated (colored) noise or adaptation currents in combination with uncorrelated (white) noise. For low-pass filtered noise or adaptation, the serial correlation coefficient can be approximated as a single geometric sequence of the lag between the intervals, providing an explanation for some of the experimentally observed patterns. Here we address the problem of interval correlations for a widely used class of models, multidimensional integrate-and-fire neurons subject to a combination of colored and white noise sources and a spike-triggered adaptation current. Assuming weak noise, we derive a simple formula for the serial correlation coefficient, a sum of two geometric sequences, which accounts for a large class of correlation patterns. The theory is confirmed by means of numerical simulations in a number of special cases including the leaky, quadratic, and generalized integrate-and-fire models with colored noise and spike-frequency adaptation. Furthermore we study the case in which the adaptation current and the colored noise share the same time scale, corresponding to a slow stochastic population of adaptation channels; we demonstrate that our theory can account for a nonmonotonic dependence of the correlation coefficient on the channel’s time scale. Another application of the theory is a neuron driven by network-noise-like fluctuations (green noise). 
We also discuss the range of validity of our weak-noise theory and show that by changing the relative strength of white and colored noise sources, we can change the sign of the correlation coefficient. Finally, we apply our theory to a conductance-based model which demonstrates its broad applicability. The elementary processing units in the central nervous system are neurons that transmit information by short electrical pulses, so called action potentials or spikes. The generation of the action potential is a random process that can be shaped by correlated fluctuations (colored noise) and by adaptation. A consequence of these two ubiquitous features is that the successive time intervals between spikes, the interspike intervals, are not independent but correlated. As these correlations can significantly improve information transmission and weak-signal detection, it is an important task to develop analytical approaches to these statistics for well-established computational models. Here we present a theory of interval correlations for a widely used class of integrate-and-fire models endowed with an adaptation mechanism and subject to correlated fluctuations. We demonstrate which patterns of interval correlations can be expected from the interplay of colored noise, adaptation and intrinsic nonlinear dynamics.
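The central statistic here, the serial correlation coefficient rho_k = cov(T_i, T_{i+k}) / var(T_i), is easy to estimate from data. A minimal sketch (a toy perfect integrate-and-fire neuron with white noise and spike-triggered adaptation, not the authors' multidimensional models; parameters are arbitrary) reproduces the hallmark negative lag-1 correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def serial_correlation(isis, k):
    """Estimate rho_k = cov(T_i, T_{i+k}) / var(T_i) from an ISI sequence."""
    return np.corrcoef(isis[:-k], isis[k:])[0, 1]

def generate_isis(n, mu=1.0, noise=0.3, jump=0.5, tau_a=5.0, dt=0.01):
    """Perfect integrate-and-fire neuron (threshold 1, reset 0) with white
    noise and a spike-triggered adaptation variable `a`: a long interval
    lets `a` decay, so the next interval tends to be short, and vice versa."""
    isis, v, a, t = [], 0.0, 0.0, 0.0
    while len(isis) < n:
        a -= a / tau_a * dt                      # adaptation decays...
        v += (mu - a) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= 1.0:
            isis.append(t)
            v, t = 0.0, 0.0
            a += jump                            # ...and jumps at each spike
    return np.array(isis)

isis = generate_isis(5000)
rho1 = serial_correlation(isis, 1)   # negative under adaptation
```

Replacing the white noise with colored noise, or mixing the two, changes the pattern and can flip the sign of rho_1, which is the regime the paper's theory covers.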
Collapse
Affiliation(s)
- Lukas Ramlow
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
- * E-mail:
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
- Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
50
|
Harkin EF, Shen PR, Goel A, Richards BA, Naud R. Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Sub-cellular Computation. Neuroscience 2021; 489:200-215. [PMID: 34358629 DOI: 10.1016/j.neuroscience.2021.07.026] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Revised: 07/06/2021] [Accepted: 07/25/2021] [Indexed: 11/15/2022]
Abstract
Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
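As a toy illustration of the parallel-recurrent-cascade idea (two parallel leaky linear subunits with static nonlinearities, whose summed output is fed back to the subunits; weights and time constants here are arbitrary, and this is not one of the authors' proposed architectures):

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def parallel_recurrent_cascade(x, alphas=(0.9, 0.5), w_fb=-0.3):
    """Toy unit: two parallel leaky linear subunits s_i ("branches", one
    fast and one slow), each receiving the input plus recurrent feedback
    of the unit's own output y; y is a rectified sum of the rectified
    subunit states (the "somatic" nonlinearity)."""
    alphas = np.asarray(alphas)
    s = np.zeros(len(alphas))
    y_prev = 0.0
    out = []
    for xt in x:
        s = alphas * s + xt + w_fb * y_prev    # parallel branch dynamics
        y_prev = float(relu(np.sum(relu(s))))  # output nonlinearity
        out.append(y_prev)
    return np.array(out)

# Step input: the output relaxes to a fixed point set by the feedback loop.
y = parallel_recurrent_cascade(np.ones(50))
```

Because every stage is a standard linear filter or pointwise nonlinearity, a unit like this drops directly into a recurrent artificial neural network and can be trained end to end, which is the integration the authors exploit.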
Collapse
Affiliation(s)
- Emerson F Harkin
- uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Peter R Shen
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
| | - Anish Goel
- Lisgar Collegiate Institute, Ottawa, ON, Canada
| | - Blake A Richards
- Mila, Montréal, QC, Canada; Montreal Neurological Institute, Montréal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada; School of Computer Science, McGill University, Montréal, QC, Canada.
| | - Richard Naud
- uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Physics, University of Ottawa, Ottawa, ON, Canada.
| |
Collapse
|