1. Bourahmah J, Sakurai A, Shilnikov AL. Error Function Optimization to Compare Neural Activity and Train Blended Rhythmic Networks. Brain Sci 2024; 14:468. PMID: 38790447; PMCID: PMC11117979; DOI: 10.3390/brainsci14050468.
Abstract
We present a novel set of quantitative measures for "likeness" (error function) designed to alleviate the time-consuming and subjective nature of manually comparing biological recordings from electrophysiological experiments with the outcomes of their mathematical models. Our innovative "blended" system approach offers an objective, high-throughput, and computationally efficient method for comparing biological and mathematical models. This approach involves using voltage recordings of biological neurons to drive and train mathematical models, facilitating the derivation of the error function for further parameter optimization. Our calibration process incorporates measurements such as action potential (AP) frequency, voltage moving average, voltage envelopes, and the probability of post-synaptic channels. To assess the effectiveness of our method, we utilized the sea slug Melibe leonina swim central pattern generator (CPG) as our model circuit and conducted electrophysiological experiments with TTX to isolate CPG interneurons. When comparing biological recordings and mathematically simulated neurons, we performed a grid search of inhibitory and excitatory synaptic conductances. Our findings indicate that a weighted sum of simple functions is essential for comprehensively capturing a neuron's rhythmic activity. Overall, our study suggests that our blended system approach holds promise for enabling objective and high-throughput comparisons between biological and mathematical models, offering significant potential for advancing research in neural circuitry and related fields.
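The abstract describes scoring a model against a recording as a weighted sum of simple feature differences, then grid-searching synaptic conductances. A minimal sketch of that idea is below; the feature definitions, weights, and the toy "conductance scaling" of the trace are illustrative assumptions, not the paper's actual calibration measurements.

```python
import math

def moving_average(v, win=50):
    """Causal moving average of a voltage trace (slow component)."""
    return [sum(v[max(0, i - win):i + 1]) / (i + 1 - max(0, i - win))
            for i in range(len(v))]

def features(v, dt=0.1, thresh=0.0):
    """Reduce a trace to simple descriptors: spike rate, moving
    average, and envelope extremes (illustrative definitions)."""
    n_spikes = sum(1 for a, b in zip(v, v[1:]) if a < thresh <= b)
    rate = n_spikes / (len(v) * dt)
    return rate, moving_average(v), max(v), min(v)

def likeness_error(v_bio, v_model, weights=(1.0, 1.0, 0.5, 0.5)):
    """Weighted sum of feature differences; lower means more alike."""
    rb, mb, hib, lob = features(v_bio)
    rm, mm, him, lom = features(v_model)
    terms = (abs(rb - rm),
             sum(abs(a - b) for a, b in zip(mb, mm)) / len(mb),
             abs(hib - him),
             abs(lob - lom))
    return sum(w * t for w, t in zip(weights, terms))

# Coarse grid search: score each candidate "model" trace (here simply a
# scaled copy of the reference) against the biological reference trace.
ref = [-50 + 30 * math.sin(0.05 * i) for i in range(1000)]
best_g = min((0.5, 1.0, 1.5),
             key=lambda g: likeness_error(ref, [g * x for x in ref]))
```

The grid search simply picks the candidate with the smallest error; in the paper's setting each candidate would come from re-simulating the model with different inhibitory and excitatory conductances.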
Affiliation(s)
- Jassem Bourahmah
- Neuroscience Institute, Georgia State University, 100 Piedmont Ave., Atlanta, GA 30303, USA
- Akira Sakurai
- Department of Mathematics & Statistics, Neuroscience Institute, Georgia State University, 100 Piedmont Ave., Atlanta, GA 30303, USA
- Andrey L. Shilnikov
- Department of Mathematics & Statistics, Neuroscience Institute, Georgia State University, 100 Piedmont Ave., Atlanta, GA 30303, USA
2. Takács V, Bardóczi Z, Orosz Á, Major A, Tar L, Berki P, Papp P, Mayer MI, Sebők H, Zsolt L, Sos KE, Káli S, Freund TF, Nyiri G. Synaptic and dendritic architecture of different types of hippocampal somatostatin interneurons. PLoS Biol 2024; 22:e3002539. PMID: 38470935; PMCID: PMC10959371; DOI: 10.1371/journal.pbio.3002539.
Abstract
GABAergic inhibitory neurons fundamentally shape the activity and plasticity of cortical circuits. A major subset of these neurons contains somatostatin (SOM); these cells play crucial roles in neuroplasticity, learning, and memory in many brain areas including the hippocampus, and are implicated in several neuropsychiatric diseases and neurodegenerative disorders. Two main types of SOM-containing cells in area CA1 of the hippocampus are oriens-lacunosum-moleculare (OLM) cells and hippocampo-septal (HS) cells. These cell types show many similarities in their soma-dendritic architecture, but they have different axonal targets, display different activity patterns in vivo, and are thought to have distinct network functions. However, a complete understanding of the functional roles of these interneurons requires a precise description of their intrinsic computational properties and their synaptic interactions. In the current study we generated, analyzed, and make available several key data sets that enable a quantitative comparison of various anatomical and physiological properties of OLM and HS cells in the mouse. The data set includes detailed scanning electron microscopy (SEM)-based 3D reconstructions of OLM and HS cells along with their excitatory and inhibitory synaptic inputs. Combining this core data set with other anatomical data, patch-clamp electrophysiology, and compartmental modeling, we examined the precise morphological structure, inputs, outputs, and basic physiological properties of these cells. Our results highlight key differences between OLM and HS cells, particularly regarding the density and distribution of their synaptic inputs and mitochondria. For example, we estimated that an OLM cell receives about 8,400 synaptic inputs, whereas an HS cell receives about 15,600, about 16% of which are GABAergic. Our data and models provide insight into the possible basis of the different functionality of OLM and HS cell types and supply essential information for more detailed functional models of these neurons and the hippocampal network.
Affiliation(s)
- Virág Takács
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Zsuzsanna Bardóczi
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Áron Orosz
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Abel Major
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Luca Tar
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Roska Tamás Doctoral School of Sciences and Technology, Pázmány Péter Catholic University, Budapest, Hungary
- Péter Berki
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Péter Papp
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Márton I. Mayer
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Hunor Sebők
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Luca Zsolt
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Katalin E. Sos
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Szabolcs Káli
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Tamás F. Freund
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
- Gábor Nyiri
- Laboratory of Cerebral Cortex Research, HUN-REN Institute of Experimental Medicine, Budapest, Hungary
3. Srikanth S, Narayanan R. Heterogeneous off-target impact of ion-channel deletion on intrinsic properties of hippocampal model neurons that self-regulate calcium. Front Cell Neurosci 2023; 17:1241450. PMID: 37904732; PMCID: PMC10613471; DOI: 10.3389/fncel.2023.1241450.
Abstract
How do neurons that implement cell-autonomous self-regulation of calcium react to knockout of individual ion-channel conductances? To address this question, we used a heterogeneous population of 78 conductance-based models of hippocampal pyramidal neurons that maintained cell-autonomous calcium homeostasis while receiving theta-frequency inputs. At calcium steady-state, we individually deleted each of the 11 active ion-channel conductances from each model. We measured the acute impact of deleting each conductance (one at a time) by comparing intrinsic electrophysiological properties before and immediately after channel deletion. The acute impact of deleting individual conductances on physiological properties (including calcium homeostasis) was heterogeneous, depending on the property, the specific model, and the deleted channel. The underlying many-to-many mapping between ion channels and properties pointed to ion-channel degeneracy. Next, we allowed the other conductances (barring the deleted conductance) to evolve towards achieving calcium homeostasis during theta-frequency activity. When calcium homeostasis was perturbed by ion-channel deletion, post-knockout plasticity in other conductances ensured resilience of calcium homeostasis to ion-channel deletion. These results demonstrate degeneracy in calcium homeostasis, as calcium homeostasis in knockout models was implemented in the absence of a channel that was earlier involved in the homeostatic process. Importantly, in reacquiring homeostasis, ion-channel conductances and physiological properties underwent heterogeneous plasticity (dependent on the model, the property, and the deleted channel), even introducing changes in properties that were not directly connected to the deleted channel. Together, post-knockout plasticity geared towards maintaining homeostasis introduced heterogeneous off-target effects on several channels and properties, suggesting that extreme caution be exercised in interpreting experimental outcomes involving channel knockouts.
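The deletion protocol described above (remove one conductance at a time, compare a property before and immediately after) can be sketched on a toy membrane. The conductance values and the chosen "property" (input resistance of a parallel-conductance membrane) are illustrative assumptions, not the study's models or measurements.

```python
# Toy single-compartment membrane: a dictionary of channel conductances (nS).
# Values are invented for illustration.
channels = {"NaT": 50.0, "KDR": 40.0, "KA": 20.0, "CaT": 5.0, "h": 2.0}

def input_resistance(g):
    """Input resistance of parallel conductances: R_in = 1 / sum(g)."""
    return 1.0 / sum(g.values())

baseline = input_resistance(channels)

# Acute knockout: delete each channel one at a time and record the
# fractional change in the property relative to the intact model.
impact = {}
for name in channels:
    knockout = {k: v for k, v in channels.items() if k != name}
    impact[name] = input_resistance(knockout) / baseline - 1.0
```

Even in this toy case the impact is heterogeneous across channels: deleting the large NaT conductance changes the property far more than deleting the small h conductance, mirroring (in a trivially simple way) the channel-dependent heterogeneity the study reports.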
Affiliation(s)
- Sunandha Srikanth
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
- Undergraduate Program, Indian Institute of Science, Bangalore, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
4. Ecker A, Bagi B, Vértes E, Steinbach-Németh O, Karlocai MR, Papp OI, Miklós I, Hájos N, Freund T, Gulyás AI, Káli S. Hippocampal sharp wave-ripples and the associated sequence replay emerge from structured synaptic interactions in a network model of area CA3. eLife 2022; 11:e71850. PMID: 35040779; PMCID: PMC8865846; DOI: 10.7554/elife.71850.
Abstract
Hippocampal place cells are activated sequentially as an animal explores its environment. These activity sequences are internally recreated (‘replayed’), either in the same or reversed order, during bursts of activity (sharp wave-ripples [SWRs]) that occur in sleep and awake rest. SWR-associated replay is thought to be critical for the creation and maintenance of long-term memory. In order to identify the cellular and network mechanisms of SWRs and replay, we constructed and simulated a data-driven model of area CA3 of the hippocampus. Our results show that the chain-like structure of recurrent excitatory interactions established during learning not only determines the content of replay, but is essential for the generation of the SWRs as well. We find that bidirectional replay requires the interplay of the experimentally confirmed, temporally symmetric plasticity rule, and cellular adaptation. Our model provides a unifying framework for diverse phenomena involving hippocampal plasticity, representations, and dynamics, and suggests that the structured neural codes induced by learning may have greater influence over cortical network states than previously appreciated.
5. Sinha M, Narayanan R. Active Dendrites and Local Field Potentials: Biophysical Mechanisms and Computational Explorations. Neuroscience 2021; 489:111-142. PMID: 34506834; PMCID: PMC7612676; DOI: 10.1016/j.neuroscience.2021.08.035.
Abstract
Neurons and glial cells are endowed with membranes that express a rich repertoire of ion channels, transporters, and receptors. The constant flux of ions across the neuronal and glial membranes results in voltage fluctuations that can be recorded from the extracellular matrix. The high frequency components of this voltage signal contain information about the spiking activity, reflecting the output from the neurons surrounding the recording location. The low frequency components of the signal, referred to as the local field potential (LFP), have been traditionally thought to provide information about the synaptic inputs that impinge on the large dendritic trees of various neurons. In this review, we discuss recent computational and experimental studies pointing to a critical role of several active dendritic mechanisms that can influence the genesis and the location-dependent spectro-temporal dynamics of LFPs, spanning different brain regions. We strongly emphasize the need to account for the several fast and slow dendritic events and associated active mechanisms - including gradients in their expression profiles, inter- and intra-cellular spatio-temporal interactions spanning neurons and glia, heterogeneities and degeneracy across scales, neuromodulatory influences, and activity-dependent plasticity - towards gaining important insights about the origins of LFPs under different behavioral states in health and disease. We provide simple but essential guidelines on how to model LFPs taking into account these dendritic mechanisms, with detailed methodology on how to account for various heterogeneities and electrophysiological properties of neurons and synapses while studying LFPs.
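A common starting point for the kind of LFP modeling discussed here is the homogeneous volume-conductor approximation, where the extracellular potential is the sum of point transmembrane currents weighted by distance, phi = sum_k I_k / (4*pi*sigma*r_k). The sketch below is that textbook formula only; the conductivity value and the sink/source geometry are illustrative assumptions, not the review's methodology.

```python
import math

SIGMA = 0.3  # extracellular conductivity in S/m (a typical assumed value)

def point_source_lfp(currents, positions, electrode):
    """Extracellular potential (V) at `electrode` from point transmembrane
    currents, using phi = sum_k I_k / (4*pi*sigma*r_k).
    Currents in amperes, positions in metres."""
    phi = 0.0
    for i_k, p in zip(currents, positions):
        r = math.dist(p, electrode)
        phi += i_k / (4 * math.pi * SIGMA * r)
    return phi

# A current sink at the soma and an equal return source 300 um up the
# dendrite form a dipole whose potential changes sign along the axis.
sink, source = -1e-9, 1e-9  # A
pos = [(0.0, 0.0, 0.0), (0.0, 0.0, 300e-6)]
near_soma = point_source_lfp([sink, source], pos, (50e-6, 0.0, 0.0))
near_dend = point_source_lfp([sink, source], pos, (50e-6, 0.0, 300e-6))
```

The sign flip between the two electrode positions is the simplest example of the location dependence the review emphasizes; active dendritic currents add further sinks and sources to the same sum.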
Affiliation(s)
- Manisha Sinha
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka 560012, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, Karnataka 560012, India
6. Marín M, Cruz NC, Ortigosa EM, Sáez-Lara MJ, Garrido JA, Carrillo RR. On the Use of a Multimodal Optimizer for Fitting Neuron Models. Application to the Cerebellar Granule Cell. Front Neuroinform 2021; 15:663797. PMID: 34149387; PMCID: PMC8209370; DOI: 10.3389/fninf.2021.663797.
Abstract
This article extends a recent methodological workflow for creating realistic and computationally efficient neuron models whilst capturing essential aspects of single-neuron dynamics. We overcome the intrinsic limitations of the extant optimization methods by proposing an alternative optimization component based on multimodal algorithms. This approach can natively explore a diverse population of neuron model configurations. In contrast to methods that focus on a single global optimum, the multimodal method allows directly obtaining a set of promising solutions for a single but complex multi-feature objective function. The final sparse population of candidate solutions then has to be analyzed by the expert and evaluated for biological plausibility and for how closely it matches the target features. In order to illustrate the value of this approach, we base our proposal on the optimization of cerebellar granule cell (GrC) models that replicate the essential properties of the biological cell. Our results show the emerging variability of plausible sets of values that this type of neuron can adopt underlying complex spiking characteristics. Also, the set of selected cerebellar GrC models captured spiking dynamics closer to the reference model than the single model obtained with off-the-shelf parameter optimization algorithms used in our previous article. The method hereby proposed represents a valuable strategy for adjusting a varied population of realistic and simplified neuron models. It can be applied to other kinds of neuron models and biological contexts.
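The core idea above is to return several good, mutually distinct parameter sets rather than one global optimum. A minimal way to illustrate this is niche-based selection: rank candidates by fitness and greedily keep only those far enough from already-kept solutions. The fitness landscape and niche radius below are invented for illustration; they are not the article's algorithm or objective function.

```python
def fitness(x):
    """A deliberately multimodal landscape with two equally good
    parameter regions, near x = -1 and x = +1 (illustrative)."""
    return -min((x - 1.0) ** 2, (x + 1.0) ** 2)

candidates = [i / 10 - 2.0 for i in range(41)]       # grid over [-2, 2]
ranked = sorted(candidates, key=fitness, reverse=True)

def select_niches(ranked, radius=0.5):
    """Greedily keep the best candidate of each distinct niche."""
    kept = []
    for x in ranked:
        if all(abs(x - y) > radius for y in kept):
            kept.append(x)
    return kept

peaks = select_niches(ranked)[:2]   # representatives of both optima
```

A single-optimum method would return only one of the two peaks; the niche-based selection preserves both, which is the practical benefit of a multimodal optimizer when several biologically plausible configurations exist.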
Affiliation(s)
- Milagros Marín
- Department of Biochemistry and Molecular Biology I, University of Granada, Granada, Spain
- Nicolás C Cruz
- Department of Informatics, University of Almería, ceiA3, Almería, Spain
- Eva M Ortigosa
- Department of Computer Architecture and Technology-CITIC, University of Granada, Granada, Spain
- María J Sáez-Lara
- Department of Biochemistry and Molecular Biology I, University of Granada, Granada, Spain
- Jesús A Garrido
- Department of Computer Architecture and Technology-CITIC, University of Granada, Granada, Spain
- Richard R Carrillo
- Department of Computer Architecture and Technology-CITIC, University of Granada, Granada, Spain
7. Sáray S, Rössert CA, Appukuttan S, Migliore R, Vitale P, Lupascu CA, Bologna LL, Van Geit W, Romani A, Davison AP, Muller E, Freund TF, Káli S. HippoUnit: A software tool for the automated testing and systematic comparison of detailed models of hippocampal neurons based on electrophysiological data. PLoS Comput Biol 2021; 17:e1008114. PMID: 33513130; PMCID: PMC7875359; DOI: 10.1371/journal.pcbi.1008114.
Abstract
Anatomically and biophysically detailed data-driven neuronal models have become widely used tools for understanding and predicting the behavior and function of neurons. Due to the increasing availability of experimental data from anatomical and electrophysiological measurements as well as the growing number of computational and software tools that enable accurate neuronal modeling, there are now a large number of different models of many cell types available in the literature. These models were usually built to capture a few important or interesting properties of the given neuron type, and it is often unknown how they would behave outside their original context. In addition, there is currently no simple way of quantitatively comparing different models regarding how closely they match specific experimental observations. This limits the evaluation, re-use and further development of the existing models. Further, the development of new models could also be significantly facilitated by the ability to rapidly test the behavior of model candidates against the relevant collection of experimental data. We address these problems for the representative case of the CA1 pyramidal cell of the rat hippocampus by developing an open-source Python test suite, which makes it possible to automatically and systematically test multiple properties of models by making quantitative comparisons between the models and electrophysiological data. The tests cover various aspects of somatic behavior, and signal propagation and integration in apical dendrites. To demonstrate the utility of our approach, we applied our tests to compare the behavior of several different rat hippocampal CA1 pyramidal cell models from the ModelDB database against electrophysiological data available in the literature, and evaluated how well these models match experimental observations in different domains. We also show how we employed the test suite to aid the development of models within the European Human Brain Project (HBP), and describe the integration of the tests into the validation framework developed in the HBP, with the aim of facilitating more reproducible and transparent model building in the neuroscience community.
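The quantitative model-versus-data comparison described above can be illustrated with the Z-score pattern commonly used in validation suites of this kind: each model feature is scored by how many experimental standard deviations it deviates from the published mean. The feature names, observation values, and scoring below are invented for illustration; this is not HippoUnit's actual API or data.

```python
def z_score(model_value, exp_mean, exp_sd):
    """How many experimental standard deviations the model deviates."""
    return abs(model_value - exp_mean) / exp_sd

# Hypothetical experimental observations: feature -> (mean, SD).
observations = {
    "ap_amplitude_mV": (80.0, 6.0),
    "input_resistance_MOhm": (55.0, 10.0),
}

# Feature values measured on two hypothetical candidate models.
model_a = {"ap_amplitude_mV": 78.0, "input_resistance_MOhm": 60.0}
model_b = {"ap_amplitude_mV": 95.0, "input_resistance_MOhm": 20.0}

def final_score(model):
    """Average Z-score across features; lower means a closer match."""
    zs = [z_score(model[f], m, sd) for f, (m, sd) in observations.items()]
    return sum(zs) / len(zs)

score_a = final_score(model_a)   # close to experiment
score_b = final_score(model_b)   # far from experiment
```

Running every published model through the same fixed battery of such tests is what makes the comparison systematic: the scores are directly comparable across models, unlike ad hoc visual inspection.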
Affiliation(s)
- Sára Sáray
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary
- Institute of Experimental Medicine, Budapest, Hungary
- Christian A. Rössert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Shailesh Appukuttan
- Paris-Saclay Institute of Neuroscience, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
- Rosanna Migliore
- Institute of Biophysics, National Research Council, Palermo, Italy
- Paola Vitale
- Institute of Biophysics, National Research Council, Palermo, Italy
- Luca L. Bologna
- Institute of Biophysics, National Research Council, Palermo, Italy
- Werner Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Armando Romani
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Andrew P. Davison
- Paris-Saclay Institute of Neuroscience, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
- Eilif Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Department of Neurosciences, Faculty of Medicine, University of Montreal, Montreal, Canada
- CHU Sainte-Justine Research Center, Montreal, Canada
- Quebec Artificial Intelligence Institute (Mila), Montreal, Canada
- Tamás F. Freund
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary
- Institute of Experimental Medicine, Budapest, Hungary
- Szabolcs Káli
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, Hungary
- Institute of Experimental Medicine, Budapest, Hungary
8. Valadez-Godínez S, Sossa H, Santiago-Montero R. On the accuracy and computational cost of spiking neuron implementation. Neural Netw 2019; 122:196-217. PMID: 31689679; DOI: 10.1016/j.neunet.2019.09.026.
Abstract
Since more than a decade ago, three statements about spiking neuron (SN) implementations have been widely accepted: 1) the Hodgkin and Huxley (HH) model is computationally prohibitive, 2) the Izhikevich (IZH) artificial neuron is as efficient as the Leaky Integrate-and-Fire (LIF) model, and 3) the IZH model is more efficient than the HH model (Izhikevich, 2004). As suggested by Hodgkin and Huxley (1952), their model operates in two modes: by using the α's and β's rate functions directly (HH model) and by storing them into tables (HHT model) for computational cost reduction. Recently, it has been stated that: 1) the HHT model (HH using tables) is not prohibitive, 2) the IZH model is not efficient, and 3) both HHT and IZH models are comparable in computational cost (Skocik & Long, 2014). That controversy shows that there is no consensus concerning SN simulation capacities. Hence, in this work, we introduce a refined approach, based on multiobjective optimization theory, describing the SN simulation capacities and ultimately choosing optimal simulation parameters. We have used normalized metrics to define the capacity levels of accuracy, computational cost, and efficiency. Normalized metrics allowed comparisons between SNs at the same level or scale. We conducted tests for balanced, lower, and upper boundary conditions under a regular spiking mode with constant and random current stimuli. We found optimal simulation parameters leading to a balance between computational cost and accuracy. Importantly, and, in general, we found that 1) the HH model (without using tables) is the most accurate, computationally inexpensive, and efficient, 2) the IZH model is the most expensive and inefficient, 3) both LIF and HHT models are the most inaccurate, 4) the HHT model is more expensive and inaccurate than the HH model due to α's and β's table discretization, and 5) the HHT model is not comparable in computational cost to the IZH model. These results refute the theory formulated over a decade ago (Izhikevich, 2004) and examine in greater depth the statements formulated by Skocik and Long (2014). Our statements imply that the number of dimensions or FLOPS in the SNs are theoretical but not practical indicators of the true computational cost. The metric we propose for the computational cost is more precise than FLOPS and was found to be invariant to computer architecture. Moreover, we found that the firing frequency used in previous works is a necessary but insufficient metric to evaluate the simulation accuracy. We also show that our results are consistent with the theory of numerical methods and the theory of SN discontinuity. Discontinuous SNs, such as the LIF and IZH models, introduce a considerable error every time a spike is generated. In addition, compared to the constant input current, the random input current increases the computational cost and inaccuracy. Furthermore, we found that the search for optimal simulation parameters is problem-specific. That is important because most of the previous works have intended to find a general and unique optimal simulation. Here, we show that such a solution cannot exist because it is a multiobjective optimization problem that depends on several factors. This work sets up a renewed thesis concerning SN simulation that is useful to several related research areas, including the emergent Deep Spiking Neural Networks.
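The multiobjective framing above can be illustrated in a few lines: normalize each metric (error, runtime) to a common [0, 1] scale so they are comparable, then keep the simulation settings that are Pareto-optimal, i.e. not beaten on both objectives at once. All numbers below are invented for illustration; the paper's normalized metrics and test conditions are more elaborate.

```python
# (time step in ms, raw error, raw runtime in s) for a toy neuron model;
# the 0.25 ms setting is both slower and less accurate than 0.5 ms here.
trials = [(1.0, 0.90, 1.0),
          (0.5, 0.40, 2.1),
          (0.1, 0.05, 9.8),
          (0.25, 0.45, 4.5)]

def normalize(values):
    """Rescale a metric to [0, 1] so different metrics share one scale."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

errs = normalize([t[1] for t in trials])
costs = normalize([t[2] for t in trials])
points = [(t[0], e, c) for t, e, c in zip(trials, errs, costs)]

def pareto(points):
    """Keep settings not dominated in both normalized error and cost."""
    front = []
    for dt, e, c in points:
        dominated = any(e2 <= e and c2 <= c and (e2 < e or c2 < c)
                        for _, e2, c2 in points)
        if not dominated:
            front.append(dt)
    return front
```

Because the objectives conflict, the result is a front of trade-offs rather than a single winner, which is why the paper argues that a general, unique optimal simulation setting cannot exist.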
Affiliation(s)
- Sergio Valadez-Godínez
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; División de Ingeniería Informática, Instituto Tecnológico Superior de Purísima del Rincón, Gto., México, 36413, Mexico; División de Ingenierías de Educación Superior, Universidad Virtual del Estado de Guanajuato, Gto., México, 36400, Mexico.
- Humberto Sossa
- Laboratorio de Robótica y Mecatrónica, Centro de Investigación en Computación, Instituto Politécnico Nacional, Av. Juan de Dios Bátiz, S/N, Col. Nva. Industrial Vallejo, Ciudad de México, México, 07738, Mexico; Tecnológico de Monterrey, Campus Guadalajara, Av. Gral. Ramón Corona 2514, Zapopan, Jal., México, 45138, Mexico
- Raúl Santiago-Montero
- División de Estudios de Posgrado e Investigación, Instituto Tecnológico de León, Av. Tecnológico S/N, León, Gto., México, 37290, Mexico
9. Ujfalussy BB, Makara JK, Lengyel M, Branco T. Global and Multiplexed Dendritic Computations under In Vivo-like Conditions. Neuron 2019; 100:579-592.e5. PMID: 30408443; PMCID: PMC6226578; DOI: 10.1016/j.neuron.2018.08.032.
Abstract
Dendrites integrate inputs nonlinearly, but it is unclear how these nonlinearities contribute to the overall input-output transformation of single neurons. We developed statistically principled methods using a hierarchical cascade of linear-nonlinear subunits (hLN) to model the dynamically evolving somatic response of neurons receiving complex, in vivo-like spatiotemporal synaptic input patterns. We used the hLN to predict the somatic membrane potential of an in vivo-validated detailed biophysical model of a L2/3 pyramidal cell. Linear input integration with a single global dendritic nonlinearity achieved above 90% prediction accuracy. A novel hLN motif, input multiplexing into parallel processing channels, could improve predictions as much as conventionally used additional layers of local nonlinearities. We obtained similar results in two other cell types. This approach provides a data-driven characterization of a key component of cortical circuit computations: the input-output transformation of neurons during in vivo-like conditions.
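The hLN architecture described above is a cascade of linear-nonlinear subunits: each dendritic subunit linearly sums its synaptic inputs and passes the sum through a local nonlinearity, and the soma combines subunit outputs through a global nonlinearity. The two-branch toy below illustrates only that structure; the sigmoid nonlinearity, weights, and inputs are illustrative assumptions, not the fitted models from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def subunit(inputs, weights):
    """One hLN subunit: linear integration, then a local nonlinearity."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

def hln_output(branch_inputs, branch_weights, somatic_weights):
    """Hierarchical cascade: subunit outputs feed a global somatic
    nonlinearity that stands in for the predicted membrane potential."""
    sub_out = [subunit(x, w) for x, w in zip(branch_inputs, branch_weights)]
    return sigmoid(sum(w * s for w, s in zip(somatic_weights, sub_out)))

# Two branches with three synapses each (all values illustrative).
weights = [[0.8, 0.5, 0.3], [0.6, 0.9, 0.2]]
v_soma = hln_output([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
                    weights, somatic_weights=[1.5, 1.5])
v_more = hln_output([[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]],
                    weights, somatic_weights=[1.5, 1.5])
```

The paper's "multiplexing" motif routes the same inputs into parallel subunit channels with different nonlinearities; structurally it is the same cascade with duplicated branches, which is why a single global dendritic nonlinearity already captures most of the response in their fits.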
Affiliation(s)
- Balázs B Ujfalussy
- MRC Laboratory of Molecular Biology, Cambridge, UK; Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; MTA Wigner Research Center for Physics, Budapest, Hungary.
- Judit K Makara
- Laboratory of Neuronal Signaling, Institute of Experimental Medicine, Budapest, Hungary
- Máté Lengyel
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK; Department of Cognitive Science, Central European University, Budapest, Hungary
- Tiago Branco
- MRC Laboratory of Molecular Biology, Cambridge, UK; Sainsbury Wellcome Centre, University College London, London, UK
10. Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, Kedziora DJ, Chadderdon GL, Kerr CC, Neymotin SA, McDougal RA, Hines M, Shepherd GMG, Lytton WW. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife 2019; 8:e44494. PMID: 31025934; PMCID: PMC6534378; DOI: 10.7554/elife.44494.
Abstract
Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide specifications at a high level via a standardized declarative language; for example, connectivity rules that can create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis: connectivity matrices, voltage traces, spike raster plots, local field potentials, and information-theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience students and by modelers to investigate brain regions and phenomena.
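The separation of parameters from implementation code can be sketched generically: declarative population and connectivity rules live in plain data, and a separate routine expands them into explicit cell-to-cell connections. The dictionary keys and rule format below are invented for this illustration; they are not NetPyNE's actual specification language.

```python
import random

# Declarative parameters: populations and probabilistic connectivity
# rules as plain data, kept apart from the instantiation code.
net_params = {
    "pops": {"E": {"numCells": 80}, "I": {"numCells": 20}},
    "conn_rules": [
        {"pre": "E", "post": "I", "prob": 0.3, "weight": 0.5},
        {"pre": "I", "post": "E", "prob": 0.2, "weight": -1.0},
    ],
}

def instantiate(params, seed=0):
    """Expand declarative rules into an explicit connection list."""
    rng = random.Random(seed)
    gids, conns, next_gid = {}, [], 0
    for name, pop in params["pops"].items():
        gids[name] = list(range(next_gid, next_gid + pop["numCells"]))
        next_gid += pop["numCells"]
    for rule in params["conn_rules"]:
        for pre in gids[rule["pre"]]:
            for post in gids[rule["post"]]:
                if rng.random() < rule["prob"]:
                    conns.append((pre, post, rule["weight"]))
    return gids, conns

gids, conns = instantiate(net_params)
```

Because the rules are data, the same network description can be re-instantiated at different sizes, swept in batch runs, or exported to a standard format without touching simulation code, which is the design choice the abstract highlights.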
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Benjamin A Suter
- Department of Physiology, Northwestern University, Chicago, United States
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Facundo Rodriguez
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- MetaCell LLC, Boston, United States
- David J Kedziora
- Complex Systems Group, School of Physics, University of Sydney, Sydney, Australia
- George L Chadderdon
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Cliff C Kerr
- Complex Systems Group, School of Physics, University of Sydney, Sydney, Australia
- Samuel A Neymotin
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Nathan Kline Institute for Psychiatric Research, Orangeburg, United States
- Robert A McDougal
- Department of Neuroscience and School of Medicine, Yale University, New Haven, United States
- Center for Medical Informatics, Yale University, New Haven, United States
- Michael Hines
- Department of Neuroscience and School of Medicine, Yale University, New Haven, United States
- William W Lytton
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Department of Neurology, Kings County Hospital, Brooklyn, United States
11
Jȩdrzejewski-Szmek Z, Abrahao KP, Jȩdrzejewska-Szmek J, Lovinger DM, Blackwell KT. Parameter Optimization Using Covariance Matrix Adaptation-Evolutionary Strategy (CMA-ES), an Approach to Investigate Differences in Channel Properties Between Neuron Subtypes. Front Neuroinform 2018; 12:47. [PMID: 30108495 PMCID: PMC6079282 DOI: 10.3389/fninf.2018.00047] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Received: 03/01/2018] [Accepted: 07/06/2018] [Indexed: 11/25/2022]
Abstract
Computational models in neuroscience can be used to predict causal relationships between biological mechanisms in neurons and networks, such as the effect of blocking an ion channel or synaptic connection on neuron activity. Since developing a biophysically realistic, single neuron model is exceedingly difficult, software has been developed for automatically adjusting parameters of computational neuronal models. The ideal optimization software should work with commonly used neural simulation software; thus, we present software which works with models specified in declarative format for the MOOSE simulator. Experimental data can be specified using one of two different file formats. The fitness function is customizable as a weighted combination of feature differences. The optimization itself uses the covariance matrix adaptation-evolutionary strategy, because it is robust in the face of local fluctuations of the fitness function, and deals well with a high-dimensional and discontinuous fitness landscape. We demonstrate the versatility of the software by creating several model examples of each of four types of neurons (two subtypes of spiny projection neurons and two subtypes of globus pallidus neurons) by tuning to current clamp data. Optimizations reached convergence within 1,600-4,000 model evaluations (200-500 generations × population size of 8). Analysis of the parameters of the best fitting models revealed differences between neuron subtypes, which are consistent with prior experimental results. Overall our results suggest that this easy-to-use, automatic approach for finding neuron channel parameters may be applied to current clamp recordings from neurons exhibiting different biochemical markers to help characterize ionic differences between other neuron subtypes.
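The "weighted combination of feature differences" fitness this abstract describes can be sketched in a few lines. The feature definitions, traces, and weights below are illustrative stand-ins, not the package's actual configuration format.

```python
def features(trace, threshold=0.0):
    """Extract simple features from a voltage trace (a list of mV samples)."""
    # Count upward threshold crossings as action potentials.
    spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < threshold <= b)
    return {"spike_count": spikes, "mean_v": sum(trace) / len(trace)}

def weighted_fitness(model_trace, target_trace, weights):
    """Weighted sum of absolute feature differences; lower is better."""
    fm, ft = features(model_trace), features(target_trace)
    return sum(w * abs(fm[k] - ft[k]) for k, w in weights.items())

target = [-65, -64, 5, -65, -64, 5, -65]    # toy trace with two spikes
model = [-65, -64, 5, -65, -64, -63, -65]   # toy trace with one spike
err = weighted_fitness(model, target, {"spike_count": 1.0, "mean_v": 0.1})
```

An optimizer such as CMA-ES then minimizes `err` over the model's free parameters; the weights set how strongly each feature steers the search.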
Affiliation(s)
- Karina P. Abrahao
- Laboratory for Integrative Neuroscience, Section on Synaptic Pharmacology, National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Rockville, MD, United States
- David M. Lovinger
- Laboratory for Integrative Neuroscience, Section on Synaptic Pharmacology, National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Rockville, MD, United States
- Kim T. Blackwell
- Krasnow Institute of Advanced Study, George Mason University, Fairfax, VA, United States
- Department of Bioengineering, Volgenau School of Engineering, George Mason University, Fairfax, VA, United States
12
Antolík J, Davison AP. Arkheia: Data Management and Communication for Open Computational Neuroscience. Front Neuroinform 2018; 12:6. [PMID: 29556187 PMCID: PMC5845131 DOI: 10.3389/fninf.2018.00006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Received: 10/24/2017] [Accepted: 02/14/2018] [Indexed: 11/13/2022]
Abstract
Two trends have been unfolding in computational neuroscience during the last decade. First, a shift of focus to increasingly complex and heterogeneous neural network models, with a concomitant increase in the level of collaboration within the field (whether direct or in the form of building on top of existing tools and results). Second, a general trend in science toward more open communication, both internally, with other potential scientific collaborators, and externally, with the wider public. This multi-faceted development toward more integrative approaches and more intense communication within and outside of the field poses major new challenges for modelers, as currently there is a severe lack of tools to help with automatic communication and sharing of all aspects of a simulation workflow to the rest of the community. To address this important gap in the current computational modeling software infrastructure, here we introduce Arkheia. Arkheia is a web-based open science platform for computational models in systems neuroscience. It provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and interactive exploration of parameter searches, in a web browser-based application. Arkheia is focused on automatic presentation of these resources with minimal manual input from users. Arkheia is written in a modular fashion with a focus on future development of the platform. The platform is designed in an open manner, with a clearly defined and separated API for database access, so that any project can write its own backend translating its data into the Arkheia database format. Arkheia is not a centralized platform, but allows any user (or group of users) to set up their own repository, either for public access by the general population, or locally for internal use. 
Overall, Arkheia provides users with an automatic means to communicate information about not only their models but also individual simulation results and the entire experimental context in an approachable graphical manner, thus facilitating the user's ability to collaborate in the field and outreach to a wider audience.
Affiliation(s)
- Ján Antolík
- Institut National de la Santé et de la Recherche Médicale UMR S 968; Sorbonne Universités, UPMC Univ Paris 06, UMR S 968; Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France
- Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique UPR 3293, Gif-sur-Yvette, France
- Andrew P Davison
- Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique UPR 3293, Gif-sur-Yvette, France
13
Gouwens NW, Berg J, Feng D, Sorensen SA, Zeng H, Hawrylycz MJ, Koch C, Arkhipov A. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat Commun 2018; 9:710. [PMID: 29459718 PMCID: PMC5818534 DOI: 10.1038/s41467-017-02718-3] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Received: 01/26/2017] [Accepted: 12/20/2017] [Indexed: 01/17/2023]
Abstract
The cellular components of mammalian neocortical circuits are diverse, and capturing this diversity in computational models is challenging. Here we report an approach for generating biophysically detailed models of 170 individual neurons in the Allen Cell Types Database to link the systematic experimental characterization of cell types to the construction of cortical models. We build models from 3D morphologies and somatic electrophysiological responses measured in the same cells. Densities of active somatic conductances and additional parameters are optimized with a genetic algorithm to match electrophysiological features. We evaluate the models by applying additional stimuli and comparing model responses to experimental data. Applying this technique across a diverse set of neurons from adult mouse primary visual cortex, we verify that models preserve the distinctiveness of intrinsic properties between subsets of cells observed in experiments. The optimized models are accessible online alongside the experimental data. Code for optimization and simulation is also openly distributed.
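The optimization loop this abstract describes, a genetic algorithm adjusting conductance densities until model features match measured ones, can be illustrated with a toy stand-in. Here a closed-form function replaces the biophysical simulation, and the features, targets, and GA settings are all hypothetical.

```python
import random

random.seed(0)  # deterministic toy run

TARGET = (4.0, 120.0)  # hypothetical target features (e.g. rate, AP amplitude)

def model_features(g_na, g_k):
    # Stand-in for running a simulation and extracting features; a real
    # workflow would simulate the neuron model at these conductances.
    return (3.0 * g_na - 2.0 * g_k, 100.0 * g_na / (1.0 + g_k))

def error(params):
    feats = model_features(*params)
    return sum((f - t) ** 2 for f, t in zip(feats, TARGET))

def evolve(pop_size=30, generations=150, sigma=0.1):
    """Minimal elitist GA: truncation selection plus Gaussian mutation."""
    pop = [(random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        parents = pop[: pop_size // 2]  # keep the better half
        children = [
            tuple(x + random.gauss(0.0, sigma) for x in random.choice(parents))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children        # elitist: parents survive unchanged
    return min(pop, key=error)

best = evolve()  # should approach the analytic optimum (1.5, 0.25)
```

The real pipeline differs mainly in scale: each evaluation is a full NEURON-style simulation, and the feature set covers many electrophysiological measures rather than two.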
Affiliation(s)
- Nathan W Gouwens
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Jim Berg
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- David Feng
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Staci A Sorensen
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Hongkui Zeng
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Michael J Hawrylycz
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Christof Koch
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Anton Arkhipov
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
14
Rumbell TH, Draguljić D, Yadav A, Hof PR, Luebke JI, Weaver CM. Automated evolutionary optimization of ion channel conductances and kinetics in models of young and aged rhesus monkey pyramidal neurons. J Comput Neurosci 2016; 41:65-90. [PMID: 27106692 DOI: 10.1007/s10827-016-0605-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Received: 09/04/2015] [Revised: 03/09/2016] [Accepted: 04/05/2016] [Indexed: 02/03/2023]
Abstract
Conductance-based compartment modeling requires tuning of many parameters to fit the neuron model to target electrophysiological data. Automated parameter optimization via evolutionary algorithms (EAs) is a common approach to accomplish this task, using error functions to quantify differences between model and target. We present a three-stage EA optimization protocol for tuning ion channel conductances and kinetics in a generic neuron model with minimal manual intervention. We use the technique of Latin hypercube sampling in a new way, to choose weights for error functions automatically so that each function influences the parameter search to a similar degree. This protocol requires no specialized physiological data collection and is applicable to commonly-collected current clamp data and either single- or multi-objective optimization. We applied the protocol to two representative pyramidal neurons from layer 3 of the prefrontal cortex of rhesus monkeys, in which action potential firing rates are significantly higher in aged compared to young animals. Using an idealized dendritic topology and models with either 4 or 8 ion channels (10 or 23 free parameters respectively), we produced populations of parameter combinations fitting the target datasets in less than 80 hours of optimization each. Passive parameter differences between young and aged models were consistent with our prior results using simpler models and hand tuning. We analyzed parameter values among fits to a single neuron to facilitate refinement of the underlying model, and across fits to multiple neurons to show how our protocol will lead to predictions of parameter differences with aging in these neurons.
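The weight-balancing idea this abstract describes, using Latin hypercube sampling so that every error function influences the search to a similar degree, can be sketched as follows: draw an LHS of parameter space, evaluate each error function on it, and set each weight to the reciprocal of that function's spread. The error functions and bounds below are toy stand-ins, not the paper's actual objectives.

```python
import random

random.seed(1)  # deterministic toy run

def latin_hypercube(n_samples, bounds):
    """One stratified draw per interval per dimension, shuffled independently."""
    dims = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (i + random.random()) / n_samples
               for i in range(n_samples)]
        random.shuffle(pts)
        dims.append(pts)
    return list(zip(*dims))

# Toy error functions with very different natural scales.
errors = {
    "spike_rate": lambda p: abs(p[0] - 2.0),          # O(1)
    "ap_height": lambda p: 100.0 * abs(p[1] - 0.5),   # O(100)
}

samples = latin_hypercube(50, [(0.0, 5.0), (0.0, 1.0)])
weights = {}
for name, fn in errors.items():
    vals = [fn(s) for s in samples]
    spread = max(vals) - min(vals)
    # Reciprocal of the spread: large-magnitude functions get small weights.
    weights[name] = 1.0 / spread if spread > 0 else 1.0

def total_error(p):
    return sum(w * errors[name](p) for name, w in weights.items())
```

By construction each weighted error spans roughly the same range over the sampled region, so no single objective dominates the evolutionary search.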
Affiliation(s)
- Timothy H Rumbell
- Fishberg Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Computational Biology Center, IBM Research, Thomas J. Watson Research Center, Yorktown Heights, NY, 10598, USA
- Danel Draguljić
- Department of Mathematics, Franklin and Marshall College, Lancaster, PA, 17604, USA
- Aniruddha Yadav
- Fishberg Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Gauge Data Solutions Pvt Ltd, Noida, India
- Patrick R Hof
- Fishberg Department of Neuroscience and Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Jennifer I Luebke
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, 02118, USA
- Christina M Weaver
- Department of Mathematics, Franklin and Marshall College, Lancaster, PA, 17604, USA
15
Van Geit W, Gevaert M, Chindemi G, Rössert C, Courcol JD, Muller EB, Schürmann F, Segev I, Markram H. BluePyOpt: Leveraging Open Source Software and Cloud Infrastructure to Optimise Model Parameters in Neuroscience. Front Neuroinform 2016; 10:17. [PMID: 27375471 PMCID: PMC4896051 DOI: 10.3389/fninf.2016.00017] [Citation(s) in RCA: 85] [Impact Index Per Article: 10.6] [Received: 03/01/2016] [Accepted: 05/06/2016] [Indexed: 11/13/2022]
Abstract
At many scales in neuroscience, appropriate mathematical models take the form of complex dynamical systems. Parameterizing such models to conform to the multitude of available experimental constraints is a global non-linear optimisation problem with a complex fitness landscape, requiring numerical techniques to find suitable approximate solutions. Stochastic optimisation approaches, such as evolutionary algorithms, have been shown to be effective, but often the setting up of such optimisations and the choice of a specific search algorithm and its parameters is non-trivial, requiring domain-specific expertise. Here we describe BluePyOpt, a Python package targeted at the broad neuroscience community to simplify this task. BluePyOpt is an extensible framework for data-driven model parameter optimisation that wraps and standardizes several existing open-source tools. It simplifies the task of creating and sharing these optimisations, and the associated techniques and knowledge. This is achieved by abstracting the optimisation and evaluation tasks into various reusable and flexible discrete elements according to established best-practices. Further, BluePyOpt provides methods for setting up both small- and large-scale optimisations on a variety of platforms, ranging from laptops to Linux clusters and cloud-based compute infrastructures. The versatility of the BluePyOpt framework is demonstrated by working through three representative neuroscience specific use cases.
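The abstraction this abstract describes, decomposing an optimisation into reusable elements such as parameters, protocols, and objectives wired into one evaluator, can be sketched in plain Python. The class names and the toy protocol below are illustrative, not BluePyOpt's actual API (see its documentation for the real classes).

```python
class Parameter:
    """A named free parameter with bounds for the optimiser."""
    def __init__(self, name, bounds):
        self.name, self.bounds = name, bounds

class Objective:
    """Scores one feature of a simulated response against a target value."""
    def __init__(self, feature, target):
        self.feature, self.target = feature, target
    def score(self, response):
        return abs(response[self.feature] - self.target)

class Evaluator:
    """Runs the protocol on a candidate parameter set and sums objective scores."""
    def __init__(self, params, protocol, objectives):
        self.params, self.protocol, self.objectives = params, protocol, objectives
    def evaluate(self, values):
        named = dict(zip([p.name for p in self.params], values))
        response = self.protocol(named)
        return sum(obj.score(response) for obj in self.objectives)

# Toy protocol standing in for a NEURON simulation run.
def step_protocol(param_values):
    return {"spike_count": 10.0 * param_values["gnabar"]}

ev = Evaluator(
    params=[Parameter("gnabar", (0.0, 1.0))],
    protocol=step_protocol,
    objectives=[Objective("spike_count", 4.0)],
)
```

Because each element is a discrete object, the same evaluator can be handed to any search algorithm, which is what lets such frameworks swap optimisers and scale from laptops to clusters.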
Affiliation(s)
- Werner Van Geit
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Michael Gevaert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Giuseppe Chindemi
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Christian Rössert
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Jean-Denis Courcol
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Eilif B Muller
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Felix Schürmann
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Idan Segev
- Department of Neurobiology, Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- The Edmond and Lily Safra Centre for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Henry Markram
- Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Laboratory of Neural Microcircuitry, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland