51. Harkin EF, Shen PR, Goel A, Richards BA, Naud R. Parallel and Recurrent Cascade Models as a Unifying Force for Understanding Sub-cellular Computation. Neuroscience 2021; 489:200-215. PMID: 34358629. DOI: 10.1016/j.neuroscience.2021.07.026.
Abstract
Neurons are very complicated computational devices, incorporating numerous non-linear processes, particularly in their dendrites. Biophysical models capture these processes directly by explicitly modelling physiological variables, such as ion channels, current flow, membrane capacitance, etc. However, another option for capturing the complexities of real neural computation is to use cascade models, which treat individual neurons as a cascade of linear and non-linear operations, akin to a multi-layer artificial neural network. Recent research has shown that cascade models can capture single-cell computation well, but there are still a number of sub-cellular, regenerative dendritic phenomena that they cannot capture, such as the interaction between sodium, calcium, and NMDA spikes in different compartments. Here, we propose that it is possible to capture these additional phenomena using parallel, recurrent cascade models, wherein an individual neuron is modelled as a cascade of parallel linear and non-linear operations that can be connected recurrently, akin to a multi-layer, recurrent, artificial neural network. Given their tractable mathematical structure, we show that neuron models expressed in terms of parallel recurrent cascades can themselves be integrated into multi-layered artificial neural networks and trained to perform complex tasks. We go on to discuss potential implications and uses of these models for artificial intelligence. Overall, we argue that parallel, recurrent cascade models provide an important, unifying tool for capturing single-cell computation and exploring the algorithmic implications of physiological phenomena.
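For intuition, the parallel-recurrent-cascade idea can be caricatured in a few lines of code: a toy unit with two parallel linear-nonlinear subunits, one feeding the other recurrently, summed into a somatic readout. This is an illustrative sketch, not the authors' model; the weights and the tanh nonlinearity are arbitrary choices.

```python
import numpy as np

def prc_neuron(x, w_ff=(1.0, 0.5), w_rec=0.8):
    """Toy parallel, recurrent cascade (PRC) unit: two parallel
    linear-nonlinear subunits whose states are recurrently coupled,
    then summed into a somatic output."""
    T = len(x)
    h1 = np.zeros(T)  # subunit 1 state (e.g. one 'dendritic' pathway)
    h2 = np.zeros(T)  # subunit 2 state
    y = np.zeros(T)
    for t in range(1, T):
        # each subunit: leaky linear filtering of input (plus recurrent
        # drive for subunit 2), followed by a static tanh nonlinearity
        h1[t] = np.tanh(0.9 * h1[t-1] + w_ff[0] * x[t])
        h2[t] = np.tanh(0.9 * h2[t-1] + w_ff[1] * x[t] + w_rec * h1[t-1])
        y[t] = h1[t] + h2[t]  # somatic sum of the parallel pathways
    return y

y = prc_neuron(np.ones(50))
```

Because each subunit is differentiable, such units can be dropped into a larger network and trained end-to-end, which is the integration the abstract describes.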
Affiliation(s)
- Emerson F Harkin: uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
- Peter R Shen: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Anish Goel: Lisgar Collegiate Institute, Ottawa, ON, Canada
- Blake A Richards: Mila, Montréal, QC, Canada; Montreal Neurological Institute, Montréal, QC, Canada; Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada; School of Computer Science, McGill University, Montréal, QC, Canada
- Richard Naud: uOttawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada; Department of Physics, University of Ottawa, Ottawa, ON, Canada
52. Pulvermüller F, Tomasello R, Henningsen-Schomers MR, Wennekers T. Biological constraints on neural network models of cognitive function. Nat Rev Neurosci 2021; 22:488-502. PMID: 34183826. PMCID: PMC7612527. DOI: 10.1038/s41583-021-00473-5.
Abstract
Neural network models are potential tools for improving our understanding of complex brain functions. To address this goal, these models need to be neurobiologically realistic. However, although neural networks have advanced dramatically in recent years and even achieve human-like performance on complex perceptual and cognitive tasks, their similarity to aspects of brain anatomy and physiology is imperfect. Here, we discuss different types of neural models, including localist, auto-associative, hetero-associative, deep and whole-brain networks, and identify aspects under which their biological plausibility can be improved. These aspects range from the choice of model neurons and of mechanisms of synaptic plasticity and learning to implementation of inhibition and control, along with neuroanatomical properties including areal structure and local and long-range connectivity. We highlight recent advances in developing biologically grounded cognitive theories and in mechanistically explaining, on the basis of these brain-constrained neural models, hitherto unaddressed issues regarding the nature, localization and ontogenetic and phylogenetic development of higher brain functions. In closing, we point to possible future clinical applications of brain-constrained modelling.
Affiliation(s)
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany
- Rosario Tomasello: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany
- Malte R Henningsen-Schomers: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Cluster of Excellence 'Matters of Activity', Humboldt-Universität zu Berlin, Berlin, Germany
- Thomas Wennekers: School of Engineering, Computing and Mathematics, University of Plymouth, Plymouth, UK
53. Ramlow L, Lindner B. Interspike interval correlations in neuron models with adaptation and correlated noise. PLoS Comput Biol 2021; 17:e1009261. PMID: 34449771. PMCID: PMC8428727. DOI: 10.1371/journal.pcbi.1009261.
Abstract
The generation of neural action potentials (spikes) is random but nevertheless may result in a rich statistical structure of the spike sequence. In particular, contrary to the popular renewal assumption of theoreticians, the intervals between adjacent spikes are often correlated. Experimentally, different patterns of interspike-interval correlations have been observed, and computational studies have identified spike-frequency adaptation and correlated noise as the two main mechanisms that can lead to such correlations. Analytical studies have focused on the separate cases of either correlated (colored) noise or adaptation currents in combination with uncorrelated (white) noise. For low-pass filtered noise or adaptation, the serial correlation coefficient can be approximated as a single geometric sequence of the lag between the intervals, providing an explanation for some of the experimentally observed patterns. Here we address the problem of interval correlations for a widely used class of models, multidimensional integrate-and-fire neurons subject to a combination of colored and white noise sources and a spike-triggered adaptation current. Assuming weak noise, we derive a simple formula for the serial correlation coefficient, a sum of two geometric sequences, which accounts for a large class of correlation patterns. The theory is confirmed by means of numerical simulations in a number of special cases including the leaky, quadratic, and generalized integrate-and-fire models with colored noise and spike-frequency adaptation. Furthermore, we study the case in which the adaptation current and the colored noise share the same time scale, corresponding to a slow stochastic population of adaptation channels; we demonstrate that our theory can account for a nonmonotonic dependence of the correlation coefficient on the channel's time scale. Another application of the theory is a neuron driven by network-noise-like fluctuations (green noise). We also discuss the range of validity of our weak-noise theory and show that by changing the relative strength of white and colored noise sources, we can change the sign of the correlation coefficient. Finally, we apply our theory to a conductance-based model, which demonstrates its broad applicability.
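As a numerical illustration of the adaptation mechanism discussed in this abstract (a sketch with arbitrary parameters, not the paper's analytical formula), one can simulate a leaky integrate-and-fire neuron with a spike-triggered adaptation current plus white noise and estimate the lag-1 serial correlation coefficient of the interspike intervals, which comes out negative, the signature pattern of adaptation:

```python
import numpy as np

def lif_adapt_isis(mu=2.0, delta=0.5, tau_a=10.0, sigma=0.2,
                   dt=0.01, n_steps=500_000, seed=0):
    """Leaky integrate-and-fire neuron (threshold 1, reset 0) with a
    spike-triggered adaptation current a and white noise; returns the
    sequence of interspike intervals."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_steps)
    sq = sigma * np.sqrt(dt)
    v, a, t_last = 0.0, 0.0, 0.0
    isis = []
    for i in range(n_steps):
        v += (mu - v - a) * dt + sq * noise[i]
        a += -a / tau_a * dt
        if v >= 1.0:              # threshold crossing -> spike
            t = i * dt
            isis.append(t - t_last)
            t_last = t
            v = 0.0               # voltage reset
            a += delta            # spike-triggered adaptation kick
    return np.array(isis[1:])     # drop the first (transient) interval

def serial_corr(isis, lag=1):
    return np.corrcoef(isis[:-lag], isis[lag:])[0, 1]

isis = lif_adapt_isis()
rho1 = serial_corr(isis)  # adaptation makes adjacent ISIs anticorrelated
```

Replacing the white-noise term with a colored (e.g. Ornstein-Uhlenbeck) process is the other ingredient of the models treated in the paper, and flips the typical sign of the correlation.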
Affiliation(s)
- Lukas Ramlow: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
54. Rodríguez-Collado A, Rueda C. Electrophysiological and Transcriptomic Features Reveal a Circular Taxonomy of Cortical Neurons. Front Hum Neurosci 2021; 15:684950. PMID: 34381341. PMCID: PMC8350032. DOI: 10.3389/fnhum.2021.684950.
Abstract
The complete understanding of the mammalian brain requires exact knowledge of the function of each neuron subpopulation composing its parts. To achieve this goal, an exhaustive, precise, reproducible, and robust neuronal taxonomy should be defined. In this paper, a new circular taxonomy based on transcriptomic features and novel electrophysiological features is proposed. The approach is validated by analysing more than 1850 electrophysiological signals of different mouse visual cortex neurons drawn from the Allen Cell Types database. The study is conducted on two levels: individual neurons and their cell-type aggregation into Cre lines. At the neuronal level, electrophysiological features are extracted with a model that has already proved its worth in describing neuronal dynamics. At the Cre line level, electrophysiological and transcriptomic features are combined for cell types with available genetic information. A taxonomy with a circular order is revealed by a simple transformation of the first two principal components, which allows the characterization of the different Cre lines. Moreover, the proposed methodology locates within the taxonomy other Cre lines for which transcriptomic features are not available. Finally, the taxonomy is validated with machine learning methods, which are able to discriminate the different neuron types using the proposed electrophysiological features.
55. Salaj D, Subramoney A, Kraisnikovic C, Bellec G, Legenstein R, Maass W. Spike frequency adaptation supports network computations on temporally dispersed information. eLife 2021; 10:e65459. PMID: 34310281. PMCID: PMC8313230. DOI: 10.7554/elife.65459.
Abstract
For solving tasks such as recognizing a song, answering a question, or inverting a sequence of symbols, cortical microcircuits need to integrate and manipulate information that was dispersed over time during the preceding seconds. Creating biologically realistic models for the underlying computations, especially with spiking neurons and for behaviorally relevant integration time spans, is notoriously difficult. We examine the role of spike frequency adaptation in such computations and find that it has a surprisingly large impact. The inclusion of this well-known property of a substantial fraction of neurons in the neocortex - especially in higher areas of the human neocortex - moves the performance of spiking neural network models for computations on network inputs that are temporally dispersed from a fairly low level up to the performance level of the human brain.
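The mechanism at the core of the paper, spike frequency adaptation implemented as a slowly decaying firing-threshold variable (the adaptive LIF, or "ALIF", neuron used in this line of work), can be sketched as below; the specific constants are illustrative rather than the paper's values.

```python
import numpy as np

def run_alif(n_steps=1000, x=1.5, dt=1.0, tau_m=20.0, tau_b=700.0,
             v_th=1.0, beta=0.5):
    """Adaptive LIF: after each spike the effective threshold v_th + beta*b
    is raised, then decays back with a slow time constant tau_b, so recent
    activity suppresses firing over hundreds of time steps. Returns the
    spike count under constant input x."""
    v = b = 0.0
    spikes = 0
    decay = np.exp(-dt / tau_b)
    for _ in range(n_steps):
        v += dt / tau_m * (-v + x)   # leaky integration of the input
        b *= decay                    # slow decay of the adaptation variable
        if v >= v_th + beta * b:
            spikes += 1
            v -= v_th                 # soft reset of the membrane potential
            b += 1.0                  # each spike raises the threshold
    return spikes

n_adapt = run_alif(beta=0.5)  # with spike frequency adaptation
n_plain = run_alif(beta=0.0)  # ordinary LIF for comparison
```

The slow variable b is what carries information across seconds: its current value reflects the neuron's recent firing history, which is exactly the "temporally dispersed" memory the networks in the paper exploit.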
Affiliation(s)
- Darjan Salaj: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Anand Subramoney: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Ceca Kraisnikovic: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Guillaume Bellec: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria; Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Robert Legenstein: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Wolfgang Maass: Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
56. Rodríguez-Collado A, Rueda C. A simple parametric representation of the Hodgkin-Huxley model. PLoS One 2021; 16:e0254152. PMID: 34292948. PMCID: PMC8297874. DOI: 10.1371/journal.pone.0254152.
Abstract
The Hodgkin-Huxley model, decades after its first presentation, is still a reference model in neuroscience, as it has successfully reproduced the electrophysiological activity of many organisms. The primary signal in the model represents the membrane potential of a neuron. A simple representation of this signal is presented in this paper. The new proposal is an adapted Frequency Modulated Möbius multicomponent model defined as a signal-plus-error model in which the signal is decomposed as a sum of waves. The main strengths of the method are the simple parametric formulation, the interpretability and flexibility of the parameters that describe and discriminate the waveforms, the estimators' identifiability and accuracy, and the robustness against noise. The approach is validated with a broad simulation experiment of Hodgkin-Huxley signals and real data from squid giant axons. Interesting differences between simulated and real data emerge from the comparison of the parameter configurations. Furthermore, the potential of the FMM parameters to predict Hodgkin-Huxley model parameters is shown using different machine learning methods. Finally, promising contributions of the approach to spike sorting and cell-type classification are detailed.
Affiliation(s)
- Cristina Rueda: Department of Statistics and Operations Research, Universidad de Valladolid, Valladolid, Spain
57. Pietras B, Gallice N, Schwalger T. Low-dimensional firing-rate dynamics for populations of renewal-type spiking neurons. Phys Rev E 2020; 102:022407. PMID: 32942450. DOI: 10.1103/physreve.102.022407.
Abstract
The macroscopic dynamics of large populations of neurons can be mathematically analyzed using low-dimensional firing-rate or neural-mass models. However, these models fail to capture spike synchronization effects and nonstationary responses of the population activity to rapidly changing stimuli. Here we derive low-dimensional firing-rate models for homogeneous populations of neurons modeled as time-dependent renewal processes. The class of renewal neurons includes integrate-and-fire models driven by white noise and has been frequently used to model neuronal refractoriness and spike synchronization dynamics. The derivation is based on an eigenmode expansion of the associated refractory density equation, which generalizes previous spectral methods for Fokker-Planck equations to arbitrary renewal models. We find a simple relation between the eigenvalues characterizing the timescales of the firing rate dynamics and the Laplace transform of the interspike interval density, for which explicit expressions are available for many renewal models. Retaining only the first eigenmode already yields a reliable low-dimensional approximation of the firing-rate dynamics that captures spike synchronization effects and fast transient dynamics at stimulus onset. We explicitly demonstrate the validity of our model for a large homogeneous population of Poisson neurons with absolute refractoriness and other renewal models that admit an explicit analytical calculation of the eigenvalues. The eigenmode expansion presented here provides a systematic framework for alternative firing-rate models in computational neuroscience based on spiking neuron dynamics with refractoriness.
Affiliation(s)
- Bastian Pietras: Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Noé Gallice: Brain Mind Institute, École polytechnique fédérale de Lausanne (EPFL), Station 15, CH-1015 Lausanne, Switzerland
- Tilo Schwalger: Institute of Mathematics, Technical University Berlin, 10623 Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
58. Bogdan PA, Marcinnò B, Casellato C, Casali S, Rowley AGD, Hopkins M, Leporati F, D'Angelo E, Rhodes O. Towards a Bio-Inspired Real-Time Neuromorphic Cerebellum. Front Cell Neurosci 2021; 15:622870. PMID: 34135732. PMCID: PMC8202688. DOI: 10.3389/fncel.2021.622870.
Abstract
This work presents the first simulation of a large-scale, bio-physically constrained cerebellum model performed on neuromorphic hardware. A model containing 97,000 neurons and 4.2 million synapses is simulated on the SpiNNaker neuromorphic system. Results are validated against a baseline simulation of the same model executed with NEST, a popular spiking neural network simulator using generic computational resources and double-precision floating-point arithmetic. Individual cell and network-level spiking activity is validated in terms of average spike rates, relative lead or lag of spike times, and membrane potential dynamics of individual neurons, and SpiNNaker is shown to produce results in agreement with NEST. Once validated, the model is used to investigate how to accelerate the simulation speed of the network on the SpiNNaker system, with the future goal of creating a real-time neuromorphic cerebellum. Through detailed communication profiling, peak network activity is identified as one of the main challenges for simulation speed-up. Propagation of spiking activity through the network is measured, and will inform the future development of accelerated execution strategies for cerebellum models on neuromorphic hardware. The large ratio of granule cells to other cell types in the model results in high levels of activity converging onto few cells, with those cells having relatively larger time costs associated with the processing of communication. Organizing cells on SpiNNaker in accordance with their spatial position is shown to reduce the peak communication load by 41%. It is hoped that these insights, together with alternative parallelization strategies, will pave the way for real-time execution of large-scale, bio-physically constrained cerebellum models on SpiNNaker. This in turn will enable exploration of cerebellum-inspired controllers for neurorobotic applications, and execution of extended-duration simulations over timescales that would currently be prohibitive using conventional computational platforms.
Affiliation(s)
- Petruţ A Bogdan: Department of Computer Science, The University of Manchester, Manchester, United Kingdom
- Beatrice Marcinnò: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Claudia Casellato: Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Stefano Casali: Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Andrew G D Rowley: Department of Computer Science, The University of Manchester, Manchester, United Kingdom
- Michael Hopkins: Department of Computer Science, The University of Manchester, Manchester, United Kingdom
- Francesco Leporati: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Egidio D'Angelo: Neurophysiology Unit, Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Oliver Rhodes: Department of Computer Science, The University of Manchester, Manchester, United Kingdom
59. Tikidji-Hamburyan RA, Colonnese MT. Polynomial, piecewise-Linear, Step (PLS): A Simple, Scalable, and Efficient Framework for Modeling Neurons. Front Neuroinform 2021; 15:642933. PMID: 34025382. PMCID: PMC8134741. DOI: 10.3389/fninf.2021.642933.
Abstract
Biological neurons can be modeled with different levels of biophysical/biochemical detail. The accuracy with which a model reflects the actual physiological processes, and ultimately the information-processing function of a neuron, can range from very detailed to a schematic phenomenological representation. This range exists because of a common trade-off: the level of detail needed to capture the relevant information processing in a neuron must be balanced against the computational load needed to compute 1 s of model time. An increase in modeled network size, or in the model time over which the solution must be obtained, makes this trade-off pivotal in model development. Numerical simulations become extremely challenging when an extensive network with a detailed representation of each neuron needs to be modeled over a long time interval to study slowly evolving processes, e.g., the development of thalamocortical circuits. Here we suggest a simple, powerful, and flexible approach in which we approximate the right-hand sides of differential equations by combinations of functions from three families: Polynomial, piecewise-Linear, Step (PLS). To obtain a single coherent framework, we provide four core principles by which PLS functions should be combined, and we show the rationale behind each. Two examples illustrate how to build a conductance-based or phenomenological model using the PLS framework. We use the first example as a benchmark on three different computational platforms: CPU, GPU, and mobile system-on-chip devices. We show that the PLS framework speeds up computations without increasing the memory footprint and maintains high model fidelity, comparable to the fully computed model or to a lookup-table approximation. We are convinced that the full range of neuron models, from biophysical to phenomenological and even abstract, may benefit from using the PLS framework.
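To make the idea concrete: the simplest PLS-style approximation replaces a smooth right-hand-side term, such as a sigmoidal steady-state activation curve, with a step-bounded linear ramp. This toy example is mine, not one of the paper's two worked models; the cutoffs are arbitrary.

```python
import numpy as np

def sigmoid(v):
    """Smooth sigmoidal activation, the kind of term that appears in the
    right-hand side of conductance-based neuron models."""
    return 1.0 / (1.0 + np.exp(-v))

def pls_sigmoid(v, v_lo=-4.0, v_hi=4.0):
    """Piecewise-Linear/Step stand-in for the sigmoid: step to 0 below
    v_lo, step to 1 above v_hi, linear ramp in between. This is the flavor
    of approximation the PLS framework formalizes, and it needs no
    exponentials, so it is cheap on CPU, GPU, and embedded hardware."""
    return np.clip((v - v_lo) / (v_hi - v_lo), 0.0, 1.0)

v = np.linspace(-10, 10, 2001)
err = np.max(np.abs(sigmoid(v) - pls_sigmoid(v)))  # worst-case deviation
```

The trade-off is exactly the one the abstract describes: a small, controlled loss of fidelity (here a worst-case error of roughly 0.13) in exchange for a right-hand side built entirely from cheap operations.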
Affiliation(s)
- Matthew T Colonnese: School of Medicine and Health Sciences, George Washington University, Washington, DC, United States
60. Ordonez AA, Bullen CK, Villabona-Rueda AF, Thompson EA, Turner ML, Davis SL, Komm O, Powell JD, D'Alessio FR, Yolken RH, Jain SK, Jones-Brando L. Sulforaphane exhibits in vitro and in vivo antiviral activity against pandemic SARS-CoV-2 and seasonal HCoV-OC43 coronaviruses. bioRxiv 2021 (preprint). PMID: 33791708. DOI: 10.1101/2021.03.25.437060.
Abstract
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the cause of coronavirus disease 2019 (COVID-19), has incited a global health crisis. Currently, there are no orally available medications for prophylaxis for those exposed to SARS-CoV-2 and limited therapeutic options for those who develop COVID-19. We evaluated the antiviral activity of sulforaphane (SFN), a naturally occurring, orally available, well-tolerated, nutritional supplement present in high concentrations in cruciferous vegetables with limited side effects. SFN inhibited in vitro replication of four strains of SARS-CoV-2 as well as that of the seasonal coronavirus HCoV-OC43. Further, SFN and remdesivir interacted synergistically to inhibit coronavirus infection in vitro. Prophylactic administration of SFN to K18-hACE2 mice prior to intranasal SARS-CoV-2 infection significantly decreased the viral load in the lungs and upper respiratory tract and reduced lung injury and pulmonary pathology compared to untreated infected mice. SFN treatment diminished immune cell activation in the lungs, including significantly lower recruitment of myeloid cells and a reduction in T cell activation and cytokine production. Our results suggest that SFN is a promising treatment for prevention of coronavirus infection or treatment of early disease.
61. Rossbroich J, Trotter D, Beninger J, Tóth K, Naud R. Linear-nonlinear cascades capture synaptic dynamics. PLoS Comput Biol 2021; 17:e1008013. PMID: 33720935. PMCID: PMC7993773. DOI: 10.1371/journal.pcbi.1008013.
Abstract
Short-term synaptic dynamics differ markedly across connections and strongly regulate how action potentials communicate information. To model the range of synaptic dynamics observed in experiments, we have developed a flexible mathematical framework based on a linear-nonlinear operation. This model can capture various experimentally observed features of synaptic dynamics and different types of heteroskedasticity. Despite its conceptual simplicity, we show that it is more adaptable than previous models. Combined with a standard maximum likelihood approach, synaptic dynamics can be accurately and efficiently characterized using naturalistic stimulation patterns. These results make explicit that synaptic processing bears algorithmic similarities with information processing in convolutional neural networks.
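The flavor of the model can be sketched in a few lines: each presynaptic spike's efficacy is a static sigmoid applied to a linearly filtered history of the preceding spikes. The exponential kernel and all constants here are illustrative stand-ins, not the fitted values from the paper.

```python
import numpy as np

def ln_synapse_efficacies(spike_times, b=-1.0, a=2.0, tau=100.0):
    """Linear-nonlinear sketch of short-term synaptic dynamics: the
    efficacy of each spike is sigmoid(b + a * trace), where 'trace' is an
    exponentially decaying filter of the earlier spikes. Positive a gives
    facilitation; negative a gives depression."""
    eff = []
    trace = 0.0
    t_prev = None
    for t in spike_times:
        if t_prev is not None:
            trace *= np.exp(-(t - t_prev) / tau)  # kernel: decaying exponential
        u = b + a * trace                          # linear stage
        eff.append(1.0 / (1.0 + np.exp(-u)))       # static sigmoid nonlinearity
        trace += 1.0                               # each spike bumps the trace
        t_prev = t
    return np.array(eff)

# burst of four spikes 20 ms apart, then one spike after a long pause
eff = ln_synapse_efficacies([0, 20, 40, 60, 500])
```

Because this is a filter-then-nonlinearity pipeline, its likelihood is easy to evaluate, which is what makes the maximum-likelihood fitting in the paper practical, and it is the sense in which the model resembles a convolutional network layer.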
Affiliation(s)
- Julian Rossbroich
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
| | - Daniel Trotter
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
| | - John Beninger
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Katalin Tóth
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| | - Richard Naud
- Department of Physics, University of Ottawa, Ottawa, ON, Canada
- uOttawa Brain Mind Institute, Center for Neural Dynamics, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON, Canada
| |
62. Barta T, Kostal L. Regular spiking in high-conductance states: The essential role of inhibition. Phys Rev E 2021; 103:022408. PMID: 33736083. DOI: 10.1103/physreve.103.022408.
Abstract
Strong inhibitory input to neurons, which occurs in balanced states of neural networks, increases synaptic current fluctuations. This has led to the assumption that inhibition contributes to the high spike-firing irregularity observed in vivo. We used single compartment neuronal models with time-correlated (due to synaptic filtering) and state-dependent (due to reversal potentials) input to demonstrate that inhibitory input acts to decrease membrane potential fluctuations, a result that cannot be achieved with simplified neural input models. To clarify the effects on spike-firing regularity, we used models with different spike-firing adaptation mechanisms, and we observed that the addition of inhibition increased firing regularity in models with dynamic firing thresholds and decreased firing regularity if spike-firing adaptation was implemented through ionic currents or not at all. This fluctuation-stabilization mechanism provides an alternative perspective on the importance of strong inhibitory inputs observed in balanced states of neural networks, and it highlights the key roles of biologically plausible inputs and specific adaptation mechanisms in neuronal modeling.
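The paper's central observation, that adding inhibitory conductance shunts the membrane and so reduces voltage fluctuations, can be reproduced with a minimal passive-membrane simulation driven by synaptically filtered (Ornstein-Uhlenbeck) conductance noise. All parameters below are illustrative choices, not the authors' values.

```python
import numpy as np

def membrane_std(inhib_on, n_steps=100_000, dt=0.05, seed=1):
    """Passive membrane with noisy excitatory (and optionally inhibitory)
    conductances and reversal potentials. Turning inhibition on raises the
    total conductance, shortening the effective time constant and shrinking
    the membrane potential fluctuations. Returns the voltage std."""
    rng = np.random.default_rng(seed)
    gL, EL, EE, EI = 1.0, -70.0, 0.0, -80.0       # leak and reversals (mV)
    ge0, se, tau_s = 0.5, 0.2, 5.0                # excitatory OU conductance
    gi0, si = (1.5, 0.2) if inhib_on else (0.0, 0.0)
    sq = np.sqrt(dt)
    xe = rng.standard_normal(n_steps)
    xi = rng.standard_normal(n_steps)
    ge, gi, v = ge0, gi0, EL
    vs = np.empty(n_steps)
    for k in range(n_steps):
        # synaptically filtered (OU) conductance fluctuations
        ge += (ge0 - ge) / tau_s * dt + se * sq * xe[k]
        gi += (gi0 - gi) / tau_s * dt + si * sq * xi[k]
        e, i = max(ge, 0.0), max(gi, 0.0)  # conductances cannot go negative
        v += dt * (-gL * (v - EL) - e * (v - EE) - i * (v - EI))
        vs[k] = v
    return vs[n_steps // 10:].std()        # discard the initial transient

std_with = membrane_std(True)      # inhibition present
std_without = membrane_std(False)  # excitation only
```

Note that a current-based simplification (dropping the reversal-potential dependence) removes exactly the shunting term responsible for this stabilization, which is the paper's point about simplified input models.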
Affiliation(s)
- Tomas Barta: Institute of Physiology of the Czech Academy of Sciences, 14220 Prague, Czech Republic; Charles University, First Medical Faculty, 12108 Prague, Czech Republic; Institute of Ecology and Environmental Sciences, INRAE, 78026 Versailles, France
- Lubomir Kostal: Institute of Physiology of the Czech Academy of Sciences, 14220 Prague, Czech Republic
63. Song C, Noh G, Kim TS, Kang M, Song H, Ham A, Jo MK, Cho S, Chai HJ, Cho SR, Cho K, Park J, Song S, Song I, Bang S, Kwak JY, Kang K. Growth and Interlayer Engineering of 2D Layered Semiconductors for Future Electronics. ACS Nano 2020; 14:16266-16300. PMID: 33301290. DOI: 10.1021/acsnano.0c06607.
Abstract
Layered materials, which form no covalent bonds in the vertical direction, can be prepared at thicknesses from a few atoms down to a single atom without dangling bonds. This distinctive ability to limit thickness to the sub-nanometer level has allowed scientists to explore various physical phenomena in the quantum realm. Beyond their contribution to fundamental science, various applications have been proposed; most notably, layered materials have been suggested as promising candidates for future electronics, because (i) their dangling-bond-free nature inhibits surface scattering, so carrier mobility can be maintained at sub-nanometer thickness, and (ii) their ultrathin nature allows the short-channel effect to be overcome. Establishing fundamental discoveries and translating them into practical applications both require appropriate preparation methods; properly tuning properties to fit the desired application is another critical issue. Hence, in this review, we first describe preparation methods for layered materials, with extensive discussion of growth techniques suited to target applications and the early-stage growth of emerging materials. In addition, we suggest interlayer engineering via intercalation as a method for developing artificial crystals: since the possible host-intercalant combinations are essentially unlimited, this approach is expected to expand the material space beyond current compound systems. Finally, we introduce the factors that layered materials must inevitably confront to be used in electronic applications, together with possible solutions, and discuss emerging electronic devices realized with layered materials.
Affiliation(s)
- Chanwoo Song: Department of Materials Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Korea
- Gichang Noh: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea; Center for Electronic Materials, Korea Institute of Science and Technology (KIST), Seoul 02792, Korea
- Tae Soo Kim: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Minsoo Kang: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Hwayoung Song: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Ayoung Ham: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Min-Kyung Jo: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea; Operando Methodology and Measurement Team, Interdisciplinary Materials Measurement Institute, Korea Research Institute of Standards and Science (KRISS), Daejeon 34113, Korea
- Seorin Cho: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Hyun-Jun Chai: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Seong Rae Cho: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Kiwon Cho: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Jeongwon Park: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
- Seungwoo Song: Operando Methodology and Measurement Team, Interdisciplinary Materials Measurement Institute, KRISS, Daejeon 34113, Korea
- Intek Song: Department of Applied Chemistry, Andong National University, Andong 36728, Korea
- Sunghwan Bang: Materials & Production Engineering Research Institute, LG Electronics, Pyeongtaek-si 17709, Korea
- Joon Young Kwak: Center for Electronic Materials, KIST, Seoul 02792, Korea
- Kibum Kang: Department of Materials Science and Engineering, KAIST, Daejeon 34141, Korea
64
Dai K, Gratiy SL, Billeh YN, Xu R, Cai B, Cain N, Rimehaug AE, Stasik AJ, Einevoll GT, Mihalas S, Koch C, Arkhipov A. Brain Modeling ToolKit: An open source software suite for multiscale modeling of brain circuits. PLoS Comput Biol 2020; 16:e1008386. [PMID: 33253147] [PMCID: PMC7728187] [DOI: 10.1371/journal.pcbi.1008386]
Abstract
Experimental studies in neuroscience are producing data at a rapidly increasing rate, providing exciting opportunities and formidable challenges to existing theoretical and modeling approaches. To turn massive datasets into predictive quantitative frameworks, the field needs software solutions for systematic integration of data into realistic, multiscale models. Here we describe the Brain Modeling ToolKit (BMTK), a software suite for building models and performing simulations at multiple levels of resolution, from biophysically detailed multi-compartmental, to point-neuron, to population-statistical approaches. Leveraging the SONATA file format and existing software such as NEURON, NEST, and others, BMTK offers a consistent user experience across multiple levels of resolution. It permits highly sophisticated simulations to be set up with little coding required, thus lowering entry barriers to new users. We illustrate successful applications of BMTK to large-scale simulations of a cortical area. BMTK is an open-source package provided as a resource supporting modeling-based discovery in the community.
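The point-neuron level of resolution mentioned in this abstract can be illustrated with a self-contained leaky integrate-and-fire sketch in plain Python. All parameter values here are invented for illustration, and this is not BMTK's API: BMTK delegates actual simulation to engines such as NEURON and NEST.

```python
# Toy leaky integrate-and-fire (LIF) point neuron, forward-Euler integration.
# Illustrative only: parameter values are hypothetical, and this is NOT the
# BMTK API (BMTK wraps simulators such as NEURON and NEST for this job).

def simulate_lif(i_ext, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return spike times (ms) for a constant input current i_ext (nA)."""
    v = v_rest
    spikes = []
    n_steps = int(500 / dt)          # simulate 500 ms
    for step in range(n_steps):
        # dV/dt = (-(V - V_rest) + R_m * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

weak = simulate_lif(1.0)    # subthreshold drive for these parameters
strong = simulate_lif(2.0)  # suprathreshold drive: regular spiking
print(len(weak), len(strong))
```

With these numbers the steady-state voltage for the weak input (-55 mV) stays below threshold, so only the strong input produces a spike train.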
Affiliation(s)
- Kael Dai: Allen Institute, Seattle, Washington, United States of America
- Yazan N. Billeh: Allen Institute, Seattle, Washington, United States of America
- Richard Xu: Allen Institute, Seattle, Washington, United States of America
- Binghuang Cai: Allen Institute, Seattle, Washington, United States of America
- Nicholas Cain: Allen Institute, Seattle, Washington, United States of America
- Atle E. Rimehaug: Norwegian University of Life Sciences & University of Oslo, Oslo, Norway
- Gaute T. Einevoll: Norwegian University of Life Sciences & University of Oslo, Oslo, Norway
- Stefan Mihalas: Allen Institute, Seattle, Washington, United States of America
- Christof Koch: Allen Institute, Seattle, Washington, United States of America
- Anton Arkhipov: Allen Institute, Seattle, Washington, United States of America
65
Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, Chintaluri C, Podlaski WF, Haddad SA, Vogels TP, Greenberg DS, Macke JH. Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife 2020; 9:e56261. [PMID: 32940606] [PMCID: PMC7581433] [DOI: 10.7554/elife.56261]
Abstract
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool that uses deep neural density estimators, trained on model simulations, to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
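The simulate-compare-infer loop behind this kind of simulation-based inference can be conveyed with a much cruder stdlib-only stand-in: rejection sampling against a toy simulator. The simulator, prior, and tolerance below are all invented; the paper's actual method trains neural density estimators instead of rejecting samples.

```python
import random

# Minimal rejection-based simulation inference: a toy stand-in for the
# paper's deep neural density estimators. The simulator (a Poisson-like
# spike count via a normal approximation), the uniform prior, and the
# acceptance tolerance are all illustrative, not from the paper.

random.seed(0)

def simulator(rate):
    """Toy spike-count simulator: normal approximation to a Poisson count."""
    return random.gauss(rate, rate ** 0.5)

observed = 20.0                      # observed summary statistic
posterior = []
for _ in range(20000):
    theta = random.uniform(0.0, 50.0)            # draw from the prior
    if abs(simulator(theta) - observed) < 1.0:   # keep if simulation matches
        posterior.append(theta)

est = sum(posterior) / len(posterior)            # posterior mean estimate
print(round(est, 1), len(posterior))
```

The accepted parameters cluster around the value that generated the observation, which is exactly the "full space of compatible parameters" idea in miniature.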
Affiliation(s)
- Pedro J Gonçalves: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Jan-Matthis Lueckmann: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Max Planck Research Group Neural Systems Analysis, caesar, Bonn, Germany
- Michael Deistler: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Marcel Nonnenmacher: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Max Planck Research Group Neural Systems Analysis, caesar, Bonn, Germany; Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Kaan Öcal: Max Planck Research Group Neural Systems Analysis, caesar, Bonn, Germany; Mathematical Institute, University of Bonn, Bonn, Germany
- Giacomo Bassetto: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Max Planck Research Group Neural Systems Analysis, caesar, Bonn, Germany
- Chaitanya Chintaluri: Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom; Institute of Science and Technology Austria, Klosterneuburg, Austria
- William F Podlaski: Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Sara A Haddad: Max Planck Institute for Brain Research, Frankfurt, Germany
- Tim P Vogels: Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom; Institute of Science and Technology Austria, Klosterneuburg, Austria
- David S Greenberg: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
- Jakob H Macke: Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany; Max Planck Research Group Neural Systems Analysis, caesar, Bonn, Germany; Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany
66
Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron 2020; 106:388-403.e18. [DOI: 10.1016/j.neuron.2020.01.040]
67
Ofer N, Shefi O, Yaari G. Axonal Tree Morphology and Signal Propagation Dynamics Improve Interneuron Classification. Neuroinformatics 2020; 18:581-590. [PMID: 32346847] [DOI: 10.1007/s12021-020-09466-8]
Abstract
Neurons are diverse and can be differentiated by their morphological, electrophysiological, and molecular properties. Current morphology-based classification approaches largely rely on the dendritic tree structure or on the overall axonal projection layout. Here, we use data from public databases of neuronal reconstructions and membrane properties to study the characteristics of the axonal and dendritic trees for interneuron classification. We show that combining signal propagation patterns, observed in biophysical simulations of activity along ramified axonal trees, with morphological parameters of the axonal and dendritic trees significantly improves classification results compared to previous approaches. The classification schemes introduced here can be utilized for robust neuronal classification. Our work paves the way for understanding and utilizing form-function principles in realistic neuronal reconstructions.
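The core move in this abstract, concatenating simulation-derived and morphological features into one vector before classifying, can be sketched with a stdlib-only nearest-centroid classifier. The feature values and class names below are invented; the paper's actual features and classifier differ.

```python
import math

# Toy nearest-centroid interneuron classifier. Feature vectors are invented,
# e.g. [axonal tree depth, dendritic branch count, simulated delay in ms];
# the paper's actual features and machine-learning method are different.

train = {
    "basket":     [[3.0, 40.0, 1.2], [3.2, 44.0, 1.1], [2.8, 38.0, 1.3]],
    "chandelier": [[5.1, 22.0, 2.4], [4.9, 20.0, 2.6], [5.3, 25.0, 2.2]],
}

# Average each class's feature vectors column-wise to get a centroid.
centroids = {
    label: [sum(col) / len(col) for col in zip(*vectors)]
    for label, vectors in train.items()
}

def classify(x):
    """Assign x to the class with the nearest (Euclidean) centroid."""
    return min(centroids, key=lambda c: math.dist(centroids[c], x))

print(classify([3.1, 41.0, 1.2]))   # near the basket-cell centroid
print(classify([5.0, 21.0, 2.5]))   # near the chandelier-cell centroid
```

The point is only that morphology and dynamics features live in the same vector, so any off-the-shelf classifier can combine them.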
Affiliation(s)
- Netanel Ofer: Faculty of Engineering, Bar Ilan University, Ramat Gan 5290002, Israel; Bar Ilan Institute of Nanotechnologies and Advanced Materials, Bar Ilan University, Ramat Gan 5290002, Israel
- Orit Shefi: Faculty of Engineering, Bar Ilan University, Ramat Gan 5290002, Israel; Bar Ilan Institute of Nanotechnologies and Advanced Materials, Bar Ilan University, Ramat Gan 5290002, Israel
- Gur Yaari: Faculty of Engineering, Bar Ilan University, Ramat Gan 5290002, Israel
68
Dai K, Hernando J, Billeh YN, Gratiy SL, Planas J, Davison AP, Dura-Bernal S, Gleeson P, Devresse A, Dichter BK, Gevaert M, King JG, Van Geit WAH, Povolotsky AV, Muller E, Courcol JD, Arkhipov A. The SONATA data format for efficient description of large-scale network models. PLoS Comput Biol 2020; 16:e1007696. [PMID: 32092054] [PMCID: PMC7058350] [DOI: 10.1371/journal.pcbi.1007696]
Abstract
Increasing availability of comprehensive experimental datasets and of high-performance computing resources is driving rapid growth in the scale, complexity, and biological realism of computational models in neuroscience. To support the construction, simulation, and sharing of such large-scale models, a broadly applicable, flexible, and high-performance data format is necessary. To address this need, we have developed the Scalable Open Network Architecture TemplAte (SONATA) data format. It is designed for memory and computational efficiency and works across multiple platforms. The format represents neuronal circuits and simulation inputs and outputs via standardized files and provides much flexibility for adding new conventions or extensions. SONATA is used in multiple modeling and visualization tools, and we also provide reference Application Programming Interfaces and model examples to catalyze further adoption. The SONATA format is free and open for the community to use and build upon, with the goal of enabling efficient model building, sharing, and reproducibility.
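The "standardized files" idea pairs tabular node/edge data (HDF5 plus CSV type tables) with a JSON configuration that ties the files together. The sketch below writes such a configuration with the standard library's json module; the key and file names follow the SONATA format as I recall it from published examples, but treat them as illustrative and consult the SONATA specification before relying on them.

```python
import json

# Skeleton of a SONATA-style circuit configuration. Key names are written
# from memory of published SONATA examples and should be checked against
# the specification; the file paths are placeholders.

config = {
    "manifest": {"$NETWORK_DIR": "./network"},
    "networks": {
        "nodes": [{
            "nodes_file": "$NETWORK_DIR/v1_nodes.h5",
            "node_types_file": "$NETWORK_DIR/v1_node_types.csv",
        }],
        "edges": [{
            "edges_file": "$NETWORK_DIR/v1_v1_edges.h5",
            "edge_types_file": "$NETWORK_DIR/v1_v1_edge_types.csv",
        }],
    },
}

text = json.dumps(config, indent=2)   # serialize the config
print(text)
parsed = json.loads(text)             # round-trip check
```

Splitting per-cell data (HDF5) from per-type attributes (CSV) is what keeps such formats compact for millions of nodes.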
Affiliation(s)
- Kael Dai: Allen Institute for Brain Science, Seattle, Washington, United States of America
- Juan Hernando: Blue Brain Project, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Yazan N. Billeh: Allen Institute for Brain Science, Seattle, Washington, United States of America
- Sergey L. Gratiy: Allen Institute for Brain Science, Seattle, Washington, United States of America
- Judit Planas: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Andrew P. Davison: Paris-Saclay Institute of Neuroscience UMR, Centre National de la Recherche Scientifique/Université Paris-Saclay, Gif-sur-Yvette, France
- Salvador Dura-Bernal: State University of New York Downstate Medical Center, Brooklyn, New York, United States of America; Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Padraig Gleeson: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Adrien Devresse: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Benjamin K. Dichter: Department of Neurosurgery, Stanford University, Stanford, California, United States of America; Biological Systems and Engineering, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Michael Gevaert: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- James G. King: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Werner A. H. Van Geit: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Arseny V. Povolotsky: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Eilif Muller: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Jean-Denis Courcol: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- Anton Arkhipov: Allen Institute for Brain Science, Seattle, Washington, United States of America
69
van Vreeswijk C, Farkhooi F. Fredholm theory for the mean first-passage time of integrate-and-fire oscillators with colored noise input. Phys Rev E 2020; 100:060402. [PMID: 31962454] [DOI: 10.1103/physreve.100.060402]
Abstract
We develop a method to investigate the effect of noise timescales on the first-passage time of nonlinear oscillators. Using Fredholm theory, we derive an exact integral equation for the mean event rate of a leaky integrate-and-fire oscillator that receives constant input and temporally correlated noise. Furthermore, we show that Fredholm theory provides a unified framework for determining the system's scaling behavior at small and large noise timescales; in this framework, the leading-order and higher-order asymptotic corrections for slow and fast noise emerge naturally. We show that the scaling behavior in the two limits is not reciprocal. We further discuss how this approach can be extended to study the first-passage time in a general class of nonlinear oscillators driven by colored noise at arbitrary timescales.
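The quantity analyzed here, the mean first-passage time of a leaky integrate-and-fire unit driven by constant input plus colored (Ornstein-Uhlenbeck) noise, can also be estimated by brute-force Monte Carlo. The sketch below is a crude numerical stand-in for the paper's exact Fredholm approach, with all parameter values chosen arbitrarily for illustration.

```python
import math
import random

# Monte Carlo mean first-passage time (MFPT) for a leaky integrate-and-fire
# unit driven by constant input plus Ornstein-Uhlenbeck ("colored") noise.
# Euler-Maruyama integration; all parameters are illustrative.

random.seed(1)
dt, tau_m, tau_n = 0.01, 1.0, 0.5   # step, membrane and noise timescales
mu, sigma, v_th = 1.2, 0.3, 1.0     # constant drive, noise strength, threshold

def first_passage_time():
    v, eta, t = 0.0, 0.0, 0.0
    while v < v_th:
        # OU noise: d(eta) = -(eta / tau_n) dt + sqrt(2 sigma^2 / tau_n) dW
        eta += dt * (-eta / tau_n) \
            + sigma * math.sqrt(2 * dt / tau_n) * random.gauss(0.0, 1.0)
        # LIF: tau_m dV/dt = -V + mu + eta  (mu > v_th, so passage is certain)
        v += dt * (-v + mu + eta) / tau_m
        t += dt
    return t

times = [first_passage_time() for _ in range(500)]
mfpt = sum(times) / len(times)
print(round(mfpt, 2))
```

Varying tau_n in such a simulation is the slow/fast-noise comparison that the paper treats analytically.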
Affiliation(s)
- Carl van Vreeswijk: Centre de Neurophysique, Physiologie et Pathologie, Paris Descartes University and CNRS UMR 8002 INCC, 75006 Paris, France
- Farzad Farkhooi: Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, 10115 Berlin, Germany
70
Ibañez S, Luebke JI, Chang W, Draguljić D, Weaver CM. Network Models Predict That Pyramidal Neuron Hyperexcitability and Synapse Loss in the dlPFC Lead to Age-Related Spatial Working Memory Impairment in Rhesus Monkeys. Front Comput Neurosci 2020; 13:89. [PMID: 32009920] [PMCID: PMC6979278] [DOI: 10.3389/fncom.2019.00089]
Abstract
Behavioral studies have shown spatial working memory impairment with aging in several animal species, including humans. Persistent activity of layer 3 pyramidal dorsolateral prefrontal cortex (dlPFC) neurons during delay periods of working memory tasks is important for encoding memory of the stimulus. In vitro studies have shown that these neurons undergo significant age-related structural and functional changes, but the extent to which these changes affect the neural mechanisms underlying spatial working memory is not fully understood. Here, we confirm previous studies showing impairment on the Delayed Recognition Span Task in the spatial condition (DRSTsp), and increased in vitro action potential firing rates (hyperexcitability), across the adult life span of the rhesus monkey. We use a bump attractor model to predict how empirically observed changes in the aging dlPFC affect performance on the Delayed Response Task (DRT), and introduce a model of memory retention in the DRSTsp. Persistent activity, and in turn cognitive performance, in both models was affected much more by hyperexcitability of pyramidal neurons than by a loss of synapses. Our DRT simulations predict that additional changes to the network, such as increased firing of inhibitory interneurons, are needed to account for the lower firing rates during the DRT with aging reported in vivo. Synaptic facilitation was an essential feature of the DRSTsp model, but it did not compensate fully for the effects of the other age-related changes on DRT performance. Modeling pyramidal neuron hyperexcitability and synapse loss simultaneously led to a partial recovery of function in both tasks, with the simulated level of DRSTsp impairment similar to that observed in aging monkeys. This modeling work integrates empirical data across multiple scales, from synapse counts to cognitive testing, to further our understanding of aging in non-human primates.
Affiliation(s)
- Sara Ibañez: Department of Mathematics, Franklin and Marshall College, Lancaster, PA, United States; Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, United States
- Jennifer I. Luebke: Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, United States
- Wayne Chang: Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, United States
- Danel Draguljić: Department of Mathematics, Franklin and Marshall College, Lancaster, PA, United States
- Christina M. Weaver: Department of Mathematics, Franklin and Marshall College, Lancaster, PA, United States
71
Merzon L, Malevich T, Zhulikov G, Krasovskaya S, MacInnes WJ. Temporal Limitations of the Standard Leaky Integrate and Fire Model. Brain Sci 2019; 10:E16. [PMID: 31892197] [PMCID: PMC7016704] [DOI: 10.3390/brainsci10010016]
Abstract
Itti and Koch's Saliency Model has been used extensively to simulate fixation selection in a variety of tasks, from visual search to simple reaction times. Although the Saliency Model has been tested for the spatial accuracy of its fixation predictions, it has not been well tested for their temporal accuracy. Visual tasks, like search, invariably produce a positively skewed distribution of saccadic reaction times over large numbers of samples, yet we show that the leaky integrate-and-fire (LIF) neuronal model included in the classic implementation of the model tends to produce a distribution shifted toward shorter fixations in comparison with human data. Further, while parameter optimization using a genetic algorithm and the Nelder-Mead method does improve the fit of the resulting distribution, it is still unable to match the temporal distributions of human responses in a visual task. Analysis of times for individual images reveals that the LIF algorithm produces initial fixation durations that are fixed rather than sampled from a distribution, as in the human case. Only by aggregating responses over many input images does a distribution emerge, and even then its form depends on the input images used to create it rather than on internal model variability.
Affiliation(s)
- Liya Merzon: Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia; Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia; Neuroscience and Biomedical Engineering Department, Aalto University, 02150 Espoo, Finland
- Tatiana Malevich: Werner Reichardt Centre for Integrative Neuroscience, 72076 Tuebingen, Germany
- Georgiy Zhulikov: Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia; Institute of Water Problems, Russian Academy of Science, 117971 Moscow, Russia
- Sofia Krasovskaya: Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia; Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
- W. Joseph MacInnes: Vision Modelling Laboratory, National Research University Higher School of Economics, 109074 Moscow, Russia; Department of Psychology, National Research University Higher School of Economics, 101000 Moscow, Russia
72
Komendantov AO, Venkadesh S, Rees CL, Wheeler DW, Hamilton DJ, Ascoli GA. Quantitative firing pattern phenotyping of hippocampal neuron types. Sci Rep 2019; 9:17915. [PMID: 31784578] [PMCID: PMC6884469] [DOI: 10.1038/s41598-019-52611-w]
Abstract
Systematically organizing the anatomical, molecular, and physiological properties of cortical neurons is important for understanding their computational functions. Hippocampome.org defines 122 neuron types in the rodent hippocampal formation based on their somatic, axonal, and dendritic locations, putative excitatory/inhibitory outputs, molecular marker expression, and biophysical properties. We augmented the electrophysiological data of this knowledge base by collecting, quantifying, and analyzing the firing responses to depolarizing current injections for every hippocampal neuron type from published experiments. We designed and implemented objective protocols to classify firing patterns based on 5 transients (delay, adapting spiking, rapidly adapting spiking, transient stuttering, and transient slow-wave bursting) and 4 steady states (non-adapting spiking, persistent stuttering, persistent slow-wave bursting, and silence). This automated approach revealed 9 unique (plus one spurious) families of firing pattern phenotypes while distinguishing potential new neuronal subtypes. Novel statistical associations emerged between firing responses and other electrophysiological properties, morphological features, and molecular marker expression. The firing pattern parameters, experimental conditions, spike times, references to the original empirical evidence, and analysis scripts are released open-source through Hippocampome.org for all neuron types, greatly enhancing the existing search and browse capabilities. This information, collated online in human- and machine-accessible form, will help design and interpret both experiments and model simulations.
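Classification protocols of the kind summarized above operate on lists of spike times. As a minimal stand-in, the sketch below labels a train "adapting spiking" when interspike intervals lengthen systematically; the 10% drift threshold and the two example trains are arbitrary illustrations, not the paper's actual criteria.

```python
# Minimal spike-train phenotyping in the spirit of objective firing-pattern
# protocols: call a train "adapting spiking" if interspike intervals (ISIs)
# grow by more than 10% from the first half to the second half of the train.
# The threshold and example data are invented for illustration.

def classify_pattern(spike_times_ms):
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    half = len(isis) // 2
    early = sum(isis[:half]) / half              # mean early ISI
    late = sum(isis[half:]) / (len(isis) - half)  # mean late ISI
    if late > 1.10 * early:
        return "adapting spiking"
    return "non-adapting spiking"

regular = [10 * i for i in range(1, 11)]                 # constant 10 ms ISIs
adapting = [0, 10, 22, 37, 55, 76, 100, 127, 157, 190]   # lengthening ISIs
print(classify_pattern(regular))
print(classify_pattern(adapting))
```

Real protocols add transient categories (delay, stuttering, bursting), but each reduces, as here, to rules over ISI statistics.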
Affiliation(s)
- Alexander O Komendantov: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
- Siva Venkadesh: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
- Christopher L Rees: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
- Diek W Wheeler: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
- David J Hamilton: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
- Giorgio A Ascoli: Krasnow Institute for Advanced Study, George Mason University, 4400 University Drive, MS 2A1, Fairfax, Virginia 22030, USA
73
Inferring and validating mechanistic models of neural microcircuits based on spike-train data. Nat Commun 2019; 10:4933. [PMID: 31666513] [PMCID: PMC6821748] [DOI: 10.1038/s41467-019-12572-0]
Abstract
The interpretation of neuronal spike train recordings often relies on abstract statistical models that allow for principled parameter estimation and model selection but provide only limited insights into underlying microcircuits. In contrast, mechanistic models are useful to interpret microcircuit dynamics, but are rarely quantitatively matched to experimental data due to methodological challenges. Here we present analytical methods to efficiently fit spiking circuit models to single-trial spike trains. Using derived likelihood functions, we statistically infer the mean and variance of hidden inputs, neuronal adaptation properties and connectivity for coupled integrate-and-fire neurons. Comprehensive evaluations on synthetic data, validations using ground truth in-vitro and in-vivo recordings, and comparisons with existing techniques demonstrate that parameter estimation is very accurate and efficient, even for highly subsampled networks. Our methods bridge statistical, data-driven and theoretical, model-based neurosciences at the level of spiking circuits, for the purpose of a quantitative, mechanistic interpretation of recorded neuronal population activity. It is difficult to fit mechanistic, biophysically constrained circuit models to spike train data from in vivo extracellular recordings. Here the authors present analytical methods that enable efficient parameter estimation for integrate-and-fire circuit models and inference of the underlying connectivity structure in subsampled networks.
74
Schwalger T, Chizhov AV. Mind the last spike - firing rate models for mesoscopic populations of spiking neurons. Curr Opin Neurobiol 2019; 58:155-166. [PMID: 31590003] [DOI: 10.1016/j.conb.2019.08.003]
Abstract
The dominant modeling framework for understanding cortical computations is the heuristic firing rate model. Despite their success, these models fall short of capturing spike synchronization effects, linking to biophysical parameters, and describing finite-size fluctuations. In this opinion article, we propose that the refractory density method (RDM), also known as age-structured population dynamics or quasi-renewal theory, yields a powerful theoretical framework for building rate-based models of mesoscopic neural populations from realistic neuron dynamics at the microscopic level. We review recent advances achieved with the RDM in obtaining efficient population density equations for networks of generalized integrate-and-fire (GIF) neurons, a class of neuron models that has been successfully fitted to various cell types. The theory not only predicts the nonstationary dynamics of large populations of neurons but also permits an extension to finite-size populations and a systematic reduction to low-dimensional rate dynamics. These new types of rate models will allow a re-examination of models of cortical computations under biological constraints.
Affiliation(s)
- Tilo Schwalger
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Institut für Mathematik, Technische Universität Berlin, 10623 Berlin, Germany.
- Anton V Chizhov
- Ioffe Institute, 194021 Saint-Petersburg, Russia; Sechenov Institute of Evolutionary Physiology and Biochemistry of the Russian Academy of Sciences, 194223 Saint-Petersburg, Russia
75
Venkadesh S, Komendantov AO, Wheeler DW, Hamilton DJ, Ascoli GA. Simple models of quantitative firing phenotypes in hippocampal neurons: Comprehensive coverage of intrinsic diversity. PLoS Comput Biol 2019; 15:e1007462. [PMID: 31658260 PMCID: PMC6837624 DOI: 10.1371/journal.pcbi.1007462] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 11/07/2019] [Accepted: 10/07/2019] [Indexed: 12/02/2022] Open
Abstract
Patterns of periodic voltage spikes elicited by a neuron help define its dynamical identity. Experimentally recorded spike trains from various neurons show qualitatively distinguishable features such as delayed spiking, spiking with or without frequency adaptation, and intrinsic bursting. Moreover, the input-dependent responses of a neuron not only show different quantitative features, such as higher spike frequency for a stronger input current injection, but can also exhibit qualitatively different responses, such as spiking and bursting under different input conditions, thus forming a complex phenotype of responses. In previous work, the comprehensive knowledge base of hippocampal neuron types Hippocampome.org systematically characterized various spike pattern phenotypes experimentally identified from 120 neuron types/subtypes. In this paper, we present a complete set of simple phenomenological models that quantitatively reproduce the diverse and complex phenotypes of hippocampal neurons. In addition to point-neuron models, we created compact multi-compartment models with up to four compartments, which will allow spatial segregation of synaptic integration in network simulations. Electrotonic compartmentalization observed in our compact multi-compartment models is qualitatively consistent with experimental observations. The models were created using an automated pipeline based on evolutionary algorithms. This work maps 120 neuron types/subtypes in the rodent hippocampus to a low-dimensional model space and adds another dimension to the knowledge accumulated in Hippocampome.org. Computationally efficient representations of intrinsic dynamics, along with other pieces of knowledge available in Hippocampome.org, provide a biologically realistic platform to explore the large-scale interactions of various neuron types at the mesoscopic level.
Affiliation(s)
- Siva Venkadesh, Alexander O. Komendantov, Diek W. Wheeler, David J. Hamilton, Giorgio A. Ascoli
- Center for Neural Informatics, Structures, and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, United States of America
76
Chartrand T, Goldman MS, Lewis TJ. Synchronization of Electrically Coupled Resonate-and-Fire Neurons. SIAM JOURNAL ON APPLIED DYNAMICAL SYSTEMS 2019; 18:1643-1693. [PMID: 33273894 PMCID: PMC7709966 DOI: 10.1137/18m1197412] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Electrical coupling between neurons is broadly present across brain areas and is typically assumed to synchronize network activity. However, intrinsic properties of the coupled cells can complicate this simple picture. Many cell types with electrical coupling show a diversity of post-spike subthreshold fluctuations, often linked to subthreshold resonance, which are transmitted through electrical synapses in addition to action potentials. Using the theory of weakly coupled oscillators, we explore the effect of both subthreshold and spike-mediated coupling on synchrony in small networks of electrically coupled resonate-and-fire neurons, a hybrid neuron model with damped subthreshold oscillations and a range of post-spike voltage dynamics. We calculate the phase response curve using an extension of the adjoint method that accounts for the discontinuous post-spike reset rule. We find that both spikes and subthreshold fluctuations can jointly promote synchronization. The subthreshold contribution is strongest when the voltage exhibits a significant post-spike elevation in voltage, or plateau potential. Additionally, we show that the geometry of trajectories approaching the spiking threshold causes a "reset-induced shear" effect that can oppose synchrony in the presence of network asymmetry, despite having no effect on the phase-locking of symmetrically coupled pairs.
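The resonate-and-fire model analyzed in this paper combines damped subthreshold oscillations with a threshold and a discontinuous post-spike reset. A minimal sketch in the complex-variable form introduced by Izhikevich; the parameter values (b, w, thresh, z_reset) are illustrative assumptions, not values from the paper:

```python
import cmath

def resonate_and_fire(I, b=-1.0, w=31.4, thresh=1.0, z_reset=-0.5j, dt=1e-3):
    """Resonate-and-fire: dz/dt = (b + i*w) z + I(t), with damping b < 0 and
    angular frequency w. The imaginary part of z plays the role of a voltage:
    a spike is fired when Im(z) reaches thresh, after which z is reset.
    Uses an exponential-Euler update, which keeps the damped rotation stable."""
    decay = cmath.exp((b + 1j * w) * dt)
    z = 0j
    spikes = []
    for step, i_t in enumerate(I):
        z = decay * z + dt * i_t
        if z.imag >= thresh:
            spikes.append(step * dt)
            z = z_reset
    return spikes
```

The exponential-Euler step `decay * z + dt * i_t` matters here: plain forward Euler can spuriously amplify oscillatory dynamics, whereas multiplying by the exact subthreshold propagator preserves the damped spiral.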
Affiliation(s)
- Thomas Chartrand
- Graduate Group in Applied Mathematics, University of California-Davis, Davis, CA 95616. Current address: Allen Institute for Brain Science, Seattle, WA
- Mark S Goldman
- Center for Neuroscience, Department of Neurobiology, Physiology and Behavior, Department of Ophthalmology and Vision Science, and Graduate Group in Applied Mathematics, University of California-Davis, Davis, CA 95616
- Timothy J Lewis
- Department of Mathematics and Graduate Group in Applied Mathematics, University of California-Davis, Davis, CA 95616
77
Payeur A, Béïque JC, Naud R. Classes of dendritic information processing. Curr Opin Neurobiol 2019; 58:78-85. [PMID: 31419712 DOI: 10.1016/j.conb.2019.07.006] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 07/14/2019] [Indexed: 11/19/2022]
Abstract
Dendrites are much more than passive neuronal components. Mounting experimental evidence and decades of computational work have decisively shown that dendrites leverage a host of nonlinear biophysical phenomena and actively participate in sophisticated computations, at the level of the single neuron and at the level of the network. However, a coherent view of their processing power is still lacking and dendrites are largely neglected in neural network models. Here, we describe four classes of dendritic information processing and delineate their implications at the algorithmic level. We propose that beyond the well-known spatiotemporal filtering of their inputs, dendrites are capable of selecting, routing and multiplexing information. By separating dendritic processing from axonal outputs, neuron networks gain a degree of freedom with implications for perception and learning.
Affiliation(s)
- Alexandre Payeur
- Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Neuroscience, University of Ottawa, Canada
- Jean-Claude Béïque
- Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Neuroscience, University of Ottawa, Canada
- Richard Naud
- Ottawa Brain and Mind Institute, Centre for Neural Dynamics, Department of Cellular and Molecular Neuroscience, University of Ottawa, Canada; Department of Physics, University of Ottawa, 150 Louis Pasteur Pvt, Ottawa, ON, K1N 6N5, Canada
78
Marangio L, Galatolo S, Fronzoni L, Chillemi S, Di Garbo A. Phase-locking patterns in a resonate and fire neural model with periodic drive. Biosystems 2019; 184:103992. [PMID: 31323255 DOI: 10.1016/j.biosystems.2019.103992] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 06/11/2019] [Accepted: 07/11/2019] [Indexed: 11/25/2022]
Abstract
In this paper we studied a resonate-and-fire relaxation oscillator subject to time-dependent modulation, in order to investigate phase-locking phenomena occurring in neurophysiological systems. The neural model (denoted LFHN) was obtained by linearizing the FitzHugh-Nagumo model near a hyperbolic fixed point and introducing an integrate-and-fire mechanism for spike generation. By employing tools from the theory of circle maps, we showed that this system exhibits several phase-locking patterns in the presence of periodic perturbations, and that both the amplitude and frequency of the modulation strongly impact its phase-locking properties. General conditions for the generation of firing activity were also obtained. For moderate noise levels the phase-locking patterns of the LFHN persist, and in the presence of noise the rotation number changes smoothly as the stimulation current increases. The statistical properties of the firing map were investigated as well. Lastly, the results obtained with the forced LFHN suggest that this neural model could be used to fit specific experimental data on the firing times of neurons.
Affiliation(s)
- Luigi Marangio
- Department of Mathematics, University of Pisa, Italy; Femto-ST Institute, Université de Bourgogne-Franche Comté, France
79
Gouwens NW, Sorensen SA, Berg J, Lee C, Jarsky T, Ting J, Sunkin SM, Feng D, Anastassiou CA, Barkan E, Bickley K, Blesie N, Braun T, Brouner K, Budzillo A, Caldejon S, Casper T, Castelli D, Chong P, Crichton K, Cuhaciyan C, Daigle TL, Dalley R, Dee N, Desta T, Ding SL, Dingman S, Doperalski A, Dotson N, Egdorf T, Fisher M, de Frates RA, Garren E, Garwood M, Gary A, Gaudreault N, Godfrey K, Gorham M, Gu H, Habel C, Hadley K, Harrington J, Harris JA, Henry A, Hill D, Josephsen S, Kebede S, Kim L, Kroll M, Lee B, Lemon T, Link KE, Liu X, Long B, Mann R, McGraw M, Mihalas S, Mukora A, Murphy GJ, Ng L, Ngo K, Nguyen TN, Nicovich PR, Oldre A, Park D, Parry S, Perkins J, Potekhina L, Reid D, Robertson M, Sandman D, Schroedter M, Slaughterbeck C, Soler-Llavina G, Sulc J, Szafer A, Tasic B, Taskin N, Teeter C, Thatra N, Tung H, Wakeman W, Williams G, Young R, Zhou Z, Farrell C, Peng H, Hawrylycz MJ, Lein E, Ng L, Arkhipov A, Bernard A, Phillips JW, Zeng H, Koch C. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nat Neurosci 2019; 22:1182-1195. [PMID: 31209381 PMCID: PMC8078853 DOI: 10.1038/s41593-019-0417-0] [Citation(s) in RCA: 233] [Impact Index Per Article: 46.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 04/25/2019] [Indexed: 12/21/2022]
Abstract
Understanding the diversity of cell types in the brain has been an enduring challenge and requires detailed characterization of individual neurons in multiple dimensions. To systematically profile morpho-electric properties of mammalian neurons, we established a single-cell characterization pipeline using standardized patch-clamp recordings in brain slices and biocytin-based neuronal reconstructions. We built a publicly accessible online database, the Allen Cell Types Database, to display these datasets. Intrinsic physiological properties were measured from 1,938 neurons from the adult laboratory mouse visual cortex, morphological properties were measured from 461 reconstructed neurons, and 452 neurons had both measurements available. Quantitative features were used to classify neurons into distinct types using unsupervised methods. We established a taxonomy of morphologically and electrophysiologically defined cell types for this region of the cortex, with 17 electrophysiological types, 38 morphological types and 46 morpho-electric types. There was good correspondence with previously defined transcriptomic cell types and subclasses using the same transgenic mouse lines.
Affiliation(s)
- Jim Berg, Changkyu Lee, Tim Jarsky, Jonathan Ting, Susan M Sunkin, David Feng, Eliza Barkan, Kris Bickley, Nicole Blesie, Thomas Braun, Krissy Brouner, Agata Budzillo, Tamara Casper, Dan Castelli, Peter Chong, Tanya L Daigle, Rachel Dalley, Nick Dee, Tsega Desta, Song-Lin Ding, Samuel Dingman, Tom Egdorf, Michael Fisher, Emma Garren, Amanda Gary, Keith Godfrey, Melissa Gorham, Hong Gu, Caroline Habel, Kristen Hadley, Julie A Harris, Alex Henry, DiJon Hill, Sam Josephsen, Sara Kebede, Lisa Kim, Matthew Kroll, Brian Lee, Tracy Lemon, Xiaoxiao Liu, Brian Long, Rusty Mann, Medea McGraw, Stefan Mihalas, Alice Mukora, Gabe J Murphy, Lindsay Ng, Kiet Ngo, Aaron Oldre, Daniel Park, Sheana Parry, Jed Perkins, David Reid, David Sandman, Josef Sulc, Aaron Szafer, Bosiljka Tasic, Naz Taskin, Corinne Teeter, Herman Tung, Wayne Wakeman, Grace Williams, Rob Young, Zhi Zhou, Colin Farrell, Hanchuan Peng, Ed Lein, Lydia Ng, Anton Arkhipov, Amy Bernard, Hongkui Zeng, Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, USA
80
Naud R, Longtin A. Linking demyelination to compound action potential dispersion with a spike-diffuse-spike approach. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2019; 9:3. [PMID: 31147800 PMCID: PMC6542900 DOI: 10.1186/s13408-019-0071-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/10/2018] [Accepted: 05/20/2019] [Indexed: 06/09/2023]
Abstract
To establish and exploit novel biomarkers of demyelinating diseases requires a mechanistic understanding of axonal propagation. Here, we present a novel computational framework called the stochastic spike-diffuse-spike (SSDS) model for assessing the effects of demyelination on axonal transmission. It models transmission through nodal and internodal compartments with two types of operations: a stochastic integrate-and-fire operation captures nodal excitability and a linear filtering operation describes internodal propagation. The effects of demyelinated segments on the probability of transmission, transmission delay and spike time jitter are explored. We argue that demyelination-induced impedance mismatch prevents propagation mostly when the action potential leaves a demyelinated region, not when it enters a demyelinated region. In addition, we model sodium channel remodeling as a homeostatic control of nodal excitability. We find that the effects of mild demyelination on transmission probability and delay can be largely counterbalanced by an increase in excitability at the nodes surrounding the demyelination. The spike timing jitter, however, reflects the level of demyelination whether excitability is fixed or is allowed to change in compensation. This jitter can accumulate over long axons and leads to a broadening of the compound action potential, linking microscopic defects to a mesoscopic observable. Our findings articulate why action potential jitter and compound action potential dispersion can serve as potential markers of weak and sporadic demyelination.
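The claim that spike-time jitter accumulates over long axons and broadens the compound action potential can be illustrated with a toy calculation: if each node contributes independent delay noise, the standard deviation of the total conduction delay grows roughly as the square root of the number of nodes. This is a generic statistical sketch, not the SSDS model itself, and the delay values are arbitrary:

```python
import random
import statistics

def total_delay_sd(n_nodes, per_node_sd=0.05, n_trials=2000, seed=0):
    """Sum independent Gaussian per-node delay jitter over n_nodes nodes and
    return the across-trial standard deviation of the total delay
    (arbitrary units; mean per-node delay fixed at 1.0)."""
    rng = random.Random(seed)
    totals = [sum(rng.gauss(1.0, per_node_sd) for _ in range(n_nodes))
              for _ in range(n_trials)]
    return statistics.stdev(totals)
```

Quadrupling the number of nodes roughly doubles the spread of arrival times, which is the mechanism by which microscopic jitter becomes a mesoscopic broadening of the compound action potential.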
Affiliation(s)
- Richard Naud
- Ottawa Brain and Mind Research Institute, Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, Canada
- Department of Physics, University of Ottawa, Ottawa, Canada
- André Longtin
- Department of Physics, University of Ottawa, Ottawa, Canada
81
Seo I, Lee H. Predicting transgenic markers of a neuron by electrophysiological properties using machine learning. Brain Res Bull 2019; 150:102-110. [PMID: 31125599 DOI: 10.1016/j.brainresbull.2019.05.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 04/17/2019] [Accepted: 05/18/2019] [Indexed: 10/26/2022]
Abstract
The task of classifying and identifying neurons, the essential components of the nervous system, has been undertaken in a variety of ways. The transcriptomic approach has become more accessible with the development of genetic engineering techniques. Considering the information-processing function of the brain, however, it is necessary to consider the physiological characteristics of neurons. Recently, the Allen Institute for Brain Science published the electrophysiological characteristics of neurons tagged with a transgenic reporter. We used these electrophysiological features to predict the transgenic markers of neurons. Using linear regression, random forest, and an artificial neural network, we assessed the performance of supervised machine learning models by comparing prediction accuracy and confusion matrices. In the binary problem of classifying excitatory versus inhibitory neurons, accuracy was 90% or more regardless of the model. The models performed better than merely distinguishing neurons by suprathreshold features such as the ratio of upstrokes to downstrokes of a single spike (ρ). However, when transgenic markers of excitatory neurons were classified, accuracy was 28-47%, and accuracy for markers of inhibitory neurons was 59-73%. The present study was based on the results of electrophysiological experiments to determine whether transgenic markers of neurons can be predicted. Future research should acquire electrophysiological and transcriptomic data simultaneously at the single-cell level to reveal the correlation between gene expression and the physiological function of a neuron in building neural network models.
Affiliation(s)
- Incheol Seo
- Department of Microbiology, Keimyung University School of Medicine, Daegu, Republic of Korea
- Hyunsu Lee
- Department of Anatomy, Keimyung University School of Medicine, Daegu, Republic of Korea
82
Glaser JI, Benjamin AS, Farhoodi R, Kording KP. The roles of supervised machine learning in systems neuroscience. Prog Neurobiol 2019; 175:126-137. [PMID: 30738835 PMCID: PMC8454059 DOI: 10.1016/j.pneurobio.2019.01.008] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Revised: 01/23/2019] [Accepted: 01/28/2019] [Indexed: 01/18/2023]
Abstract
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: (1) creating solutions to engineering problems, (2) identifying predictive variables, (3) setting benchmarks for simple models of the brain, and (4) serving itself as a model for the brain. The breadth and ease of its applicability suggests that machine learning should be in the toolbox of most systems neuroscientists.
Affiliation(s)
- Joshua I Glaser, Ari S Benjamin, Roozbeh Farhoodi
- Department of Bioengineering, University of Pennsylvania, United States
- Konrad P Kording
- Department of Bioengineering, University of Pennsylvania, United States; Department of Neuroscience, University of Pennsylvania, United States; Canadian Institute for Advanced Research, Canada
83
Gardella C, Marre O, Mora T. Modeling the Correlated Activity of Neural Populations: A Review. Neural Comput 2018; 31:233-269. [PMID: 30576613 DOI: 10.1162/neco_a_01154] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution and to relate these correlations to function and behavior. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons, as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and including or not latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations.
Affiliation(s)
- Christophe Gardella
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France, and Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Olivier Marre
- Institut de la Vision, INSERM, CNRS, and Sorbonne Université, 75012 Paris, France
- Thierry Mora
- Laboratoire de physique statistique, CNRS, Sorbonne Université, Université Paris-Diderot, and École normale supérieure, 75005 Paris, France
84
Geminiani A, Casellato C, Locatelli F, Prestori F, Pedrocchi A, D'Angelo E. Complex Dynamics in Simplified Neuronal Models: Reproducing Golgi Cell Electroresponsiveness. Front Neuroinform 2018; 12:88. [PMID: 30559658 PMCID: PMC6287018 DOI: 10.3389/fninf.2018.00088] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Accepted: 11/13/2018] [Indexed: 11/21/2022] Open
Abstract
Brain neurons exhibit complex electroresponsive properties – including intrinsic subthreshold oscillations and pacemaking, resonance and phase-reset – which are thought to play a critical role in controlling neural network dynamics. Although these properties emerge from detailed representations of molecular-level mechanisms in “realistic” models, they cannot usually be generated by simplified neuronal models (although these may show spike-frequency adaptation and bursting). We report here that this whole set of properties can be generated by the extended generalized leaky integrate-and-fire (E-GLIF) neuron model. E-GLIF derives from the GLIF model family and is therefore mono-compartmental, keeps the limited computational load typical of a linear low-dimensional system, admits analytical solutions and can be tuned through gradient-descent algorithms. Importantly, E-GLIF is designed to maintain a correspondence between model parameters and neuronal membrane mechanisms through a minimum set of equations. In order to test its potential, E-GLIF was used to model a specific neuron showing rich and complex electroresponsiveness, the cerebellar Golgi cell, and was validated against experimental electrophysiological data recorded from Golgi cells in acute cerebellar slices. During simulations, E-GLIF was activated by stimulus patterns, including current steps and synaptic inputs, identical to those used for the experiments. The results demonstrate that E-GLIF can reproduce the whole set of complex neuronal dynamics typical of these neurons – including intensity-frequency curves, spike-frequency adaptation, post-inhibitory rebound bursting, spontaneous subthreshold oscillations, resonance, and phase-reset – providing a new effective tool to investigate brain dynamics in large-scale simulations.
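Spike-frequency adaptation, one of the behaviors E-GLIF reproduces, arises in GLIF-family models from a spike-triggered adaptation current that accumulates with each spike and decays between spikes. A minimal adaptive leaky integrate-and-fire sketch under assumed parameters, not the E-GLIF equations themselves:

```python
def adaptive_lif_isis(i_ext=3.0, t_max=500.0, dt=0.1, tau_v=10.0, tau_w=100.0,
                      e_l=-65.0, r=10.0, v_th=-50.0, v_reset=-65.0, delta_w=0.1):
    """LIF with adaptation current w: tau_v*dV/dt = -(V - e_l) + r*(i_ext - w),
    tau_w*dw/dt = -w, and w += delta_w at each spike (units: ms, mV).
    Returns the inter-spike intervals; as w builds up, they lengthen,
    producing spike-frequency adaptation. Parameters are illustrative."""
    v, w = e_l, 0.0
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt / tau_v * (-(v - e_l) + r * (i_ext - w))
        w += dt / tau_w * (-w)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
            w += delta_w
    return [b - a for a, b in zip(spikes, spikes[1:])]
```

With these assumed values the first intervals are short and the train settles to a slower steady rate, the signature intensity-frequency behavior that adaptation currents produce.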
Affiliation(s)
- Alice Geminiani
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Claudia Casellato
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Francesca Locatelli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Francesca Prestori
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi
- NEARLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
85
Arkhipov A, Gouwens NW, Billeh YN, Gratiy S, Iyer R, Wei Z, Xu Z, Abbasi-Asl R, Berg J, Buice M, Cain N, da Costa N, de Vries S, Denman D, Durand S, Feng D, Jarsky T, Lecoq J, Lee B, Li L, Mihalas S, Ocker GK, Olsen SR, Reid RC, Soler-Llavina G, Sorensen SA, Wang Q, Waters J, Scanziani M, Koch C. Visual physiology of the layer 4 cortical circuit in silico. PLoS Comput Biol 2018; 14:e1006535. [PMID: 30419013 PMCID: PMC6258373 DOI: 10.1371/journal.pcbi.1006535] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 11/26/2018] [Accepted: 09/29/2018] [Indexed: 01/15/2023] Open
Abstract
Despite advances in experimental techniques and accumulation of large datasets concerning the composition and properties of the cortex, quantitative modeling of cortical circuits under in-vivo-like conditions remains challenging. Here we report and publicly release a biophysically detailed circuit model of layer 4 in the mouse primary visual cortex, receiving thalamo-cortical visual inputs. The 45,000-neuron model was subjected to a battery of visual stimuli, and results were compared to published work and new in vivo experiments. Simulations reproduced a variety of observations, including effects of optogenetic perturbations. Critical to the agreement between responses in silico and in vivo were the rules of functional synaptic connectivity between neurons. Interestingly, after extreme simplification the model still performed satisfactorily on many measurements, although quantitative agreement with experiments suffered. These results emphasize the importance of functional rules of cortical wiring and enable a next generation of data-driven models of in vivo neural activity and computations.
How can we capture the incredible complexity of brain circuits in quantitative models, and what can such models teach us about the mechanisms underlying brain activity? To answer these questions, we set out to build extensive, bio-realistic models of brain circuitry by employing systematic datasets on brain structure and function. Here we report the first modeling results of this project, focusing on layer 4 of the primary visual cortex (V1) of the mouse. Our simulations reproduced a variety of experimental observations in response to a large battery of visual stimuli. The results elucidated circuit mechanisms determining patterns of neuronal activity in layer 4, in particular the roles of feedforward thalamic inputs and specific patterns of intracortical connectivity in producing tuning of neuronal responses to the orientation of motion. Simplification of neuronal models led to specific deficiencies in reproducing experimental data, giving insights into how biological details contribute to various aspects of brain activity. To enable future development of more sophisticated models, we make the software code, the model, and simulation results publicly available.
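The "rules of functional synaptic connectivity" the abstract identifies as critical can be illustrated with a like-to-like wiring rule, in which connection probability falls off with the difference between two neurons' preferred orientations. The sketch below is a generic illustration of such a rule; the Gaussian form and all parameter values (`p_max`, `sigma`) are hypothetical, not taken from the published model.

```python
import math
import random

random.seed(1)

def delta_ori(a, b):
    """Smallest angular difference between two preferred orientations (0-180 deg)."""
    d = abs(a - b) % 180.0
    return min(d, 180.0 - d)

def connect_prob(pref_a, pref_b, p_max=0.3, sigma=30.0):
    """Like-to-like rule: connection probability decays with orientation difference."""
    return p_max * math.exp(-(delta_ori(pref_a, pref_b) ** 2) / (2 * sigma ** 2))

# Wire a small population with this rule: similarly tuned pairs connect more often.
prefs = [random.uniform(0.0, 180.0) for _ in range(50)]
edges = [(i, j) for i in range(50) for j in range(50)
         if i != j and random.random() < connect_prob(prefs[i], prefs[j])]
```

Under such a rule, recurrent input reinforces each neuron's feedforward tuning, which is one way intracortical connectivity can sharpen orientation responses.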
Affiliation(s)
- Anton Arkhipov
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nathan W Gouwens
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Yazan N Billeh
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Sergey Gratiy
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ramakrishnan Iyer
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Ziqiang Wei
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
- Zihao Xu
- University of California San Diego, La Jolla, California, United States of America
- Reza Abbasi-Asl
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jim Berg
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Michael Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nicholas Cain
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Nuno da Costa
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Saskia de Vries
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Daniel Denman
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Severine Durand
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- David Feng
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Tim Jarsky
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jérôme Lecoq
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Brian Lee
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Lu Li
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Stefan Mihalas
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Gabriel K Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Shawn R Olsen
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- R Clay Reid
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Staci A Sorensen
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Quanxin Wang
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Jack Waters
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Massimo Scanziani
- Howard Hughes Medical Institute and Department of Physiology, University of California San Francisco, San Francisco, California, United States of America
- Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, United States of America
|
86
|
Gratiy SL, Billeh YN, Dai K, Mitelut C, Feng D, Gouwens NW, Cain N, Koch C, Anastassiou CA, Arkhipov A. BioNet: A Python interface to NEURON for modeling large-scale networks. PLoS One 2018; 13:e0201630. [PMID: 30071069 PMCID: PMC6072024 DOI: 10.1371/journal.pone.0201630] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Accepted: 07/18/2018] [Indexed: 01/07/2023] Open
Abstract
There is significant interest in the neuroscience community in the development of large-scale network models that would integrate diverse sets of experimental data to help elucidate mechanisms underlying neuronal activity and computations. Although powerful numerical simulators (e.g., NEURON, NEST) exist, data-driven large-scale modeling remains challenging due to the difficulties involved in setting up and running network simulations. We developed a high-level application programming interface (API) in Python that facilitates building large-scale biophysically detailed networks and simulating them with NEURON on parallel computer architectures. This tool, termed "BioNet", is designed to support a modular workflow whereby the description of a constructed model is saved as files that can subsequently be loaded for further refinement and/or simulation. The API supports both NEURON's built-in and user-defined models of cells and synapses. It is capable of simulating a variety of observables directly supported by NEURON (e.g., spikes, membrane voltage, intracellular [Ca++]), as well as plugging in modules for computing additional observables (e.g., extracellular potential). The high-level API obviates the time-consuming development of custom code for implementing individual models, and enables easy model sharing via standardized files. This tool will help refocus neuroscientists on addressing outstanding scientific questions rather than developing narrow-purpose modeling code.
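The modular workflow the abstract describes (build a network, save its description to files, reload it later, then simulate) can be sketched in miniature with plain Python. The function names and data layout below are hypothetical, invented for illustration; they are not BioNet's actual interface, and the integrate-and-fire loop stands in for the NEURON simulation step.

```python
import json
import os
import tempfile

def build_network(n_cells):
    """Construct a network description as plain data (cells plus synapses)."""
    cells = [{"id": i, "model": "lif", "v_rest": -70.0} for i in range(n_cells)]
    # Ring connectivity: each cell excites its neighbour.
    synapses = [{"pre": i, "post": (i + 1) % n_cells, "weight": 2.0}
                for i in range(n_cells)]
    return {"cells": cells, "synapses": synapses}

def save_network(net, path):
    """Serialize the description so it can be reloaded, refined, or shared."""
    with open(path, "w") as f:
        json.dump(net, f)

def load_network(path):
    with open(path) as f:
        return json.load(f)

def simulate(net, n_steps=100, i_ext=2.0, threshold=-55.0):
    """Crude leaky integrate-and-fire loop recording spike times per cell."""
    v = {c["id"]: c["v_rest"] for c in net["cells"]}
    spikes = {c["id"]: [] for c in net["cells"]}
    for t in range(n_steps):
        inputs = {cid: i_ext for cid in v}                   # constant drive
        for syn in net["synapses"]:                          # input from last step's spikes
            if spikes[syn["pre"]] and spikes[syn["pre"]][-1] == t - 1:
                inputs[syn["post"]] += syn["weight"]
        for cid in v:
            v[cid] += -0.1 * (v[cid] + 70.0) + inputs[cid]   # leak toward rest + drive
            if v[cid] >= threshold:
                spikes[cid].append(t)
                v[cid] = -70.0                               # reset after a spike
    return spikes

path = os.path.join(tempfile.mkdtemp(), "net.json")
save_network(build_network(5), path)
spikes = simulate(load_network(path))
```

The key design point mirrored here is that the model description lives in standardized files, decoupled from the simulator, so the same saved network can be refined or re-simulated without rebuilding it.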
Affiliation(s)
- Kael Dai
- Allen Institute, Seattle, WA, United States of America
- David Feng
- Allen Institute, Seattle, WA, United States of America
- Nicholas Cain
- Allen Institute, Seattle, WA, United States of America
- Christof Koch
- Allen Institute, Seattle, WA, United States of America
|
87
|
Gouwens NW, Berg J, Feng D, Sorensen SA, Zeng H, Hawrylycz MJ, Koch C, Arkhipov A. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nat Commun 2018; 9:710. [PMID: 29459718 PMCID: PMC5818534 DOI: 10.1038/s41467-017-02718-3] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2017] [Accepted: 12/20/2017] [Indexed: 01/17/2023] Open
Abstract
The cellular components of mammalian neocortical circuits are diverse, and capturing this diversity in computational models is challenging. Here we report an approach for generating biophysically detailed models of 170 individual neurons in the Allen Cell Types Database to link the systematic experimental characterization of cell types to the construction of cortical models. We build models from 3D morphologies and somatic electrophysiological responses measured in the same cells. Densities of active somatic conductances and additional parameters are optimized with a genetic algorithm to match electrophysiological features. We evaluate the models by applying additional stimuli and comparing model responses to experimental data. Applying this technique across a diverse set of neurons from adult mouse primary visual cortex, we verify that models preserve the distinctiveness of intrinsic properties between subsets of cells observed in experiments. The optimized models are accessible online alongside the experimental data. Code for optimization and simulation is also openly distributed.
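The optimization strategy described above (a genetic algorithm tuning conductance densities so that simulated responses match measured electrophysiological features) can be sketched generically. The code below is an illustrative GA on a toy two-parameter "neuron", not the authors' actual pipeline: the `features` function stands in for running a simulation and extracting features, and the target values are hypothetical.

```python
import random

random.seed(0)

TARGET = (4.0, 0.5)  # hypothetical target features (e.g., firing rate, adaptation index)

def features(params):
    """Toy stand-in for 'simulate the neuron, then extract feature values'."""
    g_na, g_k = params
    return (2.0 * g_na - g_k, g_k / (g_na + 1.0))

def fitness(params):
    """Negative squared mismatch with the target features (higher is better)."""
    f = features(params)
    return -sum((a - b) ** 2 for a, b in zip(f, TARGET))

def evolve(pop_size=40, n_gen=60, sigma=0.1):
    """Truncation-selection GA: keep the top quarter, breed by crossover + mutation."""
    pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Average two parents, perturb, and clamp to the allowed density range.
            child = tuple(min(5.0, max(0.0, (x + y) / 2 + random.gauss(0, sigma)))
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children  # elitism: the best individuals survive unchanged
    return max(pop, key=fitness)

best = evolve()
```

In the real workflow each fitness evaluation is a full biophysical simulation, so the population is evaluated in parallel; the GA itself is otherwise the same loop of selection, recombination, and mutation.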
Affiliation(s)
- Nathan W Gouwens
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Jim Berg
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- David Feng
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Staci A Sorensen
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Hongkui Zeng
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Michael J Hawrylycz
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Christof Koch
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
- Anton Arkhipov
- Allen Institute for Brain Science, 615 Westlake Avenue N, Seattle, WA, 98109, USA
|