1. Driscoll LN, Shenoy K, Sussillo D. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nat Neurosci 2024; 27:1349-1363. [PMID: 38982201; PMCID: PMC11239504; DOI: 10.1038/s41593-024-01668-6]
Abstract
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
Affiliation(s)
- Laura N Driscoll: Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Krishna Shenoy: Departments of Electrical Engineering, Neurosurgery, Bioengineering, and Neurobiology; Wu Tsai Neurosciences Institute; Bio-X Institute; and Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
- David Sussillo: Department of Electrical Engineering and Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
2. Beiran M, Litwin-Kumar A. Prediction of neural activity in connectome-constrained recurrent networks. bioRxiv [Preprint] 2024:2024.02.22.581667. [PMID: 38854115; PMCID: PMC11160579; DOI: 10.1101/2024.02.22.581667]
Abstract
We develop a theory of connectome-constrained neural networks in which a "student" network is trained to reproduce the activity of a ground-truth "teacher," representing a neural system for which a connectome is available. Unlike standard paradigms with unconstrained connectivity, here the two networks have the same connectivity but different biophysical parameters, reflecting uncertainty in neuronal and synaptic properties. We find that a connectome is often insufficient to constrain the dynamics of networks that perform a specific task, illustrating the difficulty of inferring function from connectivity alone. However, recordings from a small subset of neurons can remove this degeneracy, producing dynamics in the student that agree with the teacher. Our theory can also prioritize which neurons to record from to most efficiently predict unmeasured network activity. Our analysis shows that the solution spaces of connectome-constrained and unconstrained models are qualitatively different and provides a framework to determine when such models yield consistent dynamics.
Affiliation(s)
- Manuel Beiran: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Ashok Litwin-Kumar: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
3. Morris PG, Taylor JD, Paton JFR, Nogaret A. Single shot detection of alterations across multiple ionic currents from assimilation of cell membrane dynamics. Sci Rep 2024; 14:6031. [PMID: 38472404; DOI: 10.1038/s41598-024-56576-3]
Abstract
The dysfunction of ion channels is a causative factor in a variety of neurological diseases, making the implicated channels key drug targets. Detecting functional changes across multiple specific ionic currents remains challenging, particularly when the neurological causes are a priori unknown or unexpected. Traditional patch-clamp electrophysiology is a powerful tool in this regard but is low throughput. Here, we introduce a single-shot method for detecting alterations amongst a range of ion channel types from subtle changes in membrane voltage in response to a short, chaotically driven current-clamp protocol. We used data assimilation to estimate the parameters of individual ion channels and from these we reconstructed ionic currents, which exhibit significantly lower error than the parameter estimates themselves. The reconstructed currents thereby become sensitive predictors of functional alterations in biological ion channels. The technique correctly predicted which ionic current was altered, and by approximately how much, following pharmacological blockade of BK, SK, A-type K+ and HCN channels in hippocampal CA1 neurons. We anticipate that this assay could aid the detection of functional changes in specific ionic currents during drug screening, as well as in research targeting ion channel dysfunction.
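One reason reconstructed currents can be better constrained than raw parameters is that, once gating waveforms are fixed, the membrane current is linear in the maximal conductances. The snippet below is only an illustrative sketch with synthetic regressors, not the authors' data-assimilation pipeline (which also estimates nonlinear kinetics):

```python
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 4  # time samples, ion-channel types

# Hypothetical per-channel driving terms x_k(t) = gate_k(t) * (V(t) - E_k);
# with these treated as known, the total membrane current is linear in the
# maximal conductances g_k.
X = rng.standard_normal((T, K))
g_true = np.array([1.0, 0.5, 2.0, 0.2])
I_obs = X @ g_true + 0.05 * rng.standard_normal(T)  # noisy current recording

# Least squares recovers the conductances; per-channel currents follow.
g_est, *_ = np.linalg.lstsq(X, I_obs, rcond=None)
I_rec = X * g_est  # reconstructed current of each channel, column-wise
```

A blocked channel then shows up as a drop in its fitted conductance relative to control, which is the spirit of the single-shot detection described in the abstract.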
Affiliation(s)
- Paul G Morris: Department of Physics, University of Bath, Claverton Down, Bath, UK; School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, UK
- Joseph D Taylor: Department of Physics, University of Bath, Claverton Down, Bath, UK
- Julian F R Paton: Manaaki Manawa - the Centre for Heart Research, Department of Physiology, Faculty of Medical and Health Sciences, University of Auckland, Grafton, Auckland, New Zealand
- Alain Nogaret: Department of Physics, University of Bath, Claverton Down, Bath, UK
4. Srikanth S, Narayanan R. Heterogeneous off-target impact of ion-channel deletion on intrinsic properties of hippocampal model neurons that self-regulate calcium. Front Cell Neurosci 2023; 17:1241450. [PMID: 37904732; PMCID: PMC10613471; DOI: 10.3389/fncel.2023.1241450]
Abstract
How do neurons that implement cell-autonomous self-regulation of calcium react to the knockout of individual ion-channel conductances? To address this question, we used a heterogeneous population of 78 conductance-based models of hippocampal pyramidal neurons that maintained cell-autonomous calcium homeostasis while receiving theta-frequency inputs. At calcium steady-state, we individually deleted each of the 11 active ion-channel conductances from each model. We measured the acute impact of deleting each conductance (one at a time) by comparing intrinsic electrophysiological properties before and immediately after channel deletion. The acute impact of deleting individual conductances on physiological properties (including calcium homeostasis) was heterogeneous, depending on the property, the specific model, and the deleted channel. The underlying many-to-many mapping between ion channels and properties pointed to ion-channel degeneracy. Next, we allowed the other conductances (barring the deleted conductance) to evolve towards achieving calcium homeostasis during theta-frequency activity. When calcium homeostasis was perturbed by ion-channel deletion, post-knockout plasticity in other conductances ensured resilience of calcium homeostasis to ion-channel deletion. These results demonstrate degeneracy in calcium homeostasis, as calcium homeostasis in knockout models was implemented in the absence of a channel that was earlier involved in the homeostatic process. Importantly, in reacquiring homeostasis, ion-channel conductances and physiological properties underwent heterogeneous plasticity (dependent on the model, the property, and the deleted channel), even introducing changes in properties that were not directly connected to the deleted channel. Together, post-knockout plasticity geared towards maintaining homeostasis introduced heterogeneous off-target effects on several channels and properties, suggesting that extreme caution be exercised when interpreting experimental outcomes involving channel knockouts.
Affiliation(s)
- Sunandha Srikanth: Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India; Undergraduate Program, Indian Institute of Science, Bangalore, India
- Rishikesh Narayanan: Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India
5. Wong W. A Fundamental Inequality Governing the Rate Coding Response of Sensory Neurons. Biol Cybern 2023; 117:285-295. [PMID: 37597017; DOI: 10.1007/s00422-023-00971-y]
Abstract
A fundamental inequality governing the spike activity of peripheral neurons is derived and tested against auditory data. This inequality states that the steady-state firing rate must lie between the arithmetic and geometric means of the spontaneous and peak activities during adaptation. Implications towards the development of auditory mechanistic models are explored.
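The inequality is easy to state numerically: for spontaneous rate r0 and peak rate rp, the adapted steady-state rate must lie between their geometric and arithmetic means. A minimal sketch with illustrative rates (not data from the paper):

```python
import numpy as np

def rate_bounds(r0, rp):
    """Geometric and arithmetic means of the spontaneous and peak rates,
    which bound the steady-state firing rate under the inequality."""
    return np.sqrt(r0 * rp), 0.5 * (r0 + rp)

lo, hi = rate_bounds(10.0, 90.0)  # illustrative rates in spikes/s
# lo = 30.0, hi = 50.0: any steady-state rate consistent with the inequality
# lies in [30, 50] spikes/s; the bounds coincide when r0 == rp.
```

Since the geometric mean never exceeds the arithmetic mean, the band is always well defined, and testing the inequality against auditory adaptation data amounts to checking that measured steady-state rates fall inside it.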
Affiliation(s)
- Willy Wong: Department of Electrical and Computer Engineering and Institute of Biomedical Engineering, University of Toronto, Toronto, M5S 3G4, Canada
6. Thompson WH, Skau S. On the scope of scientific hypotheses. R Soc Open Sci 2023; 10:230607. [PMID: 37650069; PMCID: PMC10465209; DOI: 10.1098/rsos.230607]
Abstract
Hypotheses are frequently the starting point of the empirical portion of the scientific process. They state something that the scientific process will attempt to evaluate, corroborate, verify or falsify. Their purpose is to guide the types of data we collect, the analyses we conduct, and the inferences we would like to make. Over the last decade, metascience has advocated for stating hypotheses in preregistrations or registered reports, but how to formulate those hypotheses has received less attention. Here, we argue that hypotheses can vary in specificity along at least three independent dimensions: the relationship, the variables, and the pipeline. Together, these dimensions form the scope of the hypothesis. We demonstrate how narrowing the scope of a hypothesis along any of these dimensions reduces the hypothesis space, and that this reduction is a type of novelty. Finally, we discuss how this formulation can guide researchers toward an appropriate scope for their hypotheses, one that is neither too broad nor too narrow. The framework helps hypothesis-makers clarify what is being tested, chain results to previously known findings, and demarcate what the hypothesis explicitly tests.
Affiliation(s)
- William Hedley Thompson: Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden; Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Simon Skau: Department of Pedagogical, Curricular and Professional Studies, Faculty of Education, University of Gothenburg, Gothenburg, Sweden; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
7. Safavi S, Panagiotaropoulos TI, Kapoor V, Ramirez-Villegas JF, Logothetis NK, Besserve M. Uncovering the organization of neural circuits with Generalized Phase Locking Analysis. PLoS Comput Biol 2023; 19:e1010983. [PMID: 37011110; PMCID: PMC10109521; DOI: 10.1371/journal.pcbi.1010983]
Abstract
Despite the considerable progress of in vivo neural recording techniques, inferring the biophysical mechanisms underlying large-scale coordination of brain activity from neural data remains challenging. One obstacle is the difficulty of linking high-dimensional functional connectivity measures to mechanistic models of network activity. We address this issue by investigating spike-field coupling (SFC) measurements, which quantify the synchronization between the action potentials produced by neurons and mesoscopic "field" signals reflecting subthreshold activity at possibly multiple recording sites. As the number of recording sites grows, the number of pairwise SFC measurements becomes overwhelming to interpret. We develop Generalized Phase Locking Analysis (GPLA) as an interpretable dimensionality reduction of this multivariate SFC. GPLA describes the dominant coupling between field activity and neural ensembles across space and frequencies. We show that GPLA features are biophysically interpretable when used in conjunction with appropriate network models, such that we can identify the influence of underlying circuit properties on these features. We demonstrate the statistical benefits and interpretability of this approach in various computational models and Utah array recordings. The results suggest that GPLA, used jointly with biophysical modeling, can help uncover the contribution of recurrent microcircuits to the spatio-temporal dynamics observed in multichannel experimental recordings.
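The core reduction can be caricatured as a singular-value decomposition of the neuron-by-site coupling matrix. The snippet below uses a random complex matrix as a stand-in for measured spike-field coupling coefficients; it is a schematic of the rank-one summary only, not the published GPLA estimator (which includes statistical normalization and significance testing):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_sites = 40, 16

# Stand-in for a pairwise SFC matrix at one frequency: complex entries whose
# modulus is coupling strength and whose angle is the preferred phase.
C = rng.standard_normal((n_units, n_sites)) \
    + 1j * rng.standard_normal((n_units, n_sites))

# Rank-one reduction: the leading singular triplet summarizes the dominant
# coupling between the neural ensemble and the field across recording sites.
U, s, Vh = np.linalg.svd(C, full_matrices=False)
unit_vector = U[:, 0]        # per-neuron coupling weight and phase
field_vector = Vh[0].conj()  # spatial pattern of the field
coupling_strength = s[0]     # overall strength of the dominant mode
```

The point of the reduction is that a few hundred pairwise coefficients collapse into one strength, one neural pattern, and one field pattern per frequency, which can then be compared against network models.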
Affiliation(s)
- Shervin Safavi: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Theofanis I. Panagiotaropoulos: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Vishal Kapoor: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China
- Juan F. Ramirez-Villegas: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Institute of Science and Technology Austria (IST Austria), Klosterneuburg, Austria
- Nikos K. Logothetis: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; International Center for Primate Brain Research (ICPBR), Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Chinese Academy of Sciences (CAS), Shanghai 201602, China; Centre for Imaging Sciences, Biomedical Imaging Institute, The University of Manchester, Manchester, United Kingdom
- Michel Besserve: Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Empirical Inference, Max Planck Institute for Intelligent Systems and MPI-ETH Center for Learning Systems, Tübingen, Germany
8. Levenstein D, Alvarez VA, Amarasingham A, Azab H, Chen ZS, Gerkin RC, Hasenstaub A, Iyer R, Jolivet RB, Marzen S, Monaco JD, Prinz AA, Quraishi S, Santamaria F, Shivkumar S, Singh MF, Traub R, Nadim F, Rotstein HG, Redish AD. On the Role of Theory and Modeling in Neuroscience. J Neurosci 2023; 43:1074-1088. [PMID: 36796842; PMCID: PMC9962842; DOI: 10.1523/JNEUROSCI.1179-22.2022]
Abstract
In recent years, the field of neuroscience has gone through rapid experimental advances and a significant increase in the use of quantitative and computational methods. This growth has created a need for clearer analyses of the theory and modeling approaches used in the field. This issue is particularly complex in neuroscience because the field studies phenomena that cross a wide range of scales and often require consideration at varying degrees of abstraction, from precise biophysical interactions to the computations they implement. We argue that a pragmatic perspective of science, in which descriptive, mechanistic, and normative models and theories each play a distinct role in defining and bridging levels of abstraction, will facilitate neuroscientific practice. This analysis leads to methodological suggestions, including selecting a level of abstraction that is appropriate for a given problem, identifying transfer functions to connect models and data, and the use of models themselves as a form of experiment.
Affiliation(s)
- Daniel Levenstein: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Veronica A Alvarez: Laboratory on Neurobiology of Compulsive Behaviors, National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Bethesda, Maryland 20892
- Asohan Amarasingham: Departments of Mathematics and Biology, City College and the Graduate Center, City University of New York, New York, New York 10032
- Habiba Azab: Department of Neuroscience, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota 55455
- Zhe S Chen: Department of Psychiatry, Neuroscience & Physiology, New York University School of Medicine, New York, New York 10016
- Richard C Gerkin: School of Life Sciences, Arizona State University, Tempe, Arizona 85281
- Andrea Hasenstaub: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, California 94115
- Renaud B Jolivet: Maastricht Centre for Systems Biology, Maastricht University, Maastricht, The Netherlands
- Sarah Marzen: W. M. Keck Science Department, Pitzer, Scripps, and Claremont McKenna Colleges, Claremont, California 91711
- Joseph D Monaco: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland 21218
- Astrid A Prinz: Department of Biology, Emory University, Atlanta, Georgia 30322
- Salma Quraishi: Neuroscience, Developmental and Regenerative Biology Department, University of Texas at San Antonio, San Antonio, Texas 78249
- Fidel Santamaria: Neuroscience, Developmental and Regenerative Biology Department, University of Texas at San Antonio, San Antonio, Texas 78249
- Sabyasachi Shivkumar: Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627
- Matthew F Singh: Department of Psychological & Brain Sciences, Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, Missouri 63112
- Roger Traub: IBM T.J. Watson Research Center, AI Foundations, Yorktown Heights, New York 10598
- Farzan Nadim: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, California 94115
- Horacio G Rotstein: Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada; Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, California 94115
- A David Redish: Department of Neuroscience, University of Minnesota, Minneapolis, Minnesota 55455
9. Naudin L, Raison-Aubry L, Buhry L. A general pattern of non-spiking neuron dynamics under the effect of potassium and calcium channel modifications. J Comput Neurosci 2023; 51:173-186. [PMID: 36371576; DOI: 10.1007/s10827-022-00840-w]
Abstract
Electrical activity of excitable cells results from ion exchanges through cell membranes, so genetic or epigenetic changes in genes encoding ion channels are likely to affect neuronal electrical signaling throughout the brain. There is a large literature on the effect of variations in ion channels on the dynamics of spiking neurons, which represent the main type of neuron found in vertebrate nervous systems. Nevertheless, non-spiking neurons are also ubiquitous in many nervous tissues and play a critical role in the processing of some sensory systems. To our knowledge, however, how conductance variations affect the dynamics of non-spiking neurons has never been assessed. Based on experimental observations reported in the biological literature and on mathematical considerations, we first propose a phenotypic classification of non-spiking neurons. Then, we determine a general pattern of the phenotypic evolution of non-spiking neurons as a function of changes in calcium and potassium conductances. Furthermore, we study the homeostatic compensatory mechanisms of ion channels in a well-posed non-spiking retinal cone model. We show that there is a restricted range of ion conductance values for which the behavior and phenotype of the neuron are maintained. Finally, we discuss the implications of phenotypic changes in individual cells for network function in the C. elegans nervous system and the retina, two non-spiking nervous tissues composed of neurons with various phenotypes.
Affiliation(s)
- Loïs Naudin: Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
- Laetitia Raison-Aubry: Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
- Laure Buhry: Laboratoire Lorrain de Recherche en Informatique et ses Applications, CNRS, Université de Lorraine, Nancy, France
10. Quinn KN, Abbott MC, Transtrum MK, Machta BB, Sethna JP. Information geometry for multiparameter models: new perspectives on the origin of simplicity. Rep Prog Phys 2022; 86. [PMID: 36576176; PMCID: PMC10018491; DOI: 10.1088/1361-6633/aca6f8]
Abstract
Complex models in physics, biology, economics, and engineering are often "sloppy", meaning that the model parameters are not well determined by the model predictions for collective behavior. Many parameter combinations can vary over decades without significant changes in the predictions. This review uses information geometry to explore sloppiness and its deep relation to emergent theories. We introduce the model manifold of predictions, whose coordinates are the model parameters. Its hyperribbon structure explains why only a few parameter combinations matter for the behavior. We review recent rigorous results that connect the hierarchy of hyperribbon widths to approximation theory, and to the smoothness of model predictions under changes of the control variables. We discuss recent geodesic methods to find simpler models on nearby boundaries of the model manifold: emergent theories with fewer parameters that explain the behavior equally well. We discuss a Bayesian prior which optimizes the mutual information between model parameters and experimental data, naturally favoring points on the emergent boundary theories and thus simpler models. We introduce a 'projected maximum likelihood' prior that efficiently approximates this optimal prior, and contrast both to the poor behavior of the traditional Jeffreys prior. We discuss the way renormalization-group coarse-graining in statistical mechanics introduces a flow of the model manifold, and connect stiff and sloppy directions along the model manifold with relevant and irrelevant eigendirections of the renormalization group. Finally, we discuss recently developed 'intensive' embedding methods, allowing one to visualize the predictions of arbitrary probabilistic models as low-dimensional projections of an isometric embedding, and illustrate our method by generating the model manifold of the Ising model.
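Sloppiness is easy to reproduce in a toy model. The sketch below (our illustration, not code from the review) builds the Fisher information matrix of the classic sum-of-exponentials example and shows its eigenvalues spanning several decades:

```python
import numpy as np

# Toy sloppy model: y(t; theta) = sum_k exp(-theta_k * t).
t = np.linspace(0.1, 3.0, 30)
theta = np.array([0.5, 1.0, 2.0, 4.0])

# Jacobian of predictions with respect to parameters:
# dy/dtheta_k = -t * exp(-theta_k * t).
J = np.stack([-t * np.exp(-th * t) for th in theta], axis=1)

# Fisher information (unit noise): its eigenvalue spectrum is the sloppiness
# signature; hyperribbon widths scale like the square roots of these.
fim = J.T @ J
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
decades = np.log10(eigs[0] / eigs[-1])  # eigenvalues span many decades
```

Stiff directions (large eigenvalues) are the few parameter combinations the data pin down; the remaining sloppy directions can drift over orders of magnitude with little effect on predictions.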
Affiliation(s)
- Katherine N Quinn: Center for the Physics of Biological Function, Princeton University, Princeton, NJ, United States of America
- Michael C Abbott: Department of Physics, Yale University, New Haven, CT, United States of America
- Mark K Transtrum: Department of Physics and Astronomy, Brigham Young University, Provo, UT, United States of America
- Benjamin B Machta: Department of Physics and Systems Biology Institute, Yale University, New Haven, CT, United States of America
- James P Sethna: Department of Physics, Cornell University, Ithaca, NY, United States of America
11. Energy-efficient network activity from disparate circuit parameters. Proc Natl Acad Sci U S A 2022; 119:e2207632119. [PMID: 36279461; PMCID: PMC9636970; DOI: 10.1073/pnas.2207632119]
Abstract
Neural circuits can produce similar activity patterns from vastly different combinations of channel and synaptic conductances. These conductances are tuned for specific activity patterns but might also reflect additional constraints, such as metabolic cost or robustness to perturbations. How do such constraints influence the range of permissible conductances? Here we investigate how metabolic cost affects the parameters of neural circuits with similar activity in a model of the pyloric network of the crab Cancer borealis. We present a machine learning method that can identify a range of network models that generate activity patterns matching experimental data and find that neural circuits can consume largely different amounts of energy despite similar circuit activity. Furthermore, a reduced but still significant range of circuit parameters gives rise to energy-efficient circuits. We then examine the space of parameters of energy-efficient circuits and identify potential tuning strategies for low metabolic cost. Finally, we investigate the interaction between metabolic cost and temperature robustness. We show that metabolic cost can vary across temperatures but that robustness to temperature changes does not necessarily incur an increased metabolic cost. Our analyses show that despite metabolic efficiency and temperature robustness constraining circuit parameters, neural systems can generate functional, efficient, and robust network activity with widely disparate sets of conductances.
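The flavor of the result can be conveyed with a toy filter (our illustration, not the authors' simulation-based machine learning method): sample conductance sets, keep those producing near-identical values of a hypothetical activity feature, and observe that their metabolic cost still varies widely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-conductance "circuits"; the activity feature and the cost
# model below are stand-ins chosen only to make the degeneracy visible.
g = rng.uniform(0.0, 1.0, size=(20000, 3))
activity = g.sum(axis=1)              # toy scalar activity feature
cost = g @ np.array([1.0, 3.0, 0.5])  # toy per-channel energy prices

matched = np.abs(activity - 1.5) < 0.02  # circuits with near-identical activity
cost_range = cost[matched].max() - cost[matched].min()
# cost_range is large: similar activity, widely different energy consumption
```

In the paper the matching criterion is the full pyloric activity pattern and the cost is a biophysical energy estimate, but the logic is the same: the activity constraint leaves a large degenerate set over which cost varies.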
12. Biswas T, Fitzgerald JE. Geometric framework to predict structure from function in neural networks. Phys Rev Res 2022; 4:023255. [PMID: 37635906; PMCID: PMC10456994; DOI: 10.1103/PhysRevResearch.4.023255]
Abstract
Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
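For active threshold-linear units, the steady-state condition r = Wr + I is linear in the weights, which is what makes the solution space tractable. A minimal sketch (random target responses, all units assumed active, lstsq picking the minimum-norm member of the affine solution space; this is an illustration of the linear-algebra core, not the paper's full framework):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 3  # neurons, specified response patterns (p < n leaves freedom)

R = rng.random((n, p)) + 0.1  # target steady-state rates (all units active)
I = rng.random((n, p))        # external inputs for each condition

# Each row w_i of W must satisfy R.T @ w_i = (r_i - I_i) across conditions;
# with p < n the system is underdetermined, and lstsq returns the
# minimum-norm connectivity inside the affine solution space.
W = np.linalg.lstsq(R.T, (R - I).T, rcond=None)[0].T
residual = np.max(np.abs(W @ R + I - R))  # ~0: specified responses reproduced
```

Characterizing all solutions of this underdetermined system, rather than one member, is what yields the paper's geometric description and its certainty conditions for nonzero synapses.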
Affiliation(s)
- Tirthabir Biswas: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA; Department of Physics, Loyola University, New Orleans, Louisiana 70118, USA
- James E. Fitzgerald: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia 20147, USA
13. Taylor JD, Chauhan AS, Taylor JT, Shilnikov AL, Nogaret A. Noise-activated barrier crossing in multiattractor dissipative neural networks. Phys Rev E 2022; 105:064203. [PMID: 35854623; DOI: 10.1103/PhysRevE.105.064203]
Abstract
Noise-activated transitions between coexisting attractors are investigated in a chaotic spiking network. At low noise level, attractor hopping consists of discrete bifurcation events that conserve the memory of initial conditions. When the escape probability becomes comparable to the intrabasin hopping probability, the lifetime of attractors is given by a detailed balance where the less coherent attractors act as a sink for the more coherent ones. In this regime, the escape probability follows an activation law allowing us to assign pseudoactivation energies to limit cycle attractors. These pseudoenergies introduce a useful metric for evaluating the resilience of biological rhythms to perturbations.
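An activation law means pseudoenergies can be read off from escape rates at a few noise levels, in direct analogy with an Arrhenius plot. A hedged sketch with synthetic escape probabilities (illustrative values, not data from the paper):

```python
import numpy as np

# If P_escape is proportional to exp(-E / D) for noise intensity D, then
# log P is linear in 1/D with slope -E, so a straight-line fit recovers
# the pseudoactivation energy assigned to the attractor.
D = np.array([0.5, 1.0, 2.0, 4.0])  # noise levels (illustrative)
E_true = 3.0                         # pseudoenergy to recover
p_escape = np.exp(-E_true / D)       # synthetic escape probabilities

slope, intercept = np.polyfit(1.0 / D, np.log(p_escape), 1)
E_est = -slope  # recovered pseudoenergy of the attractor
```

Comparing the fitted pseudoenergies of different limit cycles is the metric the abstract proposes for ranking the resilience of rhythms to perturbations.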
Affiliation(s)
- Joseph D Taylor
- Department of Physics, University of Bath, Bath BA2 7AY, United Kingdom
| | - Ashok S Chauhan
- Department of Physics, University of Bath, Bath BA2 7AY, United Kingdom
| | - John T Taylor
- Department of Electronics and Electrical Engineering, University of Bath, Bath BA2 7AY, United Kingdom
| | - Andrey L Shilnikov
- Neuroscience Institute, Georgia State University, Petit Science Center, 100 Piedmont Avenue Atlanta, Georgia 30303, USA
- Department of Mathematics and Statistics, Georgia State University, Petit Science Center, 100 Piedmont Avenue, Atlanta, Georgia 30303, USA
| | - Alain Nogaret
- Department of Physics, University of Bath, Bath BA2 7AY, United Kingdom
14
Approaches to Parameter Estimation from Model Neurons and Biological Neurons. ALGORITHMS 2022. [DOI: 10.3390/a15050168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Model optimization in neuroscience has focused on inferring intracellular parameters from time series observations of the membrane voltage and calcium concentrations. These parameters constitute the fingerprints of ion channel subtypes and may identify ion channel mutations from observed changes in electrical activity. A central question in neuroscience is whether computational methods may obtain ion channel parameters with sufficient consistency and accuracy to provide new information on the underlying biology. Finding single-valued solutions, in particular, remains an outstanding theoretical challenge. This note reviews recent progress in the field. It first covers well-posed problems and describes the conditions that the model and data need to meet to warrant the recovery of all the original parameters, even in the presence of noise. The main challenge is model error, which reflects our lack of knowledge of the exact equations. We report on strategies that have been partially successful at inferring the parameters of rodent and songbird neurons, when model error is sufficiently small for accurate predictions to be made irrespective of stimulation.
15
Medlock L, Sekiguchi K, Hong S, Dura-Bernal S, Lytton WW, Prescott SA. Multiscale Computer Model of the Spinal Dorsal Horn Reveals Changes in Network Processing Associated with Chronic Pain. J Neurosci 2022; 42:3133-3149. [PMID: 35232767 PMCID: PMC8996343 DOI: 10.1523/jneurosci.1199-21.2022] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 02/17/2022] [Accepted: 02/17/2022] [Indexed: 11/21/2022] Open
Abstract
Pain-related sensory input is processed in the spinal dorsal horn (SDH) before being relayed to the brain. That processing profoundly influences whether stimuli are correctly or incorrectly perceived as painful. Significant advances have been made in identifying the types of excitatory and inhibitory neurons that comprise the SDH, and there is some information about how neuron types are connected, but it remains unclear how the overall circuit processes sensory input or how that processing is disrupted under chronic pain conditions. To explore SDH function, we developed a computational model of the circuit that is tightly constrained by experimental data. Our model comprises conductance-based neuron models that reproduce the characteristic firing patterns of spinal neurons. Excitatory and inhibitory neuron populations, defined by their expression of genetic markers, spiking pattern, or morphology, were synaptically connected according to available qualitative data. Using a genetic algorithm, synaptic weights were tuned to reproduce projection neuron firing rates (model output) based on primary afferent firing rates (model input) across a range of mechanical stimulus intensities. Disparate synaptic weight combinations could produce equivalent circuit function, revealing degeneracy that may underlie heterogeneous responses of different circuits to perturbations or pathologic insults. To validate our model, we verified that it responded to the reduction of inhibition (i.e., disinhibition) and ablation of specific neuron types in a manner consistent with experiments. Thus validated, our model offers a valuable resource for interpreting experimental results and testing hypotheses in silico to plan experiments for examining normal and pathologic SDH circuit function.
SIGNIFICANCE STATEMENT We developed a multiscale computer model of the posterior part of spinal cord gray matter (spinal dorsal horn), which is involved in perceiving touch and pain.
The model reproduces several experimental observations and makes predictions about how specific types of spinal neurons and synapses influence projection neurons that send information to the brain. Misfiring of these projection neurons can produce anomalous sensations associated with chronic pain. Our computer model will not only assist in planning future experiments, but will also be useful for developing new pharmacotherapy for chronic pain disorders, connecting the effect of drugs acting at the molecular scale with emergent properties of neurons and circuits that shape the pain experience.
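The genetic-algorithm tuning step described in this abstract can be sketched in miniature. The rate model, targets, and all parameters below are stand-ins chosen for illustration, not the paper's SDH model: synaptic weights w are evolved so that a toy rate function matches target output rates across stimulus intensities.

```python
import numpy as np

# Minimal genetic-algorithm sketch: selection, blend crossover, and mutation
# tune three "synaptic weights" to reproduce target firing rates.
rng = np.random.default_rng(1)
stim = np.linspace(0.0, 1.0, 8)                  # afferent input intensities

def output_rate(w, s):
    # toy feedforward rate model: excitatory minus inhibitory drive, rectified
    return np.maximum(0.0, w[0] * s - w[1] * s**2 + w[2])

w_true = np.array([5.0, 2.0, 0.5])
target = output_rate(w_true, stim)               # "experimental" rates

def fitness(w):
    return -np.mean((output_rate(w, stim) - target) ** 2)

pop = rng.uniform(0.0, 6.0, size=(60, 3))        # initial population
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-20:]]        # selection: keep top third
    parents = elite[rng.integers(0, 20, size=(60, 2))]
    pop = parents.mean(axis=1)                   # crossover (blend of two parents)
    pop += rng.normal(0.0, 0.05, pop.shape)      # mutation

best = max(pop, key=fitness)
print(fitness(best))   # near-zero error once tuned
```

The degeneracy the authors report corresponds to many distinct weight vectors reaching comparably high fitness; rerunning the search from different seeds is one simple way to expose it.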
Affiliation(s)
- Laura Medlock
- Neurosciences & Mental Health, The Hospital for Sick Children, Toronto, Ontario M5G 0A4, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9, Canada
| | - Kazutaka Sekiguchi
- Drug Developmental Research Laboratory, Shionogi Pharmaceutical Research Center, Toyonaka, Osaka 561-0825, Japan
- State University of New York Downstate Health Science University, Brooklyn, New York 11203
| | - Sungho Hong
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, 904-0495, Japan
| | - Salvador Dura-Bernal
- State University of New York Downstate Health Science University, Brooklyn, New York 11203
- Nathan Kline Institute for Psychiatric Research, Orangeburg, New York 10962
| | - William W Lytton
- State University of New York Downstate Health Science University, Brooklyn, New York 11203
- Kings County Hospital, Brooklyn, New York 11207
| | - Steven A Prescott
- Neurosciences & Mental Health, The Hospital for Sick Children, Toronto, Ontario M5G 0A4, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario M5S 3G9, Canada
- Department of Physiology, University of Toronto, Toronto, Ontario M5S 1A8, Canada
16
Yoder JA, Anderson CB, Wang C, Izquierdo EJ. Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks. Front Comput Neurosci 2022; 16:818985. [PMID: 35465269 PMCID: PMC9028035 DOI: 10.3389/fncom.2022.818985] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Lifetime learning, the change or acquisition of behaviors during a lifetime based on experience, is a hallmark of living organisms. Multiple mechanisms may be involved, but biological neural circuits have repeatedly demonstrated a vital role in the learning process. These neural circuits are recurrent, dynamic, and non-linear, and models of neural circuits employed in neuroscience and neuroethology accordingly tend to involve continuous-time, non-linear, and recurrently interconnected components. Currently, the main approach for finding configurations of dynamical recurrent neural networks that demonstrate behaviors of interest is to use stochastic search techniques, such as evolutionary algorithms. In an evolutionary algorithm, these dynamic recurrent neural networks are evolved to perform the behavior over multiple generations, through selection, inheritance, and mutation, across a population of solutions. Although these systems can be evolved to exhibit lifetime learning behavior, there are no explicit rules built into these dynamic recurrent neural networks that facilitate learning during their lifetime (e.g., reward signals). In this work, we examine a biologically plausible lifetime learning mechanism for dynamical recurrent neural networks. We focus on a recently proposed reinforcement learning mechanism inspired by neuromodulatory reward signals and ongoing fluctuations in synaptic strengths. Specifically, we extend one of the best-studied and most-commonly used dynamic recurrent neural networks to incorporate the reinforcement learning mechanism. First, we demonstrate that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task. Second, we compare the robustness and efficiency of the reinforcement learning rules in relation to two baseline models, a random walk and a hill-climbing walk through parameter space.
Third, we systematically study the effect of the different meta-parameters of the learning mechanism on the behavioral learning performance. Finally, we report on preliminary results exploring the generality and scalability of this learning mechanism for dynamical neural networks as well as directions for future work.
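The flavor of reward-gated parameter search compared above can be sketched in a few lines. This is a hill-climbing-style baseline of the kind the authors compare against, not their specific neuromodulatory rule, and the "behavior" is a stand-in scalar objective rather than a simulated CTRNN.

```python
import numpy as np

# Toy reward-gated search: parameters receive ongoing random fluctuations,
# and a scalar reward signal decides whether each fluctuation is retained.
rng = np.random.default_rng(5)

def reward(params):
    # stand-in performance measure; a real task would simulate the network
    return -np.sum((params - np.array([0.5, -1.0, 2.0])) ** 2)

params = np.zeros(3)
best = reward(params)
for _ in range(3000):
    trial = params + rng.normal(0.0, 0.1, size=3)   # synaptic fluctuation
    r = reward(trial)
    if r > best:                                     # reward-gated retention
        params, best = trial, r

print(best)   # improves toward 0 as parameters approach the optimum
```

The paper's mechanism differs in operating continuously during the network's lifetime rather than as an outer optimization loop.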
Affiliation(s)
- Jason A. Yoder
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- *Correspondence: Jason A. Yoder
| | - Cooper B. Anderson
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
| | - Cehong Wang
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
| | - Eduardo J. Izquierdo
- Computational Neuroethology Lab, Cognitive Science Program, Indiana University, Bloomington, IN, United States
17
Darshan R, Rivkind A. Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. [PMID: 35385721 DOI: 10.1016/j.celrep.2022.110612] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 02/08/2022] [Accepted: 03/11/2022] [Indexed: 12/13/2022] Open
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks-which enable a continuum of persistent neuronal states-provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
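For reference, the classic symmetric ring attractor that this theory generalizes can be sketched directly: a cosine-tuned recurrent kernel supports a persistent activity bump at a continuum of angles. All parameters below are toy values, and the max-normalization stands in for a proper global gain control.

```python
import numpy as np

# Symmetric ring attractor sketch: neurons at angles theta with cosine
# connectivity sustain a bump of activity without external input.
n = 128
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
J = np.cos(theta[:, None] - theta[None, :]) * (2.0 / n)  # symmetric weights

def step(r):
    r = np.maximum(0.0, J @ r)      # recurrence + threshold nonlinearity
    return r / np.max(r)            # stand-in for global gain control

rng = np.random.default_rng(2)
r = np.maximum(0.0, np.cos(theta - 1.0) + 0.1 * rng.standard_normal(n))
for _ in range(50):                 # relax with no input
    r = step(r)

pos = np.angle(np.sum(r * np.exp(1j * theta)))   # population-vector decode
print(pos)   # the bump persists near its initial angle of 1.0
```

The abstract's point is that real networks achieve this kind of persistence without the hand-built symmetry of J above; the symmetric case is only the idealized limit.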
Affiliation(s)
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
18
Bentzur A, Alon S, Shohat-Ophir G. Behavioral Neuroscience in the Era of Genomics: Tools and Lessons for Analyzing High-Dimensional Datasets. Int J Mol Sci 2022; 23:3811. [PMID: 35409169 PMCID: PMC8998543 DOI: 10.3390/ijms23073811] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 03/26/2022] [Accepted: 03/29/2022] [Indexed: 12/10/2022] Open
Abstract
Behavioral neuroscience underwent a technology-driven revolution with the emergence of machine-vision and machine-learning technologies. These technological advances facilitated the generation of high-resolution, high-throughput capture and analysis of complex behaviors. Therefore, behavioral neuroscience is becoming a data-rich field. While behavioral researchers use advanced computational tools to analyze the resulting datasets, the search for robust and standardized analysis tools is still ongoing. At the same time, the field of genomics exploded with a plethora of technologies which enabled the generation of massive datasets. This growth of genomics data drove the emergence of powerful computational approaches to analyze these data. Here, we discuss the composition of a large behavioral dataset, and the differences and similarities between behavioral and genomics data. We then give examples of genomics-related tools that might be of use for behavioral analysis and discuss concepts that might emerge when considering the two fields together.
Affiliation(s)
- Assa Bentzur
- The Mina & Everard Goodman Faculty of Life Sciences, Gonda Multidisciplinary Brain Research Center, Institute of Nanotechnology, Bar-Ilan University, Ramat Gan 5290002, Israel;
- The Alexander Kofkin Faculty of Engineering, Gonda Multidisciplinary Brain Research Center, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 5290002, Israel
| | - Shahar Alon
- The Alexander Kofkin Faculty of Engineering, Gonda Multidisciplinary Brain Research Center, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat Gan 5290002, Israel
| | - Galit Shohat-Ophir
- The Mina & Everard Goodman Faculty of Life Sciences, Gonda Multidisciplinary Brain Research Center, Institute of Nanotechnology, Bar-Ilan University, Ramat Gan 5290002, Israel;
19
Bittner SR, Palmigiano A, Piet AT, Duan CA, Brody CD, Miller KD, Cunningham J. Interrogating theoretical models of neural computation with emergent property inference. eLife 2021; 10:e56265. [PMID: 34323690 PMCID: PMC8321557 DOI: 10.7554/elife.56265] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Accepted: 06/30/2021] [Indexed: 11/13/2022] Open
Abstract
A cornerstone of theoretical neuroscience is the circuit model: a system of equations that captures a hypothesized neural mechanism. Such models are valuable when they give rise to an experimentally observed phenomenon -- whether behavioral or a pattern of neural activity -- and thus can offer insights into neural computation. The operation of these circuits, like all models, critically depends on the choice of model parameters. A key step is then to identify the model parameters consistent with observed phenomena: to solve the inverse problem. In this work, we present a novel technique, emergent property inference (EPI), that brings the modern probabilistic modeling toolkit to theoretical neuroscience. When theorizing circuit models, theoreticians predominantly focus on reproducing computational properties rather than a particular dataset. Our method uses deep neural networks to learn parameter distributions with these computational properties. This methodology is introduced through a motivational example of parameter inference in the stomatogastric ganglion. EPI is then shown to allow precise control over the behavior of inferred parameters and to scale in parameter dimension better than alternative techniques. In the remainder of this work, we present novel theoretical findings in models of primary visual cortex and superior colliculus, which were gained through the examination of complex parametric structure captured by EPI. Beyond its scientific contribution, this work illustrates the variety of analyses possible once deep learning is harnessed towards solving theoretical inverse problems.
Affiliation(s)
- Sean R Bittner
- Department of Neuroscience, Columbia UniversityNew YorkUnited States
| | | | - Alex T Piet
- Princeton Neuroscience InstitutePrincetonUnited States
- Princeton UniversityPrincetonUnited States
- Allen Institute for Brain ScienceSeattleUnited States
| | - Chunyu A Duan
- Institute of Neuroscience, Chinese Academy of SciencesShanghaiChina
| | - Carlos D Brody
- Princeton Neuroscience InstitutePrincetonUnited States
- Princeton UniversityPrincetonUnited States
- Howard Hughes Medical InstituteChevy ChaseUnited States
| | - Kenneth D Miller
- Department of Neuroscience, Columbia UniversityNew YorkUnited States
| | - John Cunningham
- Department of Statistics, Columbia UniversityNew YorkUnited States
20
Safavi S, Logothetis NK, Besserve M. From Univariate to Multivariate Coupling Between Continuous Signals and Point Processes: A Mathematical Framework. Neural Comput 2021; 33:1751-1817. [PMID: 34411270 DOI: 10.1162/neco_a_01389] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Accepted: 01/19/2021] [Indexed: 11/04/2022]
Abstract
Time series data sets often contain heterogeneous signals, composed of both continuously changing quantities and discretely occurring events. The coupling between these measurements may provide insights into key underlying mechanisms of the systems under study. To better extract this information, we investigate the asymptotic statistical properties of coupling measures between continuous signals and point processes. We first introduce martingale stochastic integration theory as a mathematical model for a family of statistical quantities that include the phase locking value, a classical coupling measure to characterize complex dynamics. Based on the martingale central limit theorem, we can then derive the asymptotic Gaussian distribution of estimates of such coupling measures that can be exploited for statistical testing. Second, based on multivariate extensions of this result and random matrix theory, we establish a principled way to analyze the low-rank coupling between a large number of point processes and continuous signals. For a null hypothesis of no coupling, we establish sufficient conditions for the empirical distribution of squared singular values of the matrix to converge, as the number of measured signals increases, to the well-known Marchenko-Pastur (MP) law, and the largest squared singular value converges to the upper end of the MP support. This justifies a simple thresholding approach to assess the significance of multivariate coupling. Finally, we illustrate with simulations the relevance of our univariate and multivariate results in the context of neural time series, addressing how to reliably quantify the interplay between multichannel local field potential signals and the spiking activity of a large population of neurons.
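The phase locking value at the center of this framework is simple to compute: it is the modulus of the mean oscillation phase sampled at spike times. The sketch below uses a synthetic oscillation and synthetic spike trains, and omits the martingale machinery the paper develops for the asymptotic null distribution.

```python
import numpy as np

# Minimal phase locking value (PLV) sketch on synthetic data.
rng = np.random.default_rng(3)
f, dur = 8.0, 10.0                         # oscillation frequency (Hz), duration (s)

def plv(spike_times):
    phases = 2.0 * np.pi * f * spike_times # oscillation phase at each spike
    return np.abs(np.mean(np.exp(1j * phases)))

locked = np.arange(0.0, dur, 1.0 / f)      # one spike at a fixed phase per cycle
random = rng.uniform(0.0, dur, size=locked.size)
print(plv(locked), plv(random))            # high for locked spikes, low for random spikes
```

For finite spike counts the unlocked PLV is not exactly zero (it scales like one over the square root of the spike count), which is precisely why the paper's asymptotic distribution is needed for principled significance testing.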
Affiliation(s)
- Shervin Safavi
- MPI for Biological Cybernetics, and IMPRS for Cognitive and Systems Neuroscience, University of Tübingen, 72076 Tübingen, Germany
| | - Nikos K Logothetis
- MPI for Biological Cybernetics, 72076 Tübingen, Germany; International Center for Primate Brain Research, Songjiang, Shanghai 200031, China; and University of Manchester, Manchester M13 9PL, U.K.
| | - Michel Besserve
- MPI for Biological Cybernetics and MPI for Intelligent Systems, 72076 Tübingen, Germany
21
Statistical analysis and optimality of neural systems. Neuron 2021; 109:1227-1241.e5. [DOI: 10.1016/j.neuron.2021.01.020] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2020] [Revised: 09/10/2020] [Accepted: 01/19/2021] [Indexed: 11/19/2022]
22
Clairon Q. A regularization method for the parameter estimation problem in ordinary differential equations via discrete optimal control theory. J Stat Plan Inference 2021. [DOI: 10.1016/j.jspi.2020.04.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
23
Biswas T, Bishop WE, Fitzgerald JE. Theoretical principles for illuminating sensorimotor processing with brain-wide neuronal recordings. Curr Opin Neurobiol 2020; 65:138-145. [PMID: 33248437 PMCID: PMC8754199 DOI: 10.1016/j.conb.2020.10.021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 10/28/2020] [Accepted: 10/29/2020] [Indexed: 11/24/2022]
Abstract
Modern recording techniques now permit brain-wide sensorimotor circuits to be observed at single neuron resolution in small animals. Extracting theoretical understanding from these recordings requires principles that organize findings and guide future experiments. Here we review theoretical principles that shed light onto brain-wide sensorimotor processing. We begin with an analogy that conceptualizes principles as streetlamps that illuminate the empirical terrain, and we illustrate the analogy by showing how two familiar principles apply in new ways to brain-wide phenomena. We then focus the bulk of the review on describing three more principles that have wide utility for mapping brain-wide neural activity, making testable predictions from highly parameterized mechanistic models, and investigating the computational determinants of neuronal response patterns across the brain.
Affiliation(s)
- Tirthabir Biswas
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - William E Bishop
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
| | - James E Fitzgerald
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA.
24
Ballouz S, Mangala MM, Perry MD, Heitmann S, Gillis JA, Hill AP, Vandenberg JI. Co-expression of calcium and hERG potassium channels reduces the incidence of proarrhythmic events. Cardiovasc Res 2020; 117:2216-2227. [PMID: 33002116 DOI: 10.1093/cvr/cvaa280] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 08/25/2020] [Accepted: 09/17/2020] [Indexed: 01/02/2023] Open
Abstract
AIMS Cardiac electrical activity is extraordinarily robust. However, when it goes wrong it can have fatal consequences. Electrical activity in the heart is controlled by the carefully orchestrated activity of more than a dozen different ion conductances. While there is considerable variability in cardiac ion channel expression levels between individuals, studies in rodents have indicated that there are modules of ion channels whose expression co-varies. The aim of this study was to investigate whether meta-analytic co-expression analysis of large-scale gene expression datasets could identify modules of co-expressed cardiac ion channel genes in human hearts that are of functional importance.
METHODS AND RESULTS Meta-analysis of 3653 public human RNA-seq datasets identified a strong correlation between expression of CACNA1C (L-type calcium current, ICaL) and KCNH2 (rapid delayed rectifier K+ current, IKr), which was also observed in human adult heart tissue samples. In silico modelling suggested that co-expression of CACNA1C and KCNH2 would limit the variability in action potential duration seen with variations in expression of ion channel genes and reduce susceptibility to early afterdepolarizations, a surrogate marker for proarrhythmia. We also found that levels of KCNH2 and CACNA1C expression are correlated in human-induced pluripotent stem cell-derived cardiac myocytes and the levels of CACNA1C and KCNH2 expression were inversely correlated with the magnitude of changes in repolarization duration following inhibition of IKr.
CONCLUSION Meta-analytic approaches of multiple independent human gene expression datasets can be used to identify gene modules that are important for regulating heart function. Specifically, we have verified that there is co-expression of CACNA1C and KCNH2 ion channel genes in human heart tissue, and in silico analyses suggest that CACNA1C-KCNH2 co-expression increases the robustness of cardiac electrical activity.
Affiliation(s)
- Sara Ballouz
- Garvan-Weizmann Centre for Cellular Genomics, Garvan Institute of Medical Research, 384 Victoria Street, Darlinghurst NSW 2010, Australia.,University of New South Wales, Sydney, Kensington, NSW 2052, Australia.,Stanley Institute for Cognitive Genomics, Cold Spring Harbor Laboratory, Cold Spring Harbor, One Bungtown Road, NY 11724, USA
| | - Melissa M Mangala
- Victor Chang Cardiac Research Institute, Lowy Packer Building, 405 Liverpool Street, Darlinghurst, New South Wales 2010, Australia
| | - Matthew D Perry
- University of New South Wales, Sydney, Kensington, NSW 2052, Australia.,Victor Chang Cardiac Research Institute, Lowy Packer Building, 405 Liverpool Street, Darlinghurst, New South Wales 2010, Australia
| | - Stewart Heitmann
- Victor Chang Cardiac Research Institute, Lowy Packer Building, 405 Liverpool Street, Darlinghurst, New South Wales 2010, Australia
| | - Jesse A Gillis
- Stanley Institute for Cognitive Genomics, Cold Spring Harbor Laboratory, Cold Spring Harbor, One Bungtown Road, NY 11724, USA
| | - Adam P Hill
- University of New South Wales, Sydney, Kensington, NSW 2052, Australia.,Victor Chang Cardiac Research Institute, Lowy Packer Building, 405 Liverpool Street, Darlinghurst, New South Wales 2010, Australia
| | - Jamie I Vandenberg
- University of New South Wales, Sydney, Kensington, NSW 2052, Australia.,Victor Chang Cardiac Research Institute, Lowy Packer Building, 405 Liverpool Street, Darlinghurst, New South Wales 2010, Australia
25
Sekulić V, Yi F, Garrett T, Guet-McCreight A, Lawrence JJ, Skinner FK. Integration of Within-Cell Experimental Data With Multi-Compartmental Modeling Predicts H-Channel Densities and Distributions in Hippocampal OLM Cells. Front Cell Neurosci 2020; 14:277. [PMID: 33093823 PMCID: PMC7527636 DOI: 10.3389/fncel.2020.00277] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 08/05/2020] [Indexed: 12/13/2022] Open
Abstract
Determining biophysical details of spatially extended neurons is a challenge that needs to be overcome if we are to understand the dynamics of brain function from cellular perspectives. Moreover, we now know that we should not average across recordings from many cells of a given cell type to obtain quantitative measures such as conductance since measures can vary multiple-fold for a given cell type. In this work we examine whether a tight combination of experimental and computational work can address this challenge. The oriens-lacunosum/moleculare (OLM) interneuron operates as a “gate” that controls incoming sensory and ongoing contextual information in the CA1 of the hippocampus, making it essential to understand how its biophysical properties contribute to memory function. OLM cells fire phase-locked to the prominent hippocampal theta rhythms, and we previously used computational models to show that OLM cells exhibit high or low theta spiking resonance frequencies that depend respectively on whether their dendrites have hyperpolarization-activated cation channels (h-channels) or not. However, whether OLM cells actually possess dendritic h-channels is unknown at present. We performed a set of whole-cell recordings of OLM cells from mouse hippocampus and constructed three multi-compartment models using morphological and electrophysiological parameters extracted from the same OLM cell, including per-cell pharmacologically isolated h-channel currents. We found that the models best matched experiments when h-channels were present in the dendrites of each of the three model cells created. This strongly suggests that h-channels must be present in OLM cell dendrites and are not localized to their somata. Importantly, this work shows that a tight integration of model and experiment can help tackle the challenge of characterizing biophysical details and distributions in spatially extended neurons. 
Full spiking models were built for two of the OLM cells, matching their current clamp cell-specific electrophysiological recordings. Overall, our work presents a technical advancement in modeling OLM cells. Our models are available to the community to use to gain insight into cellular dynamics underlying hippocampal function.
Affiliation(s)
- Vladislav Sekulić
- Krembil Research Institute, University Health Network, Toronto, ON, Canada.,Department of Physiology, University of Toronto, Toronto, ON, Canada
| | - Feng Yi
- Department of Biomedical and Pharmaceutical Sciences, Center for Biomolecular Structure and Dynamics, Center for Structural and Functional Neuroscience, University of Montana, Missoula, MT, United States
| | - Tavita Garrett
- Neuroscience Graduate Program and Vollum Institute, Oregon Health & Science University, Portland, OR, United States
| | - Alexandre Guet-McCreight
- Krembil Research Institute, University Health Network, Toronto, ON, Canada.,Department of Physiology, University of Toronto, Toronto, ON, Canada
| | - J Josh Lawrence
- Department of Pharmacology and Neuroscience, Texas Tech University Health Sciences Center, Lubbock, TX, United States.,Center for Translational Neuroscience and Therapeutics, Texas Tech University Health Sciences Center, Lubbock, TX, United States.,Garrison Institute on Aging, Texas Tech University Health Sciences Center, Lubbock, TX, United States
| | - Frances K Skinner
- Krembil Research Institute, University Health Network, Toronto, ON, Canada.,Departments of Medicine (Neurology) and Physiology, University of Toronto, Toronto, ON, Canada
26
Gonçalves PJ, Lueckmann JM, Deistler M, Nonnenmacher M, Öcal K, Bassetto G, Chintaluri C, Podlaski WF, Haddad SA, Vogels TP, Greenberg DS, Macke JH. Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife 2020; 9:e56261. [PMID: 32940606 PMCID: PMC7581433 DOI: 10.7554/elife.56261] [Citation(s) in RCA: 68] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Accepted: 09/16/2020] [Indexed: 01/27/2023] Open
Abstract
Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators-trained using model simulations-to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin-Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.
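The deep-density-estimator machinery is involved, but the underlying idea of simulation-based inference can be sketched with its simplest relative, rejection ABC: draw parameters from a prior, simulate, and keep the parameters whose summary features fall close to the observed ones. The Gaussian "simulator", prior, and tolerance below are toy choices, not the paper's method or models.

```python
import numpy as np

# Rejection-ABC sketch of simulation-based inference on a toy model.
rng = np.random.default_rng(4)

def simulate(theta):
    # stand-in mechanistic model: the data's mean equals the parameter
    return rng.normal(theta, 1.0, size=20).mean()

x_obs = 2.0                                    # observed summary feature
prior = rng.uniform(-5.0, 5.0, size=20000)     # prior samples over the parameter
sims = np.array([simulate(t) for t in prior])
posterior = prior[np.abs(sims - x_obs) < 0.1]  # accept close simulations

print(posterior.mean(), posterior.size)        # concentrates near 2.0
```

The paper's contribution is to replace this wasteful accept/reject step with a trained neural density estimator, which amortizes inference across observations and scales to many parameters and features.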
Collapse
Affiliation(s)
- Pedro J Gonçalves
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
| | - Jan-Matthis Lueckmann
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
| | - Michael Deistler
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
| | - Marcel Nonnenmacher
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
| | - Kaan Öcal
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Mathematical Institute, University of Bonn, Bonn, Germany
| | - Giacomo Bassetto
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
| | - Chaitanya Chintaluri
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - William F Podlaski
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
| | - Sara A Haddad
- Max Planck Institute for Brain Research, Frankfurt, Germany
| | - Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, United Kingdom
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - David S Greenberg
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Model-Driven Machine Learning, Institute of Coastal Research, Helmholtz Centre Geesthacht, Geesthacht, Germany
| | - Jakob H Macke
- Computational Neuroengineering, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Max Planck Research Group Neural Systems Analysis, Center of Advanced European Studies and Research (caesar), Bonn, Germany
- Machine Learning in Science, Excellence Cluster Machine Learning, Tübingen University, Tübingen, Germany
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
| |
Collapse
|
27
|
Estimation of neuron parameters from imperfect observations. PLoS Comput Biol 2020; 16:e1008053. [PMID: 32673311 PMCID: PMC7386621 DOI: 10.1371/journal.pcbi.1008053] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 07/28/2020] [Accepted: 06/15/2020] [Indexed: 12/21/2022] Open
Abstract
The estimation of parameters controlling the electrical properties of biological neurons is essential to determine their complement of ion channels and to understand the function of biological circuits. By synchronizing conductance models to time series observations of the membrane voltage, one may construct models capable of predicting neuronal dynamics. However, identifying the actual set of parameters of biological ion channels remains a formidable theoretical challenge. Here, we present a regularization method that improves convergence towards this optimal solution when data are noisy and the model is unknown. Our method relies on the existence of an offset in parameter space arising from the interplay between model nonlinearity and experimental error. By tuning this offset, we induce saddle-node bifurcations from sub-optimal to optimal solutions. This regularization method increases the probability of finding the optimal set of parameters from 67% to 94.3%. We also reduce parameter correlations by implementing adaptive sampling and stimulation protocols compatible with parameter identifiability requirements. Our results show that the optimal model parameters may be inferred from imperfect observations provided the conditions of observability and identifiability are fulfilled.
Collapse
|
28
|
Neuronal population model of globular bushy cells covering unit-to-unit variability. PLoS Comput Biol 2019; 15:e1007563. [PMID: 31881018 PMCID: PMC6934273 DOI: 10.1371/journal.pcbi.1007563] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 11/25/2019] [Indexed: 01/02/2023] Open
Abstract
Computations of acoustic information along the central auditory pathways start in the cochlear nucleus. Bushy cells in the anteroventral cochlear nucleus, which innervate monaural and binaural stations in the superior olivary complex, process and transfer temporal cues relevant for sound localization. These cells are categorized into two groups: spherical and globular bushy cells (SBCs/GBCs). Spontaneous rates of GBCs innervated by multiple auditory nerve (AN) fibers are generally lower than those of SBCs that receive a small number of large AN synapses. In response to low-frequency tonal stimulation, both types of bushy cells show improved phase-locking and entrainment compared to AN fibers. When driven by high-frequency tones, GBCs show primary-like-with-notch or onset-L peristimulus time histograms and relatively irregular spiking. However, previous in vivo physiological studies of bushy cells also found considerable unit-to-unit variability in these response patterns. Here we present a population of models that can simulate the observed variation in GBCs. We used a simple coincidence detection model with an adaptive threshold and systematically varied its six parameters. Out of 567000 parameter combinations tested, 7520 primary-like-with-notch models and 4094 onset-L models were selected that satisfied a set of physiological criteria for a GBC unit. Analyses of the model parameters and output measures revealed that the parameters of the accepted model population are weakly correlated with each other to retain major GBC properties, and that the output spiking patterns of the model are affected by a combination of multiple parameters. Simulations of frequency-dependent temporal properties of the model GBCs showed a reasonable fit to empirical data, supporting the validity of our population modeling. 
The computational simplicity and efficiency of the model structure makes our approach suitable for future large-scale simulations of binaural information processing that may involve thousands of GBC units. In the auditory system, specialized neuronal circuits process various types of acoustic information. A group of neurons, called globular bushy cells (GBCs), faithfully transfer timing information of acoustic signals to their downstream neurons responsible for the perception of sound location. Previous physiological studies found representative activity patterns of GBCs, but with substantial individual variations among them. In this study, we present a population of models, instead of creating one best model, to account for the observed variations of GBCs. We varied all six parameters of a simple auditory neuron model and selected the combinations of parameters that led to acceptable activity patterns of GBCs. In total, we tested more than half a million combinations and accepted ~11600 GBC models. Temporal spiking patterns of real GBCs depend on the sound frequency, and our model population was able to replicate this trend. The model used here is computationally efficient and can thus serve as a building block for future large-scale simulations of auditory information processing.
Collapse
|
29
|
Groden M, Weigand M, Triesch J, Jedlicka P, Cuntz H. A Model of Brain Folding Based on Strong Local and Weak Long-Range Connectivity Requirements. Cereb Cortex 2019; 30:2434-2451. [DOI: 10.1093/cercor/bhz249] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Revised: 08/20/2019] [Accepted: 10/01/2019] [Indexed: 12/21/2022] Open
Abstract
Throughout the animal kingdom, the structure of the central nervous system varies widely from distributed ganglia in worms to compact brains with varying degrees of folding in mammals. The differences in structure may indicate a fundamentally different circuit organization. However, the folded brain most likely is a direct result of mechanical forces when considering that a larger surface area of cortex packs into the restricted volume provided by the skull. Here, we introduce a computational model that instead of modeling mechanical forces relies on dimension reduction methods to place neurons according to specific connectivity requirements. For a simplified connectivity with strong local and weak long-range connections, our model predicts a transition from separate ganglia through smooth brain structures to heavily folded brains as the number of cortical columns increases. The model reproduces experimentally determined relationships between metrics of cortical folding and its pathological phenotypes in lissencephaly, polymicrogyria, microcephaly, autism, and schizophrenia. This suggests that mechanical forces that are known to lead to cortical folding may synergistically contribute to arrangements that reduce wiring. Our model provides a unified conceptual understanding of gyrification linking cellular connectivity and macroscopic structures in large-scale neural network models of the brain.
Collapse
Affiliation(s)
- Moritz Groden
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main D-60528, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main D-60438, Germany
- ICAR3R—Interdisciplinary Centre for 3Rs in Animal Research, Justus Liebig University Giessen, Giessen D-35390, Germany
| | - Marvin Weigand
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main D-60528, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main D-60438, Germany
- Faculty of Biological Sciences, Goethe University, Frankfurt am Main D-60438, Germany
| | - Jochen Triesch
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main D-60438, Germany
- Faculty of Physics, Goethe University, Frankfurt am Main D-60438, Germany
- Faculty of Computer Science and Mathematics, Goethe University, Frankfurt am Main D-60438, Germany
| | - Peter Jedlicka
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main D-60438, Germany
- ICAR3R—Interdisciplinary Centre for 3Rs in Animal Research, Justus Liebig University Giessen, Giessen D-35390, Germany
- Institute of Clinical Neuroanatomy, Neuroscience Center, Goethe University, Frankfurt am Main D-60528, Germany
| | - Hermann Cuntz
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main D-60528, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main D-60438, Germany
| |
Collapse
|
30
|
Abu-Hassan K, Taylor JD, Morris PG, Donati E, Bortolotto ZA, Indiveri G, Paton JFR, Nogaret A. Optimal solid state neurons. Nat Commun 2019; 10:5309. [PMID: 31796727 PMCID: PMC6890780 DOI: 10.1038/s41467-019-13177-3] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2019] [Accepted: 10/14/2019] [Indexed: 11/09/2022] Open
Abstract
Bioelectronic medicine is driving the need for neuromorphic microcircuits that integrate raw nervous stimuli and respond identically to biological neurons. However, designing such circuits remains a challenge. Here we estimate the parameters of highly nonlinear conductance models and derive the ab initio equations of intracellular currents and membrane voltages embodied in analog solid-state electronics. By configuring individual ion channels of solid-state neurons with parameters estimated from large-scale assimilation of electrophysiological recordings, we successfully transfer the complete dynamics of hippocampal and respiratory neurons in silico. The solid-state neurons are found to respond nearly identically to biological neurons under stimulation by a wide range of current injection protocols. The optimization of nonlinear models demonstrates a powerful method for programming analog electronic circuits. This approach offers a route for repairing diseased biocircuits and emulating their function with biomedical implants that can adapt to biofeedback.
Collapse
Affiliation(s)
- Kamal Abu-Hassan
- Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK
| | - Joseph D Taylor
- Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK
| | - Paul G Morris
- Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK; School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, BS8 1TD, UK
| | - Elisa Donati
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
| | - Zuner A Bortolotto
- School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, BS8 1TD, UK
| | - Giacomo Indiveri
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
| | - Julian F R Paton
- School of Physiology, Pharmacology and Neuroscience, University of Bristol, Bristol, BS8 1TD, UK; Department of Physiology, Faculty of Medical and Health Sciences, University of Auckland, Grafton, Auckland, New Zealand
| | - Alain Nogaret
- Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK.
| |
Collapse
|
31
|
Stimberg M, Brette R, Goodman DFM. Brian 2, an intuitive and efficient neural simulator. eLife 2019; 8:e47314. [PMID: 31429824 PMCID: PMC6786860 DOI: 10.7554/elife.47314] [Citation(s) in RCA: 202] [Impact Index Per Article: 40.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 08/19/2019] [Indexed: 01/20/2023] Open
Abstract
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
Collapse
Affiliation(s)
- Marcel Stimberg
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
| | - Romain Brette
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
| | - Dan FM Goodman
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
| |
Collapse
|
32
|
Hartoyo A, Cadusch PJ, Liley DTJ, Hicks DG. Parameter estimation and identifiability in a neural population model for electro-cortical activity. PLoS Comput Biol 2019; 15:e1006694. [PMID: 31145724 PMCID: PMC6542506 DOI: 10.1371/journal.pcbi.1006694] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Accepted: 04/12/2019] [Indexed: 11/18/2022] Open
Abstract
Electroencephalography (EEG) provides a non-invasive measure of brain electrical activity. Neural population models, where large numbers of interacting neurons are considered collectively as a macroscopic system, have long been used to understand features in EEG signals. By tuning dozens of input parameters describing the excitatory and inhibitory neuron populations, these models can reproduce prominent features of the EEG such as the alpha-rhythm. However, the inverse problem, of directly estimating the parameters from fits to EEG data, remains unsolved. Solving this multi-parameter non-linear fitting problem will potentially provide a real-time method for characterizing average neuronal properties in human subjects. Here we perform unbiased fits of a 22-parameter neural population model to EEG data from 82 individuals, using both particle swarm optimization and Markov chain Monte Carlo sampling. We estimate how much is learned about individual parameters by computing Kullback-Leibler divergences between posterior and prior distributions for each parameter. Results indicate that only a single parameter, that determining the dynamics of inhibitory synaptic activity, is directly identifiable, while other parameters have large, though correlated, uncertainties. We show that the eigenvalues of the Fisher information matrix are roughly uniformly spaced over a log scale, indicating that the model is sloppy, like many of the regulatory network models in systems biology. These eigenvalues indicate that the system can be modeled with a low effective dimensionality, with inhibitory synaptic activity being prominent in driving system behavior.
Collapse
Affiliation(s)
- Agus Hartoyo
- Centre for Micro-Photonics, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
| | - Peter J. Cadusch
- Department of Physics and Astronomy, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
| | - David T. J. Liley
- Centre for Human Psychopharmacology, School of Health Sciences, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
- Department of Medicine, University of Melbourne, Parkville, Victoria 3010, Australia
| | - Damien G. Hicks
- Centre for Micro-Photonics, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
- Department of Physics and Astronomy, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
- Bioinformatics Division, Walter & Eliza Hall Institute of Medical Research, Parkville, Victoria 3052, Australia
| |
Collapse
|
33
|
Glaser JI, Benjamin AS, Farhoodi R, Kording KP. The roles of supervised machine learning in systems neuroscience. Prog Neurobiol 2019; 175:126-137. [PMID: 30738835 PMCID: PMC8454059 DOI: 10.1016/j.pneurobio.2019.01.008] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Revised: 01/23/2019] [Accepted: 01/28/2019] [Indexed: 01/18/2023]
Abstract
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: (1) creating solutions to engineering problems, (2) identifying predictive variables, (3) setting benchmarks for simple models of the brain, and (4) serving itself as a model for the brain. The breadth and ease of its applicability suggests that machine learning should be in the toolbox of most systems neuroscientists.
Collapse
Affiliation(s)
- Joshua I Glaser
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Ari S Benjamin
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Roozbeh Farhoodi
- Department of Bioengineering, University of Pennsylvania, United States.
| | - Konrad P Kording
- Department of Bioengineering, University of Pennsylvania, United States; Department of Neuroscience, University of Pennsylvania, United States; Canadian Institute for Advanced Research, Canada.
| |
Collapse
|
34
|
Jacob T, Lillis KP, Wang Z, Swiercz W, Rahmati N, Staley KJ. A Proposed Mechanism for Spontaneous Transitions between Interictal and Ictal Activity. J Neurosci 2019; 39:557-575. [PMID: 30446533 PMCID: PMC6335741 DOI: 10.1523/jneurosci.0719-17.2018] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 10/23/2018] [Accepted: 10/31/2018] [Indexed: 11/21/2022] Open
Abstract
Epileptic networks are characterized by two outputs: brief interictal spikes and rarer, more prolonged seizures. Although either output state is readily modeled in silico and induced experimentally, the transition mechanisms are unknown, in part because no models exhibit both output states spontaneously. In silico small-world neural networks were built using single-compartment neurons whose physiological parameters were derived from dual whole-cell recordings of pyramidal cells in organotypic hippocampal slice cultures that were generating spontaneous seizure-like activity. In silico, neurons were connected by abundant local synapses and rare long-distance synapses. Activity-dependent synaptic depression and gradual recovery delimited synchronous activity. Full synaptic recovery engendered interictal population spikes that spread via long-distance synapses. When synaptic recovery was incomplete, postsynaptic neurons required coincident activation of multiple presynaptic terminals to reach firing threshold. Only local connections were sufficiently dense to spread activity under these conditions. This coalesced network activity into traveling waves whose velocity varied with synaptic recovery. Seizures were comprised of sustained traveling waves that were similar to those recorded during experimental and human neocortical seizures. Sustained traveling waves occurred only when wave velocity, network dimensions, and the rate of synaptic recovery enabled wave reentry into previously depressed areas at precisely ictogenic levels of synaptic recovery. 
Wide-field, cellular-resolution GCamP7b calcium imaging demonstrated similar initial patterns of activation in the hippocampus, although the anatomical distribution of traveling waves of synaptic activation was altered by the pattern of synaptic connectivity in the organotypic hippocampal cultures. SIGNIFICANCE STATEMENT: When computerized distributed neural network models are required to generate both features of epileptic networks (i.e., spontaneous interictal population spikes and seizures), the network structure is substantially constrained. These constraints provide important new hypotheses regarding the nature of epileptic networks and mechanisms of seizure onset.
Collapse
Affiliation(s)
- Theju Jacob
- Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, MA 02115
| | - Kyle P Lillis
- Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, MA 02115
| | - Zemin Wang
- Brigham and Women's Hospital, Boston, MA 02115
- Harvard Medical School, Boston, MA 02115
| | - Waldemar Swiercz
- Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, MA 02115
| | - Negah Rahmati
- Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, MA 02115
| | - Kevin J Staley
- Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, MA 02115
| |
Collapse
|
35
|
Cellular function given parametric variation in the Hodgkin and Huxley model of excitability. Proc Natl Acad Sci U S A 2018; 115:E8211-E8218. [PMID: 30111538 PMCID: PMC6126753 DOI: 10.1073/pnas.1808552115] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
How is reliable physiological function maintained in cells despite considerable variability in the values of key parameters of multiple interacting processes that govern that function? Here, we use the classic Hodgkin-Huxley formulation of the squid giant axon action potential to propose a possible approach to this problem. Although the full Hodgkin-Huxley model is very sensitive to fluctuations that independently occur in its many parameters, the outcome is in fact determined by simple combinations of these parameters along two physiological dimensions: structural and kinetic (denoted S and K, respectively). Structural parameters describe the properties of the cell, including its capacitance and the densities of its ion channels. Kinetic parameters are those that describe the opening and closing of the voltage-dependent conductances. The impacts of parametric fluctuations on the dynamics of the system, seemingly complex in the high-dimensional representation of the Hodgkin-Huxley model, are tractable when examined within the S-K plane. We demonstrate that slow inactivation, a ubiquitous activity-dependent feature of ionic channels, is a powerful local homeostatic control mechanism that stabilizes excitability amid changes in structural and kinetic parameters.
Collapse
|
36
|
Mattingly HH, Transtrum MK, Abbott MC, Machta BB. Maximizing the information learned from finite data selects a simple model. Proc Natl Acad Sci U S A 2018; 115:1760-1765. [PMID: 29434042 PMCID: PMC5828598 DOI: 10.1073/pnas.1715306115] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.
Collapse
Affiliation(s)
- Henry H Mattingly
- Department of Chemical and Biological Engineering, Princeton University, Princeton, NJ 08544
- Lewis-Sigler Institute, Princeton University, Princeton, NJ 08544
| | - Mark K Transtrum
- Department of Physics and Astronomy, Brigham Young University, Provo, UT 84602
| | - Michael C Abbott
- Marian Smoluchowski Institute of Physics, Jagiellonian University, 30-348 Kraków, Poland
| | - Benjamin B Machta
- Lewis-Sigler Institute, Princeton University, Princeton, NJ 08544
- Department of Physics, Princeton University, Princeton, NJ 08544
| |
Collapse
|
37
|
Goodman DF, Winter IM, Léger AC, de Cheveigné A, Lorenzi C. Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy. Hear Res 2018; 358:98-110. [DOI: 10.1016/j.heares.2017.09.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 08/30/2017] [Accepted: 09/18/2017] [Indexed: 11/29/2022]
|
38
|
O'Donnell C, Gonçalves JT, Portera-Cailliau C, Sejnowski TJ. Beyond excitation/inhibition imbalance in multidimensional models of neural circuit changes in brain disorders. eLife 2017; 6:e26724. [PMID: 29019321 PMCID: PMC5663477 DOI: 10.7554/elife.26724] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2017] [Accepted: 10/04/2017] [Indexed: 11/28/2022] Open
Abstract
A leading theory holds that neurodevelopmental brain disorders arise from imbalances in excitatory and inhibitory (E/I) brain circuitry. However, it is unclear whether this one-dimensional model is rich enough to capture the multiple neural circuit alterations underlying brain disorders. Here, we combined computational simulations with analysis of in vivo two-photon Ca2+ imaging data from somatosensory cortex of Fmr1 knock-out (KO) mice, a model of Fragile-X Syndrome, to test the E/I imbalance theory. We found that: (1) The E/I imbalance model cannot account for joint alterations in the observed neural firing rates and correlations; (2) Neural circuit function is vastly more sensitive to changes in some cellular components over others; (3) The direction of circuit alterations in Fmr1 KO mice changes across development. These findings suggest that the basic E/I imbalance model should be updated to higher dimensional models that can better capture the multidimensional computational functions of neural circuits. In many brain disorders, from autism to schizophrenia, the anatomy of the brain appears remarkably unchanged. This implies that the problem may reside in how neurons communicate with one another. Unfortunately, neuroscientists know little about how brain activity might differ from normal in these disorders, or how specific changes in activity give rise to symptoms. One leading theory, first proposed over a decade ago, is that these disorders reflect an imbalance in the activity of excitatory and inhibitory neurons. Excitatory neurons activate their targets, whereas inhibitory neurons suppress or silence them. While studies in mice have lent support to this theory, they have not yet culminated in new treatments for brain disorders. One limitation of the excitation-inhibition imbalance theory is that it is one-dimensional. 
It assumes that there is an optimal balance of excitation and inhibition, and that brain disorders can be arranged in an imaginary line on either side of this optimum. Disorders to the right of the optimum, such as epilepsy and some forms of autism, feature too much excitation. Disorders to the left, such as the developmental disorder Rett syndrome, feature too much inhibition. But can diverse brain disorders really be classified on the basis of a single property, or do scientists need to consider other factors? To find out, O’Donnell et al. analyzed recordings of brain activity from genetically modified mice with the mutation that causes fragile X syndrome, the most common form of inherited learning disability and autism. The mice showed changes in their overall brain activity compared to control animals. Their neurons also tended to fire in a more synchronized manner. A computer simulation revealed that an imbalance in excitation and inhibition alone could not explain these changes. Yet, a more complex simulation incorporating extra properties of neural circuits did a better job of explaining the altered neural activity seen in the mice. O’Donnell et al. propose that this more advanced multi-dimensional model of changes in neural circuits could be used to screen candidate drugs before testing them in patients. In principle, the model could even help with designing drugs or other interventions by making it easier for researchers to target more precisely the changes in neural circuits that occur in brain disorders.
Collapse
Affiliation(s)
- Cian O'Donnell
- Department of Computer Science, University of Bristol, Bristol, United Kingdom; Howard Hughes Medical Institute, Salk Institute for Biological Studies, La Jolla, United States
| | - J Tiago Gonçalves
- Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States; Department of Neurology, David Geffen School of Medicine at UCLA, Los Angeles, United States
| | - Carlos Portera-Cailliau
- Department of Neurology, David Geffen School of Medicine at UCLA, Los Angeles, United States; Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, United States
| | - Terrence J Sejnowski
- Howard Hughes Medical Institute, Salk Institute for Biological Studies, La Jolla, United States; Division of Biological Sciences, University of California at San Diego, La Jolla, United States
| |
Collapse
|
39
|
Patterson EA, Whelan MP. A framework to establish credibility of computational models in biology. PROGRESS IN BIOPHYSICS AND MOLECULAR BIOLOGY 2017; 129:13-19. [DOI: 10.1016/j.pbiomolbio.2016.08.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2016] [Revised: 07/18/2016] [Accepted: 08/01/2016] [Indexed: 10/20/2022]
|
40
|
Abstract
Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed in neuroscience. I distinguish between two types of applications of the simulation methodology in neuroscientific research. Model-oriented applications aim to use the simulation outputs to derive new hypotheses about brain organization and functioning and thus to extend current theoretical knowledge and understanding in the field. Data-oriented applications of the simulation methodology target the collection and analysis of data relevant for neuroscientific research that is inaccessible via more traditional experimental methods. I argue for a two-stage evaluation schema which helps clarify the differences and similarities between three current large-scale simulation projects pursued in neuroscience.
Collapse
|
41
|
Poulin JF, Tasic B, Hjerling-Leffler J, Trimarchi JM, Awatramani R. Disentangling neural cell diversity using single-cell transcriptomics. Nat Neurosci 2016; 19:1131-41. [PMID: 27571192 DOI: 10.1038/nn.4366] [Citation(s) in RCA: 209] [Impact Index Per Article: 29.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Accepted: 07/22/2016] [Indexed: 12/12/2022]
Abstract
Cellular specialization is particularly prominent in mammalian nervous systems, which are composed of millions to billions of neurons that appear in thousands of different 'flavors' and contribute to a variety of functions. Even in a single brain region, individual neurons differ greatly in their morphology, connectivity and electrophysiological properties. Systematic classification of all mammalian neurons is a key goal towards deconstructing the nervous system into its basic components. With the recent advances in single-cell gene expression profiling technologies, it is now possible to undertake the enormous task of disentangling neuronal heterogeneity. High-throughput single-cell RNA sequencing and multiplexed quantitative RT-PCR have become more accessible, and these technologies enable systematic categorization of individual neurons into groups with similar molecular properties. Here we provide a conceptual and practical guide to classification of neural cell types using single-cell gene expression profiling technologies.
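The categorization step described above — grouping cells by similar molecular profiles — can be sketched with a minimal k-means clustering of a synthetic expression matrix (the genes, values, and cell types below are invented purely for illustration; real single-cell pipelines add normalization, dimensionality reduction, and more robust clustering).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic expression matrix: 100 cells x 5 genes, two hypothetical cell types
# distinguished by marker genes (all values invented for illustration).
type_a = rng.normal([5.0, 5.0, 1.0, 1.0, 1.0], 0.5, size=(50, 5))  # high in genes 0-1
type_b = rng.normal([1.0, 1.0, 5.0, 5.0, 1.0], 0.5, size=(50, 5))  # high in genes 2-3
X = np.vstack([type_a, type_b])

def kmeans2(X, iters=20):
    """Minimal two-cluster k-means: farthest-point init, then assign/update."""
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]  # cell farthest from c0
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # nearest-centroid assignment
        centroids = np.stack([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans2(X)   # recovers the two molecularly defined groups
```

The principle is the same one the review describes: cells with similar expression profiles end up in the same group, and the group centroids summarize candidate cell-type signatures.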
Collapse
Affiliation(s)
| | - Bosiljka Tasic
- Department of Molecular Genetics, Allen Institute for Brain Science, Seattle, Washington, USA
| | - Jens Hjerling-Leffler
- Division of Molecular Neurobiology, Department of Medical Biochemistry and Biophysics, Karolinska Institutet, Stockholm, Sweden
| | - Jeffrey M Trimarchi
- Department of Genetics, Development and Cell Biology, Iowa State University, Ames, Iowa, USA
| | | |
Collapse
|
42
|
Naumann EA, Fitzgerald JE, Dunn TW, Rihel J, Sompolinsky H, Engert F. From Whole-Brain Data to Functional Circuit Models: The Zebrafish Optomotor Response. Cell 2016; 167:947-960.e20. [PMID: 27814522 DOI: 10.1016/j.cell.2016.10.019] [Citation(s) in RCA: 154] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Revised: 05/24/2016] [Accepted: 10/11/2016] [Indexed: 02/06/2023]
Abstract
Detailed descriptions of brain-scale sensorimotor circuits underlying vertebrate behavior remain elusive. Recent advances in zebrafish neuroscience offer new opportunities to dissect such circuits via whole-brain imaging, behavioral analysis, functional perturbations, and network modeling. Here, we harness these tools to generate a brain-scale circuit model of the optomotor response, an orienting behavior evoked by visual motion. We show that such motion is processed by diverse neural response types distributed across multiple brain regions. To transform sensory input into action, these regions sequentially integrate eye- and direction-specific sensory streams, refine representations via interhemispheric inhibition, and demix locomotor instructions to independently drive turning and forward swimming. While experiments revealed many neural response types throughout the brain, modeling identified the dimensions of functional connectivity most critical for the behavior. We thus reveal how distributed neurons collaborate to generate behavior and illustrate a paradigm for distilling functional circuit models from whole-brain data.
Collapse
Affiliation(s)
- Eva A Naumann
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Department of Cell and Developmental Biology, University College London, London WC1E 6BT, UK
| | | | - Timothy W Dunn
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
| | - Jason Rihel
- Department of Cell and Developmental Biology, University College London, London WC1E 6BT, UK
| | - Haim Sompolinsky
- Center for Brain Science, Harvard University, Cambridge, MA 02138, USA; Racah Institute of Physics and the Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
| | - Florian Engert
- Department of Molecular & Cellular Biology, Harvard University, Cambridge, MA 02138, USA; Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
| |
Collapse
|
43
|
Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning. PLoS Comput Biol 2016; 12:e1005175. [PMID: 27814352 PMCID: PMC5096671 DOI: 10.1371/journal.pcbi.1005175] [Citation(s) in RCA: 94] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2016] [Accepted: 09/24/2016] [Indexed: 11/19/2022] Open
Abstract
Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network, we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity.

The question of how the brain generates movement has been extensively studied, yet multiple competing models exist.
Representational approaches relate the activity of single neurons to movement parameters such as velocity and position, which is useful for decoding movement intentions, whereas the dynamical systems approach predicts that neural activity should evolve in a predictable way based on population activity. Existing representational models cannot reproduce the recent finding in monkeys that predictable rotational patterns underlie motor cortex activity during reach initiation, a finding predicted by a dynamical model in which muscle activity is a direct combination of neural population rotations. However, previous simulations did not consider an essential aspect of representational models: variable time offsets between neurons and kinematics. Although these offsets reveal rotational patterns in the representational model, those rotations are statistically different from the ones observed in the brain and predicted by a dynamical model. Importantly, a simple recurrent neural network model also showed rotational patterns statistically similar to those observed in the brain, supporting the idea that dynamical systems-based approaches may provide a powerful explanation of motor cortex function.
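The rotational population structure discussed above can be sketched with the simplest possible dynamical model: a linear system x' = Mx with a skew-symmetric M, which rotates the population state while preserving its norm (the parameters below are illustrative, not fit to motor cortex data; this is a toy stand-in for jPCA-style analyses, not the authors' CMPT or network models).

```python
import numpy as np

# Skew-symmetric recurrence: the population state rotates at 1 rad per unit time
# without changing its norm (illustrative parameters only).
M = np.array([[0.0, -1.0],
              [1.0,  0.0]])

dt = 0.001
x = np.array([1.0, 0.0])                 # initial population state
traj = [x.copy()]
for _ in range(int(np.pi / 2 / dt)):     # integrate a quarter turn
    x = x + dt * (M @ x)                 # forward Euler step of x' = M x
    traj.append(x.copy())
traj = np.array(traj)

print(f"final state ~ ({x[0]:.3f}, {x[1]:.3f}), norm = {np.linalg.norm(x):.4f}")
```

The quarter turn carries (1, 0) to approximately (0, 1) while the norm stays near 1 — the signature of rotational rather than purely decaying dynamics. In higher dimensions, jPCA-style methods look for the plane in which such rotation is strongest.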
Collapse
|
44
|
Nogaret A, Meliza CD, Margoliash D, Abarbanel HDI. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data. Sci Rep 2016; 6:32749. [PMID: 27605157 PMCID: PMC5015021 DOI: 10.1038/srep32749] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2016] [Accepted: 08/08/2016] [Indexed: 01/09/2023] Open
Abstract
We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models, which we then used to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large-scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.
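The assimilate-then-predict workflow can be illustrated with a deliberately simple stand-in: recovering the two parameters of a passive membrane model from a simulated recording by brute-force search, then validating on a held-out current protocol. The paper itself fits a nine-channel conductance model with interior-point optimization; the model, units, and values below are invented for this sketch.

```python
import numpy as np

dt, E = 0.1, -70.0   # time step (ms) and resting potential (mV); invented values

def simulate(tau, R, I):
    """Forward-Euler integration of the passive membrane tau*dV/dt = -(V - E) + R*I(t)."""
    V = np.empty(len(I))
    V[0] = E
    for t in range(1, len(I)):
        V[t] = V[t - 1] + dt / tau * (-(V[t - 1] - E) + R * I[t - 1])
    return V

# "Recorded" trace: a step current passed through hidden ground-truth parameters.
I_fit = np.zeros(1000)
I_fit[200:700] = 0.5                      # nA current step
V_data = simulate(tau=20.0, R=40.0, I=I_fit)

# Assimilation: brute-force grid search over (tau, R) minimising the fit error
# (a toy substitute for the paper's constrained nonlinear optimization).
taus = np.linspace(10.0, 30.0, 21)        # ms
Rs = np.linspace(20.0, 60.0, 41)          # MOhm
errs = [(np.mean((simulate(t, r, I_fit) - V_data) ** 2), t, r)
        for t in taus for r in Rs]
_, tau_hat, R_hat = min(errs)

# Validation, as in the paper: predict the response to a current protocol
# that was never used during fitting.
I_test = np.zeros(1000)
I_test[100:400] = 0.3
I_test[600:900] = -0.2
pred_err = np.mean((simulate(tau_hat, R_hat, I_test) - simulate(20.0, 40.0, I_test)) ** 2)
print(f"tau_hat={tau_hat:.1f} ms  R_hat={R_hat:.1f} MOhm  held-out MSE={pred_err:.2e}")
```

The held-out protocol plays the role of the paper's 20–50 validation injections: a model that merely memorized the fitting trace would fail here, while one that captured the underlying dynamics generalizes.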
Collapse
Affiliation(s)
- Alain Nogaret
- Department of Physics, University of Bath, Bath BA2 7AY, UK
| | - C Daniel Meliza
- Department of Psychology, University of Virginia, Charlottesville, VA 22904, USA
| | - Daniel Margoliash
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL 60637, USA
| | - Henry D I Abarbanel
- Department of Physics, University of California San Diego, La Jolla, CA 92093, USA; Scripps Institution of Oceanography, Marine Physical Laboratory, La Jolla, CA 92093, USA
| |
Collapse
|
45
|
A Brief Overview of Techniques for Modulating Neuroendocrine and Other Neural Systems. ACTA ACUST UNITED AC 2016. [DOI: 10.1007/978-3-319-41603-8_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
|
46
|
Emergence and maintenance of excitability: kinetics over structure. Curr Opin Neurobiol 2016; 40:66-71. [PMID: 27400289 DOI: 10.1016/j.conb.2016.06.013] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2016] [Revised: 06/13/2016] [Accepted: 06/23/2016] [Indexed: 01/19/2023]
Abstract
The capacity to generate action potentials in neurons and other excitable cells requires tuning of both ionic channel expression and kinetics in a large parameter space. Alongside studies that extend traditional focus on control-based regulation of structural parameters (channel densities), there is a budding interest in self-organization of kinetic parameters. In this picture, ionic channels are continually forced by activity in-and-out of a pool of states not available for the mechanism of excitability. The process, acting on expressed structure, provides a bed for generation of a spectrum of excitability modes. Driven by microscopic fluctuations over a broad range of temporal scales, self-organization of kinetic parameters enriches the concepts and tools used in the study of development of excitability.
Collapse
|
47
|
Gaiteri C, Mostafavi S, Honey CJ, De Jager PL, Bennett DA. Genetic variants in Alzheimer disease - molecular and brain network approaches. Nat Rev Neurol 2016; 12:413-27. [PMID: 27282653 PMCID: PMC5017598 DOI: 10.1038/nrneurol.2016.84] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Genetic studies in late-onset Alzheimer disease (LOAD) are aimed at identifying core disease mechanisms and providing potential biomarkers and drug candidates to improve clinical care of AD. However, owing to the complexity of LOAD, including pathological heterogeneity and disease polygenicity, extraction of actionable guidance from LOAD genetics has been challenging. Past attempts to summarize the effects of LOAD-associated genetic variants have used pathway analysis and collections of small-scale experiments to hypothesize functional convergence across several variants. In this Review, we discuss how the study of molecular, cellular and brain networks provides additional information on the effects of LOAD-associated genetic variants. We then discuss emerging combinations of these omic data sets into multiscale models, which provide a more comprehensive representation of the effects of LOAD-associated genetic variants at multiple biophysical scales. Furthermore, we highlight the clinical potential of mechanistically coupling genetic variants and disease phenotypes with multiscale brain models.
Collapse
Affiliation(s)
- Chris Gaiteri
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
| | - Sara Mostafavi
- Departments of Statistics and Medical Genetics; Centre for Molecular Medicine and Therapeutics, University of British Columbia, 950 West 28th Avenue, Vancouver, British Columbia V5Z 4H4, Canada
| | - Christopher J Honey
- Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor Sidney Smith Hall, Toronto, Ontario M5S 3G3, Canada
| | - Philip L De Jager
- Program in Translational NeuroPsychiatric Genomics, Institute for the Neurosciences, Departments of Neurology and Psychiatry, Brigham and Women's Hospital, 75 Francis Street, Boston MA 02115, USA
| | - David A Bennett
- Rush Alzheimer's Disease Center, Rush University Medical Center, 600 S Paulina Street, Chicago, Illinois 60612, USA
| |
Collapse
|
48
|
Glaser JI, Kording KP. The Development and Analysis of Integrated Neuroscience Data. Front Comput Neurosci 2016; 10:11. [PMID: 26903852 PMCID: PMC4749710 DOI: 10.3389/fncom.2016.00011] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 01/28/2016] [Indexed: 12/12/2022] Open
Abstract
There is a strong emphasis on developing novel neuroscience technologies, in particular on recording from more neurons. There has thus been increasing discussion about how to analyze the resulting big datasets. What has received less attention is that over the last 30 years, papers in neuroscience have progressively integrated more approaches, such as electrophysiology, anatomy, and genetics. As such, there has been little discussion on how to combine and analyze this multimodal data. Here, we describe the growth of multimodal approaches, and discuss the needed analysis advancements to make sense of this data.
Collapse
Affiliation(s)
- Joshua I Glaser
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA
| | - Konrad P Kording
- Interdepartmental Neuroscience Program, Northwestern University, Chicago, IL, USA; Department of Physical Medicine and Rehabilitation, Northwestern University and Rehabilitation Institute of Chicago, Chicago, IL, USA; Department of Physiology, Northwestern University, Chicago, IL, USA; Department of Applied Mathematics, Northwestern University, Chicago, IL, USA
| |
Collapse
|
49
|
Introduction. Transl Neurosci 2016. [DOI: 10.1007/978-1-4899-7654-3_1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
|