1
Mittal D, Narayanan R. Network motifs in cellular neurophysiology. Trends Neurosci 2024; 47:506-521. [PMID: 38806296 DOI: 10.1016/j.tins.2024.04.008]
Abstract
Concepts from network science and graph theory, including the framework of network motifs, have been frequently applied in studying neuronal networks and other biological complex systems. Network-based approaches can also be used to study the functions of individual neurons, where cellular elements such as ion channels and membrane voltage are conceptualized as nodes within a network, and their interactions are denoted by edges. Network motifs in this context provide functional building blocks that help to illuminate the principles of cellular neurophysiology. In this review we build a case that network motifs operating within neurons provide tools for defining the functional architecture of single-neuron physiology and neuronal adaptations. We highlight the presence of such computational motifs in the cellular mechanisms underlying action potential generation, neuronal oscillations, dendritic integration, and neuronal plasticity. Future work applying the network motifs perspective may help to decipher the functional complexities of neurons and their adaptation during health and disease.
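The node-and-edge view described above can be made concrete with a toy script: cellular elements become vertices of a directed graph, interactions become edges, and motifs such as feedback loops can then be enumerated. The element names and interactions below are illustrative placeholders, not a model taken from the review.

```python
from itertools import permutations

# Toy intracellular "network": nodes are cellular elements, edges are
# directed interactions (hypothetical labels, for illustration only).
edges = {
    ("voltage", "ca_channel"),   # depolarization opens Ca channels
    ("ca_channel", "calcium"),   # channel opening raises [Ca2+]
    ("calcium", "voltage"),      # Ca2+ influx depolarizes: a feedback loop
    ("calcium", "k_channel"),    # Ca2+ activates K channels
    ("k_channel", "voltage"),    # K current repolarizes the membrane
}

def find_three_node_loops(edges):
    """Return 3-node cycles, a simple feedback-motif candidate."""
    nodes = {n for e in edges for n in e}
    loops = set()
    for a, b, c in permutations(nodes, 3):
        if (a, b) in edges and (b, c) in edges and (c, a) in edges:
            loops.add(frozenset((a, b, c)))
    return loops

loops = find_three_node_loops(edges)
```

In this toy graph the only three-node loop is the voltage → Ca channel → calcium feedback cycle; real motif analyses additionally distinguish edge signs (excitatory vs. inhibitory) and motif types (feedforward vs. feedback).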
Affiliation(s)
- Divyansh Mittal
- Centre for Integrative Genomics, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, India.
2
Tschumak A, Feldhoff F, Klefenz F. The switching and learning behavior of an octopus cell implemented on FPGA. Math Biosci Eng 2024; 21:5762-5781. [PMID: 38872557 DOI: 10.3934/mbe.2024254]
Abstract
A dendrocentric backpropagation spike timing-dependent plasticity (STDP) learning rule has been derived, based on temporal logic, for a single octopus neuron. The neuron receives parallel spike trains and collectively adjusts its synaptic weights, constrained to the range [0, 1], during training. After the training phase, it spikes in reaction to event-signaling input patterns in sensory streams. The learning and switching behavior of the octopus cell has been implemented in field-programmable gate array (FPGA) hardware. The FPGA implementation is described, and a hardware proof of concept, obtained by feeding the system with spike cochleagrams, is verified against pre-computed reference software simulation results.
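The paper's dendrocentric rule itself is not reproduced here; as a generic illustration of spike timing-dependent plasticity with weights confined to [0, 1], a minimal additive pair-based STDP update might look as follows (time constants and amplitudes are illustrative, not values from the paper).

```python
import math

def stdp_update(w, dt, a_plus=0.05, a_minus=0.06, tau=20.0):
    """Additive pair-based STDP; dt = t_post - t_pre in ms.
    Potentiate when pre precedes post (dt > 0), depress otherwise;
    weights are clipped to [0, 1] as in the model described above."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(1.0, max(0.0, w))

w = 0.5
w = stdp_update(w, dt=5.0)    # pre 5 ms before post -> potentiation
w = stdp_update(w, dt=-5.0)   # post before pre -> depression
```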
Affiliation(s)
- Alexej Tschumak
- Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Frank Feldhoff
- Advanced Electromagnetics Group, Technische Universität Ilmenau, Ilmenau, Germany
- Frank Klefenz
- Fraunhofer Institute for Digital Media Technology, Ilmenau, Germany
3
Ramaswamy S. Data-driven multiscale computational models of cortical and subcortical regions. Curr Opin Neurobiol 2024; 85:102842. [PMID: 38320453 DOI: 10.1016/j.conb.2024.102842]
Abstract
Data-driven computational models of neurons, synapses, microcircuits, and mesocircuits have become essential tools in modern brain research. The goal of these multiscale models is to integrate and synthesize information from different levels of brain organization, from cellular properties, dendritic excitability, and synaptic dynamics to microcircuits, mesocircuits, and ultimately behavior. This article surveys recent advances in the genesis of data-driven computational models of mammalian neural networks in cortical and subcortical areas. I discuss the challenges and opportunities in developing data-driven multiscale models, including the need for interdisciplinary collaborations, the importance of model validation and comparison, and the potential impact on basic and translational neuroscience research. Finally, I highlight future directions and emerging technologies that will enable more comprehensive and predictive data-driven models of brain function and dysfunction.
Affiliation(s)
- Srikanth Ramaswamy
- Neural Circuits Laboratory, Biosciences Institute, Newcastle University, Newcastle Upon Tyne, NE2 4HH, United Kingdom.
4
Bast A, Fruengel R, de Kock CPJ, Oberlaender M. Network-neuron interactions underlying sensory responses of layer 5 pyramidal tract neurons in barrel cortex. PLoS Comput Biol 2024; 20:e1011468. [PMID: 38626210 PMCID: PMC11051592 DOI: 10.1371/journal.pcbi.1011468]
Abstract
Neurons in the cerebral cortex receive thousands of synaptic inputs per second from thousands of presynaptic neurons. How the dendritic location of inputs, their timing, strength, and presynaptic origin, in conjunction with complex dendritic physiology, impact the transformation of synaptic input into action potential (AP) output remains generally unknown for in vivo conditions. Here, we introduce a computational approach to reveal which properties of the input causally underlie AP output, and how this neuronal input-output computation is influenced by the morphology and biophysical properties of the dendrites. We demonstrate that this approach allows dissecting how different input populations drive in vivo observed APs. For this purpose, we focus on fast and broadly tuned responses that pyramidal tract neurons in layer 5 (L5PTs) of the rat barrel cortex elicit upon passive single whisker deflections. By reducing a multi-scale model that we reported previously, we show that three features are sufficient to predict with high accuracy the sensory responses and receptive fields of L5PTs under these specific in vivo conditions: the count of active excitatory versus inhibitory synapses preceding the response, their spatial distribution on the dendrites, and the AP history. Based on these three features, we derive an analytically tractable description of the input-output computation of L5PTs, which enabled us to dissect how synaptic input from thalamus and different cell types in barrel cortex contributes to these responses. We show that the input-output computation is preserved across L5PTs despite the morphological and biophysical diversity of their dendrites. We found that trial-to-trial variability in L5PT responses, and cell-to-cell variability in their receptive fields, are sufficiently explained by variability in synaptic input from the network, whereas variability in biophysical and morphological properties makes only minor contributions. Our approach of deriving analytically tractable models of input-output computations in L5PTs provides a roadmap to dissect network-neuron interactions underlying L5PT responses across different in vivo conditions and for other cell types.
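The paper's analytically tractable description is not reproduced here; a hypothetical logistic readout of the three named features conveys the flavor of such a reduced input-output model. All weights and the functional form below are made up for illustration, not fitted values from the study.

```python
import math

def ap_probability(n_exc, n_inh, proximal_fraction, ms_since_last_ap,
                   w=(0.08, -0.12, 1.5, -0.5), bias=-3.0, tau_refr=20.0):
    """Hypothetical logistic readout of the three features named above:
    excitatory vs. inhibitory synapse counts, their dendritic location
    (collapsed here to a proximal fraction), and AP history (a decaying
    refractory term). Weights are illustrative only."""
    refractory = math.exp(-ms_since_last_ap / tau_refr)
    drive = (w[0] * n_exc + w[1] * n_inh
             + w[2] * proximal_fraction + w[3] * refractory + bias)
    return 1.0 / (1.0 + math.exp(-drive))

p_strong = ap_probability(n_exc=60, n_inh=10, proximal_fraction=0.7,
                          ms_since_last_ap=100.0)
p_weak = ap_probability(n_exc=20, n_inh=30, proximal_fraction=0.2,
                        ms_since_last_ap=2.0)
```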
Affiliation(s)
- Arco Bast
- In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behavior – caesar, Bonn, Germany
- International Max Planck Research School (IMPRS) for Brain and Behavior, Bonn, Germany
- Rieke Fruengel
- In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behavior – caesar, Bonn, Germany
- International Max Planck Research School (IMPRS) for Brain and Behavior, Bonn, Germany
- Christiaan P. J. de Kock
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Marcel Oberlaender
- In Silico Brain Sciences Group, Max Planck Institute for Neurobiology of Behavior – caesar, Bonn, Germany
- Department of Integrative Neurophysiology, Center for Neurogenomics and Cognitive Research, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
5
O'Donnell C. Nonlinear slow-timescale mechanisms in synaptic plasticity. Curr Opin Neurobiol 2023; 82:102778. [PMID: 37657186 DOI: 10.1016/j.conb.2023.102778]
Abstract
Learning and memory rely on synapses changing their strengths in response to neural activity. However, there is a substantial gap between the timescales of neural electrical dynamics (1-100 ms) and organism behaviour during learning (seconds-minutes). What mechanisms bridge this timescale gap? What are the implications for theories of brain learning? Here I first cover experimental evidence for slow-timescale factors in plasticity induction. Then I review possible underlying cellular and synaptic mechanisms, and insights from recent computational models that incorporate such slow-timescale variables. I conclude that future progress in understanding brain learning across timescales will require both experimental and computational modelling studies that map out the nonlinearities implemented by both fast and slow plasticity mechanisms at synapses, and crucially, their joint interactions.
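A minimal sketch of the timescale-bridging idea discussed above: a fast eligibility trace (tens of milliseconds) is low-pass filtered into a slow variable (tens of seconds), so the weight change outlasts the electrical events that induced it. The variables and constants below are generic placeholders, not a specific model from the review.

```python
def simulate(pairings, dt=1.0, t_end=60000.0,
             tau_fast=50.0, tau_slow=20000.0, rate=0.001):
    """Fast eligibility trace e (tens of ms) decays quickly; the slow
    variable s (tens of s) low-pass filters it, so the induced change
    persists long after the fast events are gone.
    `pairings` is a set of times (ms) at which e is incremented."""
    e, s, t = 0.0, 0.0, 0.0
    while t < t_end:
        if round(t) in pairings:
            e += 1.0                      # fast plasticity-inducing event
        e -= dt * e / tau_fast            # fast decay
        s += dt * (rate * e - s / tau_slow)  # slow integration of e
        t += dt
    return e, s

# Repeated pairings in the first 2 s leave a slow trace that persists
# a full minute later, long after the fast trace has decayed to zero.
e_final, s_final = simulate(pairings={p * 100 for p in range(20)})
```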
Affiliation(s)
- Cian O'Donnell
- School of Computing, Engineering, and Intelligent Systems, Magee Campus, Ulster University, Derry/Londonderry, UK; School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, UK.
6
Wybo WAM, Tsai MC, Tran VAK, Illing B, Jordan J, Morrison A, Senn W. NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways. Proc Natl Acad Sci U S A 2023; 120:e2300558120. [PMID: 37523562 PMCID: PMC10410730 DOI: 10.1073/pnas.2300558120]
Abstract
While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-Methyl-D-Aspartate spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
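A toy caricature of the contextual-modulation idea, not the paper's biophysical model: fixed feedforward branch "detectors" with a threshold (NMDA-spike-like) nonlinearity, whose gains alone are set by context, let one neuron compute different functions of the same input.

```python
def branch(s, theta):
    """Dendritic branch with an NMDA-spike-like threshold nonlinearity."""
    return 1.0 if s >= theta else 0.0

def neuron(x1, x2, gains):
    """Fixed feedforward weights; context only sets the branch gains."""
    b_or = branch(x1 + x2, 1.0)    # responds to either input
    b_and = branch(x1 + x2, 2.0)   # responds only to both inputs
    return gains[0] * b_or + gains[1] * b_and >= 0.5

# Two contexts select different branch gains, re-using the same
# feedforward "knowledge" to compute OR in one context, AND in another.
CONTEXT_OR, CONTEXT_AND = (1.0, 0.0), (0.0, 1.0)
```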
Affiliation(s)
- Willem A. M. Wybo
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Matthias C. Tsai
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Viet Anh Khoa Tran
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Bernd Illing
- Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Jakob Jordan
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Walter Senn
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
7
Pagkalos M, Chavlis S, Poirazi P. Introducing the Dendrify framework for incorporating dendrites to spiking neural networks. Nat Commun 2023; 14:131. [PMID: 36627284 PMCID: PMC9832130 DOI: 10.1038/s41467-022-35747-8]
Abstract
Computational modeling has been indispensable for understanding how subcellular neuronal features influence circuit processing. However, the role of dendritic computations in network-level operations remains largely unexplored. This is partly because existing tools do not allow the development of realistic and efficient network models that account for dendrites. Current spiking neural networks, although efficient, are usually quite simplistic, overlooking essential dendritic properties. Conversely, circuit models with morphologically detailed neuron models are computationally costly, thus impractical for large-network simulations. To bridge the gap between these two extremes and facilitate the adoption of dendritic features in spiking neural networks, we introduce Dendrify, an open-source Python package based on Brian 2. Dendrify, through simple commands, automatically generates reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more powerful neuromorphic systems.
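Dendrify's actual commands are not reproduced here; the kind of reduced model such tools generate, a few coupled compartments rather than a full morphology, can be caricatured with a two-compartment leaky integrator (all parameters illustrative).

```python
def step(v_soma, v_dend, i_dend, dt=0.1,
         tau=10.0, g_couple=0.3, e_leak=-70.0):
    """One Euler step of a two-compartment leaky neuron (mV, ms).
    The dendrite receives input current i_dend and couples to the soma
    through a conductance-like term; parameters are illustrative."""
    dv_s = (-(v_soma - e_leak) + g_couple * (v_dend - v_soma)) / tau
    dv_d = (-(v_dend - e_leak) + g_couple * (v_soma - v_dend) + i_dend) / tau
    return v_soma + dt * dv_s, v_dend + dt * dv_d

v_s, v_d = -70.0, -70.0
for _ in range(1000):                      # 100 ms of dendritic drive
    v_s, v_d = step(v_s, v_d, i_dend=20.0)
```

Even this caricature reproduces a basic dendritic property: the driven dendrite depolarizes more than the soma, because the signal attenuates through the coupling conductance.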
Affiliation(s)
- Michalis Pagkalos
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
- Department of Biology, University of Crete, Heraklion, 70013, Greece
- Spyridon Chavlis
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece
- Panayiota Poirazi
- Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology Hellas (FORTH), Heraklion, 70013, Greece.
8
Oláh VJ, Pedersen NP, Rowan MJM. Ultrafast simulation of large-scale neocortical microcircuitry with biophysically realistic neurons. eLife 2022; 11:e79535. [PMID: 36341568 PMCID: PMC9640191 DOI: 10.7554/elife.79535]
Abstract
Understanding the activity of the mammalian brain requires an integrative knowledge of circuits at distinct scales, ranging from ion channel gating to circuit connectomics. Computational models are regularly employed to understand how multiple parameters contribute synergistically to circuit behavior. However, traditional models of anatomically and biophysically realistic neurons are computationally demanding, especially when scaled to model local circuits. To overcome this limitation, we trained several artificial neural network (ANN) architectures to model the activity of realistic multicompartmental cortical neurons. We identified an ANN architecture that accurately predicted subthreshold activity and action potential firing. The ANN could correctly generalize to previously unobserved synaptic input, including in models containing nonlinear dendritic properties. When scaled, processing times were orders of magnitude faster compared with traditional approaches, allowing for rapid parameter-space mapping in a circuit model of Rett syndrome. Thus, we present a novel ANN approach allowing for rapid, detailed network experiments using inexpensive and commonly available computational resources.
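The paper's ANN architectures are not reproduced here; the core idea, fitting a network to a biophysical model's input-output function, can be caricatured by fitting a single linear temporal filter to a leaky integrator by stochastic gradient descent. This one-filter stand-in for a temporal convolution is a toy, not the authors' method.

```python
import random

def leaky_response(inputs, tau=10.0, dt=1.0):
    """Ground-truth 'biophysical' model: leaky integration of input."""
    v, trace = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)
        trace.append(v)
    return trace

def train_surrogate(inputs, targets, k=30, lr=0.002, epochs=150):
    """Fit a length-k linear temporal filter mapping recent inputs to
    the model's voltage, by per-sample gradient descent (LMS)."""
    w = [0.0] * k
    for _ in range(epochs):
        for t in range(k - 1, len(inputs)):
            window = inputs[t - k + 1:t + 1]
            err = sum(wi * xi for wi, xi in zip(w, window)) - targets[t]
            for j in range(k):
                w[j] -= lr * err * window[j]
    return w

random.seed(0)
xs = [random.random() for _ in range(400)]
ys = leaky_response(xs)
w = train_surrogate(xs, ys)

# Surrogate accuracy on the training signal (mean absolute error):
errs = [abs(sum(wi * xi for wi, xi in zip(w, xs[t - 29:t + 1])) - ys[t])
        for t in range(29, len(xs))]
mae = sum(errs) / len(errs)
```

The learned filter approximates the integrator's exponential kernel, with the most recent input weighted most heavily; the real work replaces this linear toy with deep temporally convolutional networks fitted to multicompartmental models.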
Affiliation(s)
- Viktor J Oláh
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
- Nigel P Pedersen
- Department of Neurology, Emory University School of Medicine, Atlanta, United States
- Matthew JM Rowan
- Department of Cell Biology, Emory University School of Medicine, Atlanta, United States
9
Roder AE, Johnson KEE, Knoll M, Khalfan M, Wang B, Schultz-Cherry S, Banakis S, Kreitman A, Mederos C, Youn JH, Mercado R, Wang W, Ruchnewitz D, Samanovic MI, Mulligan MJ, Lassig M, Łuksza M, Das S, Gresham D, Ghedin E. Optimized quantification of intrahost viral diversity in SARS-CoV-2 and influenza virus sequence data. bioRxiv 2022:2021.05.05.442873. [PMID: 36656775 PMCID: PMC9836620 DOI: 10.1101/2021.05.05.442873]
Abstract
High error rates of viral RNA-dependent RNA polymerases lead to diverse intra-host viral populations during infection. Errors made during replication that are not strongly deleterious to the virus can lead to the generation of minority variants. However, accurate detection of minority variants in viral sequence data is complicated by errors introduced during sample preparation and data analysis. We used synthetic RNA controls and simulated data to test seven variant calling tools across a range of allele frequencies and simulated coverages. We show that choice of variant caller and use of replicate sequencing have the most significant impact on single nucleotide variant (SNV) discovery, and demonstrate how both allele frequency and coverage thresholds impact false discovery and false negative rates. We use these parameters to find minority variants in sequencing data from SARS-CoV-2 clinical specimens and provide guidance for studies of intrahost viral diversity using either single replicate data or data from technical replicates. Our study provides a framework for rigorous assessment of technical factors that impact SNV identification in viral samples and establishes heuristics that will inform and improve future studies of intrahost variation, viral diversity, and viral evolution.
IMPORTANCE: When viruses replicate inside a host, the virus replication machinery makes mistakes. Over time, these mistakes create mutations that result in a diverse population of viruses inside the host. Mutations that are neither lethal to the virus nor strongly beneficial can lead to minority variants that are minor members of the virus population. However, preparing samples for sequencing can also introduce errors that resemble minority variants, resulting in the inclusion of false positive data if not filtered correctly. In this study, we aimed to determine the best methods for identification and quantification of these minority variants by testing the performance of seven commonly used variant calling tools. We used simulated and synthetic data to test their performance against a true set of variants, and then used these studies to inform variant identification in data from clinical SARS-CoV-2 specimens. Together, analyses of our data provide extensive guidance for future studies of viral diversity and evolution.
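The paper's pipeline-specific tools and thresholds are not reproduced here; the filtering logic it evaluates, allele-frequency and coverage cutoffs plus replicate intersection, can be sketched with toy numbers.

```python
def filter_variants(calls, min_freq=0.02, min_cov=200):
    """Keep calls passing allele-frequency and coverage thresholds.
    `calls` maps (position, alt_base) -> (frequency, coverage).
    Threshold values here are illustrative, not the paper's heuristics."""
    return {k for k, (freq, cov) in calls.items()
            if freq >= min_freq and cov >= min_cov}

# Two technical replicates of the same specimen (toy numbers):
rep1 = {(241, "T"): (0.95, 1800), (3037, "A"): (0.04, 900),
        (11083, "G"): (0.015, 2500), (14408, "C"): (0.03, 150)}
rep2 = {(241, "T"): (0.94, 1750), (3037, "A"): (0.05, 870),
        (11083, "G"): (0.20, 40)}

# Requiring a variant to pass in both replicates suppresses
# process errors, which are unlikely to recur independently.
consensus = filter_variants(rep1) & filter_variants(rep2)
```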
10
Pfeiffer P, Barreda Tomás FJ, Wu J, Schleimer JH, Vida I, Schreiber S. A dynamic clamp protocol to artificially modify cell capacitance. eLife 2022; 11:75517. [PMID: 35362411 PMCID: PMC9135398 DOI: 10.7554/elife.75517]
Abstract
Dynamics of excitable cells and networks depend on the membrane time constant, set by membrane resistance and capacitance. Whereas pharmacological and genetic manipulations of ionic conductances of excitable membranes are routine in electrophysiology, experimental control over capacitance remains a challenge. Here, we present capacitance clamp, an approach that allows electrophysiologists to mimic a modified capacitance in biological neurons via an unconventional application of the dynamic clamp technique. We first demonstrate the feasibility of quantitatively modulating capacitance in a mathematical neuron model and then confirm the functionality of capacitance clamp in in vitro experiments in granule cells of rodent dentate gyrus, with up to threefold virtual capacitance changes. Clamping of capacitance thus constitutes a novel technique to probe and decipher mechanisms of neuronal signaling in ways that were so far inaccessible to experimental electrophysiology.
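The published protocol involves careful handling of the amplifier feedback loop; as a bare-bones sketch of the underlying idea, assume the clamp injects i_cc = (C_cell − C_target)·dV/dt, which turns the membrane equation C_cell·dV/dt = I_leak + i_cc into C_target·dV/dt = I_leak. A toy simulation then shows the effective time constant tracking the virtual capacitance (units and values illustrative).

```python
import math

def membrane_tau(c_target, c_cell=200.0, g_leak=10.0, dt=0.01, t_end=200.0):
    """Leaky membrane (pF, nS, ms, mV) under a capacitance clamp.
    Returns the time to reach 1 - 1/e of a voltage step, i.e. the
    effective membrane time constant."""
    v, v_inf, t = 0.0, 10.0, 0.0   # step from 0 toward 10 mV
    while t < t_end:
        i_leak = g_leak * (v_inf - v)
        # Solving c_cell*dvdt = i_leak + (c_cell - c_target)*dvdt
        # for dvdt yields the clamped dynamics c_target*dvdt = i_leak:
        dvdt = i_leak / c_target
        i_cc = (c_cell - c_target) * dvdt  # current the amplifier injects
        v += dt * dvdt
        t += dt
        if v >= v_inf * (1.0 - math.exp(-1.0)):
            return t
    return t_end

tau_control = membrane_tau(c_target=200.0)  # C/g_leak = 20 ms
tau_doubled = membrane_tau(c_target=400.0)  # virtual capacitance doubled
```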
Affiliation(s)
- Paul Pfeiffer
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Jiameng Wu
- Institute for Integrative Neuroanatomy, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Jan-Hendrik Schleimer
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
- Imre Vida
- Institute for Integrative Neuroanatomy, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Susanne Schreiber
- Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Berlin, Germany
11
Linking brain structure, activity, and cognitive function through computation. eNeuro 2022; 9:ENEURO.0316-21.2022. [PMID: 35217544 PMCID: PMC8925650 DOI: 10.1523/eneuro.0316-21.2022]
Abstract
Understanding the human brain is a “Grand Challenge” for 21st century research. Computational approaches enable large and complex datasets to be addressed efficiently, supported by artificial neural networks, modeling and simulation. Dynamic generative multiscale models, which enable the investigation of causation across scales and are guided by principles and theories of brain function, are instrumental for linking brain structure and function. An example of a resource enabling such an integrated approach to neuroscientific discovery is the BigBrain, which spatially anchors tissue models and data across different scales and ensures that multiscale models are supported by the data, making the bridge to both basic neuroscience and medicine. Research at the intersection of neuroscience, computing and robotics has the potential to advance neuro-inspired technologies by taking advantage of a growing body of insights into perception, plasticity and learning. To render data, tools and methods, theories, basic principles and concepts interoperable, the Human Brain Project (HBP) has launched EBRAINS, a digital neuroscience research infrastructure, which brings together a transdisciplinary community of researchers united by the quest to understand the brain, with fascinating insights and perspectives for societal benefits.
12
Beniaguev D, Segev I, London M. Single cortical neurons as deep artificial neural networks. Neuron 2021; 109:2727-2739.e3. [PMID: 34380016 DOI: 10.1016/j.neuron.2021.07.002]
Abstract
Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons' input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs' weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
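The pattern-matching interpretation above can be sketched with a toy: score a spatiotemporal activity pattern against stored templates by a dot product and pick the best match. The branch labels, time bins, and template weights below are invented for illustration, not values extracted from the paper's DNNs.

```python
def match_score(pattern, template):
    """Dot-product similarity between a spatiotemporal activity pattern
    and a stored template (both dicts of (branch, time_bin) -> weight)."""
    keys = set(pattern) | set(template)
    return sum(pattern.get(k, 0.0) * template.get(k, 0.0) for k in keys)

# Hypothetical templates a dendritic branch might implement:
templates = {
    "proximal_then_distal": {("prox", 0): 1.0, ("dist", 1): 1.0},
    "synchronous_distal":   {("dist", 0): 1.0, ("dist", 1): 1.0},
}

observed = {("prox", 0): 0.9, ("dist", 1): 0.8}
best = max(templates, key=lambda name: match_score(observed, templates[name]))
```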
Affiliation(s)
- David Beniaguev
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel.
- Idan Segev
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel; Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Michael London
- Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91904, Israel; Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
13
A convolutional neural-network framework for modelling auditory sensory cells and synapses. Commun Biol 2021; 4:827. [PMID: 34211095 PMCID: PMC8249591 DOI: 10.1038/s42003-021-02341-5]
Abstract
In classical computational neuroscience, analytical model descriptions are derived from neuronal recordings to mimic the underlying biological system. These neuronal models are typically slow to compute and cannot be integrated within large-scale neuronal simulation frameworks. We present a hybrid machine-learning and computational-neuroscience approach that transforms analytical models of sensory neurons and synapses into deep-neural-network (DNN) neuronal units with the same biophysical properties. Our DNN-model architecture comprises parallel and differentiable equations that can be used for backpropagation in neuro-engineering applications, and offers simulation run-time improvement factors of 70 and 280 on CPU and GPU systems, respectively. We focussed our development on auditory neurons and synapses, and show that our DNN-model architecture can be extended to a variety of existing analytical models. We describe how our approach for auditory models can be applied to other neuron and synapse types to help accelerate the development of large-scale brain networks and DNN-based treatments of the pathological system.
Drakopoulos et al. developed a machine-learning and computational-neuroscience approach that transforms analytical models of sensory neurons and synapses into deep-neural-network (DNN) neuronal units with the same biophysical properties. Focusing on auditory neurons and synapses, they showed that their DNN-model architecture could be extended to a variety of existing analytical models and to other neuron and synapse types, thus potentially assisting the development of large-scale brain networks and DNN-based treatments.