1
Moore JJ, Genkin A, Tournoy M, Pughe-Sanford JL, de Ruyter van Steveninck RR, Chklovskii DB. The neuron as a direct data-driven controller. Proc Natl Acad Sci U S A 2024; 121:e2311893121. [PMID: 38913890; PMCID: PMC11228465; DOI: 10.1073/pnas.2311893121]
Abstract
In the quest to model neuronal function amid gaps in physiological data, a promising strategy is to develop a normative theory that interprets neuronal physiology as optimizing a computational objective. This study extends current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers. We posit that neurons, especially those beyond early sensory areas, steer their environment toward a specific desired state through their output. This environment comprises both synaptically interlinked neurons and external motor-sensory feedback loops, enabling neurons to evaluate the effectiveness of their control via synaptic feedback. To model neurons as biologically feasible controllers that implicitly identify loop dynamics, infer latent states, and optimize control, we utilize the contemporary direct data-driven control (DD-DC) framework. Our DD-DC neuron model explains various neurophysiological phenomena: the shift from potentiation to depression in spike-timing-dependent plasticity, along with its asymmetry; the duration and adaptive nature of feedforward and feedback neuronal filters; the imprecision in spike generation under constant stimulation; and the characteristic operational variability and noise in the brain. Our model presents a significant departure from the traditional feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a modern, biologically informed fundamental unit for constructing neural networks.
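The core idea of direct data-driven control (steering a system using recorded trajectories alone, without first fitting an explicit model) can be illustrated with a minimal sketch. This is not the paper's neuronal DD-DC model: the scalar plant, data horizon, and all parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 0.5          # hidden plant parameters, used only to generate data

# --- record a short input/state trajectory (the only "knowledge" used) ---
T = 20
u_d = rng.normal(size=T)
x_d = np.zeros(T + 1)
for t in range(T):
    x_d[t + 1] = a * x_d[t] + b * u_d[t]

Xd, Xp, Ud = x_d[:-1], x_d[1:], u_d   # states, successor states, inputs

def dd_control(x_now, x_target=0.0):
    """One-step direct data-driven control: find a combination g of
    recorded snapshots that starts at x_now and lands on x_target,
    then replay the corresponding combination of recorded inputs."""
    A = np.vstack([Xd, Xp])                      # constraints on g
    rhs = np.array([x_now, x_target])
    g, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # minimum-norm solution
    return Ud @ g

x = 3.0
u = dd_control(x)
x_next = a * x + b * u    # apply the computed input to the true plant
print(x_next)             # approximately 0: steered without identifying (a, b)
```

The control law never estimates (a, b); it only combines recorded data, which is the sense in which such controllers "implicitly identify" loop dynamics.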
Affiliation(s)
- Jason J Moore
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Alexander Genkin
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Magnus Tournoy
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
- Dmitri B Chklovskii
- Neuroscience Institute, New York University Grossman School of Medicine, New York City, NY 10016
- Center for Computational Neuroscience, Flatiron Institute, New York City, NY 10010
2
Zeldenrust F, Calcini N, Yan X, Bijlsma A, Celikel T. The tuning of tuning: How adaptation influences single cell information transfer. PLoS Comput Biol 2024; 20:e1012043. [PMID: 38739640; PMCID: PMC11115315; DOI: 10.1371/journal.pcbi.1012043]
Abstract
Sensory neurons reconstruct the world from action potentials (spikes) impinging on them. To effectively transfer information about the stimulus to the next processing level, a neuron needs to be able to adapt its working range to the properties of the stimulus. Here, we focus on the intrinsic neural properties that influence information transfer in cortical neurons and how tightly their properties need to be tuned to the stimulus statistics for them to be effective. We start by measuring the intrinsic information encoding properties of putative excitatory and inhibitory neurons in L2/3 of the mouse barrel cortex. Excitatory neurons show high thresholds and strong adaptation, making them fire sparsely and resulting in a strong compression of information, whereas inhibitory neurons that favour fast spiking transfer more information. Next, we turn to computational modelling and ask how two properties influence information transfer: (1) spike-frequency adaptation and (2) the shape of the IV-curve. We find that subthreshold (but not threshold) adaptation, the 'h-current', and a properly tuned leak conductance can increase the information transfer of a neuron, whereas threshold adaptation can increase its working range. Finally, we verify the effect of the IV-curve slope in our experimental recordings and show that excitatory neurons form a more heterogeneous population than inhibitory neurons. These relationships between intrinsic neural features and neural coding, which had not been quantified before, will aid computational, theoretical, and systems neuroscientists in understanding how neuronal populations can alter their coding properties, such as through the impact of neuromodulators. Why the variability of intrinsic properties of excitatory neurons is larger than that of inhibitory ones is an exciting question, for which future research is needed.
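The spike-frequency adaptation measured here can be sketched with a toy leaky integrate-and-fire neuron carrying a spike-triggered adaptation current; all parameters below are illustrative assumptions, not the values fitted in the paper.

```python
import numpy as np

def lif_with_adaptation(I=2.0, T=1.0, dt=1e-4, tau_m=0.02,
                        tau_w=0.2, b=0.3, v_th=1.0):
    """Leaky integrate-and-fire neuron with a spike-triggered
    adaptation current w (illustrative, made-up parameters)."""
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += (dt / tau_m) * (I - w - v)   # leaky integration of drive minus w
        w += (dt / tau_w) * (-w)          # adaptation current decays...
        if v >= v_th:                     # ...and jumps at each spike
            spikes.append(step * dt)
            v, w = 0.0, w + b
    return np.array(spikes)

spikes = lif_with_adaptation()
isis = np.diff(spikes)
# adaptation stretches interspike intervals over a constant step input
print(isis[0], isis[-1])
```

Under a constant step, the accumulated adaptation current lengthens successive interspike intervals, the basic mechanism behind the sparse firing of the adapting excitatory neurons described above.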
Affiliation(s)
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Niccolò Calcini
- Maastricht Centre for Systems Biology (MaCSBio), University of Maastricht, Maastricht, The Netherlands
- Xuan Yan
- Institute of Neuroscience, Chinese Academy of Sciences, Beijing, China
- Ate Bijlsma
- Department of Population Health Sciences / Department of Biology, Universiteit Utrecht, the Netherlands
- Tansu Celikel
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, United States of America
3
Sidhu RS, Johnson EC, Jones DL, Ratnam R. A dynamic spike threshold with correlated noise predicts observed patterns of negative interval correlations in neuronal spike trains. Biol Cybern 2022; 116:611-633. [PMID: 36244004; PMCID: PMC9691502; DOI: 10.1007/s00422-022-00946-5]
Abstract
Negative correlations in the sequential evolution of interspike intervals (ISIs) are a signature of memory in neuronal spike trains. They provide coding benefits including firing-rate stabilization, improved detectability of weak sensory signals, and enhanced transmission of information by improving signal-to-noise ratio. Primary electrosensory afferent spike trains in weakly electric fish fall into two categories based on the pattern of ISI correlations: non-bursting units have negative correlations which remain negative but decay to zero with increasing lags (Type I ISI correlations), and bursting units have oscillatory (alternating-sign) correlations which damp to zero with increasing lags (Type II ISI correlations). Here, we predict and match observed ISI correlations in these afferents using a stochastic dynamic threshold model. We determine the ISI correlation function as a function of an arbitrary discrete noise correlation function [Formula: see text], where k is a multiple of the mean ISI. The function permits forward and inverse calculations of the correlation function. Both types of correlation functions can be generated by adding colored noise to the spike threshold, with Type I correlations generated by slow noise and Type II correlations generated by fast noise. A first-order autoregressive (AR) process with a single parameter is sufficient to predict and accurately match both types of afferent ISI correlation functions, with the type being determined by the sign of the AR parameter. The predicted and experimentally observed correlations are in geometric progression. The theory predicts that the limiting sum of ISI correlations is [Formula: see text], yielding a perfect DC-block in the power spectrum of the spike train. Observed ISI correlations from afferents have a limiting sum that is slightly larger at [Formula: see text] ([Formula: see text]). We conclude that the underlying process for generating ISIs may be a simple combination of low-order AR and moving average processes, and discuss the results from the perspective of optimal coding.
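The qualitative mechanism, a spike threshold with memory producing negative ISI correlations, can be sketched with a toy perfect-integrator neuron. Note the swap: the sketch uses a deterministic adaptive threshold plus white voltage noise, a simpler cousin of the paper's colored-noise threshold model, and every parameter is invented. A long interval leaves the threshold more decayed, so the next interval tends to be short.

```python
import numpy as np

rng = np.random.default_rng(1)

dt, T = 1e-3, 200.0
I, sigma = 20.0, 1.0                 # drift and noise strength of the voltage
th0, dth, tau_th = 1.0, 1.0, 0.1     # threshold baseline, spike jump, decay

v, th, spike_times = 0.0, th0, []
for step in range(int(T / dt)):
    v += I * dt + sigma * np.sqrt(dt) * rng.normal()  # noisy integration
    th += (dt / tau_th) * (th0 - th)  # threshold relaxes to baseline...
    if v >= th:                       # ...and jumps at each spike
        spike_times.append(step * dt)
        v, th = 0.0, th + dth

isis = np.diff(spike_times)
# lag-1 serial correlation coefficient of successive ISIs
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
print(rho1)
```

Because the voltage resets fully at each spike, the threshold is the only state carried across intervals, which is what makes the serial ISI correlation negative.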
Affiliation(s)
- Robin S Sidhu
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Erik C Johnson
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Douglas L Jones
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Rama Ratnam
- Division of Biological and Life Sciences, School of Arts and Sciences, Ahmedabad University, Ahmedabad, Gujarat, India
4
Huang C, Zeldenrust F, Celikel T. Cortical Representation of Touch in Silico. Neuroinformatics 2022; 20:1013-1039. [PMID: 35486347; PMCID: PMC9588483; DOI: 10.1007/s12021-022-09576-5]
Abstract
With its six layers and ~12,000 neurons, a cortical column is a complex network whose function is plausibly greater than the sum of its constituents'. Functional characterization of its network components will require going beyond the brute-force modulation of the neural activity of a small group of neurons. Here we introduce an open-source, biologically inspired, computationally efficient network model of the somatosensory cortex's granular and supragranular layers after reconstructing the barrel cortex in soma resolution. Comparisons of the network activity to empirical observations showed that the in silico network replicates the known properties of touch representations and the changes in synaptic strength induced in vivo by whisker deprivation. Simulations show that the history of the membrane potential acts as a spatial filter that determines the presynaptic population of neurons contributing to a postsynaptic action potential; this spatial filtering might be critical for synaptic integration of top-down and bottom-up information.
Affiliation(s)
- Chao Huang
- Department of Biology, University of Leipzig, Leipzig, Germany
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tansu Celikel
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
5
Lopez-Hazas J, Montero A, Rodriguez FB. Influence of bio-inspired activity regulation through neural thresholds learning in the performance of neural networks. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.001]
6
Yoon YC. LIF and Simplified SRM Neurons Encode Signals Into Spikes via a Form of Asynchronous Pulse Sigma-Delta Modulation. IEEE Trans Neural Netw Learn Syst 2017; 28:1192-1205. [PMID: 26929065; DOI: 10.1109/tnnls.2016.2526029]
Abstract
We show how two spiking neuron models encode continuous-time signals into spikes (action potentials, time-encoded pulses, or point processes) using a special form of sigma-delta modulation (SDM). In particular, we show that the well-known leaky integrate-and-fire (LIF) neuron and the simplified spike response model (SRM0) neuron encode continuous-time signals into spikes via a proposed asynchronous pulse SDM (APSDM) scheme. The encoder is clock-free, using level-crossing sampling with a single-level quantizer, unipolar signaling, differential coding, and pulse-shaping filters. The decoder, in the form of a low-pass filter or bandpass smoothing filter, can be fed with the spikes to reconstruct an estimate of the signal. The density of the spikes reflects the amplitude of the encoded signal. Numerical examples illustrating the concepts and the signaling efficiency of APSDM vis-à-vis SDM for comparable reconstruction accuracies are presented. We anticipate these results will facilitate the design of spiking neurons and spiking neural networks, as well as cross-fertilization between the fields of neural coding and SDM.
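The central claim, that spike density tracks signal amplitude and a low-pass filter over the spikes recovers the signal, can be sketched without the APSDM machinery. The LIF parameters, input, and exponential decoder below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

dt, tau_m, v_th = 1e-4, 0.02, 1.0
t = np.arange(0.0, 1.0, dt)
x = 2.5 + 1.0 * np.sin(2 * np.pi * 2 * t)   # slow input, always suprathreshold

v, spikes = 0.0, np.zeros_like(t)
for i in range(len(t)):
    v += (dt / tau_m) * (x[i] - v)          # leaky integration of the input
    if v >= v_th:
        spikes[i], v = 1.0, 0.0             # emit spike, reset membrane

# decoder: causal exponential (low-pass) smoothing of the spike train
tau_dec = 0.05
kernel = np.exp(-np.arange(0.0, 5 * tau_dec, dt) / tau_dec)
rate = np.convolve(spikes, kernel)[:len(t)] / (kernel.sum() * dt)  # in Hz

r = np.corrcoef(x[2000:], rate[2000:])[0, 1]   # skip the filter transient
print(r)
```

The decoded rate lags the signal slightly (the filter's phase delay), but its correlation with the input remains high, illustrating why spike density can serve as an amplitude code.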
7
Lansky P, Sacerdote L, Zucca C. The Gamma renewal process as an output of the diffusion leaky integrate-and-fire neuronal model. Biol Cybern 2016; 110:193-200. [PMID: 27246170; DOI: 10.1007/s00422-016-0690-x]
Abstract
Statistical properties of spike trains, as well as other neurophysiological data, suggest a number of mathematical models of neurons. These models range from entirely descriptive ones to those deduced from the properties of real neurons. One of them, the diffusion leaky integrate-and-fire neuronal model, which is based on the Ornstein-Uhlenbeck (OU) stochastic process restricted by an absorbing barrier, can describe a wide range of neuronal activity in terms of its parameters, and these parameters are readily associated with known physiological mechanisms. The other, the Gamma renewal process, is descriptive, and its parameters only reflect the observed experimental data or assumed theoretical properties. Here we relate these two commonly used models and show under which conditions the Gamma model is an output from the diffusion OU model. In some cases, the Gamma distribution cannot realistically be achieved for the employed parameters of the OU process.
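The relationship can be sketched numerically: simulate the OU membrane potential until it hits the firing threshold, treat each first-passage time as an interspike interval (ISI), and summarize the resulting renewal process by moment-matching a Gamma distribution. All parameters below are invented for illustration, not conditions from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# OU diffusion LIF: dV = (mu - V/tau) dt + sigma dW, absorbed at threshold S.
dt, tau, mu, sigma, S = 1e-4, 0.02, 100.0, 3.0, 1.0   # suprathreshold regime

def first_passage_time():
    """One ISI: run the OU process from reset (V = 0) until it crosses S."""
    v, t = 0.0, 0.0
    while v < S:
        v += (mu - v / tau) * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return t

isis = np.array([first_passage_time() for _ in range(500)])

# moment-matched Gamma parameters (shape k, scale theta)
k = isis.mean() ** 2 / isis.var()
theta = isis.var() / isis.mean()
print(k, theta)   # drift-dominated regime: k > 1, more regular than Poisson
```

Since first-passage times from a fixed reset are independent and identically distributed, the simulated spike train is a renewal process by construction; the question the paper addresses is when its interval distribution is genuinely Gamma.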
Affiliation(s)
- Petr Lansky
- Institute of Physiology, Academy of Sciences of Czech Republic, Videnská 1083, 142 20, Prague 4, Czech Republic
- Laura Sacerdote
- Department of Mathematics "G. Peano", University of Torino, Via Carlo Alberto 10, 10123, Torino, Italy
- Cristina Zucca
- Department of Mathematics "G. Peano", University of Torino, Via Carlo Alberto 10, 10123, Torino, Italy
8
Johnson EC, Jones DL, Ratnam R. A minimum-error, energy-constrained neural code is an instantaneous-rate code. J Comput Neurosci 2016; 40:193-206. [PMID: 26922680; DOI: 10.1007/s10827-016-0592-x]
Abstract
Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9:61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates, the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish; in particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings, the instantaneous rate coder approximates the response to DC step inputs, matching both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate-coding and temporal-coding, suggesting a possible unifying principle of neural encoding of sensory signals.
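The error-versus-energy trade-off can be sketched with a generic error-threshold ("send-on-delta" style) coder: a spike is emitted only when the decoder's running reconstruction has drifted too far from the signal, so each spike is spent to cancel accumulated error. This is a simplified stand-in in the spirit of the coder described above, not the paper's optimal scheme; all parameters are invented.

```python
import numpy as np

dt, tau, delta = 1e-3, 0.05, 0.3   # time step, decoder decay, spike quantum
t = np.arange(0.0, 2.0, dt)
x = 1.0 + 0.8 * np.sin(2 * np.pi * 1.0 * t)   # signal to encode (positive)

xhat, spikes = 0.0, []
recon = np.zeros_like(t)
for i in range(len(t)):
    xhat *= 1.0 - dt / tau      # reconstruction decays between spikes
    if x[i] - xhat >= delta:    # error crossed threshold: fire (<=1 per step)
        spikes.append(t[i])
        xhat += delta           # each spike adds a fixed quantum
    recon[i] = xhat

err = np.max(np.abs(x[100:] - recon[100:]))   # skip the start-up transient
print(len(spikes), err)
```

By construction the reconstruction error stays bounded near the threshold delta, so lowering delta buys fidelity at the cost of more spikes, which is the energy constraint in miniature.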
Affiliation(s)
- Erik C Johnson
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Douglas L Jones
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Advanced Digital Sciences Center, Illinois at Singapore Pte. Ltd, Singapore, Singapore
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Rama Ratnam
- Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Advanced Digital Sciences Center, Illinois at Singapore Pte. Ltd, Singapore, Singapore