1
Jaquette J, Kedia S, Sander E, Touboul JD. Reliability and robustness of oscillations in some slow-fast chaotic systems. Chaos 2023; 33:103135. [PMID: 37874881 PMCID: PMC10599791 DOI: 10.1063/5.0166846]
Abstract
A variety of nonlinear models of biological systems generate complex chaotic behaviors that contrast with biological homeostasis, the observation that many biological systems prove remarkably robust in the face of changing external or internal conditions. Motivated by the subtle dynamics of cell activity in a crustacean central pattern generator (CPG), this paper proposes a refinement of the notion of chaos that reconciles homeostasis and chaos in systems with multiple timescales. We show that systems displaying relaxation cycles while going through chaotic attractors generate chaotic dynamics that are regular at macroscopic timescales and are, thus, consistent with physiological function. We further show that this relative regularity may break down through global bifurcations of chaotic attractors such as crises, beyond which the system may also generate erratic activity at slow timescales. We analyze these phenomena in detail in the chaotic Rulkov map, a classical neuron model known to exhibit a variety of chaotic spike patterns. This leads us to propose that the passage of slow relaxation cycles through a chaotic attractor crisis is a robust, general mechanism for the transition between such dynamics. We validate this numerically in three other models: a simple model of the crustacean CPG neural network, a discrete cubic map, and a continuous flow.
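For orientation, the chaotic Rulkov map named in this abstract has a standard two-variable fast-slow form: a fast variable x driven by a slow variable y. The sketch below iterates it with illustrative parameter values chosen for a chaotic bursting regime; they are not the values used in the paper.

```python
def rulkov(alpha=4.5, mu=0.001, sigma=-0.5, steps=10000, x0=-1.0, y0=-3.0):
    """Iterate the chaotic Rulkov map.

    Fast variable:  x_{n+1} = alpha / (1 + x_n^2) + y_n
    Slow variable:  y_{n+1} = y_n - mu * (x_n - sigma)
    """
    x, y = x0, y0
    xs = []
    for _ in range(steps):
        # tuple assignment evaluates the right-hand side with the old (x, y)
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs.append(x)
    return xs

trace = rulkov()
```

Because mu is small, y drifts slowly and sweeps the fast subsystem through its chaotic spiking regime, producing the slow relaxation cycles through a chaotic attractor that the paper analyzes.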
Affiliation(s)
- Evelyn Sander
- Department of Mathematical Sciences, George Mason University, Fairfax, Virginia 22030, USA
2
Pham T, Hansel C. Intrinsic threshold plasticity: cholinergic activation and role in the neuronal recognition of incomplete input patterns. J Physiol 2023; 601:3221-3239. [PMID: 35879872 PMCID: PMC9873838 DOI: 10.1113/jp283473]
Abstract
Activity-dependent changes in membrane excitability are observed in neurons across brain areas and represent a cell-autonomous form of plasticity (intrinsic plasticity; IP) that in itself does not involve alterations in synaptic strength (synaptic plasticity; SP). Non-homeostatic IP may play an essential role in learning, e.g. by changing the action potential threshold near the soma. A computational problem, however, arises from the implication that such amplification does not discriminate between synaptic inputs and therefore may reduce the resolution of input representation. Here, we investigate consequences of IP for the performance of an artificial neural network in (a) the discrimination of unknown input patterns and (b) the recognition of known/learned patterns. While negative changes in threshold potentials in the output layer indeed reduce its ability to discriminate patterns, they benefit the recognition of known but incompletely presented patterns. An analysis of thresholds and IP-induced threshold changes in published sets of physiological data obtained from whole-cell patch-clamp recordings from L2/3 pyramidal neurons in (a) the primary visual cortex (V1) of awake macaques and (b) the primary somatosensory cortex (S1) of mice in vitro, respectively, reveals a difference between resting and threshold potentials of ∼15 mV for V1 and ∼25 mV for S1, and a total plasticity range of ∼10 mV (S1). The most efficient activity pattern to lower threshold is paired cholinergic and electric activation. Our findings show that threshold reduction promotes a shift in neural coding strategies from accurate faithful representation to interpretative assignment of input patterns to learned object categories.
Key points:
- Intrinsic plasticity may change the action potential threshold near the soma of neurons (threshold plasticity), thus altering the input-output function for all synaptic inputs 'upstream' of the plasticity location.
- A potential problem arising from this shared amplification is that it may reduce the ability to discriminate between different input patterns. Here, we assess the performance of an artificial neural network in the discrimination of unknown input patterns as well as the recognition of known patterns subsequent to changes in the spike threshold.
- We observe that negative changes in threshold potentials do reduce discrimination performance, but at the same time improve performance in an object recognition task, in particular when patterns are incompletely presented.
- Analysis of whole-cell patch-clamp recordings from pyramidal neurons in the primary somatosensory cortex (S1) of mice reveals that negative threshold changes preferentially result from electric stimulation of neurons paired with the activation of muscarinic acetylcholine receptors.
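The discrimination/recognition trade-off described above can be made concrete with a toy pattern-matching example. All patterns and threshold values here are hypothetical, chosen only to illustrate the mechanism, not taken from the study's network.

```python
def activation(stored, cue):
    """Overlap (dot product) between a stored binary pattern and an input cue."""
    return sum(s * c for s, c in zip(stored, cue))

stored = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]     # learned pattern
partial = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]    # incomplete cue (3 of 5 inputs active)
unrelated = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0]  # different pattern with partial overlap

high_thr, low_thr = 4, 2  # spike threshold before / after intrinsic plasticity

# With the high threshold the incomplete cue fails to trigger recognition;
# lowering the threshold recovers it (better recognition of known patterns)...
recognized_high = activation(stored, partial) >= high_thr  # False
recognized_low = activation(stored, partial) >= low_thr    # True

# ...but the unrelated pattern now also crosses threshold (worse discrimination).
false_alarm_high = activation(stored, unrelated) >= high_thr  # False
false_alarm_low = activation(stored, unrelated) >= low_thr    # True
```

The same threshold shift that rescues the incomplete cue also admits the unrelated input, which is exactly the tension between faithful representation and interpretative assignment that the abstract describes.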
Affiliation(s)
- Tuan Pham
- Committee on Computational Neuroscience, The University of Chicago
- Christian Hansel
- Committee on Computational Neuroscience, The University of Chicago
- Department of Neurobiology, The University of Chicago
3
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185]
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve, and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map to help navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
4
Sarazin MXB, Victor J, Medernach D, Naudé J, Delord B. Online Learning and Memory of Neural Trajectory Replays for Prefrontal Persistent and Dynamic Representations in the Irregular Asynchronous State. Front Neural Circuits 2021; 15:648538. [PMID: 34305535 PMCID: PMC8298038 DOI: 10.3389/fncir.2021.648538]
Abstract
In the prefrontal cortex (PFC), higher-order cognitive functions and adaptive flexible behaviors rely on continuous dynamical sequences of spiking activity that constitute neural trajectories in the state space of activity. Neural trajectories subserve diverse representations, from explicit mappings in physical spaces to generalized mappings in the task space, and up to complex abstract transformations such as working memory, decision-making and behavioral planning. Computational models have separately assessed learning and replay of neural trajectories, often using unrealistic learning rules or decoupling simulations for learning from replay. Hence, the question of how neural trajectories are learned, memorized and replayed online, with permanently acting biological plasticity rules, remains open. The asynchronous irregular regime characterizing cortical dynamics in awake conditions is a major source of disorder that may jeopardize plasticity and replay of locally ordered activity. Here, we show that a recurrent model of local PFC circuitry endowed with realistic synaptic spike timing-dependent plasticity and scaling processes can learn, memorize and replay large neural trajectories online under asynchronous irregular dynamics, at regular or fast (sped-up) timescales. Presented trajectories are quickly learned (within seconds) as synaptic engrams in the network, and the model is able to chunk overlapping trajectories presented separately. These trajectory engrams last long-term (dozens of hours) and trajectory replays can be triggered over an hour. In turn, we show the conditions under which trajectory engrams and replays preserve asynchronous irregular dynamics in the network.
Functionally, spiking activity during trajectory replays at the regular timescale accounts for dynamical coding with temporal tuning in individual neurons, persistent activity at the population level, and large levels of variability consistent with observed cognition-related PFC dynamics. Together, these results offer a consistent theoretical framework accounting for how neural trajectories can be learned, memorized and replayed in PFC circuits to subserve flexible dynamic representations and adaptive behaviors.
Affiliation(s)
- Matthieu X B Sarazin
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Julie Victor
- CEA Paris-Saclay, CNRS, NeuroSpin, Saclay, France
- David Medernach
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
- Jérémie Naudé
- Neuroscience Paris Seine - Institut de biologie Paris Seine, CNRS, Inserm, Sorbonne Université, Paris, France
- Bruno Delord
- Institut des Systèmes Intelligents et de Robotique, CNRS, Inserm, Sorbonne Université, Paris, France
5
Heiney K, Huse Ramstad O, Fiskum V, Christiansen N, Sandvig A, Nichele S, Sandvig I. Criticality, Connectivity, and Neural Disorder: A Multifaceted Approach to Neural Computation. Front Comput Neurosci 2021; 15:611183. [PMID: 33643017 PMCID: PMC7902700 DOI: 10.3389/fncom.2021.611183]
Abstract
It has been hypothesized that the brain optimizes its capacity for computation by self-organizing to a critical point. The dynamical state of criticality is achieved by striking a balance such that activity can effectively spread through the network without overwhelming it and is commonly identified in neuronal networks by observing the behavior of cascades of network activity termed "neuronal avalanches." The dynamic activity that occurs in neuronal networks is closely intertwined with how the elements of the network are connected and how they influence each other's functional activity. In this review, we highlight how studying criticality with a broad perspective that integrates concepts from physics, experimental and theoretical neuroscience, and computer science can provide a greater understanding of the mechanisms that drive networks to criticality and how their disruption may manifest in different disorders. First, integrating graph theory into experimental studies on criticality, as is becoming more common in theoretical and modeling studies, would provide insight into the kinds of network structures that support criticality in networks of biological neurons. Furthermore, plasticity mechanisms play a crucial role in shaping these neural structures, both in terms of homeostatic maintenance and learning. Both network structures and plasticity have been studied fairly extensively in theoretical models, but much work remains to bridge the gap between theoretical and experimental findings. Finally, information theoretical approaches can tie in more concrete evidence of a network's computational capabilities. Approaching neural dynamics with all these facets in mind has the potential to provide a greater understanding of what goes wrong in neural disorders. Criticality analysis therefore holds potential to identify disruptions to healthy dynamics, granted that robust methods and approaches are considered.
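Neuronal avalanches at criticality are commonly idealized as a branching process whose mean branching ratio m marks the critical point at m = 1: below it activity dies out quickly, at it avalanche sizes become heavy-tailed. The sketch below uses a simple two-point offspring distribution with illustrative parameters; it is a generic caricature, not a model from the review.

```python
import random

def avalanche_size(m, rng, max_size=10_000):
    """Total activations in one avalanche of a branching process.

    Each active unit independently triggers 0 or 2 descendants with
    probability m/2 of the latter, so the mean branching ratio is m
    (m = 1 is the critical point). Size is capped at max_size.
    """
    active, size = 1, 0
    while active and size < max_size:
        size += active
        active = sum(2 if rng.random() < m / 2 else 0 for _ in range(active))
    return size

rng = random.Random(0)
sub = [avalanche_size(0.5, rng) for _ in range(2000)]   # subcritical
crit = [avalanche_size(1.0, rng) for _ in range(2000)]  # critical
# the critical regime produces a far heavier tail of avalanche sizes
```

Plotting a histogram of `crit` on log-log axes would show the approximate power law that experimental avalanche analyses look for; `sub` decays exponentially instead.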
Affiliation(s)
- Kristine Heiney
- Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
- Department of Computer Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Ola Huse Ramstad
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Vegard Fiskum
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Nicholas Christiansen
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Axel Sandvig
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Department of Clinical Neuroscience, Umeå University Hospital, Umeå, Sweden
- Department of Neurology, St. Olav's Hospital, Trondheim, Norway
- Stefano Nichele
- Department of Computer Science, Oslo Metropolitan University, Oslo, Norway
- Department of Holistic Systems, Simula Metropolitan, Oslo, Norway
- Ioanna Sandvig
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
6
Bachmann C, Tetzlaff T, Duarte R, Morrison A. Firing rate homeostasis counteracts changes in stability of recurrent neural networks caused by synapse loss in Alzheimer's disease. PLoS Comput Biol 2020; 16:e1007790. [PMID: 32841234 PMCID: PMC7505475 DOI: 10.1371/journal.pcbi.1007790]
Abstract
The impairment of cognitive function in Alzheimer's disease is clearly correlated to synapse loss. However, the mechanisms underlying this correlation are only poorly understood. Here, we investigate how the loss of excitatory synapses in sparsely connected random networks of spiking excitatory and inhibitory neurons alters their dynamical characteristics. Beyond the effects on the activity statistics, we find that the loss of excitatory synapses on excitatory neurons reduces the network's sensitivity to small perturbations. This decrease in sensitivity can be considered as an indication of a reduction of computational capacity. A full recovery of the network's dynamical characteristics and sensitivity can be achieved by firing rate homeostasis, here implemented by an up-scaling of the remaining excitatory-excitatory synapses. Mean-field analysis reveals that the stability of the linearised network dynamics is, in good approximation, uniquely determined by the firing rate, and thereby explains why firing rate homeostasis preserves not only the firing rate but also the network's sensitivity to small perturbations.
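The compensation described above, up-scaling the surviving excitatory-excitatory synapses after synapse loss, can be caricatured in a few lines. This is a sketch of the general idea (uniform rescaling that restores the summed synaptic drive), not the spiking-network implementation or the rate-tracking controller used in the paper.

```python
import random

def prune_and_rescale(weights, loss_fraction, rng):
    """Delete a random fraction of excitatory synapses, then homeostatically
    up-scale the survivors so that the summed weight is restored."""
    survivors = [w for w in weights if rng.random() > loss_fraction]
    if not survivors:
        return []
    scale = sum(weights) / sum(survivors)  # uniform compensatory scaling
    return [w * scale for w in survivors]

rng = random.Random(1)
w0 = [rng.uniform(0.5, 1.5) for _ in range(1000)]  # illustrative weights
w1 = prune_and_rescale(w0, 0.3, rng)
# total excitatory drive is preserved even though ~30% of synapses are gone
```

In a mean-field picture, restoring the summed drive restores the firing rate, which, per the abstract, is what determines the stability of the linearized dynamics.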
Affiliation(s)
- Claudia Bachmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
7
Cessac B. Linear response in neuronal networks: From neurons dynamics to collective response. Chaos 2019; 29:103105. [PMID: 31675822 DOI: 10.1063/1.5111803]
Abstract
We review two examples where the linear response of a neuronal network submitted to an external stimulus can be derived explicitly, including its dependence on network parameters. This is done in a statistical-physics-like approach where one associates, with the spontaneous dynamics of the model, a natural notion of Gibbs distribution inherited from ergodic theory or stochastic processes. These two examples are the Amari-Wilson-Cowan model [S. Amari, Syst. Man Cybernet. SMC-2, 643-657 (1972); H. R. Wilson and J. D. Cowan, Biophys. J. 12, 1-24 (1972)] and a conductance-based integrate-and-fire model [M. Rudolph and A. Destexhe, Neural Comput. 18, 2146-2210 (2006); M. Rudolph and A. Destexhe, Neurocomputing 70(10-12), 1966-1969 (2007)].
Affiliation(s)
- Bruno Cessac
- Université Côte d'Azur, Inria, Biovision team, Sophia-Antipolis, France
8
Heterogeneous network dynamics in an excitatory-inhibitory network model by distinct intrinsic mechanisms in the fast spiking interneurons. Brain Res 2019; 1714:27-44. [DOI: 10.1016/j.brainres.2019.02.013]
9
Kurashige H, Yamashita Y, Hanakawa T, Honda M. A Knowledge-Based Arrangement of Prototypical Neural Representation Prior to Experience Contributes to Selectivity in Upcoming Knowledge Acquisition. Front Hum Neurosci 2018; 12:111. [PMID: 29662446 PMCID: PMC5890192 DOI: 10.3389/fnhum.2018.00111]
Abstract
Knowledge acquisition is a process in which one actively selects a piece of information from the environment and assimilates it with prior knowledge. However, little is known about the neural mechanism underlying selectivity in knowledge acquisition. Here we executed a 2-day human experiment to investigate the involvement of characteristic spontaneous activity resembling a so-called “preplay” in selectivity in sentence comprehension, an instance of knowledge acquisition. On day 1, we presented 10 sentences (prior sentences) that were difficult to understand on their own. On the following day, we first measured the resting-state functional magnetic resonance imaging (fMRI). Then, we administered a sentence comprehension task using 20 new sentences (posterior sentences). The posterior sentences were also difficult to understand on their own, but some could be associated with prior sentences to facilitate their understanding. Next, we measured the posterior sentence-induced fMRI to identify the neural representation. From the resting-state fMRI, we extracted the appearances of activity patterns similar to the neural representations for posterior sentences. Importantly, the resting-state fMRI was measured before giving the posterior sentences, and thus such appearances could be considered as preplay-like or prototypical neural representations. We compared the intensities of such appearances with the understanding of posterior sentences. This gave a positive correlation between these two variables, but only if posterior sentences were associated with prior sentences. Additional analysis showed the contribution of the entorhinal cortex, rather than the hippocampus, to the correlation. The present study suggests that prior knowledge-based arrangement of neural activity before an experience contributes to the active selection of information to be learned. Such arrangement prior to an experience resembles preplay activity observed in the rodent brain. 
In terms of knowledge acquisition, the present study leads to a new view of the brain (or more precisely of the brain’s knowledge) as an autopoietic system in which the brain (or knowledge) selects what it should learn by itself, arranges preplay-like activity as a position for the new information in advance, and actively reorganizes itself.
Affiliation(s)
- Hiroki Kurashige
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
- Yuichi Yamashita
- National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
- Takashi Hanakawa
- Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Manabu Honda
- National Institute of Neuroscience, National Center of Neurology and Psychiatry, Tokyo, Japan
10
Abeysuriya RG, Hadida J, Sotiropoulos SN, Jbabdi S, Becker R, Hunt BAE, Brookes MJ, Woolrich MW. A biophysical model of dynamic balancing of excitation and inhibition in fast oscillatory large-scale networks. PLoS Comput Biol 2018; 14:e1006007. [PMID: 29474352 PMCID: PMC5841816 DOI: 10.1371/journal.pcbi.1006007]
Abstract
Over long timescales, neuronal dynamics can be robust to quite large perturbations, such as changes in white matter connectivity and grey matter structure through processes including learning, aging, development and certain disease processes. One possible explanation is that robust dynamics are facilitated by homeostatic mechanisms that can dynamically rebalance brain networks. In this study, we simulate a cortical brain network using the Wilson-Cowan neural mass model with conduction delays and noise, and use inhibitory synaptic plasticity (ISP) to dynamically achieve a spatially local balance between excitation and inhibition. Using MEG data from 55 subjects we find that ISP enables us to simultaneously achieve high correlation with multiple measures of functional connectivity, including amplitude envelope correlation and phase locking. Further, we find that ISP successfully achieves local E/I balance, and can consistently predict the functional connectivity computed from real MEG data, for a much wider range of model parameters than is possible with a model without ISP.
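Inhibitory synaptic plasticity (ISP) of the kind used here is often written as a rule in which the inhibitory weight grows when the postsynaptic excitatory unit fires above a target rate. The sketch below is a rate-based caricature of such a rule (Vogels-type) with illustrative numbers; the paper itself applies ISP inside a Wilson-Cowan neural mass network with delays and noise.

```python
def isp_balance(exc_drive=2.0, target=0.5, eta=0.01, steps=5000):
    """One excitatory unit receiving fixed drive and plastic inhibition.

    The inhibitory weight is potentiated when the excitatory rate exceeds
    the target and depressed when it falls below it, pulling the unit
    toward a locally balanced excitation/inhibition operating point.
    """
    w_inh, r_inh = 0.0, 1.0  # plastic inhibitory weight, fixed inhibitory rate
    for _ in range(steps):
        r_exc = max(0.0, exc_drive - w_inh * r_inh)
        w_inh += eta * r_inh * (r_exc - target)  # ISP update
    return max(0.0, exc_drive - w_inh * r_inh), w_inh

r_exc, w_inh = isp_balance()
# inhibition has grown until the excitatory rate sits at the target
```

The fixed point, w_inh = (exc_drive - target) / r_inh, is reached regardless of the initial weight, which is why ISP can keep a whole network near its desired operating point across a wide parameter range.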
Affiliation(s)
- Romesh G. Abeysuriya
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom
- Jonathan Hadida
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain, Wellcome Centre for Integrative Neuroimaging, University of Oxford, United Kingdom
- Stamatios N. Sotiropoulos
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain, Wellcome Centre for Integrative Neuroimaging, University of Oxford, United Kingdom
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, United Kingdom
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Queens Medical Centre, Nottingham
- Saad Jbabdi
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain, Wellcome Centre for Integrative Neuroimaging, University of Oxford, United Kingdom
- Robert Becker
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom
- Benjamin A. E. Hunt
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, United Kingdom
- Department of Diagnostic Imaging, Neurosciences & Mental Health, Research Institute, The Hospital for Sick Children, Toronto, Ontario, Canada
- Matthew J. Brookes
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, United Kingdom
- Mark W. Woolrich
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, United Kingdom
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain, Wellcome Centre for Integrative Neuroimaging, University of Oxford, United Kingdom
11
Abstract
In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
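The link between multiplicative (Hebbian-like) updates and lognormal distributions can be illustrated generically: repeated small multiplicative kicks make the log of a weight a sum of independent terms, which the central limit theorem drives toward a normal distribution, i.e. a lognormal weight (Gibrat-style growth). This is a sketch of the mechanism only, not the specific learning rule or model of the paper.

```python
import math
import random

def multiplicative_growth(steps=500, n=2000, rate=0.05, seed=0):
    """Apply small multiplicative random updates to n weights.

    log(w) accumulates independent Gaussian increments, so after many steps
    each log-weight is approximately Normal(0, steps * rate**2) and the
    weights themselves are lognormally distributed (heavy right tail).
    """
    rng = random.Random(seed)
    w = [1.0] * n
    for _ in range(steps):
        w = [wi * math.exp(rate * rng.gauss(0.0, 1.0)) for wi in w]
    return w

w = multiplicative_growth()
logs = [math.log(wi) for wi in w]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# expected log-variance is steps * rate**2 = 500 * 0.05**2 = 1.25
```

An additive rule would instead produce a Gaussian weight distribution, which is one way to see why the paper argues that the observed lognormal gains and weights point to multiplicative, Hebbian-type plasticity.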
Affiliation(s)
- Gabriele Scheler
- Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA
12
Abstract
In this paper, we document lognormal distributions for spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears as a functional property that is present everywhere. Secondly, we created a generic neural model to show that Hebbian learning will create and maintain lognormal distributions. We could prove with the model that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This settles a long-standing question about the type of plasticity exhibited by intrinsic excitability.
Affiliation(s)
- Gabriele Scheler
- Carl Correns Foundation for Mathematical Biology, Mountain View, CA, 94040, USA
13
Shim HG, Jang SS, Jang DC, Jin Y, Chang W, Park JM, Kim SJ. mGlu1 receptor mediates homeostatic control of intrinsic excitability through Ih in cerebellar Purkinje cells. J Neurophysiol 2016; 115:2446-55. [PMID: 26912592 DOI: 10.1152/jn.00566.2015]
Abstract
Homeostatic intrinsic plasticity is a cellular mechanism for maintaining a stable neuronal activity level in response to developmental or activity-dependent changes. The type 1 metabotropic glutamate receptor (mGlu1 receptor) is widely known to monitor neuronal activity and acts as a modulator of the intrinsic and synaptic plasticity of neurons. Whether the mGlu1 receptor contributes to the compensatory adjustment of Purkinje cells (PCs), the sole output of the cerebellar cortex, in response to chronic changes in excitability remains unclear. Here, we demonstrate that the mGlu1 receptor is involved in homeostatic intrinsic plasticity through the upregulation of the hyperpolarization-activated current (Ih) in cerebellar PCs. This plasticity was prevented by inhibiting the mGlu1 receptor with Bay 36-7620, an mGlu1 receptor inverse agonist, but not with CPCCOEt, a neutral antagonist. Chronic inactivation with tetrodotoxin (TTX) increased the components of Ih in the PCs, and ZD 7288, a selective blocker of hyperpolarization-activated cyclic nucleotide-gated channels, fully restored the reduced firing rates of the deprived neurons. The homeostatic elevation of Ih was also prevented by Bay 36-7620, but not by CPCCOEt. Furthermore, KT 5720, a blocker of protein kinase A (PKA), prevented the TTX-induced reduction in evoked firing rates, indicating that the reduced excitability of the PCs was due to PKA activation. Our study shows that both the mGlu1 receptor and the PKA pathway are involved in the homeostatic intrinsic plasticity of PCs after chronic blockade of network activity, providing a novel understanding of how cerebellar PCs can preserve a homeostatic state under activity-deprived conditions.
Affiliation(s)
- Hyun Geun Shim
- Department of Physiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Science, Seoul National University College of Medicine, Seoul, Republic of Korea
- Sung-Soo Jang
- Department of Physiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Neuroscience Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
- Dong Cheol Jang
- Department of Physiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Brain and Cognitive Sciences, College of Science, Seoul National University, Kwanak-gu, Seoul, Republic of Korea
- Yunju Jin
- Center for Cognition and Sociality, Institute for Basic Science (IBS), Daejeon, Republic of Korea
- Wonseok Chang
- Department of Anesthesiology, Duke University Medical Center, Durham, North Carolina
- Joo Min Park
- Center for Cognition and Sociality, Institute for Basic Science (IBS), Daejeon, Republic of Korea
- Sang Jeong Kim
- Department of Physiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Science, Seoul National University College of Medicine, Seoul, Republic of Korea
- Neuroscience Research Institute, Seoul National University College of Medicine, Seoul, Republic of Korea
14
|
Sweeney Y, Hellgren Kotaleski J, Hennig MH. A Diffusive Homeostatic Signal Maintains Neural Heterogeneity and Responsiveness in Cortical Networks. PLoS Comput Biol 2015; 11:e1004389. [PMID: 26158556 PMCID: PMC4497656 DOI: 10.1371/journal.pcbi.1004389] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2014] [Accepted: 06/09/2015] [Indexed: 11/18/2022] Open
Abstract
Gaseous neurotransmitters such as nitric oxide (NO) provide a unique and often overlooked mechanism for neurons to communicate through diffusion within a network, independent of synaptic connectivity. NO provides homeostatic control of intrinsic excitability. Here we conduct a theoretical investigation of the distinguishing roles of NO-mediated diffusive homeostasis in comparison with canonical non-diffusive homeostasis in cortical networks. We find that both forms of homeostasis provide a robust mechanism for maintaining stable activity following perturbations. However, the resulting networks differ, with diffusive homeostasis maintaining substantial heterogeneity in activity levels of individual neurons, a feature disrupted in networks with non-diffusive homeostasis. This results in networks capable of representing input heterogeneity, and linearly responding over a broader range of inputs than those undergoing non-diffusive homeostasis. We further show that these properties are preserved when homeostatic and Hebbian plasticity are combined. These results suggest a mechanism for dynamically maintaining neural heterogeneity, and expose computational advantages of non-local homeostatic processes.
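The contrast between diffusive and non-diffusive homeostasis described in this abstract can be illustrated with a toy simulation (not the paper's model; the network size, time constants, ring topology, and nearest-neighbour diffusion scheme below are illustrative assumptions). Each unit's threshold adapts so that a slow, NO-like signal approaches a common target; letting that signal diffuse between neighbours preserves the input-driven heterogeneity of individual firing rates that purely local homeostasis erases.

```python
import numpy as np

def simulate(n=50, steps=20000, diffusive=True, seed=0):
    """Toy homeostasis on a ring of threshold-linear rate units. A slow,
    NO-like signal tracks each unit's rate; thresholds adapt so that this
    signal approaches a shared target. With diffusion, the signal is
    shared between neighbours, so individual rates stay heterogeneous."""
    rng = np.random.default_rng(seed)
    drive = rng.uniform(0.5, 1.5, n)            # heterogeneous external input
    theta = np.zeros(n)                         # adaptive thresholds
    no = np.zeros(n)                            # slow NO-like signal
    target, dt, tau_no, eta, D = 1.0, 0.1, 50.0, 0.002, 2.0
    for _ in range(steps):
        rate = np.maximum(drive - theta, 0.0)   # threshold-linear firing
        no += dt * (rate - no) / tau_no         # NO tracks activity slowly
        if diffusive:                           # nearest-neighbour diffusion
            lap = np.roll(no, 1) + np.roll(no, -1) - 2.0 * no
            no += dt * D * lap
        theta += dt * eta * (no - target)       # homeostatic threshold update
    return np.maximum(drive - theta, 0.0)

r_diff = simulate(diffusive=True)
r_local = simulate(diffusive=False)
# Non-diffusive homeostasis drives every unit toward the same target rate,
# erasing heterogeneity; diffusive homeostasis preserves much of it.
print(np.std(r_local), np.std(r_diff))
```

Running the sketch shows the diffusive variant retaining most of the spread in firing rates across units, while the local variant pulls every unit toward the common target rate.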
Collapse
Affiliation(s)
- Yann Sweeney
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, Stockholm, Sweden
| | - Jeanette Hellgren Kotaleski
- Department of Computational Biology, School of Computer Science and Communication, Royal Institute of Technology, Stockholm, Sweden
| | - Matthias H. Hennig
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
| |
Collapse
|
15
|
Stability of Neuronal Networks with Homeostatic Regulation. PLoS Comput Biol 2015; 11:e1004357. [PMID: 26154297 PMCID: PMC4495932 DOI: 10.1371/journal.pcbi.1004357] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 05/28/2015] [Indexed: 11/19/2022] Open
Abstract
Neurons are equipped with homeostatic mechanisms that counteract long-term perturbations of their average activity and thereby keep neurons in a healthy and information-rich operating regime. While homeostasis is believed to be crucial for neural function, a systematic analysis of homeostatic control has largely been lacking. Here we analyse the necessary conditions for stable homeostatic control. We consider networks of neurons with homeostasis and show that homeostatic control that is stable for single neurons can destabilize activity in otherwise stable recurrent networks, leading to strong, non-abating oscillations in the activity. This instability can be prevented by slowing down the homeostatic control. The stronger the network recurrence, the slower the homeostasis has to be. Next, we consider how non-linearities in the neural activation function affect these constraints. Finally, we consider the case in which homeostatic feedback is mediated via a cascade of multiple intermediate stages. Counter-intuitively, the addition of extra stages in the homeostatic control loop further destabilizes activity in single neurons and networks. Our theoretical framework for homeostasis thus reveals previously unconsidered constraints on homeostasis in biological networks, and identifies conditions that require the slow time constants of homeostatic regulation observed experimentally. Despite their apparent robustness, many biological systems work best in controlled environments, the tightly regulated mammalian body temperature being a good example. Biological homeostatic control systems, not unlike those used in engineering, ensure that the right operating conditions are met. Similarly, neurons appear to adjust the amount of activity they produce to be neither too high nor too low by, among other means, regulating their excitability.
However, for reasons that are not immediately obvious, neural homeostatic processes are very slow, taking hours or even days to regulate the neuron. Here we use results from mathematical control theory to examine under which conditions such slow control is necessary to prevent instabilities that lead to strong, sustained oscillations in the activity. Our results lead to a deeper understanding of neural homeostasis and can help the design of artificial neural systems.
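The core stability argument can be sketched in a few lines. This is a minimal linear model under our own assumed parameters, not the paper's analysis: a rate unit with recurrence w, one intermediate sensing stage in the homeostatic loop (the cascade effect the abstract mentions), and an adaptive threshold. When the homeostatic time constant is short relative to the recurrence-weakened leak, the loop rings with growing oscillations; slowing the homeostasis restores stability.

```python
import numpy as np

def simulate(tau_h, w=0.9, tau_s=1.0, r0=1.0, dt=0.01, t_max=200.0):
    """Linear rate unit with recurrence w, one sensing stage y (an extra
    stage in the homeostatic loop) and an adaptive threshold theta that
    pulls the sensed activity back to the set point r0."""
    a = 1.0 - w                       # effective leak, weakened by recurrence
    I = 1.0                           # constant external drive
    theta = I - a * r0                # start at the fixed point...
    r, y = r0 + 0.1, r0               # ...then perturb the rate
    trace = np.empty(int(t_max / dt))
    for t in range(len(trace)):
        dr = -a * r - theta + I       # recurrent rate dynamics
        dy = (r - y) / tau_s          # slow activity sensor
        dth = (y - r0) / tau_h        # homeostatic threshold update
        r, y, theta = r + dt * dr, y + dt * dy, theta + dt * dth
        trace[t] = r
    return trace

fast = simulate(tau_h=2.0)    # homeostasis faster than the stability bound
slow = simulate(tau_h=50.0)   # sufficiently slow homeostasis
# Fast homeostasis: the perturbation grows into large oscillations.
# Slow homeostasis: the perturbation decays back to the set point r0.
print(np.max(np.abs(fast - 1.0)), np.max(np.abs(slow[-1000:] - 1.0)))
```

For this three-stage loop, a Routh-Hurwitz check of the characteristic polynomial gives a lower bound on tau_h that grows as the recurrence w approaches 1, which is the qualitative constraint the abstract describes.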
Collapse
|
16
|
Toutounji H, Schumacher J, Pipa G. Homeostatic plasticity for single node delay-coupled reservoir computing. Neural Comput 2015; 27:1159-85. [PMID: 25826022 DOI: 10.1162/neco_a_00737] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Supplementing a differential equation with delays results in an infinite-dimensional dynamical system. This property provides the basis for a reservoir computing architecture, where the recurrent neural network is replaced by a single nonlinear node, delay-coupled to itself. Instead of the spatial topology of a network, subunits in the delay-coupled reservoir are multiplexed in time along one delay span of the system. The computational power of the reservoir is contingent on this temporal multiplexing. Here, we learn optimal temporal multiplexing by means of a biologically inspired homeostatic plasticity mechanism. Plasticity acts locally and changes the distances between the subunits along the delay, depending on how responsive these subunits are to the input. After analytically deriving the learning mechanism, we illustrate its role in improving the reservoir's computational power. To this end, we investigate, first, the increase of the reservoir's memory capacity. Second, we predict a NARMA-10 time series, showing that plasticity reduces the normalized root-mean-square error by more than 20%. Third, we discuss plasticity's influence on the reservoir's input-information capacity, the coupling strength between subunits, and the distribution of the readout coefficients.
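A discretised sketch of a single-node delay-coupled reservoir helps make the idea of temporal multiplexing concrete. This is our own simplification, loosely following the common virtual-node discretisation; the mask, coupling constants, lag, and ridge-regression readout are assumptions, and it implements a fixed (not plastic) multiplexing rather than the paper's homeostatic mechanism.

```python
import numpy as np

def delay_reservoir(u, n_virtual=50, eta=0.4, gamma=0.5, kappa=0.25, seed=1):
    """Discretised single-node delay reservoir: n_virtual 'virtual nodes'
    are multiplexed along one delay span. Each node receives the input
    through a fixed random mask, feeds back on itself across the delay
    (gamma), and couples to its neighbour from the previous span (kappa),
    mimicking the inertia of the physical node."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)     # temporal input mask
    x = np.zeros(n_virtual)
    states = np.empty((len(u), n_virtual))
    for k, uk in enumerate(u):
        x = np.tanh(eta * mask * uk + gamma * x + kappa * np.roll(x, 1))
        states[k] = x
    return states

rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 3000)
S = delay_reservoir(u)

# Linear readout trained by ridge regression to recall the input `lag`
# steps in the past -- the basis of a memory-capacity measurement.
lag, split = 2, 2000
X, y = S[100:split], u[100 - lag:split - lag]    # discard a washout period
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(X.shape[1]), X.T @ y)
pred = S[split:] @ w
corr = np.corrcoef(pred, u[split - lag:-lag])[0, 1]
print("recall correlation at lag", lag, ":", corr)
```

Because the random mask breaks the symmetry between virtual nodes, the single nonlinearity yields a high-dimensional state from which past inputs can be linearly recovered; summing the squared recall correlations over lags would give the memory capacity the abstract refers to.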
Collapse
Affiliation(s)
- Hazem Toutounji
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany, and Department of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim of Heidelberg University, 68159 Mannheim, Germany
| | - Johannes Schumacher
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
| | - Gordon Pipa
- Neuroinformatics Department, Institute of Cognitive Science, University of Osnabrück, 49069 Osnabrück, Germany
| |
Collapse
|
17
|
Toutounji H, Pasemann F. Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons. Front Neurorobot 2014; 8:19. [PMID: 24904403 PMCID: PMC4033235 DOI: 10.3389/fnbot.2014.00019] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2013] [Accepted: 05/07/2014] [Indexed: 12/02/2022] Open
Abstract
The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent also must have the capacity to switch between these states in time scales that are comparable to those by which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allow them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with 18 degrees of freedom, and obstacle-avoidance of a wheel-driven robot.
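A minimal caricature of such a self-regulating unit (our own toy, not the paper's derived synaptic dynamics): a discrete-time sigmoid neuron whose bias slowly adjusts toward a target activity. For weak self-coupling the unit is homeostatic and its activity settles at the target; for strong self-coupling the fast map becomes bistable, and the same slow regulation drives a hysteresis loop that turns the unit into a relaxation oscillator, echoing the two parameter domains described above.

```python
import numpy as np

def run(w, steps=20000, a_target=0.5, eps=0.01, I=0.0):
    """Discrete-time sigmoid neuron with a slowly self-regulating bias.
    Weak self-coupling w: homeostatic regime (activity settles at
    a_target). Strong w: the fast map is bistable, so the slow bias sweeps
    the unit around a hysteresis loop -- a relaxation oscillation whose
    activity periodically exceeds the homeostatic operating range."""
    a, b = 0.1, 0.0
    trace = np.empty(steps)
    for t in range(steps):
        a = 1.0 / (1.0 + np.exp(-(w * a + b + I)))  # fast neuron update
        b += eps * (a_target - a)                   # slow self-regulation
        trace[t] = a
    return trace

homeo = run(w=1.0)   # homeostatic parameter domain: activity converges
osc = run(w=6.0)     # oscillatory parameter domain: sustained oscillation
print(np.std(homeo[-5000:]), np.std(osc[-5000:]))
```

The slow bias plays the role of the self-regulation in the paper's units: the same mechanism stabilises activity in one parameter domain and generates oscillations in another.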
Collapse
Affiliation(s)
- Hazem Toutounji
- Department of Neurocybernetics, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| | - Frank Pasemann
- Department of Neurocybernetics, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
| |
Collapse
|
18
|
Bhalla US. Multiscale modeling and synaptic plasticity. PROGRESS IN MOLECULAR BIOLOGY AND TRANSLATIONAL SCIENCE 2014; 123:351-86. [PMID: 24560151 DOI: 10.1016/b978-0-12-397897-4.00012-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Synaptic plasticity is a major convergence point for theory and computation, and the process of plasticity engages physiology, cell, and molecular biology. In its many manifestations, plasticity is at the hub of basic neuroscience questions about memory and development, as well as more medically themed questions of neural damage and recovery. As an important cellular locus of memory, synaptic plasticity has received a huge amount of experimental and theoretical attention. While computational models have tended to pick specific aspects of plasticity, such as STDP, and reduce them to an equation, some experimental studies are equally guilty of oversimplification when they identify a new molecule and declare it the last word in plasticity and learning. Multiscale modeling begins with the acknowledgment that synaptic function spans many levels of signaling, and these are so tightly coupled that we risk losing essential features of plasticity if we focus exclusively on any one level. Despite the technical challenges and gaps in data for model specification, an increasing number of multiscale modeling studies have taken on key questions in plasticity. These have provided new insights, but importantly, they have opened new avenues for questioning. This review discusses a wide range of multiscale models in plasticity, including their technical landscape and their implications.
Collapse
Affiliation(s)
- Upinder S Bhalla
- National Centre for Biological Sciences, Bangalore, Karnataka, India
| |
Collapse
|