1. Yamane Y. Adaptation of the inferior temporal neurons and efficient visual processing. Front Behav Neurosci 2024; 18:1398874. PMID: 39132448; PMCID: PMC11310006; DOI: 10.3389/fnbeh.2024.1398874.
Abstract
Numerous studies examining the responses of individual neurons in the inferior temporal (IT) cortex have revealed characteristics such as two- or three-dimensional shape tuning and object or category selectivity. While these basic selectivities have been studied under the assumption that responses to stimuli are relatively stable, physiological experiments have revealed that the responsiveness of IT neurons also depends on visual experience. The activity changes of IT neurons occur over various time ranges; among these, repetition suppression (RS) in particular is robustly observed in IT neurons without any behavioral or task constraints. I observed a similar phenomenon in ventral visual neurons of macaque monkeys during free viewing, when they actively fixated the same object multiple times. The phenomenon thus also occurs in natural situations in which the subject actively views stimuli without forced fixation, suggesting that it is an everyday occurrence, widespread across regions of the visual system, and a default process for visual neurons. Such short-term activity modulation may be a key to understanding the visual system; however, the circuit mechanism and the biological significance of RS remain unclear. Thus, in this review, I summarize the observed modulation types in IT neurons and the known properties of RS. Subsequently, I discuss adaptation in vision, including concepts such as efficient and predictive coding, as well as the relationship between adaptation and psychophysical aftereffects. Finally, I discuss some conceptual implications of this phenomenon as well as the circuit mechanisms and the models that may explain adaptation as a fundamental aspect of visual processing.
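The repetition-suppression (RS) effect summarized above can be caricatured in a few lines: each repeat of a stimulus pulls the response geometrically toward a saturated level. A minimal sketch, assuming a simple exponential-saturation model; the form and the `alpha`/`floor` values are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def repetition_suppression(r0, n_reps, alpha=0.6, floor=0.3):
    """Toy exponential model of repetition suppression: each repeat
    pulls the response geometrically toward a saturated level
    floor * r0. alpha and floor are illustrative, not fitted values."""
    responses = []
    r = r0
    for _ in range(n_reps):
        responses.append(r)
        r = floor * r0 + alpha * (r - floor * r0)  # decay toward floor * r0
    return np.array(responses)

resp = repetition_suppression(100.0, 6)
```

Plotting `resp` gives the monotone decay across stimulus repetitions that RS experiments report.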
Affiliation(s)
- Yukako Yamane
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
2. Koren V, Malerba SB, Schwalger T, Panzeri S. Structure, dynamics, coding and optimal biophysical parameters of efficient excitatory-inhibitory spiking networks. bioRxiv [preprint] 2024:2024.04.24.590955. PMID: 38712237; PMCID: PMC11071478; DOI: 10.1101/2024.04.24.590955.
Abstract
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely by this normative principle. Here, we rigorously derive the structural, coding, biophysical and dynamical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimize an instantaneous loss function and a time-averaged performance measure enacting efficient coding. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-stimulus-specific excitatory external input regulating metabolic cost. The efficient network has excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning, implementing feature-specific competition similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal biophysical parameters include a 4:1 ratio of excitatory to inhibitory neurons and a 3:1 ratio of mean inhibitory-to-inhibitory vs. excitatory-to-inhibitory connectivity, closely matching those of cortical sensory networks. The efficient network shows biologically plausible spiking dynamics, with a tight instantaneous E-I balance that makes it capable of efficiently coding external stimuli varying over multiple time scales. Together, these results explain how efficient coding may be implemented in cortical networks and suggest that key properties of biological neural networks may be accounted for by efficient coding.
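The 4:1 excitatory-to-inhibitory ratio highlighted in the abstract can be made concrete with a toy leaky integrate-and-fire population. Everything below except the 4:1 ratio (time constant, threshold, drive) is an arbitrary illustrative choice, not the paper's optimized parameter set:

```python
import numpy as np

# Population sizes in the 4:1 excitatory:inhibitory ratio the paper
# reports as optimal; all other parameters here are illustrative.
N_E, N_I = 80, 20

tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 0.1  # ms, a.u., a.u., ms

def lif_step(v, input_current):
    """One Euler step of leaky integrate-and-fire dynamics; returns
    updated membrane voltages and a boolean spike mask."""
    v = v + (dt / tau) * (-v + input_current)
    spikes = v >= v_thresh
    v[spikes] = v_reset
    return v, spikes

v = np.zeros(N_E + N_I)
drive = 1.5 * np.ones(N_E + N_I)  # constant suprathreshold drive
spike_counts = np.zeros(N_E + N_I)
for _ in range(5000):  # 5000 steps of 0.1 ms = 500 ms simulated time
    v, spiked = lif_step(v, drive)
    spike_counts += spiked
```

With recurrent weights added on top of this skeleton, one could explore how the E/I split shapes the instantaneous balance the abstract describes; here the point is only the unit dynamics and the population bookkeeping.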
Affiliation(s)
- Veronika Koren
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Simone Blanco Malerba
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
- Tilo Schwalger
- Institute of Mathematics, Technische Universität Berlin, 10623 Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Stefano Panzeri
- Institute of Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), 20251 Hamburg, Germany
3. van Andel DM, Sprengers JJ, Königs M, de Jonge MV, Bruining H. Effects of Bumetanide on Neurocognitive Functioning in Children with Autism Spectrum Disorder: Secondary Analysis of a Randomized Placebo-Controlled Trial. J Autism Dev Disord 2024; 54:894-904. PMID: 36626004; PMCID: PMC10907457; DOI: 10.1007/s10803-022-05841-3.
Abstract
We present the secondary analysis of neurocognitive tests in the 'Bumetanide in Autism Medication and Biomarker' (BAMBI; EUDRA-CT-2014-001560-35) study, a randomized double-blind placebo-controlled (1:1) trial testing 3-month bumetanide treatment (≤ 1 mg twice daily) in unmedicated children aged 7-15 years with ASD. Children with IQ ≥ 70 were analyzed for baseline deficits and treatment effects on the intention-to-treat population with generalized linear models, principal component analysis and network analysis. Ninety-two children were allocated to treatment and 83 were eligible for analyses. Heterogeneous neurocognitive impairments were found that were unaffected by bumetanide treatment. Network analysis showed higher modularity after treatment (mean difference: -0.165, 95% CI: -0.317 to -0.013, p = .034) and changes in the relative importance of response inhibition in the neurocognitive network (mean difference: -0.037, 95% CI: -0.073 to -0.001, p = .042). This study offers perspectives to include neurocognitive tests in ASD trials.
Affiliation(s)
- Dorinde M van Andel
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Jan J Sprengers
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Marsh Königs
- Department of Paediatrics, Emma Neuroscience Group, Amsterdam UMC Emma Children's Hospital, Amsterdam, The Netherlands
- Maretha V de Jonge
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands
- Department Education and Child Studies, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, The Netherlands
- Hilgo Bruining
- Department of Psychiatry, UMC Utrecht Brain Centre, University Medical Centre Utrecht, Utrecht, The Netherlands.
- Child and Adolescent Psychiatry and Psychosocial Care, Emma Children's Hospital, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, Netherlands.
- N=You Neurodevelopmental Precision Center, Amsterdam Neuroscience, Amsterdam Reproduction and Development, Amsterdam UMC, Amsterdam, Netherlands.
- Levvel, Center for Child and Adolescent Psychiatry, Amsterdam, Netherlands.
- Department of Child and Adolescent Psychiatry, Amsterdam UMC, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands.
4. Chapman GW, Hasselmo ME. Predictive learning by a burst-dependent learning rule. Neurobiol Learn Mem 2023; 205:107826. PMID: 37696414; DOI: 10.1016/j.nlm.2023.107826.
Abstract
Humans and other animals are able to quickly generalize latent dynamics of spatiotemporal sequences, often from a minimal number of previous experiences. Additionally, internal representations of external stimuli must remain stable, even in the presence of sensory noise, in order to be useful for informing behavior. In contrast, typical machine learning approaches require many thousands of samples, generalize poorly to unseen examples, or fail completely to predict at long timescales. Here, we propose a novel neural network module that incorporates hierarchy and recurrent feedback terms, constituting a simplified model of neocortical microcircuits. This microcircuit predicts spatiotemporal trajectories at the input layer using a temporal error minimization algorithm. We show that this module is able to predict into the future with higher accuracy than traditional models. Investigating this model, we find that successive predictive models learn representations that are increasingly removed from the raw sensory space, namely successive temporal derivatives of the positional information. Next, we introduce a spiking neural network model that implements the rate model through a recently proposed biological learning rule utilizing dual-compartment neurons. We show that this network performs well on the same tasks as the mean-field models, developing intrinsic dynamics that follow the dynamics of the external stimulus while coordinating transmission of higher-order dynamics. Taken as a whole, these findings suggest that hierarchical temporal abstraction of sequences, rather than feed-forward reconstruction, may be responsible for the ability of neural systems to quickly adapt to novel situations.
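The paper's observation that successive predictive stages encode temporal derivatives has a simple deterministic analogue: stack backward differences (position, velocity, acceleration) and extrapolate one step forward. A sketch under that simplification; this is the Taylor-series intuition behind the result, not the paper's network:

```python
import numpy as np

def derivative_stack_predict(x_prev2, x_prev1, x_now):
    """Predict the next sample from the last three, i.e. from position
    plus backward-difference estimates of velocity and acceleration.
    Algebraically this reduces to 3*x_now - 3*x_prev1 + x_prev2."""
    v = x_now - x_prev1                # first temporal difference
    a = x_now - 2 * x_prev1 + x_prev2  # second temporal difference
    return x_now + v + a

t = np.arange(10.0)
x = 0.5 * t**2 - 2.0 * t + 3.0        # quadratic trajectory
preds = [derivative_stack_predict(x[i - 2], x[i - 1], x[i]) for i in range(2, 9)]
```

For a quadratic trajectory this three-term extrapolation is exact, which is why a hierarchy of temporal-derivative representations is a natural substrate for short-horizon prediction.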
Affiliation(s)
- G William Chapman
- Center for Systems Neuroscience, Boston University, Boston, MA, USA.
5. Han MJ, Tsukruk VV. Trainable Bilingual Synaptic Functions in Bio-enabled Synaptic Transistors. ACS Nano 2023; 17:18883-18892. PMID: 37721448; PMCID: PMC10569090; DOI: 10.1021/acsnano.3c04113.
Abstract
The signal transmission of the nervous system is regulated by neurotransmitters. Depending on the type of neurotransmitter released by presynaptic neurons, neurons can be either excited or inhibited. Maintaining a balance between excitatory and inhibitory synaptic responses is crucial for the nervous system's versatility, elasticity, and ability to perform parallel computing. To mimic the brain's versatility and plasticity, a preprogrammed balance between excitatory and inhibitory responses must be created. Despite substantial efforts to investigate this balancing, simulating the interaction between excitatory and inhibitory synapses has so far required complex circuit configurations. As a meaningful alternative, an optoelectronic synapse that balances excitatory and inhibitory responses with the assistance of light mediation is proposed here, deploying the humidity-sensitive chiral nematic phases of polysaccharide cellulose nanocrystals. The environment-induced pitch tuning changes the polarization of the helicoidal organization, affording different hysteresis effects and, in turn, excitatory and inhibitory nonvolatile behavior in the bio-electrolyte-gated transistors. By applying voltage pulses combined with stimulation by chiral light, the artificial optoelectronic synapse tunes not only synaptic functions but also learning pathways and color recognition. These multifunctional bio-based synaptic field-effect transistors show potential for parallel neuromorphic computing and robot vision technology.
Affiliation(s)
- Moon Jong Han
- Department of Electronic Engineering, Gachon University, Seongnam 13120, Republic of Korea
- Vladimir V. Tsukruk
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, United States
6. Yamada T, Watanabe T, Sasaki Y. Are sleep disturbances a cause or consequence of autism spectrum disorder? Psychiatry Clin Neurosci 2023; 77:377-385. PMID: 36949621; PMCID: PMC10871071; DOI: 10.1111/pcn.13550.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by core symptoms such as atypical social communication, stereotyped behaviors, and restricted interests. One of the comorbid symptoms of individuals with ASD is sleep disturbance. There are two major hypotheses regarding the neural mechanism underlying ASD: the excitation/inhibition (E/I) imbalance hypothesis and the altered neuroplasticity hypothesis. However, the pathology of ASD remains unclear due to inconsistent research results. This paper argues that sleep is a confounding factor and thus must be considered when examining the pathology of ASD, because sleep plays an important role in modulating the E/I balance and neuroplasticity in the human brain. Investigation of the E/I balance and neuroplasticity during sleep might enhance our understanding of the neural mechanisms of ASD. It may also lead to the development of neurobiologically informed interventions to supplement existing psychosocial therapies.
Affiliation(s)
- Takashi Yamada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
- Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
- Yuka Sasaki
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, 02912, USA
7. Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. PMID: 37449083; PMCID: PMC10336230; DOI: 10.3389/fncom.2023.1092185.
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on an understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build biologically plausible neural networks by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcomes of such optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism of the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map that helps navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea
8. Langdon C, Genkin M, Engel TA. A unifying perspective on neural manifolds and circuits for cognition. Nat Rev Neurosci 2023; 24:363-377. PMID: 37055616; PMCID: PMC11058347; DOI: 10.1038/s41583-023-00693-x.
Abstract
Two different perspectives have informed efforts to explain the link between the brain and behaviour. One approach seeks to identify neural circuit elements that carry out specific functions, emphasizing connectivity between neurons as a substrate for neural computations. Another approach centres on neural manifolds - low-dimensional representations of behavioural signals in neural population activity - and suggests that neural computations are realized by emergent dynamics. Although manifolds reveal an interpretable structure in heterogeneous neuronal activity, finding the corresponding structure in connectivity remains a challenge. We highlight examples in which establishing the correspondence between low-dimensional activity and connectivity has been possible, unifying the neural manifold and circuit perspectives. This relationship is conspicuous in systems in which the geometry of neural responses mirrors their spatial layout in the brain, such as the fly navigational system. Furthermore, we describe evidence that, in systems in which neural responses are heterogeneous, the circuit comprises interactions between activity patterns on the manifold via low-rank connectivity. We suggest that unifying the manifold and circuit approaches is important if we are to be able to causally test theories about the neural computations that underlie behaviour.
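The low-rank circuit picture discussed in this review has a standard minimal construction: connectivity J = m nᵀ / N of rank two, so that all recurrent input is confined to the two-dimensional column space of m, and the manifold structure follows from the connectivity. A sketch with arbitrary leak, gain, and network size (illustrative choices, not taken from the review):

```python
import numpy as np

rng = np.random.default_rng(0)
N, rank = 100, 2

# Rank-2 connectivity J = m @ n.T / N, the standard low-rank RNN construction
m = rng.standard_normal((N, rank))
n = rng.standard_normal((N, rank))
J = (m @ n.T) / N

x = rng.standard_normal(N)
traj = []
for _ in range(500):
    x = x + 0.1 * (-x + J @ np.tanh(x))  # leaky rate dynamics
    traj.append(x.copy())
traj = np.array(traj)
```

Because J has rank two, the recurrent term always lies in span(m); the component of activity outside that plane only decays, which is the connectivity-to-manifold correspondence the review emphasizes.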
Affiliation(s)
- Christopher Langdon
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Mikhail Genkin
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Tatiana A Engel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA.
9. Garnier Artiñano T, Andalibi V, Atula I, Maestri M, Vanni S. Biophysical parameters control signal transfer in spiking network. Front Comput Neurosci 2023; 17:1011814. PMID: 36761840; PMCID: PMC9905747; DOI: 10.3389/fncom.2023.1011814.
Abstract
Introduction: Information transmission and representation in both natural and artificial networks depend on connectivity between units. Biological neurons, in addition, modulate synaptic dynamics and post-synaptic membrane properties, but how these relate to information transmission in a population of neurons is still poorly understood. A recent study investigated local learning rules and showed how a spiking neural network can learn to represent continuous signals. Our study builds on their model to explore how basic membrane properties and synaptic delays affect information transfer. Methods: The system consisted of three input and output units and a hidden layer of 300 excitatory and 75 inhibitory leaky integrate-and-fire (LIF) or adaptive integrate-and-fire (AdEx) units. After optimizing the connectivity to accurately replicate the input patterns in the output units, we transformed the model to more biologically accurate units and included synaptic delay and concurrent action potential generation in distinct neurons. We examined three parameter regimes, which comprised either identical physiological values for both excitatory and inhibitory units (Comrade), more biologically accurate values (Bacon), or the Comrade regime with output units optimized for low reconstruction error (HiFi). We evaluated information transmission and classification accuracy of the network with four distinct metrics: coherence, Granger causality, transfer entropy, and reconstruction error. Results: Biophysical parameters had a major impact on information transfer metrics. Classification was surprisingly robust, surviving very low firing and information rates, whereas information transmission overall, and low reconstruction error in particular, depended more on higher firing rates in LIF units. In AdEx units, firing rates were lower and less information was transferred, but interestingly the highest information transmission rates no longer overlapped with the highest firing rates. Discussion: Our findings can be viewed in light of the predictive coding theory of the cerebral cortex and may suggest information transfer qualities as a phenomenological property of biological cells.
Affiliation(s)
- Tomás Garnier Artiñano
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Vafa Andalibi
- Department of Computer Science, Indiana University Bloomington, Bloomington, IN, United States
- Iiris Atula
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Matteo Maestri
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Simo Vanni
- Helsinki University Hospital (HUS) Neurocenter, Neurology, Helsinki University Hospital, Helsinki, Finland
- Department of Neurosciences, Clinicum, University of Helsinki, Helsinki, Finland
- Department of Physiology, Medicum, University of Helsinki, Helsinki, Finland
10. Ali A, Ahmad N, de Groot E, van Gerven MAJ, Kietzmann TC. Predictive coding is a consequence of energy efficiency in recurrent neural networks. Patterns (N Y) 2022; 3:100639. PMID: 36569556; PMCID: PMC9768680; DOI: 10.1016/j.patter.2022.100639.
Abstract
Predictive coding is a promising framework for understanding brain function. It postulates that the brain continuously inhibits predictable sensory input, ensuring preferential processing of surprising elements. A central aspect of this view is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modeling to demonstrate that such architectural hardwiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency. When training recurrent neural networks to minimize their energy consumption while operating in predictive environments, the networks self-organize into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. Moving beyond the view of purely top-down-driven predictions, we demonstrate, via virtual lesioning experiments, that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time.
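The training objective described here, task error plus an energy (metabolic) penalty on unit activity, can be sketched directly. The network sizes, the tanh nonlinearity, the autoencoding task, and `lam` below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def energy_regularized_loss(W_in, W_rec, W_out, xs, targets, lam=0.1):
    """Task loss plus a metabolic penalty on unit activity; minimizing
    this kind of objective is what the paper reports drives networks
    toward predictive-coding-like solutions. lam is illustrative."""
    h = np.zeros(W_rec.shape[0])
    task_err = energy = 0.0
    for x, t in zip(xs, targets):
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent update
        y = W_out @ h
        task_err += np.mean((y - t) ** 2)
        energy += np.mean(h ** 2)           # proxy for metabolic cost
    T = len(xs)
    return task_err / T + lam * energy / T

rng = np.random.default_rng(1)
W_in = rng.standard_normal((16, 4)) * 0.5
W_rec = rng.standard_normal((16, 16)) * 0.1
W_out = rng.standard_normal((4, 16)) * 0.5
xs = rng.standard_normal((20, 4))
targets = xs                                # e.g. an autoencoding task
loss_reg = energy_regularized_loss(W_in, W_rec, W_out, xs, targets, lam=0.1)
loss_task_only = energy_regularized_loss(W_in, W_rec, W_out, xs, targets, lam=0.0)
```

Training against `loss_reg` with any gradient-based optimizer penalizes activity that does not pay for itself in task performance; the paper's claim is that prediction-and-error unit structure emerges from exactly this pressure.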
Affiliation(s)
- Abdullahi Ali
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Nasir Ahmad
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Elgar de Groot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, the Netherlands
- Tim Christian Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
11. Lee J, Jo J, Lee B, Lee JH, Yoon S. Brain-inspired Predictive Coding Improves the Performance of Machine Challenging Tasks. Front Comput Neurosci 2022; 16:1062678. PMID: 36465966; PMCID: PMC9709416; DOI: 10.3389/fncom.2022.1062678.
Abstract
Backpropagation has been regarded as the most favorable algorithm for training artificial neural networks. However, it has been criticized for its biological implausibility, because its learning mechanism contradicts that of the human brain. Although backpropagation has achieved super-human performance in various machine learning applications, it often shows limited performance on specific tasks. We collectively refer to such tasks as machine-challenging tasks (MCTs) and aim to investigate methods to enhance machine learning for MCTs. Specifically, we start with a natural question: can a learning mechanism that mimics the human brain improve MCT performance? We hypothesized that a learning mechanism replicating the human brain is effective for tasks where machine intelligence struggles. Multiple experiments corresponding to specific types of MCTs where machine intelligence has room to improve performance were conducted using predictive coding, a more biologically plausible learning algorithm than backpropagation. This study regarded incremental learning, long-tailed recognition, and few-shot recognition as representative MCTs. With extensive experiments, we examined the effectiveness of predictive coding, which robustly outperformed backpropagation-trained networks on the MCTs. We demonstrate that predictive coding-based incremental learning alleviates catastrophic forgetting, that predictive coding-based learning mitigates the classification bias in long-tailed recognition, and that a network trained with predictive coding can correctly predict corresponding targets from few samples. We analyze the experimental results by drawing analogies between the properties of predictive coding networks and those of the human brain, and discuss the potential of predictive coding networks in general machine learning.
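The inference step that distinguishes predictive coding from backpropagation can be shown in a few lines: a latent estimate is settled by gradient descent on local prediction errors, with no backward pass through a computation graph. A minimal two-layer linear sketch; the weights, prior, and learning rate are arbitrary illustrative choices:

```python
import numpy as np

def pc_inference(prior_mean, y_obs, W, n_steps=300, lr=0.1):
    """Settle a latent estimate z by gradient descent on the sum of
    squared prediction errors, the local inference step used in
    predictive-coding networks (no backward pass through the graph)."""
    z = np.zeros(W.shape[1])
    for _ in range(n_steps):
        eps_y = y_obs - W @ z       # bottom-up prediction error
        eps_z = z - prior_mean      # top-down (prior) error
        z += lr * (W.T @ eps_y - eps_z)
    return z

W = np.array([[0.6, 0.2],
              [0.1, 0.5]])
prior = np.array([1.0, 0.0])
y = np.array([0.5, 0.2])
z_hat = pc_inference(prior, y, W)
```

For this linear case the settled estimate matches the closed-form solution of (WᵀW + I) z = Wᵀy + prior, i.e. the same optimum reached using only locally available error signals.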
Affiliation(s)
- Jangho Lee
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Jeonghee Jo
- Institute of New Media and Communications, Seoul National University, Seoul, South Korea
- Byounghwa Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Jung-Hoon Lee
- CybreBrain Research Section, Electronics and Telecommunications Research Institute (ETRI), Daejeon, South Korea
- Sungroh Yoon
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, South Korea
12. Hu B, Guan ZH, Chen G, Chen CLP. Neuroscience and Network Dynamics Toward Brain-Inspired Intelligence. IEEE Trans Cybern 2022; 52:10214-10227. PMID: 33909581; DOI: 10.1109/tcyb.2021.3071110.
Abstract
This article surveys the interdisciplinary research of neuroscience, network science, and dynamic systems, with emphasis on the emergence of brain-inspired intelligence. To replicate brain intelligence, a practical way is to reconstruct cortical networks with the dynamic activities that nourish brain functions, instead of using only artificial computing networks. The survey provides a complex-network and spatiotemporal-dynamics (hereafter, network dynamics) perspective for understanding the brain and cortical networks and, furthermore, develops integrated approaches of neuroscience and network dynamics toward building brain-inspired intelligence with learning and resilience functions. Fundamental concepts and principles of complex networks, neuroscience, and hybrid dynamic systems are presented, as well as relevant studies on the brain and intelligence. Other promising research directions, such as brain science, data science, quantum information science, and machine behavior, are also briefly discussed with a view toward future applications.
13. L'esprit prédictif: introduction à la théorie du cerveau bayésien [The predictive mind: an introduction to the Bayesian brain theory; article in French]. Encephale 2022; 48:436-444. DOI: 10.1016/j.encep.2021.09.011.
14. Masset P, Qin S, Zavatone-Veth JA. Drifting neuronal representations: Bug or feature? Biol Cybern 2022; 116:253-266. PMID: 34993613; DOI: 10.1007/s00422-021-00916-3.
Abstract
The brain displays a remarkable ability to sustain stable memories, allowing animals to execute precise behaviors or recall stimulus associations years after they were first learned. Yet, recent long-term recording experiments have revealed that single-neuron representations continuously change over time, contravening the classical assumption that learned features remain static. How do unstable neural codes support robust perception, memories, and actions? Here, we review recent experimental evidence for such representational drift across brain areas, as well as dissections of its functional characteristics and underlying mechanisms. We emphasize theoretical proposals for how drift need not only be a form of noise for which the brain must compensate. Rather, it can emerge from computationally beneficial mechanisms in hierarchical networks performing robust probabilistic computations.
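One theoretical point in this review, that drift need not degrade computation, has a compact linear-algebra illustration: if the population code drifts by a common orthogonal transform, every single neuron's tuning changes while the population geometry (the Gram matrix over stimuli) is exactly preserved. A sketch; the rotation model is an illustration, not a claim about the reviewed recordings:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stimuli, n_neurons = 8, 50

X = rng.standard_normal((n_stimuli, n_neurons))   # day-1 population responses

# Drift modeled as a random rotation of the population code:
# the QR decomposition of a Gaussian matrix yields an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
X_drifted = X @ Q                                 # day-2 responses

per_neuron_change = np.linalg.norm(X_drifted - X, axis=0)
```

A downstream decoder that only cares about inter-stimulus geometry (distances and angles between population vectors) is unaffected by such drift, even though every individual neuron has "moved".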
Affiliation(s)
- Paul Masset: Center for Brain Science, and Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Shanshan Qin: Center for Brain Science, and School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Jacob A Zavatone-Veth: Center for Brain Science, and Department of Physics, Harvard University, Cambridge, MA, USA
15
Shaffer C, Westlin C, Quigley KS, Whitfield-Gabrieli S, Barrett LF. Allostasis, Action, and Affect in Depression: Insights from the Theory of Constructed Emotion. Annu Rev Clin Psychol 2022; 18:553-580. [PMID: 35534123 PMCID: PMC9247744 DOI: 10.1146/annurev-clinpsy-081219-115627] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The theory of constructed emotion is a systems neuroscience approach to understanding the nature of emotion. It is also a general theoretical framework to guide hypothesis generation for how actions and experiences are constructed as the brain continually anticipates metabolic needs and attempts to meet those needs before they arise (termed allostasis). In this review, we introduce this framework and hypothesize that allostatic dysregulation is a trans-disorder vulnerability for mental and physical illness. We then review published findings consistent with the hypothesis that several symptoms in major depressive disorder (MDD), such as fatigue, distress, context insensitivity, reward insensitivity, and motor retardation, are associated with persistent problems in energy regulation. Our approach transforms the current understanding of MDD as resulting from enhanced emotional reactivity combined with reduced cognitive control and, in doing so, offers novel hypotheses regarding the development, progression, treatment, and prevention of MDD.
Affiliation(s)
- Clare Shaffer: Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Christiana Westlin: Department of Psychology, Northeastern University, Boston, Massachusetts, USA
- Karen S Quigley: Department of Psychology, Northeastern University, Boston, Massachusetts, USA; VA Bedford Healthcare System, Bedford, Massachusetts, USA
- Susan Whitfield-Gabrieli: Department of Psychology, Northeastern University, Boston, Massachusetts, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Lisa Feldman Barrett: Department of Psychology, Northeastern University, Boston, Massachusetts, USA; Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts, USA
16
Merging pruning and neuroevolution: towards robust and efficient controllers for modular soft robots. KNOWL ENG REV 2022. [DOI: 10.1017/s0269888921000151] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Artificial neural networks (ANNs) can be employed as controllers for robotic agents. Their structure is often complex, with many neurons and connections, especially when the robots have many sensors and actuators distributed across their bodies and/or when high expressive power is desirable. Pruning (removing neurons or connections) reduces the complexity of the ANN, thus increasing its energy efficiency, and has in some cases been reported to improve generalization. In addition, it is well known that pruning in biological neural networks plays a fundamental role in the development of brains and their ability to learn. In this study, we consider the evolutionary optimization of neural controllers for the case study of voxel-based soft robots, a kind of modular, bio-inspired soft robot, applying pruning during fitness evaluation. For a locomotion task, and for centralized as well as distributed controllers, we experimentally characterize the effect of different forms of pruning on after-pruning effectiveness, life-long effectiveness, adaptability to new terrains, and behavior. We find that incorporating some forms of pruning in neuroevolution yields controllers almost as effective as those evolved without pruning, with the benefit of higher robustness to pruning. We also observe occasional improvements in generalization ability.
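The core operation studied here, magnitude pruning of a neural controller, can be illustrated with a minimal sketch (my own toy example in Python/NumPy; the two-layer controller, its sizes, and the 30% pruning rate are hypothetical and unrelated to the paper's voxel-based robots or its evolutionary setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer controller: 8 sensor inputs -> 16 hidden units -> 4 actuator outputs.
W1 = rng.normal(0, 1, (16, 8))
W2 = rng.normal(0, 1, (4, 16))

def controller(x, W1, W2):
    return np.tanh(W2 @ np.tanh(W1 @ x))

def magnitude_prune(W, rate):
    """Zero out the smallest-magnitude fraction `rate` of the weights."""
    k = int(rate * W.size)
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W), axis=None)[k - 1]
    return np.where(np.abs(W) <= thresh, 0.0, W)

x = rng.normal(0, 1, 8)
y_full = controller(x, W1, W2)
y_pruned = controller(x, magnitude_prune(W1, 0.3), magnitude_prune(W2, 0.3))

sparsity = (magnitude_prune(W1, 0.3) == 0).mean()
print(sparsity)                          # about 30% of W1's weights are zeroed
print(np.abs(y_full - y_pruned).max())   # deviation introduced by pruning
```

Evaluating such pruned variants during fitness evaluation, as the study does, rewards controllers whose behavior degrades gracefully as weights are removed.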
17
Zhou D, Lynn CW, Cui Z, Ciric R, Baum GL, Moore TM, Roalf DR, Detre JA, Gur RC, Gur RE, Satterthwaite TD, Bassett DS. Efficient coding in the economics of human brain connectomics. Netw Neurosci 2022; 6:234-274. [PMID: 36605887 PMCID: PMC9810280 DOI: 10.1162/netn_a_00223] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Accepted: 12/08/2021] [Indexed: 01/07/2023] Open
Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior, beyond the conventional network efficiency metric, for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
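The random-walk communication dynamics on which the theory's predictions rest can be illustrated with a toy hitting-time calculation (a generic sketch in Python/NumPy, not the authors' compression-efficiency metric; the 5-node graph is invented for illustration):

```python
import numpy as np

# Toy 5-node line graph 0-1-2-3-4 plus a shortcut edge 0-4.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]:
    A[i, j] = A[j, i] = 1

P = A / A.sum(axis=1, keepdims=True)  # random-walk transition matrix

def expected_hitting_steps(P, source, target):
    """Mean number of steps for a random walk from source to first reach target,
    via the fundamental matrix of the walk with the target made absorbing."""
    keep = [i for i in range(len(P)) if i != target]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return h[keep.index(source)]

print(expected_hitting_steps(P, 0, 4))  # 4.0 expected steps, versus a shortest path of length 1
```

The gap between diffusive hitting time and shortest-path length is the kind of quantity such communication models trade off against transmission fidelity.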
Affiliation(s)
- Dale Zhou: Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christopher W. Lynn: Initiative for the Theoretical Sciences, Graduate Center, City University of New York, New York, NY, USA; Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ, USA
- Zaixu Cui: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Rastko Ciric: Department of Bioengineering, Schools of Engineering and Medicine, Stanford University, Stanford, CA, USA
- Graham L. Baum: Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Tyler M. Moore: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- David R. Roalf: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John A. Detre: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ruben C. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Raquel E. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Theodore D. Satterthwaite: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children’s Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Dani S. Bassett (corresponding author): Departments of Psychiatry, Neurology, Physics & Astronomy, Bioengineering, and Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA, USA; Santa Fe Institute, Santa Fe, NM, USA
18
Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021; 11:23376. [PMID: 34862429 PMCID: PMC8642544 DOI: 10.1038/s41598-021-02779-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 11/22/2021] [Indexed: 11/14/2022] Open
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
Affiliation(s)
- Julian Büchel: SynSense, Thurgauerstrasse 40, 8050 Zurich, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Dmitrii Zendrikov: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Sergio Solinas: Department of Biomedical Science, University of Sassari, Piazza Università 21, 07100 Sassari, Sardegna, Italy
- Giacomo Indiveri: SynSense, Thurgauerstrasse 40, 8050 Zurich, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Dylan R Muir: SynSense, Thurgauerstrasse 40, 8050 Zurich, Switzerland
19
A general principle of dendritic constancy: A neuron's size- and shape-invariant excitability. Neuron 2021; 109:3647-3662.e7. [PMID: 34555313 DOI: 10.1016/j.neuron.2021.08.028] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Revised: 06/29/2021] [Accepted: 08/20/2021] [Indexed: 11/20/2022]
Abstract
Reducing neuronal size results in less membrane and therefore lower input conductance. Smaller neurons are thus more excitable, as seen in their responses to somatic current injections. However, the impact of a neuron's size and shape on its voltage responses to dendritic synaptic activation is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs in unbranched cables, showing that these are entirely independent of dendritic length. For a given synaptic density, neuronal responses depend only on the average dendritic diameter and intrinsic conductivity. This remains valid for a wide range of morphologies irrespective of their arborization complexity. Spiking models indicate that morphology-invariant numbers of spikes approximate the percentage of active synapses. In contrast to spike rate, spike times do depend on dendrite morphology. In summary, neuronal excitability in response to distributed synaptic inputs is largely unaffected by dendrite length or complexity.
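The length-invariance result has a simple intuition: for a fixed synaptic density, total input current and total membrane leak both scale with dendritic size, so their ratio stays fixed. A minimal compartmental sketch (my own toy model in Python/NumPy, with arbitrary conductance values, not the paper's morphologically detailed models) makes this concrete:

```python
import numpy as np

def cable_voltage(n_comp, g_m=1e-3, g_a=5e-2, i_syn=1e-4):
    """Steady-state voltages of a passive cable with sealed ends and a
    uniform distributed input current i_syn per compartment."""
    # Conductance matrix: membrane leak on the diagonal plus an axial
    # graph-Laplacian term coupling neighbouring compartments.
    G = np.diag(np.full(n_comp, g_m))
    for k in range(n_comp - 1):
        G[k, k] += g_a
        G[k + 1, k + 1] += g_a
        G[k, k + 1] -= g_a
        G[k + 1, k] -= g_a
    return np.linalg.solve(G, np.full(n_comp, i_syn))

# Same synaptic density, very different dendritic lengths:
short, long_ = cable_voltage(20), cable_voltage(200)
print(short.mean(), long_.mean())  # both 0.1 = i_syn / g_m, independent of length
```

The constant profile V = i_syn / g_m solves the system exactly because the axial (Laplacian) term vanishes on a uniform voltage, which is the linear-algebra version of the paper's observation that distributed-input responses do not depend on dendritic length.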
20
Zeldenrust F, Gutkin B, Denève S. Efficient and robust coding in heterogeneous recurrent networks. PLoS Comput Biol 2021; 17:e1008673. [PMID: 33930016 PMCID: PMC8115785 DOI: 10.1371/journal.pcbi.1008673] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Revised: 05/12/2021] [Accepted: 04/07/2021] [Indexed: 11/19/2022] Open
Abstract
Cortical networks show a large heterogeneity of neuronal properties. However, traditional coding models have focused on homogeneous populations of excitatory and inhibitory neurons. Here, we analytically derive a class of recurrent networks of spiking neurons that track a continuously varying input online, close to optimally, based on two assumptions: 1) every spike is decoded linearly and 2) the network aims to reduce the mean-squared error between the input and the estimate. From this we derive a class of predictive coding networks that unifies encoding and decoding, and in which we can investigate the difference between homogeneous networks and heterogeneous networks, in which each neuron represents different features and has different spike-generating properties. We find that in this framework, 'type 1' and 'type 2' neurons arise naturally, and networks consisting of a heterogeneous population of different neuron types are both more efficient and more robust against correlated noise. We make two experimental predictions: 1) integrators should show strong correlations with other integrators, and resonators with resonators, whereas correlations between neurons with different coding properties should be much weaker; and 2) 'type 2' neurons should be more coherent with the overall network activity than 'type 1' neurons.
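A one-dimensional caricature of such a network can be simulated in a few lines (a minimal sketch of the greedy "spike only when it reduces the squared error" rule that this family of models is built on; the weights, leak rate, and signal are illustrative choices, not the paper's heterogeneous derivation):

```python
import numpy as np

dt, lam = 1e-3, 10.0       # time step and readout decay rate
w = np.array([0.1, -0.1])  # decoding weights: one "on" and one "off" neuron
T = w**2 / 2               # thresholds implied by the mean-squared-error rule

t = np.arange(0, 2, dt)
x = np.sin(2 * np.pi * t)  # continuously varying input to track
xhat, xs = 0.0, []
for xt in x:
    # A neuron fires iff its spike would reduce the squared tracking error.
    spikes = w * (xt - xhat) > T
    xhat += w @ spikes         # each spike kicks the linear readout
    xhat -= lam * xhat * dt    # readout decays between spikes
    xs.append(xhat)

err = np.mean((np.array(xs) - x)**2)
print(err)  # small: the spiking estimate tracks the input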
Affiliation(s)
- Fleur Zeldenrust: Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Boris Gutkin: Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France; Center for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
- Sophie Denève: Group for Neural Theory, INSERM U960, Département d’Études Cognitives, École Normale Supérieure, PSL University, Paris, France
21
Recanatesi S, Farrell M, Lajoie G, Denève S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520 PMCID: PMC7930246 DOI: 10.1038/s41467-021-21696-1] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Accepted: 01/22/2021] [Indexed: 01/02/2023] Open
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data. Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
Affiliation(s)
- Stefano Recanatesi: University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA
- Matthew Farrell: Department of Applied Mathematics, University of Washington, Seattle, WA, USA
- Guillaume Lajoie: Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Sophie Denève: Group for Neural Theory, École Normale Supérieure, Paris, France
- Eric Shea-Brown: University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA; Department of Applied Mathematics, University of Washington, Seattle, WA, USA; Allen Institute for Brain Science, Seattle, WA, USA
22
Sohn H, Meirhaeghe N, Rajalingham R, Jazayeri M. A Network Perspective on Sensorimotor Learning. Trends Neurosci 2021; 44:170-181. [PMID: 33349476 PMCID: PMC9744184 DOI: 10.1016/j.tins.2020.11.007] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 09/11/2020] [Accepted: 11/20/2020] [Indexed: 12/15/2022]
Abstract
What happens in the brain when we learn? Ever since the foundational work of Cajal, the field has made numerous discoveries as to how experience could change the structure and function of individual synapses. However, more recent advances have highlighted the need for understanding learning in terms of complex interactions between populations of neurons and synapses. How should one think about learning at such a macroscopic level? Here, we develop a conceptual framework to bridge the gap between the different scales at which learning operates, from synapses to neurons to behavior. Using this framework, we explore the principles that guide sensorimotor learning across these scales, and set the stage for future experimental and theoretical work in the field.
Affiliation(s)
- Nicolas Meirhaeghe (corresponding author): Harvard-MIT Division of Health Sciences & Technology, Massachusetts Institute of Technology
- Mehrdad Jazayeri (corresponding author): McGovern Institute for Brain Research, and Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
23
Rullán Buxó CE, Pillow JW. Poisson balanced spiking networks. PLoS Comput Biol 2020; 16:e1008261. [PMID: 33216741 PMCID: PMC7717583 DOI: 10.1371/journal.pcbi.1008261] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2019] [Revised: 12/04/2020] [Accepted: 08/14/2020] [Indexed: 11/18/2022] Open
Abstract
An important problem in computational neuroscience is to understand how networks of spiking neurons can carry out various computations underlying behavior. Balanced spiking networks (BSNs) provide a powerful framework for implementing arbitrary linear dynamical systems in networks of integrate-and-fire neurons. However, the classic BSN model requires near-instantaneous transmission of spikes between neurons, which is biologically implausible. Introducing realistic synaptic delays leads to a pathological regime known as "ping-ponging", in which different populations spike maximally in alternating time bins, causing the network output to overshoot the target solution. Here we document this phenomenon and provide a novel solution: we show that a network can have realistic synaptic delays while maintaining accuracy and stability if neurons are endowed with conditionally Poisson firing. Formally, we propose two alternate formulations of Poisson balanced spiking networks: (1) a "local" framework, which replaces the hard integrate-and-fire spiking rule within each neuron by a "soft" threshold function, such that firing probability grows as a smooth nonlinear function of membrane potential; and (2) a "population" framework, which reformulates the BSN objective function in terms of expected spike counts over the entire population. We show that both approaches offer improved robustness, allowing for accurate implementation of network dynamics with realistic synaptic delays between neurons. Both Poisson frameworks preserve the coding accuracy and robustness to neuron loss of the original model and, moreover, produce positive correlations between similarly tuned neurons, a feature of real neural populations that is not found in the deterministic BSN. This work unifies balanced spiking networks with Poisson generalized linear models and suggests several promising avenues for future research.
Affiliation(s)
- Jonathan W. Pillow: Princeton Neuroscience Institute, Princeton University, Princeton, New Jersey, USA
24
Wang Q, Banerjee S, So C, Qiu C, Lam HIC, Tse D, Völgyi B, Pan F. Unmasking inhibition prolongs neuronal function in retinal degeneration mouse model. FASEB J 2020; 34:15282-15299. [PMID: 32985731 DOI: 10.1096/fj.202001315rr] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Revised: 08/25/2020] [Accepted: 09/08/2020] [Indexed: 11/11/2022]
Abstract
All neurodegenerative diseases involve a relatively long timeframe from disease onset to complete loss of function. Extending this timeframe, even at a reduced level of function, would improve the quality of life of patients with these devastating diseases. The retina, as part of the central nervous system and a frequent site of distressing neurodegenerative diseases, provides an ideal model to investigate the feasibility of extending this functional timeframe through pharmacologic intervention. Retinitis pigmentosa (RP) is a group of blinding diseases. Although the rate of progression and degree of visual loss vary, there is usually a prolonged period before patients totally lose their photoreceptors and vision. It is believed that inhibitory mechanisms remain intact and may become relatively strong after the gradual loss of photoreceptors in RP patients. Therefore, light-evoked responses of retinal ganglion cells and visual information processing in retinal circuits could be "unmasked" by blocking these inhibitory mechanisms, restoring some level of visual function. Our results indicate that if inhibition in the inner retina is unmasked in the retina of the rd10 mouse (a well-characterized, clinically relevant mouse model of RP), the light-evoked responses of many retinal ganglion cells can be induced, restoring their normal light sensitivity. The GABAA receptor plays a major role in this masking inhibition. The ERG b-wave and behavioral tests of spatial vision partly recovered after application of picrotoxin (PTX). Hence, removing retinal inhibition unmasks signalling mediated by surviving cones, thereby restoring some degree of visual function. These results may offer a novel strategy to restore visual function via the surviving cones in RP patients and in other gradual, progressive neurodegenerative diseases.
Affiliation(s)
- Qin Wang: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Seema Banerjee: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chunghim So: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chunting Qiu: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Hang-I Christie Lam: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Dennis Tse: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Béla Völgyi: Department of Experimental Zoology and Neurobiology, Szentágothai Research Centre, MTA NAP Retinal Electrical Synapses Research Group, University of Pécs, Pécs, Hungary
- Feng Pan: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; The Centre for Eye and Vision Research, Hong Kong
25
Li Q, Gao J, Zhang Z, Huang Q, Wu Y, Xu B. Distinguishing Epileptiform Discharges From Normal Electroencephalograms Using Adaptive Fractal and Network Analysis: A Clinical Perspective. Front Physiol 2020; 11:828. [PMID: 32903770 PMCID: PMC7438848 DOI: 10.3389/fphys.2020.00828] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 06/22/2020] [Indexed: 01/03/2023] Open
Abstract
Epilepsy is one of the most common disorders of the brain. Clinically, to corroborate an epileptic seizure-like symptom and to localize the seizure, electroencephalogram (EEG) data are often visually examined by a clinician to detect the presence of epileptiform discharges. Epileptiform discharges are transient waveforms lasting tens to hundreds of milliseconds and are mainly divided into seven types. It is important to develop systematic approaches to accurately distinguish these waveforms from normal control ones. This is a difficult task if one wishes to develop first-principle rather than black-box approaches, since clinically used scalp EEGs usually contain substantial noise and artifacts. To address this problem, we analyzed 640 multi-channel EEG segments, each 4 s long; 540 contain short epileptiform discharges and 100 are from healthy controls. We propose two approaches for distinguishing epileptiform discharges from normal EEGs. The first is based on the Signal Range and the EEG's long-range correlation properties, characterized by the Hurst parameter H extracted with adaptive fractal analysis (AFA), which can also maximally suppress the effects of noise and various kinds of artifacts. The second is based on networks constructed from three aspects of the scalp EEG signals: the Signal Range, the energy of the alpha-wave component, and the EEG's long-range correlation properties. The networks are further analyzed using singular value decomposition (SVD), and the square of the first singular value is used to construct features that distinguish epileptiform discharges from normal controls. Using a Random Forest classifier, our approaches achieve very high accuracy in distinguishing epileptiform discharges from normal controls and thus show strong promise for clinical use.
The network-based approach is also used to infer the localization of each type of epileptiform discharge; the sub-networks representing the most likely location differ among the seven types.
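The long-range-correlation feature underlying both approaches can be sketched with plain detrended fluctuation analysis (DFA), a simpler, non-adaptive relative of the AFA used in the paper (toy Python/NumPy code on synthetic noise; the scale choices and the pairing with a Signal Range feature are illustrative, and a classifier such as a Random Forest would consume these features downstream):

```python
import numpy as np

def dfa_hurst(x, scales=(16, 32, 64, 128)):
    """Estimate the Hurst exponent by detrended fluctuation analysis:
    integrate the signal, linearly detrend within windows of each scale,
    and fit the log-log slope of RMS fluctuation versus scale."""
    y = np.cumsum(x - x.mean())  # integrated profile
    flucts = []
    for s in scales:
        rms = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)  # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t))**2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)
features = {"H": dfa_hurst(noise), "signal_range": np.ptp(noise)}
print(features)  # H near 0.5 for uncorrelated noise
```

Epileptiform segments would be expected to shift such features away from the uncorrelated-noise baseline, which is what makes them separable by a downstream classifier.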
Affiliation(s)
- Qiong Li: School of Computer, Electronics and Information, Guangxi University, Nanning, China
- Jianbo Gao: Center for Geodata and Analysis, Faculty of Geographical Science, Beijing Normal University, Beijing, China; Institute of Automation, Chinese Academy of Sciences, Beijing, China; International College, Guangxi University, Nanning, Guangxi, China
- Ziwen Zhang: School of Computer, Electronics and Information, Guangxi University, Nanning, China
- Qi Huang: The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Yuan Wu: The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Bo Xu: Institute of Automation, Chinese Academy of Sciences, Beijing, China
26
Lu Z, Bassett DS. Invertible generalized synchronization: A putative mechanism for implicit learning in neural systems. CHAOS (WOODBURY, N.Y.) 2020; 30:063133. [PMID: 32611103 DOI: 10.1063/5.0004344] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Accepted: 05/25/2020] [Indexed: 06/11/2023]
Abstract
Regardless of the marked differences between biological and artificial neural systems, one fundamental similarity is that they are essentially dynamical systems that can learn to imitate other dynamical systems whose governing equations are unknown. The brain is able to learn the dynamic nature of the physical world via experience; analogously, artificial neural systems such as reservoir computing networks (RCNs) can learn the long-term behavior of complex dynamical systems from data. Recent work has shown that the mechanism of such learning in RCNs is invertible generalized synchronization (IGS). Yet, whether IGS is also the mechanism of learning in biological systems remains unclear. To shed light on this question, we draw inspiration from features of the human brain to propose a general and biologically feasible learning framework that utilizes IGS. To evaluate the framework's relevance, we construct several distinct neural network models as instantiations of the proposed framework. Regardless of their particularities, these neural network models can consistently learn to imitate other dynamical processes with a biologically feasible adaptation rule that modulates the strength of synapses. Further, we observe and theoretically explain the spontaneous emergence of four distinct phenomena reminiscent of cognitive functions: (i) learning multiple dynamics; (ii) switching among the imitations of multiple dynamical systems, either spontaneously or driven by external cues; (iii) filling-in missing variables from incomplete observations; and (iv) deciphering superimposed input from different dynamical systems. Collectively, our findings support the notion that biological neural networks can learn the dynamic nature of their environment through the mechanism of IGS.
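Reservoir computing networks of the kind analyzed here can be sketched in a few lines: a fixed random recurrent network is driven by a signal, and only a linear readout is trained (a standard echo-state-network toy in Python/NumPy that illustrates the class of model, not the paper's IGS analysis or its biological adaptation rule; all sizes and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100
# Fixed sparse random reservoir, rescaled so its spectral radius is below 1
# (a common sufficient condition for the echo-state property).
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(0.2 * np.arange(3000))         # signal whose dynamics we imitate
states = np.zeros((len(u), N))
x = np.zeros(N)
for t in range(1, len(u)):
    x = np.tanh(W @ x + W_in * u[t - 1])  # driven reservoir state
    states[t] = x

# Train only a ridge-regression readout to predict the next input value.
X, y = states[200:], u[200:]              # discard the initial transient
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
mse = np.mean((X @ W_out - y)**2)
print(mse)
```

The readout succeeds because the driven reservoir state becomes an (invertible) function of the input history, which is exactly the generalized-synchronization property the paper proposes as a learning mechanism.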
Affiliation(s)
- Zhixin Lu
  - Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Danielle S Bassett
  - Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
27
Ashhad S, Feldman JL. Emergent Elements of Inspiratory Rhythmogenesis: Network Synchronization and Synchrony Propagation. Neuron 2020; 106:482-497.e4. [PMID: 32130872] [PMCID: PMC11221628] [DOI: 10.1016/j.neuron.2020.02.005]
Abstract
We assessed the mechanism of mammalian breathing rhythmogenesis in the preBötzinger complex (preBötC) in vitro, where experimental tests remain inconsistent with hypotheses of canonical rhythmogenic cellular or synaptic mechanisms, i.e., pacemaker neurons or inhibition. Under rhythmic conditions, in each cycle, an inspiratory burst emerges as (presumptive) preBötC rhythmogenic neurons transition from aperiodic, uncorrelated population spike activity to become increasingly synchronized during preinspiration (for ∼50-500 ms), which can trigger inspiratory bursts that propagate to motoneurons. Under nonrhythmic conditions, antagonizing GABAA receptors can initiate this synchronization while inducing a higher-conductance state in nonrhythmogenic preBötC output neurons. Our analyses uncover salient features of preBötC network dynamics in which inspiratory bursts arise when, and only when, the preBötC rhythmogenic subpopulation strongly synchronizes to drive the output neurons. Furthermore, downstream propagation of preBötC network activity, ultimately to motoneurons, depends on the strength of input synchrony onto preBötC output neurons, exemplifying the synchronous propagation of network activity.
Affiliation(s)
- Sufyan Ashhad
  - Department of Neurobiology, University of California, Los Angeles, Box 951763, Los Angeles, CA 90095-1763, USA
- Jack L Feldman
  - Department of Neurobiology, University of California, Los Angeles, Box 951763, Los Angeles, CA 90095-1763, USA
28
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE Trans Neural Netw Learn Syst 2020; 31:1285-1296. [PMID: 31247574] [DOI: 10.1109/tnnls.2019.2919662]
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
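The "spike gradient threshold rule" targets the exploding-gradient problem that arises because SpikeProp-style learning differentiates through spike times: the chain rule introduces a factor proportional to 1/(dv/dt) at the threshold crossing, which blows up when the membrane potential crosses threshold slowly. A toy sketch of such a clipping rule follows; the constant 10.0 and the function shape are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def spike_time_gradient(dv_dt, grad_threshold=10.0):
    """Gradient factor dL/dt_spike ∝ -1/(dv/dt), clipped to a bounded range.

    A slow threshold crossing (small dv/dt) would otherwise produce an
    arbitrarily large update; clipping keeps training stable.
    """
    g = -1.0 / dv_dt
    return np.clip(g, -grad_threshold, grad_threshold)

print(spike_time_gradient(0.01))   # slow crossing: raw factor is -100, clipped
print(spike_time_gradient(2.0))    # fast crossing: raw factor is -0.5, unchanged
```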
29
Brendel W, Bourdoukan R, Vertechi P, Machens CK, Denève S. Learning to represent signals spike by spike. PLoS Comput Biol 2020; 16:e1007692. [PMID: 32176682] [PMCID: PMC7135338] [DOI: 10.1371/journal.pcbi.1007692]
Abstract
Networks based on coordinated spike coding can encode information with high efficiency in the spike trains of individual neurons. These networks exhibit single-neuron variability and tuning curves as typically observed in cortex, but paradoxically coincide with a precise, non-redundant spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these networks can be learnt with local learning rules. Here, we show how to learn the required architecture. Using coding efficiency as an objective, we derive spike-timing-dependent learning rules for a recurrent neural network, and we provide exact solutions for the networks’ convergence to an optimal state. As a result, we deduce an entire network from its input distribution and a firing cost. After learning, basic biophysical quantities such as voltages, firing thresholds, excitation, inhibition, or spikes acquire precise functional interpretations.

Spiking neural networks can encode information with high efficiency in the spike trains of individual neurons if the synaptic weights between neurons are set to specific, optimal values. In this regime, the networks exhibit irregular spike trains, high trial-to-trial variability, and stimulus tuning, as typically observed in cortex. The strong variability on the level of single neurons paradoxically coincides with a precise, non-redundant, and spike-based population code. However, it has remained unclear whether the specific synaptic connectivities required in these spiking networks can be learnt with local learning rules. In this study, we show how the required architecture can be learnt. We derive local and biophysically plausible learning rules for recurrent neural networks from first principles. We show both mathematically and using numerical simulations that these learning rules drive the networks into the optimal state, and we show that the optimal state is governed by the statistics of the input signals. After learning, the voltages of individual neurons can be interpreted as measuring the instantaneous error of the code, given by the error between the desired output signal and the actual output signal.
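The coding objective behind these networks can be sketched with a greedy spike rule: a neuron fires only when adding its decoding vector reduces the squared error between the signal and the decoded estimate. The decoder below is fixed and random, and all parameters are illustrative; the paper's contribution is showing that the recurrent weights realizing this rule can themselves be learned with local rules, which this compressed sketch omits.

```python
import numpy as np

rng = np.random.default_rng(4)

# Greedy spike-coding: fire a neuron only if its spike reduces the coding error.
M, N, T, dt = 2, 20, 1000, 0.001       # signal dim, neurons, steps, step size (s)
lam = 10.0                             # decoder leak rate (1/s)
D = rng.normal(0.0, 1.0, (M, N))
D *= 0.1 / np.linalg.norm(D, axis=0)   # small, equal-norm decoding vectors

t = dt * np.arange(T)
x = np.stack([np.sin(2 * np.pi * 2 * t), np.cos(2 * np.pi * 3 * t)])  # 2D signal

xhat, sq_err = np.zeros(M), 0.0
for k in range(T):
    xhat -= dt * lam * xhat                          # leaky decoder
    # A spike by neuron i reduces the error iff d_i . error > ||d_i||^2 / 2;
    # fire at most the single neuron whose spike helps most.
    gain = D.T @ (x[:, k] - xhat) - 0.5 * np.sum(D ** 2, axis=0)
    i = int(np.argmax(gain))
    if gain[i] > 0:
        xhat += D[:, i]                              # spike adds decoding vector
    sq_err += np.sum((x[:, k] - xhat) ** 2)

mse = sq_err / T
print(mse)
```

In the full model, this firing condition is implemented by membrane voltages (which track the projected error) and recurrent connections; here it is evaluated directly to keep the sketch short.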
Affiliation(s)
- Wieland Brendel
  - Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
  - Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
  - Tübingen AI Center, University of Tübingen, Germany
- Ralph Bourdoukan
  - Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Pietro Vertechi
  - Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
  - Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Christian K. Machens
  - Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
  - * E-mail: (CKM); (SD)
- Sophie Denève
  - Group for Neural Theory, INSERM U960, Département d’Etudes Cognitives, Ecole Normale Supérieure, Paris, France
  - * E-mail: (CKM); (SD)
30
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356] [DOI: 10.1016/j.neunet.2020.02.011]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
Affiliation(s)
- Xiangwen Wang
  - College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
  - College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang
  - College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
31
Multi-level anomalous Hall resistance in a single Hall cross for the applications of neuromorphic device. Sci Rep 2020; 10:1285. [PMID: 31992806] [PMCID: PMC6987114] [DOI: 10.1038/s41598-020-58223-z]
Abstract
We demonstrate a process for obtaining memristive multi-state Hall resistance (RH) changes in a single Hall cross (SHC) structure, whose working mechanism successfully mimics the behavior of biological neural systems. The motion of a domain wall (DW) in the SHC was used to control the increase (or decrease) of the RH amplitude. The primary synaptic functions, such as long-term potentiation (LTP), long-term depression (LTD), and spike-timing-dependent plasticity (STDP), could then be emulated by regulating RH. Programmable magnetic field pulses of varying intensity and duration were applied to adjust RH. These results show that analog readings of DW movement can closely resemble changes in synaptic weight and hold great potential for bioinspired neuromorphic computing.
32
Kastanenka KV, Moreno-Bote R, De Pittà M, Perea G, Eraso-Pichot A, Masgrau R, Poskanzer KE, Galea E. A roadmap to integrate astrocytes into Systems Neuroscience. Glia 2020; 68:5-26. [PMID: 31058383] [PMCID: PMC6832773] [DOI: 10.1002/glia.23632]
Abstract
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations on a timescale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. In our view, fundamental undertakings include applying statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits. We also discuss technical and analytical approaches for studying neuronal and astrocytic populations simultaneously, the inclusion of astrocytes in advanced modeling of neural circuits, and theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
Affiliation(s)
- Ksenia V. Kastanenka
  - Department of Neurology, MassGeneral Institute for Neurodegenerative Diseases, Massachusetts General Hospital and Harvard Medical School, Massachusetts 02129, USA
- Rubén Moreno-Bote
  - Department of Information and Communications Technologies, Center for Brain and Cognition and Universitat Pompeu Fabra, 08018 Barcelona, Spain
  - ICREA, 08010 Barcelona, Spain
- Abel Eraso-Pichot
  - Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Roser Masgrau
  - Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
- Kira E. Poskanzer
  - Department of Biochemistry & Biophysics, Neuroscience Graduate Program, and Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, San Francisco, California 94143, USA
  - Equally contributing authors
- Elena Galea
  - ICREA, 08010 Barcelona, Spain
  - Departament de Bioquímica, Institut de Neurociències i Universitat Autònoma de Barcelona, Bellaterra, 08193 Barcelona, Spain
  - Equally contributing authors
33
Bridi MCD, Zong FJ, Min X, Luo N, Tran T, Qiu J, Severin D, Zhang XT, Wang G, Zhu ZJ, He KW, Kirkwood A. Daily Oscillation of the Excitation-Inhibition Balance in Visual Cortical Circuits. Neuron 2019; 105:621-629.e4. [PMID: 31831331] [DOI: 10.1016/j.neuron.2019.11.011]
Abstract
A balance between synaptic excitation and inhibition (E/I balance) maintained within a narrow window is widely regarded to be crucial for cortical processing. In line with this idea, the E/I balance is reportedly comparable across neighboring neurons, behavioral states, and developmental stages and altered in many neurological disorders. Motivated by these ideas, we examined whether synaptic inhibition changes over the 24-h day to compensate for the well-documented sleep-dependent changes in synaptic excitation. We found that, in pyramidal cells of visual and prefrontal cortices and hippocampal CA1, synaptic inhibition also changes over the 24-h light/dark cycle but, surprisingly, in the opposite direction of synaptic excitation. Inhibition is upregulated in the visual cortex during the light phase in a sleep-dependent manner. In the visual cortex, these changes in the E/I balance occurred in feedback, but not feedforward, circuits. These observations open new and interesting questions on the function and regulation of the E/I balance.
Affiliation(s)
- Michelle C D Bridi
  - Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Fang-Jiao Zong
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Xia Min
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Nancy Luo
  - Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Trinh Tran
  - Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Jiaqian Qiu
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Daniel Severin
  - Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
- Xue-Ting Zhang
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Guanglin Wang
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China
- Zheng-Jiang Zhu
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Kai-Wen He
  - Interdisciplinary Research Center on Biology and Chemistry, Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences, Shanghai 200032, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Alfredo Kirkwood
  - Mind/Brain Institute and Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, USA
34
Koren V, Andrei AR, Hu M, Dragoi V, Obermayer K. Reading-out task variables as a low-dimensional reconstruction of neural spike trains in single trials. PLoS One 2019; 14:e0222649. [PMID: 31622346] [PMCID: PMC6797168] [DOI: 10.1371/journal.pone.0222649]
Abstract
We propose a new model of the read-out of spike trains that exploits the multivariate structure of responses of neural ensembles. Adopting the point of view of a read-out neuron that receives synaptic inputs from a population of projecting neurons, synaptic inputs are weighted with a heterogeneous set of weights. We propose that the synaptic weights reflect the role of each neuron within the population for the computational task that the network has to solve. In our case, the computational task is discrimination of binary classes of stimuli, and the weights are chosen to maximize the discrimination capacity of the network. We compute synaptic weights as the feature weights of an optimal linear classifier. Once the weights have been learned, they are applied to the spike trains to compute the post-synaptic current that modulates the spiking probability of the read-out unit in real time. We apply the model to parallel spike trains from V1 and V4 areas in the behaving macaque monkey (Macaca mulatta) while the animal is engaged in a visual discrimination task with binary classes of stimuli. Reading out spike trains with our model allows discrimination of the two classes of stimuli, whereas the population PSTH entirely fails to do so. Splitting neurons into two subpopulations according to the sign of the weight, we show that the population signals of the two functional subnetworks are negatively correlated. Distinguishing the superficial, middle, and deep layers of the cortex, we show that in both V1 and V4, the superficial layers are the most important for discriminating binary classes of stimuli.
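A rough sketch of the read-out idea on simulated data: Poisson spike counts for two stimulus classes, classifier weights from a regularized Fisher linear discriminant (standing in for the "optimal linear classifier"), and a weighted population signal whose thresholded value classifies each trial. The rates, trial counts, and exact estimator are illustrative assumptions, not the paper's recorded data or method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated spike counts: N neurons, K trials per class, two stimulus classes.
N, K = 50, 200
rates_a = rng.uniform(2.0, 10.0, N)                               # class-A mean counts
rates_b = np.clip(rates_a + rng.normal(0.0, 1.5, N), 0.1, None)   # class B shifted
counts_a = rng.poisson(rates_a, (K, N)).astype(float)
counts_b = rng.poisson(rates_b, (K, N)).astype(float)

# Classifier weights: regularized Fisher discriminant, w proportional to
# inv(Cov) @ (mu_a - mu_b).
mu_a, mu_b = counts_a.mean(0), counts_b.mean(0)
centered = np.vstack([counts_a - mu_a, counts_b - mu_b])
cov = centered.T @ centered / (2 * K - 2) + 1e-3 * np.eye(N)
w = np.linalg.solve(cov, mu_a - mu_b)

# Weighted population signal per trial; thresholding it classifies the trial.
thresh = w @ (mu_a + mu_b) / 2.0
acc = np.mean(np.concatenate([counts_a @ w > thresh, counts_b @ w < thresh]))
print(acc)
```

In the model itself, the learned weights multiply ongoing spike trains convolved with a synaptic kernel, yielding a time-resolved post-synaptic current rather than the trial-summed counts used in this sketch.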
Affiliation(s)
- Veronika Koren
  - Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
  - Bernstein Center for Computational Neuroscience Berlin, Germany
  - * E-mail:
- Ariana R. Andrei
  - Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Ming Hu
  - Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Valentin Dragoi
  - Department of Neurobiology and Anatomy, University of Texas Medical School, Houston, Texas, United States of America
- Klaus Obermayer
  - Neural Information Processing Group, Institute of Software Engineering and Theoretical Computer Science, Technische Universität Berlin, Berlin, Germany
  - Bernstein Center for Computational Neuroscience Berlin, Germany
35
Weissenberger F, Gauy MM, Zou X, Steger A. Mutual Inhibition with Few Inhibitory Cells via Nonlinear Inhibitory Synaptic Interaction. Neural Comput 2019; 31:2252-2265. [PMID: 31525311] [DOI: 10.1162/neco_a_01230]
Abstract
In computational neural network models, neurons are usually allowed to excite some and inhibit other neurons, depending on the weight of their synaptic connections. The traditional way to transform such networks into networks that obey Dale's law (i.e., a neuron can either excite or inhibit) is to accompany each excitatory neuron with an inhibitory one through which inhibitory signals are mediated. However, this requires an equal number of excitatory and inhibitory neurons, whereas a realistic number of inhibitory neurons is much smaller. In this letter, we propose a model of nonlinear interaction of inhibitory synapses on dendritic compartments of excitatory neurons that allows the excitatory neurons to mediate inhibitory signals through a subset of the inhibitory population. With this construction, the number of required inhibitory neurons can be reduced tremendously.
Affiliation(s)
- Felix Weissenberger
  - Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Marcelo Matheus Gauy
  - Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Xun Zou
  - Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
- Angelika Steger
  - Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich CH-8092, Switzerland
36
Nobukawa S, Nishimura H, Yamanishi T. Temporal-specific complexity of spiking patterns in spontaneous activity induced by a dual complex network structure. Sci Rep 2019; 9:12749. [PMID: 31484990] [PMCID: PMC6726653] [DOI: 10.1038/s41598-019-49286-8]
Abstract
Temporal fluctuation of neural activity in the brain has an important function in optimal information processing. Spontaneous activity is a source of such fluctuation. The distribution of excitatory postsynaptic potentials (EPSPs) between cortical pyramidal neurons can follow a log-normal distribution. Recent studies have shown that networks connected by weak synapses exhibit characteristics of a random network, whereas networks connected by strong synapses have small-world characteristics of small path lengths and large cluster coefficients. To investigate the relationship between the temporal complexity of spontaneous activity and this structural duality in synaptic connections, we performed a simulation study using a leaky integrate-and-fire spiking neural network with a log-normal synaptic weight distribution for the EPSPs and a duality of synaptic connectivity depending on synaptic weight. We conducted multiscale entropy analysis of the temporal spiking activity. Our simulation demonstrated that, when the strong synaptic connections approach a small-world network, specific spiking patterns arise during irregular spatio-temporal spiking activity, and the complexity at large temporal scales (i.e., slow frequencies) is enhanced. Moreover, we confirmed through a surrogate data analysis that the slow temporal dynamics reflect a deterministic process in the spiking neural networks. This modelling approach may improve the understanding of complex spatio-temporal neural activity in the brain.
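A minimal sketch of the kind of network the study simulates: leaky integrate-and-fire units whose excitatory weights are drawn from a log-normal distribution, so a few connections are far stronger than the rest. Sizes, time constants, and the external drive are illustrative; the paper's model additionally separates the weak (random-like) and strong (small-world-like) connectivity structures, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire network with log-normal (heavy-tailed) weights.
N, steps = 200, 2000                  # neurons, 1 ms time steps
tau, v_th, v_reset = 20.0, 1.0, 0.0   # membrane time constant (ms), threshold, reset

W = rng.lognormal(mean=-2.0, sigma=1.0, size=(N, N))  # heavy-tailed EPSP sizes
W[rng.random((N, N)) > 0.1] = 0.0                     # 10% connection probability
np.fill_diagonal(W, 0.0)

v = rng.uniform(0.0, 1.0, N)
spikes = np.zeros(N)
spike_counts = np.zeros(N)
for _ in range(steps):
    drive = rng.normal(0.05, 0.1, N)                  # noisy external input per ms
    v += -v / tau + 0.05 * (W @ spikes) + drive       # leak + recurrence + drive
    spikes = (v >= v_th).astype(float)
    v[spikes > 0] = v_reset
    spike_counts += spikes

mean_rate = 1000.0 * spike_counts.mean() / steps      # population rate in spikes/s
print(mean_rate)
```

The study then applies multiscale entropy analysis to spike trains of this kind; the heavy tail of the weight distribution is what lets specific repeating patterns ride on top of the irregular background activity.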
Affiliation(s)
- Sou Nobukawa
  - Department of Computer Science, Chiba Institute of Technology, 2-17-1 Tsudanuma, Narashino, Chiba, 275-0016, Japan
- Haruhiko Nishimura
  - Graduate School of Applied Informatics, University of Hyogo, 7-1-28 Chuo-ku, Kobe, Hyogo, 650-8588, Japan
- Teruya Yamanishi
  - AI & IoT Center, Department of Management and Information Sciences, Fukui University of Technology, 3-6-1 Gakuen, Fukui, 910-8505, Japan
37
Training dynamically balanced excitatory-inhibitory networks. PLoS One 2019; 14:e0220547. [PMID: 31393909] [PMCID: PMC6687153] [DOI: 10.1371/journal.pone.0220547]
Abstract
The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale’s law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale’s law and response variability.
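One common way to keep excitation and inhibition separate while training, in the spirit of the constrained optimization the abstract mentions, is to project the weight matrix back onto Dale's-law-respecting signs after every update. The sketch below (hypothetical sizes and gradient; `dale_project` is an illustrative name, not the paper's) shows that projection step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sign-constrained update: excitatory columns stay non-negative, inhibitory
# columns stay non-positive (columns index the presynaptic neuron).
N, n_exc = 100, 80                     # 80% excitatory, 20% inhibitory
sign = np.ones(N)
sign[n_exc:] = -1.0                    # per-presynaptic-neuron sign

W = np.abs(rng.normal(0.0, 0.1, (N, N))) * sign   # start sign-consistent

def dale_project(W, sign):
    """Clip each column of W to the sign of its presynaptic neuron."""
    out = W.copy()
    out[:, sign > 0] = np.clip(out[:, sign > 0], 0.0, None)
    out[:, sign < 0] = np.clip(out[:, sign < 0], None, 0.0)
    return out

# One illustrative "training" step with a random gradient, then projection.
grad = rng.normal(0.0, 0.05, (N, N))
W = dale_project(W - 0.1 * grad, sign)

ok = np.all(W[:, :n_exc] >= 0) and np.all(W[:, n_exc:] <= 0)
print(ok)
```

Projection keeps every update inside the feasible set, so the trained network obeys Dale's law at all times rather than only at convergence.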
38
Lam M, Hill WD, Trampush JW, Yu J, Knowles E, Davies G, Stahl E, Huckins L, Liewald DC, Djurovic S, Melle I, Sundet K, Christoforou A, Reinvang I, DeRosse P, Lundervold AJ, Steen VM, Espeseth T, Räikkönen K, Widen E, Palotie A, Eriksson JG, Giegling I, Konte B, Hartmann AM, Roussos P, Giakoumaki S, Burdick KE, Payton A, Ollier W, Chiba-Falek O, Attix DK, Need AC, Cirulli ET, Voineskos AN, Stefanis NC, Avramopoulos D, Hatzimanolis A, Arking DE, Smyrnis N, Bilder RM, Freimer NA, Cannon TD, London E, Poldrack RA, Sabb FW, Congdon E, Conley ED, Scult MA, Dickinson D, Straub RE, Donohoe G, Morris D, Corvin A, Gill M, Hariri AR, Weinberger DR, Pendleton N, Bitsios P, Rujescu D, Lahti J, Le Hellard S, Keller MC, Andreassen OA, Deary IJ, Glahn DC, Malhotra AK, Lencz T. Pleiotropic Meta-Analysis of Cognition, Education, and Schizophrenia Differentiates Roles of Early Neurodevelopmental and Adult Synaptic Pathways. Am J Hum Genet 2019; 105:334-350. [PMID: 31374203] [PMCID: PMC6699140] [DOI: 10.1016/j.ajhg.2019.06.012]
Abstract
Susceptibility to schizophrenia is inversely correlated with general cognitive ability at both the phenotypic and the genetic level. Paradoxically, a modest but consistent positive genetic correlation has been reported between schizophrenia and educational attainment, despite the strong positive genetic correlation between cognitive ability and educational attainment. Here we leverage published genome-wide association studies (GWASs) in cognitive ability, education, and schizophrenia to parse biological mechanisms underlying these results. Association analysis based on subsets (ASSET), a pleiotropic meta-analytic technique, allowed jointly associated loci to be identified and characterized. Specifically, we identified subsets of variants associated in the expected ("concordant") direction across all three phenotypes (i.e., greater risk for schizophrenia, lower cognitive ability, and lower educational attainment); these were contrasted with variants that demonstrated the counterintuitive ("discordant") relationship between education and schizophrenia (i.e., greater risk for schizophrenia and higher educational attainment). ASSET analysis revealed 235 independent loci associated with cognitive ability, education, and/or schizophrenia at p < 5 × 10-8. Pleiotropic analysis successfully identified more than 100 loci that were not significant in the input GWASs. Many of these have been validated by larger, more recent single-phenotype GWASs. Leveraging the joint genetic correlations of cognitive ability, education, and schizophrenia, we were able to dissociate two distinct biological mechanisms-early neurodevelopmental pathways that characterize concordant allelic variation and adulthood synaptic pruning pathways-that were linked to the paradoxical positive genetic association between education and schizophrenia. Furthermore, genetic correlation analyses revealed that these mechanisms contribute not only to the etiopathogenesis of schizophrenia but also to the broader biological dimensions implicated in both general health outcomes and psychiatric illness.
Affiliation(s)
- Max Lam
  - Institute of Mental Health, Singapore, 539747, Singapore; Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Stanley Center for Psychiatric Research, Broad Institute of Harvard and MIT, Cambridge, MA 02142, USA
- W David Hill
  - Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Joey W Trampush
  - Department of Psychiatry and the Behavioral Sciences, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Jin Yu
  - Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA
- Emma Knowles
  - Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06511, USA
- Gail Davies
  - Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Eli Stahl
  - Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Laura Huckins
  - Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- David C Liewald
  - Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
- Srdjan Djurovic
  - Department of Medical Genetics, Oslo University Hospital, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway
- Ingrid Melle
  - Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway
- Kjetil Sundet
  - Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway; Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
| | - Andrea Christoforou
- Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, Bergen 7804, N-5020 Bergen, Norway
| | - Ivar Reinvang
- Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
| | - Pamela DeRosse
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA
| | - Astri J Lundervold
- Department of Biological and Medical Psychology, University of Bergen, 7807, N-5020, Norway
| | - Vidar M Steen
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, Bergen 7804, N-5020 Bergen, Norway
| | - Thomas Espeseth
- Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway; Department of Psychology, University of Oslo, Oslo 1094, Blindern 0317, Norway
| | - Katri Räikkönen
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, 00014, Finland
| | - Elisabeth Widen
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014, Finland
| | - Aarno Palotie
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014, Finland; Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Cambridge CB10 1SA, United Kingdom; Department of Medical Genetics, University of Helsinki and University Central Hospital, Helsinki, 00014, Finland
| | - Johan G Eriksson
- Department of General Practice, University of Helsinki and Helsinki University Hospital, Helsinki, 00014, Finland; National Institute for Health and Welfare, Helsinki FI-00271, Finland; Folkhälsan Research Center, Helsinki 00290, Finland
| | - Ina Giegling
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
| | - Bettina Konte
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
| | - Annette M Hartmann
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
| | - Panos Roussos
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Department of Genetics and Genomic Science, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Institute for Multiscale Biology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Mental Illness Research, Education, and Clinical Center (VISN 2), James J. Peters VA Medical Center, Bronx, NY 10468, USA
| | | | - Katherine E Burdick
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; Mental Illness Research, Education, and Clinical Center (VISN 2), James J. Peters VA Medical Center, Bronx, NY 10468, USA; Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115
| | - Antony Payton
- Division of Informatics, Imaging, and Data Sciences, School of Health Sciences, University of Manchester, Manchester M139NT, United Kingdom
| | - William Ollier
- Centre for Epidemiology, Division of Population Health, Health Services Research and Primary Care, University of Manchester, Manchester M139PL, United Kingdom; School of Healthcare Sciences, Manchester Metropolitan University, Manchester M15 6BH, United Kingdom
| | - Ornit Chiba-Falek
- Department of Neurology, Bryan Alzheimer Disease Research Center, Duke University Medical Center, Durham, NC 27705, USA; Center for Genomic and Computational Biology, Duke University Medical Center, Durham, NC 27705, USA
| | - Deborah K Attix
- Department of Neurology, Bryan Alzheimer Disease Research Center, Duke University Medical Center, Durham, NC 27705, USA; Center for Genomic and Computational Biology, Duke University Medical Center, Durham, NC 27705, USA; Psychiatry and Behavioral Sciences, Division of Medical Psychology, Duke University Medical Center, Durham, NC 27708, USA; Department of Neurology, Duke University Medical Center, Durham, NC 27708, USA
| | - Anna C Need
- Division of Brain Sciences, Department of Medicine, Imperial College, London W12 0NN, UK
| | | | - Aristotle N Voineskos
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada
| | - Nikos C Stefanis
- Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece; University Mental Health Research Institute, Athens 115 27, Greece; Neurobiology Research Institute, Theodor-Theohari Cozzika Foundation, Athens, Greece
| | - Dimitrios Avramopoulos
- Department of Psychiatry, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Alex Hatzimanolis
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada; Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece; University Mental Health Research Institute, Athens 115 27, Greece
| | - Dan E Arking
- Department of Psychiatry, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Nikolaos Smyrnis
- Campbell Family Mental Health Institute, Centre for Addiction and Mental Health, University of Toronto, Toronto M6J 1H4, Canada; Department of Psychiatry, National and Kapodistrian University of Athens Medical School, Eginition Hospital, Athens, Greece
| | - Robert M Bilder
- McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Nelson A Freimer
- McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
| | - Tyrone D Cannon
- Department of Psychology, Yale University, New Haven, CT 06511, USA
| | - Edythe London
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA 90024, USA
| | | | - Fred W Sabb
- Robert and Beverly Lewis Center for Neuroimaging, University of Oregon, Eugene, OR, 97401, USA
| | - Eliza Congdon
- UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA 90024, USA
| | | | - Matthew A Scult
- Laboratory of NeuroGenetics, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
| | - Dwight Dickinson
- Clinical and Translational Neuroscience Branch, Intramural Research Program, National Institute of Mental Health, National Institute of Health, Bethesda, MD 20814, USA
| | - Richard E Straub
- Lieber Institute for Brain Development, Johns Hopkins University Medical Campus, Baltimore, MD 21205, USA
| | - Gary Donohoe
- Neuroimaging, Cognition, and Genomics Centre, School of Psychology and Discipline of Biochemistry, National University of Ireland, Galway, Ireland
| | - Derek Morris
- Neuroimaging, Cognition, and Genomics Centre, School of Psychology and Discipline of Biochemistry, National University of Ireland, Galway, Ireland
| | - Aiden Corvin
- Neuropsychiatric Genetics Research Group, Department of Psychiatry, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Michael Gill
- Neuropsychiatric Genetics Research Group, Department of Psychiatry, Trinity College Dublin, Dublin, Ireland; Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
| | - Ahmad R Hariri
- Laboratory of NeuroGenetics, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
| | - Daniel R Weinberger
- Lieber Institute for Brain Development, Johns Hopkins University Medical Campus, Baltimore, MD 21205, USA
| | - Neil Pendleton
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, University of Manchester, Manchester Academic Health Science Centre, Salford Royal NHS Foundation Trust, Manchester M13 9PL, United Kingdom
| | - Panos Bitsios
- Department of Psychiatry and Behavioral Sciences, Faculty of Medicine, University of Crete, Heraklion, Crete GR-71003, Greece
| | - Dan Rujescu
- Department of Psychiatry, Martin Luther University of Halle-Wittenberg, Halle 06108, Germany
| | - Jari Lahti
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, 00014, Finland; Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki 00014, Finland
| | - Stephanie Le Hellard
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Dr. Einar Martens Research Group for Biological Psychiatry, Center for Medical Genetics and Molecular Medicine, Haukeland University Hospital, Bergen 7804, N-5020 Bergen, Norway
| | - Matthew C Keller
- Institute for Behavioral Genetics, University of Colorado, Boulder, CO 80303, USA
| | - Ole A Andreassen
- Norsk Senter for Forskning på Mentale Lidelser, K.G. Jebsen Centre for Psychosis Research, University of Bergen, Bergen 4956, Nydalen 0424, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo 1039, Blindern 0315, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0318, Norway
| | - Ian J Deary
- Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom; Department of Psychology, University of Edinburgh, Edinburgh, Scotland, EH8 9JZ, United Kingdom
| | - David C Glahn
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06511, USA
| | - Anil K Malhotra
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY 11549, USA; Center for Psychiatric Neuroscience, Feinstein Institute for Medical Research, Manhasset, NY 11030, USA
| | - Todd Lencz
- Division of Psychiatry Research, The Zucker Hillside Hospital, Glen Oaks, NY 11004, USA; Department of Psychiatry, Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY 11549, USA; Center for Psychiatric Neuroscience, Feinstein Institute for Medical Research, Manhasset, NY 11030, USA.
| |
Collapse
|
39. Ott T, Masset P, Kepecs A. The Neurobiology of Confidence: From Beliefs to Neurons. Cold Spring Harb Symp Quant Biol 2019; 83:9-16. [PMID: 31270145 DOI: 10.1101/sqb.2018.83.038794]
Abstract
How confident are you? As humans, aware of our subjective sense of confidence, we can readily answer. Knowing your level of confidence helps to optimize both routine decisions such as whether to go back and check if the front door was locked and momentous ones like finding a partner for life. Yet the inherently subjective nature of confidence has limited investigations by neurobiologists. Here, we provide an overview of recent advances in this field and lay out a conceptual framework that lets us translate psychological questions about subjective confidence into the language of neuroscience. We show how statistical notions of confidence provide a bridge between our subjective sense of confidence and confidence-guided behaviors in nonhuman animals, thus enabling the study of the underlying neurobiology. We discuss confidence as a core cognitive process that enables organisms to optimize behavior such as learning or resource allocation and that serves as the basis of metacognitive reasoning. These approaches place confidence on a solid footing and pave the way for a mechanistic understanding of how the brain implements confidence-based algorithms to guide behavior.
40. Shine JM. Neuromodulatory Influences on Integration and Segregation in the Brain. Trends Cogn Sci 2019; 23:572-583. [PMID: 31076192 DOI: 10.1016/j.tics.2019.04.002]
Abstract
Cognitive function relies on the dynamic cooperation of specialized regions of the brain; however, the elements of the system responsible for coordinating this interaction remain poorly understood. In this Opinion article I argue that this capacity is mediated in part by competitive and cooperative dynamic interactions between two prominent metabotropic neuromodulatory systems - the cholinergic basal forebrain and the noradrenergic locus coeruleus (LC). I assert that activity in these projection nuclei regulates the amount of segregation and integration within the whole brain network by modulating the activity of a diverse set of specialized regions of the brain on a timescale relevant for cognition and attention.
41. Wagner MJ, Kim TH, Kadmon J, Nguyen ND, Ganguli S, Schnitzer MJ, Luo L. Shared Cortex-Cerebellum Dynamics in the Execution and Learning of a Motor Task. Cell 2019; 177:669-682.e24. [PMID: 30929904 PMCID: PMC6500577 DOI: 10.1016/j.cell.2019.02.019]
Abstract
Throughout mammalian neocortex, layer 5 pyramidal (L5) cells project via the pons to a vast number of cerebellar granule cells (GrCs), forming a fundamental pathway. Yet, it is unknown how neuronal dynamics are transformed through the L5→GrC pathway. Here, by directly comparing premotor L5 and GrC activity during a forelimb movement task using dual-site two-photon Ca2+ imaging, we found that in expert mice, L5 and GrC dynamics were highly similar. L5 cells and GrCs shared a common set of task-encoding activity patterns, possessed similar diversity of responses, and exhibited high correlations comparable to local correlations among L5 cells. Chronic imaging revealed that these dynamics co-emerged in cortex and cerebellum over learning: as behavioral performance improved, initially dissimilar L5 cells and GrCs converged onto a shared, low-dimensional, task-encoding set of neural activity patterns. Thus, a key function of cortico-cerebellar communication is the propagation of shared dynamics that emerge during learning.
42. Cortical recruitment determines learning dynamics and strategy. Nat Commun 2019; 10:1479. [PMID: 30931939 PMCID: PMC6443669 DOI: 10.1038/s41467-019-09450-0]
Abstract
Salience is a broad and widely used concept in neuroscience whose neuronal correlates, however, remain elusive. In behavioral conditioning, salience is used to explain various effects, such as stimulus overshadowing, and refers to how fast and strongly a stimulus can be associated with a conditioned event. Here, we identify sounds of equal intensity and perceptual detectability, which due to their spectro-temporal content recruit different levels of population activity in mouse auditory cortex. When using these sounds as cues in a Go/NoGo discrimination task, the degree of cortical recruitment matches the salience parameter of a reinforcement learning model used to analyze learning speed. We test an essential prediction of this model by training mice to discriminate light-sculpted optogenetic activity patterns in auditory cortex, and verify that cortical recruitment causally determines association or overshadowing of the stimulus components. This demonstrates that cortical recruitment underlies major aspects of stimulus salience during reinforcement learning.
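The salience account in this abstract can be illustrated with a minimal Rescorla-Wagner sketch (the gains and trial counts below are illustrative, not the authors' fitted model): cues trained in compound share one prediction error, so the cue with the larger salience parameter absorbs most of the associative strength and overshadows the other.

```python
# Rescorla-Wagner learning with a per-stimulus salience parameter (alpha).
# All cues presented in compound share one prediction error, so the
# high-salience cue acquires most of the associative strength and
# "overshadows" the low-salience cue.

def train_compound(alphas, beta=0.3, lam=1.0, n_trials=100):
    """Train cues presented together; return final associative strengths."""
    V = [0.0] * len(alphas)
    for _ in range(n_trials):
        error = lam - sum(V)                  # shared prediction error
        for i, alpha in enumerate(alphas):
            V[i] += alpha * beta * error      # salience scales the update
    return V

V_high, V_low = train_compound(alphas=[0.5, 0.1])
print(V_high > V_low)  # True: the salient cue wins the association
```

Because each cue's update is proportional to its own alpha times the common error, the final strengths end up in the ratio of the saliences, which is the overshadowing effect the cortical-recruitment measurements are mapped onto.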
43. Berniker M, Penny S. A normative approach to neuromotor control. Biol Cybern 2019; 113:83-92. [PMID: 30178151 DOI: 10.1007/s00422-018-0777-7]
Abstract
While we can readily observe and model the dynamics of our limbs, analyzing the neurons that drive movement is not nearly as straightforward. As a result, their role in motor behavior (e.g., forward models, state estimators, controllers) remains elusive. Computational explanations of electrophysiological data often rely on firing rate models or deterministic spiking models, yet neither can accurately describe the interactions of neurons that issue spikes probabilistically. Here we take a normative approach by designing a probabilistic spiking network to implement LQR control for a limb model. We find typical results: cosine tuning curves, population vectors that correlate with reaching directions, low-dimensional oscillatory activity for reaches that have no oscillatory movement, and changes in neurons' tuning curves after force-field adaptation. Importantly, while the model is consistent with these empirically derived correlations, we can also analyze it in terms of the known causal mechanism: an LQR controller and the probability distributions of the neurons that encode it. Redesigning the system under a different set of assumptions (e.g., a different controller or network architecture) would yield a new set of testable predictions. We suggest this normative approach can be a framework for examining the motor system, providing testable links between observed neural activity and motor behavior.
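As a rough sketch of the normative ingredient named above, the LQR controller itself (the probabilistic spiking encoding is beyond a few lines), the following assumes a one-dimensional point-mass limb with illustrative cost weights, none of which are taken from the paper:

```python
import numpy as np

# Discrete-time LQR for a 1-D point-mass "limb"; state x = [position, velocity].
# Dynamics and cost weights are illustrative, not taken from the paper.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # passive dynamics
B = np.array([[0.0], [dt]])             # force input enters the velocity
Q = np.diag([1.0, 0.1])                 # penalize position and velocity error
R = np.array([[0.001]])                 # penalize motor effort

# Solve the discrete algebraic Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P = Q + A.T @ P @ (A - B @ K)

# Closed loop: drive the limb from x0 = [1, 0] toward the origin (the target).
x = np.array([[1.0], [0.0]])
for _ in range(2000):
    u = -K @ x              # optimal linear feedback control
    x = A @ x + B @ u
print(bool(abs(x[0, 0]) < 1e-2))  # True: the reach has converged
```

The point of the normative program is that the gain matrix `K` (and the noise model of the neurons encoding it) is the causal object, from which tuning-curve-like correlations can then be derived rather than assumed.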
44. La Barbera L, Vedele F, Nobili A, D'Amelio M, Krashia P. Neurodevelopmental Disorders: Functional Role of Ambra1 in Autism and Schizophrenia. Mol Neurobiol 2019; 56:6716-6724. [PMID: 30915711 DOI: 10.1007/s12035-019-1557-7]
Abstract
The activating molecule in Beclin-1-regulated autophagy (Ambra1) is a highly intrinsically disordered protein best known for its role as a mediator in autophagy, by favoring the formation of autophagosomes. Additional studies have revealed that Ambra1 is able to coordinate cell responses to stress conditions such as starvation, and it actively participates in cell proliferation, cytoskeletal modification, apoptosis, mitochondria removal, and cell cycle downregulation. All these functions highlight the importance of Ambra1 in crucial physiological events, including metabolism, cell death, and cell division. Importantly, Ambra1 is also crucial for proper embryonic development, and its complete absence in knock-out animal models leads to severe brain morphology defects. In line with this, it has recently been implicated in neurodevelopmental disorders affecting humans, particularly autism spectrum disorders and schizophrenia. Here, we discuss the recent links between Ambra1 and neurodevelopment, particularly focusing on its role during the maturation of hippocampal parvalbumin interneurons and its importance for maintaining a proper excitation/inhibition balance in the brain.
45. Gao Z, Shi Q, Fukuda T, Li C, Huang Q. An overview of biomimetic robots with animal behaviors. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.12.071]
46. Landi S, Petrucco L, Sicca F, Ratto GM. Transient Cognitive Impairment in Epilepsy. Front Mol Neurosci 2019; 11:458. [PMID: 30666185 PMCID: PMC6330286 DOI: 10.3389/fnmol.2018.00458]
Abstract
An impaired dialog between excitation and inhibition (E/I) is commonly associated with neuropsychiatric disorders such as autism, bipolar disorder and epilepsy. Moderate levels of hyperexcitability can lead to mild alterations of the EEG and are often associated with cognitive deficits even in the absence of overt seizures. Indeed, various testing paradigms have shown degraded performance in the presence of acute or chronic non-ictal epileptiform activity. Evidence from both animal models and the clinic suggests that anomalous activity can cause cognitive deficits by transiently disrupting cortical processing, independently of the underlying etiology of the disease. Here, we review our understanding of the influence of abnormal EEG activity on brain computation in the context of the available clinical data and of genetic and pharmacological animal models.
47. Barrett LF, Finlay BL. Concepts, Goals and the Control of Survival-Related Behaviors. Curr Opin Behav Sci 2018; 24:172-179. [PMID: 31157289 DOI: 10.1016/j.cobeha.2018.10.001]
Abstract
Scientists have long studied the actions that impact basic survival in various domains of life, such as defense, foraging, reproduction, thermoregulation, and so on, as if such actions will reveal the nature of emotion. Each domain of survival came to be characterized by a repertoire of distinct actions, and each action was thought to be caused by a dedicated neural circuit, called a survival circuit. Survival circuits are thought to be triggered by sensory events in the world, quickly producing obligatory, stereotypic reflexes as well as more flexible, deliberate responses. In this paper, we consider recent evidence from behavioral ecology that even so-called "reflexes" are better understood as purposeful, flexible actions that unfold across a range of temporal trajectories. They are highly context-dependent and tailored to the requirements of the situation. We then consider evidence from the neuroscience of motor control that motor actions are assembled by neural populations, not triggered by simple circuits. We end by considering the value of these suggestions for understanding the species-general vs. species-specific contributions to emotion.
48. Rupprecht P, Friedrich RW. Precise Synaptic Balance in the Zebrafish Homolog of Olfactory Cortex. Neuron 2018; 100:669-683.e5. [PMID: 30318416 DOI: 10.1016/j.neuron.2018.09.013]
Abstract
Neuronal computations critically depend on the connectivity rules that govern the convergence of excitatory and inhibitory synaptic signals onto individual neurons. To examine the functional synaptic organization of a distributed memory network, we performed voltage clamp recordings in telencephalic area Dp of adult zebrafish, the homolog of olfactory cortex. In neurons of posterior Dp, odor stimulation evoked large, recurrent excitatory and inhibitory inputs that established a transient state of high conductance and synaptic balance. Excitation and inhibition in individual neurons were co-tuned to different odors and correlated on slow and fast timescales. This precise synaptic balance implies specific connectivity among Dp neurons, despite the absence of an obvious topography. Precise synaptic balance stabilizes activity patterns in different directions of coding space and in time while preserving high bandwidth. The coordinated connectivity of excitatory and inhibitory subnetworks in Dp therefore supports fast recurrent memory operations.
49. Ritz H, Nassar MR, Frank MJ, Shenhav A. A Control Theoretic Model of Adaptive Learning in Dynamic Environments. J Cogn Neurosci 2018; 30:1405-1421. [PMID: 29877769 PMCID: PMC6432773 DOI: 10.1162/jocn_a_01289]
Abstract
To behave adaptively in environments that are noisy and nonstationary, humans and other animals must monitor feedback from their environment and adjust their predictions and actions accordingly. An understudied approach for modeling these adaptive processes comes from the engineering field of control theory, which provides general principles for regulating dynamical systems, often without requiring a generative model. The proportional-integral-derivative (PID) controller is one of the most popular models of industrial process control. The proportional term is analogous to the "delta rule" in psychology, adjusting estimates in proportion to each error in prediction. The integral and derivative terms augment this update to simultaneously improve accuracy and stability. Here, we tested whether the PID algorithm can describe how people sequentially adjust their predictions in response to new information. Across three experiments, we found that the PID controller was an effective model of participants' decisions in noisy, changing environments. In Experiment 1, we reanalyzed a change-point detection experiment and showed that participants' behavior incorporated elements of PID updating. In Experiments 2-3, we developed a task with gradual transitions that we optimized to detect PID-like adjustments. In both experiments, the PID model offered better descriptions of behavioral adjustments than both the classical delta-rule model and its more sophisticated variant, the Kalman filter. We further examined how participants weighted different PID terms in response to salient environmental events, finding that these control terms were modulated by reward, surprise, and outcome entropy. These experiments provide preliminary evidence that adaptive learning in dynamic environments resembles PID control.
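A minimal sketch of PID-style prediction updating of the kind tested above, with illustrative gains rather than the values fitted to participants; note that the proportional term alone reduces to the classical delta rule:

```python
# PID-style updating of a scalar prediction from trial-by-trial outcomes.
# The proportional term alone is the classical delta rule; the integral and
# derivative terms add error history and anticipation. The gains here are
# illustrative, not the values fitted to participants in the study.

def pid_predictions(outcomes, kp=0.4, ki=0.02, kd=0.1):
    estimate, integral, prev_error = 0.0, 0.0, 0.0
    history = []
    for outcome in outcomes:
        error = outcome - estimate        # P: current prediction error
        integral += error                 # I: accumulated error
        derivative = error - prev_error   # D: change in error
        estimate += kp * error + ki * integral + kd * derivative
        prev_error = error
        history.append(estimate)
    return history

# A change-point environment: the hidden mean jumps from 0 to 10 mid-session.
est = pid_predictions([0.0] * 20 + [10.0] * 30)
print(est[19], est[-1] > 8)  # 0.0 True: the estimate tracks the new mean
```

Setting `ki = kd = 0` recovers the pure delta-rule learner, which makes the model comparison in the abstract easy to reproduce on simulated change-point data.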
Affiliation(s)
- Harrison Ritz, Cognitive, Linguistic & Psychological Science, Brown University, Providence, RI
- Matthew R. Nassar, Cognitive, Linguistic & Psychological Science, Brown University, Providence, RI
- Michael J. Frank, Cognitive, Linguistic & Psychological Science, Brown University, Providence, RI; Brown Institute for Brain Science, Brown University, Providence, RI
- Amitai Shenhav, Cognitive, Linguistic & Psychological Science, Brown University, Providence, RI; Brown Institute for Brain Science, Brown University, Providence, RI
50
Zénon A, Solopchuk O, Pezzulo G. An information-theoretic perspective on the costs of cognition. Neuropsychologia 2018; 123:5-18. [PMID: 30268880 DOI: 10.1016/j.neuropsychologia.2018.09.013] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 08/10/2018] [Accepted: 09/19/2018] [Indexed: 01/06/2023]
Abstract
In statistics and machine learning, model accuracy is traded off with complexity, which can be viewed as the amount of information extracted from the data. Here, we discuss how cognitive costs can be expressed in terms of similar information costs, i.e. as a function of the amount of information required to update a person's prior knowledge (or internal model) to effectively solve a task. We then examine the theoretical consequences that ensue from this assumption. This framework naturally explains why some tasks - for example, unfamiliar or dual tasks - are costly, and it permits quantifying these costs using information-theoretic measures. Finally, we discuss the brain implementation of this principle and show that subjective cognitive costs can originate either from local or global capacity limitations on information processing or from an increased rate of metabolic alterations. These views shed light on the potential adaptive value of cost-avoidance mechanisms.
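The information cost of belief updating discussed in this abstract is commonly quantified as the Kullback-Leibler divergence between the posterior and the prior. A minimal sketch over discrete distributions, with illustrative numbers (the example distributions are assumptions, not data from the paper):

```python
import math

def kl_divergence(posterior, prior):
    """KL(posterior || prior) in bits: the amount of information
    needed to update the prior belief into the posterior."""
    return sum(p * math.log2(p / q)
               for p, q in zip(posterior, prior) if p > 0)

# A familiar task barely moves beliefs; an unfamiliar task moves them far,
# so on this account it incurs a larger cognitive cost.
prior      = [0.25, 0.25, 0.25, 0.25]  # flat prior over four hypotheses
familiar   = [0.30, 0.25, 0.25, 0.20]  # small belief update
unfamiliar = [0.85, 0.05, 0.05, 0.05]  # large belief update

cost_familiar = kl_divergence(familiar, prior)
cost_unfamiliar = kl_divergence(unfamiliar, prior)
```

Under this measure a task that leaves the internal model unchanged costs nothing, while one that demands a large revision of prior knowledge is expensive, which matches the paper's account of why unfamiliar or dual tasks feel effortful.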
Affiliation(s)
- Alexandre Zénon, Institut de Neuroscience Cognitive et Intégrative d'Aquitaine, Université de Bordeaux, France; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Oleg Solopchuk, Institut de Neuroscience Cognitive et Intégrative d'Aquitaine, Université de Bordeaux, France; Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Giovanni Pezzulo, Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy