1
Eckmann S, Young EJ, Gjorgjieva J. Synapse-type-specific competitive Hebbian learning forms functional recurrent networks. Proc Natl Acad Sci U S A 2024; 121:e2305326121. PMID: 38870059; PMCID: PMC11194505; DOI: 10.1073/pnas.2305326121.
Abstract
Cortical networks exhibit complex stimulus-response patterns that are based on specific recurrent interactions between neurons. For example, the balance between excitatory and inhibitory currents has been identified as a central component of cortical computations. However, it remains unclear how the required synaptic connectivity can emerge in developing circuits where synapses between excitatory and inhibitory neurons are simultaneously plastic. Using theory and modeling, we propose that a wide range of cortical response properties can arise from a single plasticity paradigm that acts simultaneously at all excitatory and inhibitory connections: Hebbian learning that is stabilized by the synapse-type-specific competition for a limited supply of synaptic resources. In plastic recurrent circuits, this competition enables the formation and decorrelation of inhibition-balanced receptive fields. Networks develop an assembly structure with stronger synaptic connections between similarly tuned excitatory and inhibitory neurons and exhibit response normalization and orientation-specific center-surround suppression, reflecting the stimulus statistics during training. These results demonstrate how neurons can self-organize into functional networks and suggest an essential role for synapse-type-specific competitive learning in the development of cortical circuits.
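The stabilization mechanism described above can be sketched in a few lines: plain Hebbian updates at every connection, with each synapse type then rescaled so that its weights compete for a fixed total of synaptic resource per neuron. All array sizes, rates, and resource totals below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W_E = rng.random((n_out, n_in))   # excitatory weights
W_I = rng.random((n_out, n_in))   # inhibitory weights (stored as positive magnitudes)
W_E_total, W_I_total = 10.0, 5.0  # limited resource per synapse type (assumed values)

def hebbian_competitive_step(W, pre, post, total, eta=0.01):
    """One Hebbian update followed by synapse-type-specific divisive
    normalization: weights of one type compete for a fixed per-neuron total."""
    W = W + eta * np.outer(post, pre)          # Hebbian: co-activity strengthens
    W = np.clip(W, 0.0, None)                  # weights stay non-negative
    W *= total / W.sum(axis=1, keepdims=True)  # competition for limited resources
    return W

pre, post = rng.random(n_in), rng.random(n_out)
W_E = hebbian_competitive_step(W_E, pre, post, W_E_total)
W_I = hebbian_competitive_step(W_I, pre, post, W_I_total)
```

After every update each neuron's excitatory and inhibitory weight budgets are exactly conserved, which is what stabilizes the otherwise runaway Hebbian growth.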
Affiliation(s)
- Samuel Eckmann
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Edward James Young
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt am Main 60438, Germany
- School of Life Sciences, Technical University Munich, Freising 85354, Germany
2
Znamenskiy P, Kim MH, Muir DR, Iacaruso MF, Hofer SB, Mrsic-Flogel TD. Functional specificity of recurrent inhibition in visual cortex. Neuron 2024; 112:991-1000.e8. PMID: 38244539; DOI: 10.1016/j.neuron.2023.12.013.
Abstract
In the neocortex, neural activity is shaped by the interaction of excitatory and inhibitory neurons, defined by the organization of their synaptic connections. Although connections among excitatory pyramidal neurons are sparse and functionally tuned, inhibitory connectivity is thought to be dense and largely unstructured. By measuring in vivo visual responses and synaptic connectivity of parvalbumin-expressing (PV+) inhibitory cells in mouse primary visual cortex, we show that the synaptic weights of their connections to nearby pyramidal neurons are specifically tuned according to the similarity of the cells' responses. Individual PV+ cells strongly inhibit those pyramidal cells that provide them with strong excitation and share their visual selectivity. This structured organization of inhibitory synaptic weights provides a circuit mechanism for tuned inhibition onto pyramidal cells despite dense connectivity, stabilizing activity within feature-specific excitatory ensembles while supporting competition between them.
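A minimal illustration of the reported connectivity rule: dense PV+-to-pyramidal connections whose weights scale with the signal correlation of the two cells' orientation tuning. The tuning-curve shapes and population sizes are assumptions for the sketch, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
thetas = np.linspace(0, np.pi, 8, endpoint=False)   # probe orientations
pref_pyr = rng.uniform(0, np.pi, 30)                # pyramidal preferred orientations
pref_pv = rng.uniform(0, np.pi, 6)                  # PV+ preferred orientations

def tuning(pref):
    # orientation tuning curves (von Mises-like, period pi)
    return np.exp(2.0 * (np.cos(2 * (thetas[None, :] - pref[:, None])) - 1))

R_pyr, R_pv = tuning(pref_pyr), tuning(pref_pv)

# connectivity is dense, but each weight scales with response similarity:
# Pearson correlation between PV and pyramidal tuning curves
Z_pyr = (R_pyr - R_pyr.mean(1, keepdims=True)) / R_pyr.std(1, keepdims=True)
Z_pv = (R_pv - R_pv.mean(1, keepdims=True)) / R_pv.std(1, keepdims=True)
similarity = Z_pv @ Z_pyr.T / len(thetas)           # in [-1, 1]
W_inh = np.clip(similarity, 0, None)                # stronger inhibition onto similar cells
```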
Affiliation(s)
- Petr Znamenskiy
- Specification and Function of Neural Circuits Laboratory, The Francis Crick Institute, 1 Midland Road, London NW1 1AT, UK; Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Mean-Hwan Kim
- Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Dylan R Muir
- Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Sonja B Hofer
- Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
- Thomas D Mrsic-Flogel
- Sainsbury Wellcome Centre, 25 Howland Street, London W1T 4JG, UK; Biozentrum, University of Basel, Klingelbergstrasse 70, 4056 Basel, Switzerland
3
Kühn T, Monasson R. Information content in continuous attractor neural networks is preserved in the presence of moderate disordered background connectivity. Phys Rev E 2023; 108:064301. PMID: 38243526; DOI: 10.1103/physreve.108.064301.
Abstract
Continuous attractor neural networks (CANNs) form an appealing conceptual model for the storage of information in the brain. However, a drawback of CANNs is that they require finely tuned interactions. Here we study the effect of quenched noise in the interactions on the coding of positional information within CANNs. Using the replica method, we compute the Fisher information for a network with position-dependent input and recurrent connections composed of a short-range (in space) component and a disordered component. We find that the loss in positional information is small as long as the disorder strength is not too large, indicating that CANNs have a regime in which the advantageous effects of local connectivity on information storage outweigh the detrimental ones. Furthermore, a substantial part of this information can be extracted with a simple linear readout.
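The quantity at stake can be made concrete with the textbook Fisher information of an independent-Poisson population code with spatial tuning on a ring (the replica calculation for disordered recurrent connectivity is beyond this sketch; all parameters are assumed):

```python
import numpy as np

n = 100
centers = np.linspace(0, 1, n, endpoint=False)   # preferred positions on a ring
sigma, r_max = 0.05, 20.0                        # tuning width and peak rate (assumed)

def rates(x):
    d = (centers - x + 0.5) % 1.0 - 0.5          # circular distance to each center
    return r_max * np.exp(-d**2 / (2 * sigma**2)) + 1e-9

def fisher_info(x, dx=1e-4):
    """Linear Fisher information I(x) = sum_i f_i'(x)^2 / f_i(x)
    for independent Poisson neurons, derivative taken numerically."""
    f = rates(x)
    df = (rates(x + dx) - rates(x - dx)) / (2 * dx)
    return np.sum(df**2 / f)

I0 = fisher_info(0.3)
```

By the translation invariance of the ring code, the information is (numerically) the same at every position, which is what makes a single scalar loss under disorder a meaningful summary.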
Affiliation(s)
- Tobias Kühn
- Laboratoire de Physique de l'Ecole Normale Supérieure, CNRS UMR8023 and PSL Research, Sorbonne Université, Université Paris Cité, F-75005 Paris, France
- Institut de la Vision, Sorbonne Université, CNRS, INSERM, F-75012 Paris, France
- Rémi Monasson
- Laboratoire de Physique de l'Ecole Normale Supérieure, CNRS UMR8023 and PSL Research, Sorbonne Université, Université Paris Cité, F-75005 Paris, France
4
Matsuda K, Shirakami A, Nakajima R, Akutsu T, Shimono M. Whole-Brain Evaluation of Cortical Microconnectomes. eNeuro 2023; 10:ENEURO.0094-23.2023. PMID: 37903612; PMCID: PMC10616907; DOI: 10.1523/eneuro.0094-23.2023.
Abstract
The brain is an organ that functions as a network of many elements connected in a nonuniform manner. The neocortex is the evolutionarily newest part of the brain and is thought to be primarily responsible for the high intelligence of mammals. In the mature mammalian brain, all cortical regions are expected to share some degree of homology, while local circuits vary to achieve the specific functions performed by individual regions. However, few cellular-level studies have examined how the networks within different cortical regions differ. This study aimed to find rules for systematic changes of connectivity (microconnectomes) across 16 groups of cortical regions. We also observed previously unreported trends in basic in vitro parameters, such as firing rate and layer thickness, across brain regions. The results revealed that the frontal group shows unique characteristics, such as dense active neurons, a thick cortex, and strong connections with deeper layers. This suggests that the frontal side of the cortex is inherently capable of driving activity even in isolation, and that frontal nodes provide the driving force generating a global pattern of spontaneous synchronous activity, such as the default mode network. This finding provides a new hypothesis for why disruption in the frontal region has a large impact on mental health.
Affiliation(s)
- Kouki Matsuda
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Arata Shirakami
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Ryota Nakajima
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Tatsuya Akutsu
- Bioinformatics Center, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto 611-0011, Japan
- Masanori Shimono
- Graduate School of Medicine, Kyoto University, 53 Kawaramachi, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita-shi, Osaka 565-0871, Japan
5
Gallinaro JV, Scholl B, Clopath C. Synaptic weights that correlate with presynaptic selectivity increase decoding performance. PLoS Comput Biol 2023; 19:e1011362. PMID: 37549193; PMCID: PMC10434873; DOI: 10.1371/journal.pcbi.1011362.
Abstract
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, has found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus, independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity but are independent of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner comparable to maximum likelihood inference.
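The maximum-likelihood benchmark mentioned at the end has a compact form for an independent-Poisson population: pick the candidate stimulus that maximizes the Poisson log-likelihood of the observed spike counts. The tuning curves below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
stimuli = np.linspace(0, np.pi, 16, endpoint=False)   # candidate orientations
prefs = rng.uniform(0, np.pi, 50)                     # presynaptic preferred orientations

# mean firing rates f[i, s] of neuron i for stimulus s (von Mises-like tuning)
f = 5.0 * np.exp(2.0 * (np.cos(2 * (prefs[:, None] - stimuli[None, :])) - 1)) + 0.1

def ml_decode(counts):
    # Poisson log-likelihood of each candidate stimulus (count-independent terms dropped)
    loglik = counts @ np.log(f) - f.sum(axis=0)
    return int(np.argmax(loglik))

true_s = 5
counts = rng.poisson(f[:, true_s])   # one noisy population response
s_hat = ml_decode(counts)
```

Decoding the noiseless expected counts recovers the true stimulus exactly (a consequence of the Gibbs inequality); single noisy trials recover it with high probability.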
Affiliation(s)
- Júlia V. Gallinaro
- Bioengineering Department, Imperial College London, London, United Kingdom
- Benjamin Scholl
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
6
Li AA, Wang F, Wu S, Zhang X. Emergence of probabilistic representation in the neural network of primary visual cortex. iScience 2022; 25:103975. PMID: 35310336; PMCID: PMC8924637; DOI: 10.1016/j.isci.2022.103975.
Abstract
During the early development of the mammalian visual system, the distribution of neuronal preferred orientations in the primary visual cortex (V1) gradually shifts to match major orientation features of the environment, achieving its optimal representation. By combining computational modeling and electrophysiological recording, we provide a circuit plasticity mechanism that underlies the developmental emergence of such matched representation in the visual cortical network. Specifically, in a canonical circuit of densely interconnected pyramidal cells and inhibitory parvalbumin-expressing (PV+) fast-spiking interneurons in V1 layer 2/3, our model successfully simulates the experimental observations and further reveals that nonuniform inhibition plays a key role in shaping the network representation through spike timing-dependent plasticity. The experimental results suggest that PV+ interneurons in V1 are capable of providing nonuniform inhibition shortly after vision onset. Our study elucidates a circuit mechanism for the acquisition of prior knowledge of the environment for optimal inference in sensory neural systems.
- Computational and experimental methods are combined to study representation in mouse V1
- Nonuniform inhibition plays a key role in shaping the network representation
- PV+ interneurons provide nonuniform inhibition shortly after vision onset
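The spike timing-dependent plasticity invoked here is, in its standard pair-based form, an exponential kernel over the pre-post spike-time difference; the constants below are typical illustrative values, not those fitted in the paper.

```python
import numpy as np

# pair-based STDP kernel: pre-before-post potentiates, post-before-pre depresses
A_plus, A_minus = 0.01, 0.012      # amplitudes (slight LTD bias, assumed)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (assumed)

def stdp(dt):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)    # causal pairing: LTP
    return -A_minus * np.exp(dt / tau_minus)      # anti-causal pairing: LTD
```

Summing this kernel over all spike pairs of a pre/post train pair gives the net weight change used in such network models.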
Affiliation(s)
- Ang A Li
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China
- Fengchao Wang
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Si Wu
- Academy for Advanced Interdisciplinary Studies, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Beijing, China
- School of Psychology and Cognitive Sciences, Peking University, Beijing, China
- Xiaohui Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
7
Larisch R, Gönner L, Teichmann M, Hamker FH. Sensory coding and contrast invariance emerge from the control of plastic inhibition over emergent selectivity. PLoS Comput Biol 2021; 17:e1009566. PMID: 34843455; PMCID: PMC8629393; DOI: 10.1371/journal.pcbi.1009566.
Abstract
Visual stimuli are represented by a highly efficient code in the primary visual cortex, but the development of this code is still unclear. Two distinct factors control coding efficiency: representational efficiency, which is determined by neuronal tuning diversity, and metabolic efficiency, which is influenced by neuronal gain. How these determinants of coding efficiency are shaped during development, supported by excitatory and inhibitory plasticity, is only partially understood. We investigate a fully plastic spiking network of the primary visual cortex, building on phenomenological plasticity rules. Our results suggest that inhibitory plasticity is key to the emergence of tuning diversity and accurate input encoding. We show that inhibitory feedback (random and specific) increases metabolic efficiency by implementing a gain control mechanism. Interestingly, this leads to the spontaneous emergence of contrast-invariant tuning curves. Our findings highlight (1) that interneuron plasticity is key to the development of tuning diversity and (2) that efficient sensory representations are an emergent property of the resulting network. Synaptic plasticity is crucial for the development of efficient input representations in the different sensory cortices, such as the primary visual cortex. Efficient visual representation is determined by two factors: representational efficiency, i.e., how many different input features can be represented, and metabolic efficiency, i.e., how many spikes are required to represent a specific feature. Previous research has pointed out the importance of plasticity at excitatory synapses for achieving high representational efficiency, and of feedback inhibition as a gain-control mechanism for controlling metabolic efficiency. However, it is only partially understood how the influence of inhibitory plasticity on excitatory plasticity can lead to an efficient representation.
Using a spiking neural network, we show that plasticity at feed-forward and feedback inhibitory synapses is necessary for the emergence of well-distributed neuronal selectivity to improve representational efficiency. Further, the emergent balance between excitatory and inhibitory currents improves the metabolic efficiency, and leads to contrast-invariant tuning as an inherent network property. Extending previous work, our simulation results highlight the importance of plasticity at inhibitory synapses.
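A common form of the inhibitory plasticity that implements such gain control is a homeostatic rule that potentiates inhibition when the postsynaptic rate exceeds a target and depresses it otherwise. The sketch below uses a toy rate neuron with assumed constants, not the paper's spiking network, to show the rule driving the output rate to its target.

```python
import numpy as np

rng = np.random.default_rng(3)
eta, rho0 = 1e-3, 5.0               # learning rate and target rate (assumed)
w_inh = rng.random(40) * 0.1        # inhibitory weights
pre_rates = rng.random(40) * 10.0   # presynaptic inhibitory rates

def post_rate(w_inh, exc_drive=30.0):
    # rectified rate: fixed excitation minus plastic inhibition
    return max(exc_drive - w_inh @ pre_rates, 0.0)

# homeostatic inhibitory plasticity: potentiate inhibition when the
# postsynaptic neuron fires above target, depress when below
for _ in range(2000):
    r = post_rate(w_inh)
    w_inh = np.clip(w_inh + eta * pre_rates * (r - rho0), 0.0, None)
```

The fixed point is the target rate, so inhibition automatically tracks excitation, the gain-control balance the abstract describes.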
Affiliation(s)
- René Larisch
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Lorenz Gönner
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Faculty of Psychology, Lifespan Developmental Neuroscience, TU Dresden, Dresden, Germany
- Michael Teichmann
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Fred H. Hamker
- Department of Computer Science, Artificial Intelligence, TU Chemnitz, Chemnitz, Germany
- Bernstein Center Computational Neuroscience, Berlin, Germany
8
Sadeh S, Clopath C. Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks. Sci Adv 2021; 7:eabg8411. PMID: 34731002; PMCID: PMC8565910; DOI: 10.1126/sciadv.abg8411.
Abstract
Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of these assemblies feasible, yet how various parameters of induction can be optimized is not clear. Here, we studied this question in large-scale cortical network models with excitatory-inhibitory balance. We found that the background network in which assemblies are embedded can strongly modulate their dynamics and formation. Networks with dominant excitatory interactions enabled a fast formation of assemblies, but this was accompanied by recruitment of other non-perturbed neurons, leading to some degree of nonspecific induction. On the other hand, networks with strong excitatory-inhibitory interactions ensured that the formation of assemblies remained constrained to the perturbed neurons, but slowed down the induction. Our results suggest that these two regimes can be suitable for computational and cognitive tasks with different trade-offs between speed and specificity.
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London SW7 2AZ, UK
9
Crodelle J, McLaughlin DW. Modeling the role of gap junctions between excitatory neurons in the developing visual cortex. PLoS Comput Biol 2021; 17:e1007915. PMID: 34228707; PMCID: PMC8284639; DOI: 10.1371/journal.pcbi.1007915.
Abstract
Recent experiments in the developing mammalian visual cortex have revealed that gap junctions couple excitatory cells and potentially influence the formation of chemical synapses. In particular, cells that were coupled by a gap junction during development tend to share an orientation preference and are preferentially coupled by a chemical synapse in the adult cortex, a property that is diminished when gap junctions are blocked. In this work, we construct a simplified model of the developing mouse visual cortex including spike-timing-dependent plasticity of both the feedforward synaptic inputs and recurrent cortical synapses. We use this model to show that synchrony among gap-junction-coupled cells underlies their preference to form strong recurrent synapses and develop similar orientation preference; this effect decreases with an increase in coupling density. Additionally, we demonstrate that gap-junction coupling works, together with the relative timing of synaptic development of the feedforward and recurrent synapses, to determine the resulting cortical map of orientation preference. Gap junctions, or sites of direct electrical connections between neurons, have a significant presence in the cortex, both during development and in adulthood. Their primary function during either of these periods, however, is still poorly understood. In the adult cortex, gap junctions between local, inhibitory neurons have been shown to promote synchronous firing, a network characteristic thought to be important for learning, attention, and memory. During development, gap junctions between excitatory, pyramidal cells, have been conjectured to play a role in synaptic plasticity and the formation of cortical circuits. 
In the visual cortex, where neurons exhibit tuned responses to properties of visual input such as orientation and direction, recent experiments show that excitatory cells are coupled by gap junctions during the first postnatal week and that this coupling is replaced by chemical synapses during the second week. In this work, we explore the possible contribution of gap-junction coupling during development to the formation of chemical synapses onto the visual cortex from the thalamus and between cortical cells within the visual cortex. Specifically, using a mathematical model of the visual cortex during development, we identify the response properties of gap-junction-coupled cells and their influence on the formation of the cortical map of orientation preference.
Affiliation(s)
- Jennifer Crodelle
- Middlebury College, Middlebury, Vermont, United States of America
- Courant Institute of Mathematical Sciences, NYU, New York, New York, United States of America
- David W. McLaughlin
- Courant Institute of Mathematical Sciences, NYU, New York, New York, United States of America
- Center for Neural Science, NYU, New York, New York, United States of America
- Neuroscience Institute of NYU Langone Health, New York, New York, United States of America
- New York University Shanghai, Shanghai, China
10
Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021; 15:609147. PMID: 34177505; PMCID: PMC8222618; DOI: 10.3389/fninf.2021.609147.
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of membrane potentials to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze their advantages in terms of memory and computation. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving provide guidelines for the design of learning rules, in order to make them practically usable in large-scale networks.
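The bookkeeping idea behind archiving membrane potentials for event-based synapse updates can be sketched as follows; this is a toy illustration of the scheme, not the NEST implementation.

```python
from collections import deque

class VoltageArchive:
    """The postsynaptic neuron appends one membrane-potential sample per
    time step; each synapse reads, at its (rare) presynaptic spike events,
    only the samples accumulated since its previous event."""
    def __init__(self):
        self.samples = deque()   # (t, V_m) pairs
        self.t = 0

    def record(self, v):
        self.samples.append((self.t, v))
        self.t += 1

    def read_since(self, t_last):
        # slice of the archived trace a synapse consumes at an event
        return [(t, v) for (t, v) in self.samples if t >= t_last]

archive = VoltageArchive()
for step in range(10):
    archive.record(-65.0 + step)   # stand-in V_m trace

# a synapse whose last presynaptic spike was at t=7 consumes 3 samples
chunk = archive.read_since(7)
```

A real implementation would additionally prune samples already consumed by all synapses (and, as the paper discusses, compress or subsample the trace); the point here is that the synapse update itself stays event-driven.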
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
11
Berberian N, Ross M, Chartier S. Embodied working memory during ongoing input streams. PLoS One 2021; 16:e0244822. PMID: 33400724; PMCID: PMC7785253; DOI: 10.1371/journal.pone.0244822.
Abstract
Sensory stimuli endow animals with the ability to generate an internal representation. This representation can be maintained for a certain duration in the absence of previously elicited inputs. The reliance on an internal representation rather than purely on the basis of external stimuli is a hallmark feature of higher-order functions such as working memory. Patterns of neural activity produced in response to sensory inputs can continue long after the disappearance of previous inputs. Experimental and theoretical studies have largely invested in understanding how animals faithfully maintain sensory representations during ongoing reverberations of neural activity. However, these studies have focused on preassigned protocols of stimulus presentation, leaving out by default the possibility of exploring how the content of working memory interacts with ongoing input streams. Here, we study working memory using a network of spiking neurons with dynamic synapses subject to short-term and long-term synaptic plasticity. The formal model is embodied in a physical robot as a companion approach under which neuronal activity is directly linked to motor output. The artificial agent is used as a methodological tool for studying the formation of working memory capacity. To this end, we devise a keyboard listening framework to delineate the context under which working memory content is (1) refined, (2) overwritten or (3) resisted by ongoing new input streams. Ultimately, this study takes a neurorobotic perspective to resurface the long-standing implication of working memory in flexible cognition.
Affiliation(s)
- Nareg Berberian
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Matt Ross
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
- Sylvain Chartier
- Laboratory for Computational Neurodynamics and Cognition, School of Psychology, University of Ottawa, Ottawa, Ontario, Canada
12
Sadeh S, Clopath C. Inhibitory stabilization and cortical computation. Nat Rev Neurosci 2020; 22:21-37. PMID: 33177630; DOI: 10.1038/s41583-020-00390-z.
Abstract
Neuronal networks with strong recurrent connectivity provide the brain with a powerful means to perform complex computational tasks. However, high-gain excitatory networks are susceptible to instability, which can lead to runaway activity, as manifested in pathological regimes such as epilepsy. Inhibitory stabilization offers a dynamic, fast and flexible compensatory mechanism to balance otherwise unstable networks, thus enabling the brain to operate in its most efficient regimes. Here we review recent experimental evidence for the presence of such inhibition-stabilized dynamics in the brain and discuss their consequences for cortical computation. We show how the study of inhibition-stabilized networks in the brain has been facilitated by recent advances in the technological toolbox and perturbative techniques, as well as a concomitant development of biologically realistic computational models. By outlining future avenues, we suggest that inhibitory stabilization can offer an exemplary case of how experimental neuroscience can progress in tandem with technology and theory to advance our understanding of the brain.
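The signature of an inhibition-stabilized network, the paradoxical effect, can be reproduced in a two-population linear rate model; the weight values below are illustrative, not taken from the review.

```python
import numpy as np

# 2-population linear rate model: tau dr/dt = -r + W r + h.
# Strong recurrent excitation (W_EE > 1) makes the E subnetwork unstable
# on its own; feedback inhibition stabilizes the full circuit (an ISN).
W = np.array([[2.0, -1.5],    # E <- E, E <- I
              [2.0, -0.5]])   # I <- E, I <- I
A = np.eye(2) - W
assert np.all(np.linalg.eigvals(A).real > 0)   # full circuit is stable

def steady_state(h):
    # fixed point of the dynamics: r = (I - W)^{-1} h
    return np.linalg.solve(A, h)

r0 = steady_state(np.array([2.0, 1.0]))
r1 = steady_state(np.array([2.0, 1.3]))   # extra drive to the inhibitory population

# paradoxical effect: stimulating I *lowers* the inhibitory steady-state rate
assert r1[1] < r0[1]
```

With these weights the baseline fixed point is r = (1, 2); adding drive to I moves it to (0.7, 1.8), so both rates drop, the perturbation signature used experimentally to identify inhibition-stabilized dynamics.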
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London, UK
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, UK
13
Iyer R, Hu B, Mihalas S. Contextual Integration in Cortical and Convolutional Neural Networks. Front Comput Neurosci 2020; 14:31. PMID: 32390818; PMCID: PMC7192314; DOI: 10.3389/fncom.2020.00031.
Abstract
It has been suggested that neurons can represent sensory input using probability distributions and that neural circuits can perform probabilistic inference. Lateral connections between neurons have been shown to have non-random connectivity and to modulate responses to stimuli within the classical receptive field. Large-scale efforts mapping local cortical connectivity describe cell-type-specific connections from inhibitory neurons and like-to-like connectivity between excitatory neurons. To relate the observed connectivity to computations, we propose a neuronal network model that approximates Bayesian inference of the probability of different features being present at different image locations. We show that the lateral connections between excitatory neurons in a circuit implementing contextual integration in this framework should depend on correlations between unit activities, minus a global inhibitory drive. The model naturally suggests the need for two types of inhibitory gates (normalization, surround inhibition). First, using natural scene statistics and classical receptive fields corresponding to simple cells parameterized with data from mouse primary visual cortex, we show that the predicted connectivity qualitatively matches that measured in mouse cortex: neurons with similar orientation tuning have stronger connectivity, and both excitatory and inhibitory connectivity have a modest spatial extent, comparable to that observed in mouse visual cortex. We incorporate lateral connections learned using this model into convolutional neural networks. Features are defined by supervised learning on the task, and the lateral connections provide an unsupervised learning of feature context in multiple layers.
Since the lateral connections provide contextual information when the feedforward input is locally corrupted, we show that incorporating such lateral connections into convolutional neural networks makes them more robust to noise and leads to better performance on noisy versions of the MNIST dataset. Decomposing the predicted lateral connectivity matrices into low-rank and sparse components introduces additional cell types into these networks. We explore effects of cell-type-specific perturbations on network computation. Our framework can potentially be applied to networks trained on other tasks, with the learned lateral connections aiding computations implemented by feedforward connections when the input is unreliable, demonstrating the potential usefulness of combining supervised and unsupervised learning techniques in real-world vision tasks.
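The core connectivity prediction of this entry (lateral excitatory weights proportional to activity correlations minus a global inhibitory drive) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the random activity below is only a stand-in for simple-cell responses to natural images, and the sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 5000, 8                 # time points, number of units (illustrative)
acts = rng.random((T, N))      # placeholder for unit activities on natural scenes

C = np.corrcoef(acts.T)        # pairwise activity correlations
W_lat = C - C.mean()           # correlations minus a global inhibitory term
np.fill_diagonal(W_lat, 0.0)   # no self-connections
```

With real simple-cell responses in place of the random placeholder, such a `W_lat` would be strongest between similarly tuned units, matching the like-to-like connectivity described above.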
Affiliation(s)
- Ramakrishnan Iyer
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
- Brian Hu
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
- Stefan Mihalas
- Modeling and Theory, Allen Institute for Brain Science, Seattle, WA, United States
14
Sederberg A, Nemenman I. Randomly connected networks generate emergent selectivity and predict decoding properties of large populations of neurons. PLoS Comput Biol 2020; 16:e1007875. [PMID: 32379751 PMCID: PMC7237045 DOI: 10.1371/journal.pcbi.1007875] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 01/30/2020] [Revised: 05/19/2020] [Accepted: 04/14/2020] [Indexed: 01/12/2023] Open
Abstract
Modern recording methods enable sampling of thousands of neurons during the performance of behavioral tasks, raising the question of how recorded activity relates to theoretical models. In the context of decision making, functional connectivity between choice-selective cortical neurons was recently reported. The straightforward interpretation of these data suggests the existence of selective pools of inhibitory and excitatory neurons. Computationally investigating an alternative mechanism for these experimental observations, we find that a randomly connected network of excitatory and inhibitory neurons generates single-cell selectivity, patterns of pairwise correlations, and the same ability of excitatory and inhibitory populations to predict choice, as in experimental observations. Further, we predict that, for this task, there are no anatomically defined subpopulations of neurons representing choice, and that choice preference of a particular neuron changes with the details of the task. We suggest that distributed stimulus selectivity and functional organization in population codes could be emergent properties of randomly connected networks.
Affiliation(s)
- Audrey Sederberg
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America
- Ilya Nemenman
- Department of Physics, Emory University, Atlanta, Georgia, United States of America
- Initiative in Theory and Modeling of Living Systems, Emory University, Atlanta, Georgia, United States of America
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
15
Design Thinking in Nursing Education to Improve Care for Lesbian, Gay, Bisexual, Transgender, Queer, Intersex and Two-Spirit People. Creat Nurs 2020; 26:118-124. [PMID: 32321796 DOI: 10.1891/crnr-d-20-00003] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Indexed: 11/25/2022]
Abstract
Design thinking methodology is a collaborative strategy with the potential to create innovations. Design thinking is being used increasingly in health care. Design jams are interdisciplinary events that bring together experts and community members to collaborate on creative solutions to health-care problems. This article describes the design thinking process and includes reflection on the authors' participation in a design jam event aimed at addressing the knowledge-to-action gap that exists in health care for lesbian, gay, bisexual, transgender, queer, intersex, and Two-Spirit (LGBTQI2S) people.
16
Sadeh S, Clopath C. Patterned perturbation of inhibition can reveal the dynamical structure of neural processing. eLife 2020; 9:e52757. [PMID: 32073400 PMCID: PMC7180056 DOI: 10.7554/elife.52757] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Received: 10/15/2019] [Accepted: 02/19/2020] [Indexed: 12/18/2022] Open
Abstract
Perturbation of neuronal activity is key to understanding the brain's functional properties; however, intervention studies typically perturb neurons in a nonspecific manner. Recent optogenetics techniques have enabled patterned perturbations, in which specific patterns of activity can be evoked in identified target neurons to reveal more specific cortical function. Here, we argue that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, emerging in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex. We propose a specific perturbative signature of these networks and investigate how this can be measured under different experimental conditions. Functionally, rapid spontaneous transitions between selective ensembles of neurons emerge in such networks, consistent with experimental results. Our study outlines the dynamical and functional properties of feature-specific inhibition-stabilized networks, and suggests experimental protocols that can be used to detect them in the intact cortex.
Affiliation(s)
- Sadra Sadeh
- Bioengineering Department, Imperial College London, London, United Kingdom
- Claudia Clopath
- Bioengineering Department, Imperial College London, London, United Kingdom
17
Saglietti L, Gerace F, Ingrosso A, Baldassi C, Zecchina R. From statistical inference to a differential learning rule for stochastic neural networks. Interface Focus 2018; 8:20180033. [PMID: 30443331 PMCID: PMC6227809 DOI: 10.1098/rsfs.2018.0033] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Accepted: 09/11/2018] [Indexed: 11/13/2022] Open
Abstract
Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale’s principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks.
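A minimal sketch of the delayed-correlations-matching idea described above: the weight update moves the model's one-step-delayed pairwise correlations toward those of the data. The binary activity below is a random placeholder; in the paper the model statistics come from the stochastic network's own dynamics, and the learning rate is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 2000                     # units, time steps (illustrative)

def delayed_corr(s):
    """Time-averaged one-step-delayed pairwise correlations <s_i(t+1) s_j(t)>."""
    return s[1:].T @ s[:-1] / (len(s) - 1)

data = (rng.random((T, N)) < 0.3).astype(float)   # clamped "data" activity (placeholder)
model = (rng.random((T, N)) < 0.3).astype(float)  # free-running "model" activity (placeholder)

eta = 0.1                                         # learning rate (assumed)
dW = eta * (delayed_corr(data) - delayed_corr(model))  # matching update, one per synapse
```

The update is local (each entry of `dW` uses only the delayed activities of its own pre- and post-synaptic units), which is the biological-feasibility point the abstract emphasizes.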
Affiliation(s)
- Luca Saglietti
- Microsoft Research New England, Cambridge, MA, USA; Italian Institute for Genomic Medicine, Torino, Italy
- Federica Gerace
- Italian Institute for Genomic Medicine, Torino, Italy; Politecnico di Torino, DISAT, Torino, Italy
- Carlo Baldassi
- Italian Institute for Genomic Medicine, Torino, Italy; Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy; Istituto Nazionale di Fisica Nucleare, Torino, Italy
- Riccardo Zecchina
- Italian Institute for Genomic Medicine, Torino, Italy; Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy; International Centre for Theoretical Physics, Trieste, Italy
18
Triplett MA, Avitan L, Goodhill GJ. Emergence of spontaneous assembly activity in developing neural networks without afferent input. PLoS Comput Biol 2018; 14:e1006421. [PMID: 30265665 PMCID: PMC6161857 DOI: 10.1371/journal.pcbi.1006421] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Received: 04/17/2018] [Accepted: 08/07/2018] [Indexed: 02/04/2023] Open
Abstract
Spontaneous activity is a fundamental characteristic of the developing nervous system. Intriguingly, it often takes the form of multiple structured assemblies of neurons. Such assemblies can form even in the absence of afferent input, for instance in the zebrafish optic tectum after bilateral enucleation early in life. While the development of neural assemblies based on structured afferent input has been theoretically well-studied, it is less clear how they could arise in systems without afferent input. Here we show that a recurrent network of binary threshold neurons with initially random weights can form neural assemblies based on a simple Hebbian learning rule. Over development the network becomes increasingly modular while being driven by initially unstructured spontaneous activity, leading to the emergence of neural assemblies. Surprisingly, the set of neurons making up each assembly then continues to evolve, despite the number of assemblies remaining roughly constant. In the mature network assembly activity builds over several timesteps before the activation of the full assembly, as recently observed in calcium-imaging experiments. Our results show that Hebbian learning is sufficient to explain the emergence of highly structured patterns of neural activity in the absence of structured input.
Affiliation(s)
- Marcus A. Triplett
- Queensland Brain Institute, University of Queensland, St Lucia, Queensland, Australia
- School of Mathematics and Physics, University of Queensland, St Lucia, Queensland, Australia
- Lilach Avitan
- Queensland Brain Institute, University of Queensland, St Lucia, Queensland, Australia
- Geoffrey J. Goodhill
- Queensland Brain Institute, University of Queensland, St Lucia, Queensland, Australia
- School of Mathematics and Physics, University of Queensland, St Lucia, Queensland, Australia
19
Gallinaro JV, Rotter S. Associative properties of structural plasticity based on firing rate homeostasis in recurrent neuronal networks. Sci Rep 2018; 8:3754. [PMID: 29491474 PMCID: PMC5830542 DOI: 10.1038/s41598-018-22077-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Received: 10/16/2017] [Accepted: 02/16/2018] [Indexed: 11/18/2022] Open
Abstract
Correlation-based Hebbian plasticity is thought to shape neuronal connectivity during development and learning, whereas homeostatic plasticity would stabilize network activity. Here we investigate another, new aspect of this dichotomy: can Hebbian associative properties also emerge as a network effect from a plasticity rule based on homeostatic principles at the neuronal level? To address this question, we simulated a recurrent network of leaky integrate-and-fire neurons in which excitatory connections are subject to a structural plasticity rule based on firing rate homeostasis. We show that a subgroup of neurons develops stronger within-group connectivity as a consequence of receiving stronger external stimulation. In an experimentally well-documented scenario, we show that feature-specific connectivity, similar to what has been observed in rodent visual cortex, can emerge from such a plasticity rule. The experience-dependent structural changes triggered by stimulation are long-lasting and decay only slowly when the neurons are exposed again to unspecific external inputs.
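The homeostatic core of such a rule can be caricatured in a few lines: each neuron grows incoming synapses when it fires below a target rate and prunes them when it fires above. The linear rate-to-input mapping and the gains below are illustrative assumptions, not the paper's spiking model:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40
target = 5.0                          # target firing rate in Hz (assumed)
rates = rng.uniform(0.0, 10.0, N)     # stand-in for measured firing rates
n_syn = np.full(N, 10.0)              # incoming synapse counts (illustrative)

for _ in range(500):
    n_syn += 0.1 * (target - rates)   # grow below target, prune above
    n_syn = n_syn.clip(min=0.0)       # synapse counts cannot go negative
    rates = 0.5 * n_syn               # toy mapping: rate proportional to input count
```

Every neuron converges to the set point under this negative feedback. The associative effect in the full model arises because co-stimulated neurons grow and retract elements at the same times, so new synapses preferentially form within the stimulated group.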
Affiliation(s)
- Júlia V Gallinaro
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
- Stefan Rotter
- Bernstein Center Freiburg & Faculty of Biology, University of Freiburg, Freiburg im Breisgau, Germany
20
Muir DR, Molina-Luna P, Roth MM, Helmchen F, Kampa BM. Specific excitatory connectivity for feature integration in mouse primary visual cortex. PLoS Comput Biol 2017; 13:e1005888. [PMID: 29240769 PMCID: PMC5746254 DOI: 10.1371/journal.pcbi.1005888] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Received: 01/31/2017] [Revised: 12/28/2017] [Accepted: 11/23/2017] [Indexed: 11/21/2022] Open
Abstract
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a 'like-to-like' scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a 'feature binding' scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. 
Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1.
Affiliation(s)
- Dylan R. Muir
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Patricia Molina-Luna
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Morgane M. Roth
- Biozentrum, University of Basel, Basel, Switzerland
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Fritjof Helmchen
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Björn M. Kampa
- Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich, Switzerland
- Department of Neurophysiology, Institute of Biology 2, RWTH Aachen University, Aachen, Germany
- JARA-BRAIN, Aachen, Germany
21
Richter LMA, Gjorgjieva J. Understanding neural circuit development through theory and models. Curr Opin Neurobiol 2017; 46:39-47. [DOI: 10.1016/j.conb.2017.07.004] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Received: 03/25/2017] [Revised: 07/07/2017] [Accepted: 07/10/2017] [Indexed: 11/25/2022]
22
Borges RR, Borges FS, Lameu EL, Batista AM, Iarosz KC, Caldas IL, Antonopoulos CG, Baptista MS. Spike timing-dependent plasticity induces non-trivial topology in the brain. Neural Netw 2017; 88:58-64. [PMID: 28189840 DOI: 10.1016/j.neunet.2017.01.010] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 07/14/2016] [Revised: 01/14/2017] [Accepted: 01/24/2017] [Indexed: 10/20/2022]
Abstract
We study the capacity of Hodgkin-Huxley neurons in a network to change their connections and behavior temporarily or permanently, through so-called spike timing-dependent plasticity (STDP), as a function of their synchronous behavior. We consider STDP of excitatory and inhibitory synapses driven by Hebbian rules. We show that the final state of networks evolved by STDP depends on the initial network configuration. Specifically, an initial all-to-all topology evolves to a complex topology. Moreover, external perturbations can induce the co-existence of clusters: those whose neurons are synchronous and those whose neurons are desynchronous. This work reveals that STDP based on Hebbian rules leads to a change in the direction of the synapses between high- and low-frequency neurons; therefore, Hebbian learning can be explained in terms of preferential attachment between these two diverse communities of neurons, those with low-frequency spiking and those with higher-frequency spiking.
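For reference, the standard additive Hebbian STDP window used in models of this kind; the amplitudes and time constant below are typical textbook values, not the ones used in this paper:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a pre/post spike pair; dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    if dt < 0:
        return -a_minus * np.exp(dt / tau)
    return 0.0
```

Under such a window, a higher-frequency neuron that reliably fires just before a lower-frequency one strengthens that directed synapse while the reverse direction weakens, which is the change in synapse direction between the two communities described above.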
Affiliation(s)
- R R Borges
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Departamento de Matemática, Universidade Tecnológica Federal do Paraná, 86812-460, Apucarana, PR, Brazil
- F S Borges
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil
- E L Lameu
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil
- A M Batista
- Pós-Graduação em Ciências, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Departamento de Matemática e Estatística, Universidade Estadual de Ponta Grossa, Ponta Grossa, PR, Brazil; Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil.
- K C Iarosz
- Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
- I L Caldas
- Instituto de Física, Universidade de São Paulo, São Paulo, SP, Brazil
- C G Antonopoulos
- Department of Mathematical Sciences, University of Essex, Wivenhoe Park, UK
- M S Baptista
- Institute for Complex Systems and Mathematical Biology, University of Aberdeen, SUPA, Aberdeen, UK
23
Memory replay in balanced recurrent networks. PLoS Comput Biol 2017; 13:e1005359. [PMID: 28135266 PMCID: PMC5305273 DOI: 10.1371/journal.pcbi.1005359] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Received: 05/30/2016] [Revised: 02/13/2017] [Accepted: 01/09/2017] [Indexed: 11/19/2022] Open
Abstract
Complex patterns of neural activity appear during up-states in the neocortex and sharp waves in the hippocampus, including sequences that resemble those during prior behavioral experience. The mechanisms underlying this replay are not well understood. How can small synaptic footprints engraved by experience control large-scale network activity during memory retrieval and consolidation? We hypothesize that sparse and weak synaptic connectivity between Hebbian assemblies is boosted by pre-existing recurrent connectivity within them. To investigate this idea, we connect sequences of assemblies in randomly connected spiking neuronal networks with a balance of excitation and inhibition. Simulations and analytical calculations show that recurrent connections within assemblies allow for a fast amplification of signals that indeed reduces the required number of inter-assembly connections. Replay can be evoked by small sensory-like cues or emerge spontaneously from activity fluctuations. Global alterations of neuronal excitability, potentially neuromodulatory, can switch between network states that favor retrieval and consolidation.
24
Sweeney Y, Clopath C. Emergent spatial synaptic structure from diffusive plasticity. Eur J Neurosci 2016; 45:1057-1067. [DOI: 10.1111/ejn.13279] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Received: 02/11/2016] [Revised: 05/04/2016] [Accepted: 05/13/2016] [Indexed: 11/29/2022]
Affiliation(s)
- Yann Sweeney
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
- Claudia Clopath
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK