1. Vinograd A, Nair A, Kim JH, Linderman SW, Anderson DJ. Causal evidence of a line attractor encoding an affective state. Nature 2024; 634:910-918. PMID: 39142337; PMCID: PMC11499281; DOI: 10.1038/s41586-024-07915-x.
Abstract
Continuous attractors are an emergent property of neural population dynamics that have been hypothesized to encode continuous variables such as head direction and eye position [1-4]. In mammals, direct evidence of neural implementation of a continuous attractor has been hindered by the challenge of targeting perturbations to specific neurons within contributing ensembles [2,3]. Dynamical systems modelling has revealed that neurons in the hypothalamus exhibit approximate line-attractor dynamics in male mice during aggressive encounters [5]. We have previously hypothesized that these dynamics may encode the variable intensity and persistence of an aggressive internal state. Here we report that these neurons also showed line-attractor dynamics in head-fixed mice observing aggression [6]. This allowed us to identify and manipulate line-attractor-contributing neurons using two-photon calcium imaging and holographic optogenetic perturbations. On-manifold perturbations yielded integration of optogenetic stimulation pulses and persistent activity that drove the system along the line attractor, while transient off-manifold perturbations were followed by rapid relaxation back into the attractor. Furthermore, single-cell stimulation and imaging revealed selective functional connectivity among attractor-contributing neurons. Notably, individual differences among mice in line-attractor stability were correlated with the degree of functional connectivity among attractor-contributing neurons. Mechanistic recurrent neural network modelling indicated that dense subnetwork connectivity and slow neurotransmission [7] best recapitulate our empirical findings. Our work bridges circuit and manifold levels [3], providing causal evidence of continuous attractor dynamics encoding an affective internal state in the mammalian hypothalamus.
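The dynamical picture described above can be illustrated in a few lines of code. Below is a minimal sketch (our own construction, not the authors' model or analysis code) of a linear system with one near-zero eigenvalue: input pulses along the slow direction are integrated and persist, while an off-manifold kick relaxes quickly back to the attractor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 10, 0.01, 2000

# Build A = Q diag(eigs) Q^{-1} with one slow (~0) eigenvalue: the
# corresponding eigenvector is the "line attractor" direction.
eigs = np.full(n, -5.0)
eigs[0] = -0.01
Q = rng.standard_normal((n, n))
A = Q @ np.diag(eigs) @ np.linalg.inv(Q)
on_dir = Q[:, 0] / np.linalg.norm(Q[:, 0])   # on-manifold direction
off_dir = Q[:, 1] / np.linalg.norm(Q[:, 1])  # one off-manifold direction
left = np.linalg.inv(Q)[0]                   # reads out the slow mode

x = np.zeros(n)
coord = np.zeros(T)
for t in range(T):
    u = np.zeros(n)
    if t % 400 == 100:        # periodic on-manifold input pulses
        u += 50.0 * on_dir
    if t == 1500:             # a single off-manifold kick
        u += 50.0 * off_dir
    x += dt * (A @ x + u)
    coord[t] = left @ x       # position along the attractor

# The attractor coordinate steps up at each pulse and persists
# (integration); the off-manifold kick decays and barely moves it.
print(coord[::200].round(2))
```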
Affiliation(s)
- Amit Vinograd: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Aditya Nair: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Joseph H Kim: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, CA, USA
- Scott W Linderman: Department of Statistics, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- David J Anderson: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA
2. Gillett M, Brunel N. Dynamic control of sequential retrieval speed in networks with heterogeneous learning rules. eLife 2024; 12:RP88805. PMID: 39197099; PMCID: PMC11357343; DOI: 10.7554/elife.88805.
Abstract
Temporal rescaling of sequential neural activity has been observed in multiple brain areas during behaviors involving time estimation and motor execution at variable speeds. Temporally asymmetric Hebbian rules have been used in network models to learn and retrieve sequential activity, with characteristics that are qualitatively consistent with experimental observations. However, in these models sequential activity is retrieved at a fixed speed. Here, we investigate how heterogeneity in plasticity rules affects network dynamics. In a model in which neurons differ by the degree of temporal symmetry of their plasticity rule, we find that retrieval speed can be controlled by varying external inputs to the network. Neurons with temporally symmetric plasticity rules act as brakes and tend to slow down the dynamics, while neurons with temporally asymmetric rules act as accelerators. We also find that such networks can naturally generate separate 'preparatory' and 'execution' activity patterns with appropriate external inputs.
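To make the symmetric-versus-asymmetric distinction concrete, here is a hedged sketch (a generic rate model under our own assumptions, not the paper's network): an asymmetric Hebbian term links each stored pattern to its successor, a symmetric term makes each pattern self-stabilizing, and increasing the symmetric weight tends to slow progression through the sequence.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 400, 10
xi = rng.choice([-1.0, 1.0], size=(P, N))        # stored pattern sequence

# Asymmetric term maps pattern p -> p+1; symmetric term is autoassociative.
W_asym = sum(np.outer(xi[p + 1], xi[p]) for p in range(P - 1)) / N
W_sym = sum(np.outer(xi[p], xi[p]) for p in range(P)) / N

def retrieve(g_sym, g_asym=1.0, T=4000, dt=0.01):
    """Euler-integrate the rate dynamics; return overlaps with patterns."""
    W = g_sym * W_sym + g_asym * W_asym
    r = np.tanh(xi[0] + 0.1 * rng.standard_normal(N))   # cue pattern 0
    m = np.empty((T, P))
    for t in range(T):
        r += dt * (-r + np.tanh(2.0 * (W @ r)))
        m[t] = xi @ r / N
    return m

# A larger symmetric component should delay each pattern's overlap peak
# (for strongly symmetric mixtures the sequence can stall entirely).
for g_sym in (0.2, 0.6):
    peaks = retrieve(g_sym).argmax(axis=0)
    print(f"g_sym={g_sym}: overlap peak times {peaks}")
```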
Affiliation(s)
- Maxwell Gillett: Department of Neurobiology, Duke University, Durham, United States
- Nicolas Brunel: Department of Neurobiology, Duke University, Durham, United States; Department of Physics, Duke University, Durham, United States
3. Mininni CJ, Zanutto BS. Constructing neural networks with pre-specified dynamics. Sci Rep 2024; 14:18860. PMID: 39143351; PMCID: PMC11324765; DOI: 10.1038/s41598-024-69747-z.
Abstract
A main goal in neuroscience is to understand the computations carried out by the neural populations that give animals their cognitive skills. Neural network models make it possible to formulate explicit hypotheses regarding the algorithms instantiated in the dynamics of a neural population, its firing statistics, and the underlying connectivity. Neural networks can be defined by a small set of parameters, carefully chosen to procure specific capabilities, or by a large set of free parameters, fitted with optimization algorithms that minimize a given loss function. Here we instead propose a method for making a detailed adjustment of the network dynamics and firing statistics, to better answer questions that link dynamics, structure, and function. Our algorithm, termed generalised Firing-to-Parameter (gFTP), provides a way to construct binary recurrent neural networks whose dynamics strictly follow a user-specified transition graph that details the transitions between population firing states triggered by stimulus presentations. Our main contribution is a procedure that detects when a transition graph is not realisable as a neural network and makes the necessary modifications to obtain a new transition graph that is realisable while preserving all the information encoded in the transitions of the original graph. Given a realisable transition graph, gFTP assigns values to the network firing states associated with each node in the graph, and finds the synaptic weight matrices by solving a set of linear separation problems. We test gFTP performance by constructing networks with random dynamics, continuous attractor-like dynamics that encode position in 2-dimensional space, and discrete attractor dynamics. We then show how gFTP can be employed as a tool to explore the link between structure, function, and the algorithms instantiated in the network dynamics.
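The final step of the pipeline, solving one linear separation problem per neuron, is easy to demonstrate. The sketch below is a simplified illustration in the spirit of gFTP (not the published algorithm): given binary population states and a realisable transition map, a perceptron rule finds weights such that the network's dynamics follow the prescribed graph.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, N = 8, 40
S = rng.integers(0, 2, size=(n_states, N)).astype(float)  # firing states
nxt = np.roll(np.arange(n_states), -1)                    # a cyclic graph

X, Y = S, S[nxt]          # current states and their required successors

# One perceptron problem per neuron: find weights and a threshold such
# that step(W @ s + b) reproduces the prescribed next state at every node.
W = np.zeros((N, N))
b = np.zeros(N)
for _ in range(500):
    pred = (X @ W.T + b > 0).astype(float)
    err = Y - pred                        # +1: should fire; -1: should not
    W += 0.1 * err.T @ X
    b += 0.1 * err.sum(axis=0)

realized = (S @ W.T + b > 0).astype(float)
print("all transitions realized:", bool(np.all(realized == S[nxt])))
```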
Affiliation(s)
- Camilo J Mininni: Instituto de Biología y Medicina Experimental, Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- B Silvano Zanutto: Instituto de Biología y Medicina Experimental, Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina; Instituto de Ingeniería Biomédica, Universidad de Buenos Aires, Buenos Aires, Argentina
4. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024; 20:e1012220. PMID: 38950068; PMCID: PMC11244818; DOI: 10.1371/journal.pcbi.1012220.
Abstract
Evidence for metastable dynamics and its role in brain function is emerging at a fast pace and is changing our understanding of neural coding by putting an emphasis on hidden states of transient activity. Clustered networks of spiking neurons have enhanced synaptic connections among groups of neurons forming structures called cell assemblies; such networks are capable of producing metastable dynamics that is in agreement with many experimental results. However, it is unclear how a clustered network structure producing metastable dynamics may emerge from a fully local plasticity rule, i.e., a plasticity rule where each synapse has access only to the activity of the neurons it connects (as opposed to the activity of other neurons or other synapses). Here, we propose a local plasticity rule producing ongoing metastable dynamics in a deterministic, recurrent network of spiking neurons. The metastable dynamics co-exists with ongoing plasticity and is the consequence of a self-tuning mechanism that keeps the synaptic weights close to the instability line where memories are spontaneously reactivated. In turn, the synaptic structure is stable to ongoing dynamics and random perturbations, yet it remains sufficiently plastic to remap sensory representations to encode new sets of stimuli. Both the plasticity rule and the metastable dynamics scale well with network size, with synaptic stability increasing with the number of neurons. Overall, our results show that it is possible to generate metastable dynamics over meaningful hidden states using a simple but biologically plausible plasticity rule which co-exists with ongoing neural dynamics.
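For a feel of the kind of dynamics at issue, here is a small sketch (our toy rate-model construction, illustrating metastable cluster switching only, not the paper's spiking network or plasticity rule): clustered recurrent excitation plus slow adaptation and noise produces cell assemblies that activate transiently and hand off to one another.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, per = 5, 20
N = n_clusters * per
labels = np.repeat(np.arange(n_clusters), per)

J = np.full((N, N), -0.6 / N)                       # weak global inhibition
J[labels[:, None] == labels[None, :]] = 3.0 / per   # strong within-cluster
np.fill_diagonal(J, 0.0)

dt, tau_a = 0.05, 20.0
x = 0.1 * rng.standard_normal(N)         # activation
a = np.zeros(N)                          # slow adaptation
active = []
for t in range(4000):
    r = np.tanh(np.clip(x, 0.0, None))   # rectified, saturating rates
    x += dt * (-x + J @ r - 2.0 * a) \
         + np.sqrt(dt) * 0.3 * rng.standard_normal(N)
    a += dt / tau_a * (r - a)            # adaptation fatigues the winner
    active.append(int(r.reshape(n_clusters, per).mean(axis=1).argmax()))

# Report the sequence of dominant assemblies (dwell, switch, dwell, ...).
seq = [active[0]] + [c for i, c in enumerate(active[1:]) if c != active[i]]
print(seq[:15])
```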
Affiliation(s)
- Xiaoyu Yang: Graduate Program in Physics and Astronomy, Stony Brook University, Stony Brook, New York, United States of America; Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America; Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
- Giancarlo La Camera: Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, New York, United States of America; Center for Neural Circuit Dynamics, Stony Brook University, Stony Brook, New York, United States of America
5. Vinograd A, Nair A, Linderman SW, Anderson DJ. Intrinsic Dynamics and Neural Implementation of a Hypothalamic Line Attractor Encoding an Internal Behavioral State. bioRxiv [Preprint] 2024:2024.05.21.595051. PMID: 38826298; PMCID: PMC11142118; DOI: 10.1101/2024.05.21.595051.
Abstract
Line attractors are emergent population dynamics hypothesized to encode continuous variables such as head direction and internal states. In mammals, direct evidence of neural implementation of a line attractor has been hindered by the challenge of targeting perturbations to specific neurons within contributing ensembles. Estrogen receptor type 1 (Esr1)-expressing neurons in the ventrolateral subdivision of the ventromedial hypothalamus (VMHvl) show line attractor dynamics in male mice during fighting. We hypothesized that these dynamics may encode continuous variation in the intensity of an internal aggressive state. Here, we report that these neurons also show line attractor dynamics in head-fixed mice observing aggression. We exploit this finding to identify and perturb line attractor-contributing neurons using 2-photon calcium imaging and holographic optogenetic perturbations. On-manifold perturbations demonstrate that integration and persistent activity are intrinsic properties of these neurons which drive the system along the line attractor, while transient off-manifold perturbations reveal rapid relaxation back into the attractor. Furthermore, stimulation and imaging reveal selective functional connectivity among attractor-contributing neurons. Intriguingly, individual differences among mice in line attractor stability were correlated with the degree of functional connectivity among contributing neurons. Mechanistic modelling indicates that dense subnetwork connectivity and slow neurotransmission are required to explain our empirical findings. Our work bridges circuit and manifold paradigms, shedding light on the intrinsic and operational dynamics of a behaviorally relevant mammalian line attractor.
Affiliation(s)
- Amit Vinograd: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, USA; Howard Hughes Medical Institute, Chevy Chase, USA
- Aditya Nair: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, USA; Howard Hughes Medical Institute, Chevy Chase, USA
- Scott W. Linderman: Department of Statistics, Stanford University, Stanford, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, USA
- David J. Anderson: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, USA; Tianqiao and Chrissy Chen Institute for Neuroscience Caltech, Pasadena, USA; Howard Hughes Medical Institute, Chevy Chase, USA
6. Shen Y, Shao M, Hao ZZ, Huang M, Xu N, Liu S. Multimodal Nature of the Single-cell Primate Brain Atlas: Morphology, Transcriptome, Electrophysiology, and Connectivity. Neurosci Bull 2024; 40:517-532. PMID: 38194157; PMCID: PMC11003949; DOI: 10.1007/s12264-023-01160-4.
Abstract
Primates exhibit complex brain structures that augment cognitive function. The neocortex fulfills high cognitive functions through billions of connected neurons. These neurons have distinct transcriptomic, morphological, and electrophysiological properties, and their connectivity principles vary. These features endow the primate brain atlas with a multimodal nature. The recent integration of next-generation sequencing with modified patch-clamp techniques is revolutionizing how the primate neocortex is censused, enabling a multimodal neuronal atlas to be established in great detail: (1) single-cell/single-nucleus RNA-seq technology establishes high-throughput transcriptomic references, covering all major transcriptomic cell types; (2) patch-seq links morphological and electrophysiological features to the transcriptomic reference; (3) multicell patch-clamp delineates the principles of local connectivity. Here, we review the applications of these technologies in the primate neocortex and discuss current advances and remaining gaps on the way to a comprehensive understanding of the primate neocortex.
Affiliation(s)
- Yuhui Shen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China
- Mingting Shao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China
- Zhao-Zhe Hao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China
- Mengyao Huang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China
- Nana Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China
- Sheng Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, 510060, China; Guangdong Province Key Laboratory of Brain Function and Disease, Guangzhou, 510080, China
7. Papo D, Buldú JM. Does the brain behave like a (complex) network? I. Dynamics. Phys Life Rev 2024; 48:47-98. PMID: 38145591; DOI: 10.1016/j.plrev.2023.12.006.
Abstract
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network structure does not entail that the brain actually works as a network. Asking whether the brain behaves as a network means asking whether network properties count. From the viewpoint of neurophysiology and, possibly, of brain physics, the most substantial issues a network structure may be instrumental in addressing relate to the influence of network properties on brain dynamics and to whether these properties ultimately explain some aspects of brain function. Here, we address the dynamical implications of complex network structure, examining which aspects and scales of brain activity may be understood to genuinely behave as a network. To do so, we first define the meaning of networkness and analyse some of its implications. We then examine ways in which brain anatomy and dynamics can be endowed with a network structure, and discuss possible ways in which network structure may be shown to represent a genuine organisational principle of brain activity, rather than just a convenient description of its anatomy and dynamics.
Affiliation(s)
- D Papo: Department of Neuroscience and Rehabilitation, Section of Physiology, University of Ferrara, Ferrara, Italy; Center for Translational Neurophysiology, Fondazione Istituto Italiano di Tecnologia, Ferrara, Italy
- J M Buldú: Complex Systems Group & G.I.S.C., Universidad Rey Juan Carlos, Madrid, Spain
8. Metzner C, Yamakou ME, Voelkl D, Schilling A, Krauss P. Quantifying and Maximizing the Information Flux in Recurrent Neural Networks. Neural Comput 2024; 36:351-384. PMID: 38363658; DOI: 10.1162/neco_a_01651.
Abstract
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x. Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
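As a concrete, deliberately tiny illustration of the two quantities being compared, the sketch below (our own construction, not the paper's code) computes the exact mutual information between consecutive states of a six-unit stochastic binary network by enumerating its 2^6 states, together with the root-mean-square pairwise Pearson correlation under the stationary distribution.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
N = 6
W = 2.0 * rng.standard_normal((N, N))        # moderately strong weights

states = np.array(list(product([0, 1], repeat=N)), dtype=float)  # 2^N states

def sigmoid(h):
    return 1.0 / (1.0 + np.exp(-h))

# Transition matrix P[s -> s'] for parallel stochastic updates.
p_on = sigmoid(states @ W.T)                           # (2^N, N)
P = np.prod(np.where(states[None, :, :] == 1,
                     p_on[:, None, :], 1 - p_on[:, None, :]), axis=2)

# Stationary distribution via power iteration on the chain.
pi = np.full(len(states), 1 / len(states))
for _ in range(2000):
    pi = pi @ P

joint = pi[:, None] * P                                # p(s, s')
marg2 = joint.sum(axis=0)
mask = joint > 0
I = np.sum(joint[mask] * np.log2(joint[mask] /
           (pi[:, None] * marg2[None, :])[mask]))

mean = pi @ states
cov = (states - mean).T @ (pi[:, None] * (states - mean))
sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
C = cov / np.outer(sd, sd)
rms_corr = np.sqrt(np.mean(C[~np.eye(N, dtype=bool)] ** 2))
print(f"I = {I:.3f} bits, RMS pairwise correlation = {rms_corr:.3f}")
```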
Affiliation(s)
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Biophysics Lab, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Marius E Yamakou: Department of Data Science, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Dennis Voelkl: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
9. Gosti G, Milanetti E, Folli V, de Pasquale F, Leonetti M, Corbetta M, Ruocco G, Della Penna S. A recurrent Hopfield network for estimating meso-scale effective connectivity in MEG. Neural Netw 2024; 170:72-93. PMID: 37977091; DOI: 10.1016/j.neunet.2023.11.027.
Abstract
The architecture of communication within the brain, represented by the human connectome, has gained a paramount role in the neuroscience community. Several features of this communication, e.g., the frequency content, spatial topology, and temporal dynamics, are currently well established. However, identifying generative models providing the underlying patterns of inhibition/excitation is very challenging. To address this issue, we present a novel generative model to estimate large-scale effective connectivity from MEG. The dynamic evolution of this model is determined by a recurrent Hopfield neural network with asymmetric connections, and it is thus denoted the Recurrent Hopfield Mass Model (RHoMM). Since RHoMM must be applied to binary neurons, it is suitable for analyzing Band Limited Power (BLP) dynamics following a binarization process. We trained RHoMM to predict the MEG dynamics through gradient descent minimization and validated it in two steps. First, we showed a significant agreement between the similarity of the effective connectivity patterns and that of the interregional BLP correlation, demonstrating RHoMM's ability to capture individual variability of BLP dynamics. Second, we showed that the simulated BLP correlation connectomes, obtained from RHoMM evolutions of BLP, preserved some important topological features, e.g., the centrality of the real data, assuring the reliability of RHoMM. Compared to other biophysical models, RHoMM is based on recurrent Hopfield neural networks, so it has the advantage of being data-driven, less demanding in terms of hyperparameters, and scalable to encompass large-scale system interactions. These features are promising for investigating the dynamics of inhibition/excitation at different spatial scales.
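The core fitting idea, training an asymmetric Hopfield-style network by gradient descent to predict the next binary state, can be sketched compactly. The code below is a schematic stand-in (our assumptions about the setup, not the published RHoMM implementation) that fits couplings to a surrogate binary time series and checks that they correlate with the generative ones.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 12, 2000

# Surrogate "ground truth": binary dynamics from a hidden asymmetric net.
W_true = rng.standard_normal((N, N)) / np.sqrt(N)
s = np.empty((T, N))
s[0] = rng.choice([0.0, 1.0], N)
for t in range(T - 1):
    p = 1 / (1 + np.exp(-4 * (s[t] @ W_true.T - 0.2)))
    s[t + 1] = (rng.random(N) < p).astype(float)

# Gradient descent on the logistic one-step-ahead prediction loss.
W = np.zeros((N, N))
b = np.zeros(N)
X, Y = s[:-1], s[1:]
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ W.T + b)))
    W -= 0.5 * (p - Y).T @ X / len(X)
    b -= 0.5 * (p - Y).mean(axis=0)

# Estimated couplings should correlate with the generative ones.
r = np.corrcoef(W.ravel(), W_true.ravel())[0, 1]
print(f"correlation(estimated, true couplings) = {r:.2f}")
```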
Affiliation(s)
- Giorgio Gosti: Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; Istituto di Scienze del Patrimonio Culturale, Sede di Roma, Consiglio Nazionale delle Ricerche, CNR-ISPC, Via Salaria km, 34900 Rome, Italy
- Edoardo Milanetti: Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy
- Viola Folli: Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy
- Francesco de Pasquale: Faculty of Veterinary Medicine, University of Teramo, 64100 Piano D'Accio, Teramo, Italy
- Marco Leonetti: Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy
- Maurizio Corbetta: Department of Neuroscience, University of Padova, Via Belzoni, 160, 35121, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Via Orus, 2/B, 35129, Padova, Italy; Veneto Institute of Molecular Medicine (VIMM), Via Orus, 2, 35129, Padova, Italy
- Giancarlo Ruocco: Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy
- Stefania Della Penna: Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies, "G. d'Annunzio" University of Chieti-Pescara, Via Luigi Polacchi, 11, 66100 Chieti, Italy
10. Leonetti M, Gosti G, Ruocco G. Photonic Stochastic Emergent Storage for deep classification by scattering-intrinsic patterns. Nat Commun 2024; 15:505. PMID: 38218858; PMCID: PMC10787794; DOI: 10.1038/s41467-023-44498-z.
Abstract
Disorder is a pervasive characteristic of natural systems, offering a wealth of non-repeating patterns. In this study, we present a novel storage method that harnesses naturally occurring random structures to store an arbitrary pattern in a memory device. This method, Stochastic Emergent Storage (SES), builds upon the concept of emergent archetypes, where a training set of imperfect examples (prototypes) is employed to instantiate an archetype in a Hopfield-like network through emergent processes. We demonstrate this non-Hebbian paradigm in the photonic domain by utilizing random transmission matrices, which govern light scattering in a white-paint turbid medium, as prototypes. Through the implementation of programmable hardware, we successfully realize and experimentally validate the capability to store an arbitrary archetype and perform classification at the speed of light. Leveraging the vast number of modes excited by mesoscopic diffusion, our approach enables the simultaneous storage of thousands of memories without requiring any additional fabrication effort. Similar to a content-addressable memory, all stored memories can be collectively assessed against a given pattern to identify the matching element. Furthermore, by organizing memories spatially into distinct classes, they become features within a higher-level categorical (deeper) optical classification layer.
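The emergent-archetype idea itself is easy to demonstrate in a toy Hopfield network (an illustrative sketch under our own assumptions, with no photonics): storing only noisy prototypes of an archetype makes the archetype, which was never stored directly, the network's attractor.

```python
import numpy as np

rng = np.random.default_rng(6)
N, M, flip = 200, 40, 0.2
archetype = rng.choice([-1.0, 1.0], N)

# Prototypes: imperfect examples with a fraction of flipped entries.
flips = rng.random((M, N)) < flip
prototypes = np.where(flips, -archetype, archetype)

J = prototypes.T @ prototypes / N        # Hebbian over prototypes only
np.fill_diagonal(J, 0.0)

# Cue with a heavily corrupted archetype and iterate to convergence.
s = np.where(rng.random(N) < 0.35, -archetype, archetype)
for _ in range(20):
    s = np.sign(J @ s + 1e-12)           # epsilon avoids sign(0) = 0

# The retrieved state matches the archetype better than any prototype.
print("overlap with archetype  :", float(s @ archetype / N))
print("max overlap w/ prototype:", float(np.max(prototypes @ s / N)))
```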
Affiliation(s)
- Marco Leonetti: Soft and Living Matter Laboratory, Institute of Nanotechnology, 00185, Rome, Italy; Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy; Rebel Dynamics-IIT CLN2S Jointlab, 00161, Roma, Italy
- Giorgio Gosti: Soft and Living Matter Laboratory, Institute of Nanotechnology, 00185, Rome, Italy; Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy; Istituto di Scienze del Patrimonio Culturale, Sede di Roma, Consiglio Nazionale delle Ricerche, 00010, Montelibretti (RM), Italy
- Giancarlo Ruocco: Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy; Department of Physics, University Sapienza, I-00185, Roma, Italy
11. Milstein AD, Tran S, Ng G, Soltesz I. Offline memory replay in recurrent neuronal networks emerges from constraints on online dynamics. J Physiol 2023; 601:3241-3264. PMID: 35907087; PMCID: PMC9885000; DOI: 10.1113/jp283216.
Abstract
During spatial exploration, neural circuits in the hippocampus store memories of sequences of sensory events encountered in the environment. When sensory information is absent during 'offline' resting periods, brief neuronal population bursts can 'replay' sequences of activity that resemble bouts of sensory experience. These sequences can occur in either forward or reverse order, and can even include spatial trajectories that have not been experienced, but are consistent with the topology of the environment. The neural circuit mechanisms underlying this variable and flexible sequence generation are unknown. Here we demonstrate in a recurrent spiking network model of hippocampal area CA3 that experimental constraints on network dynamics such as population sparsity, stimulus selectivity, rhythmicity and spike rate adaptation, as well as associative synaptic connectivity, enable additional emergent properties, including variable offline memory replay. In an online stimulus-driven state, we observed the emergence of neuronal sequences that swept from representations of past to future stimuli on the timescale of the theta rhythm. In an offline state driven only by noise, the network generated both forward and reverse neuronal sequences, and recapitulated the experimental observation that offline memory replay events tend to include salient locations like the site of a reward. These results demonstrate that biological constraints on the dynamics of recurrent neural circuits are sufficient to enable memories of sensory events stored in the strengths of synaptic connections to be flexibly read out during rest and sleep, which is thought to be important for memory consolidation and planning of future behaviour.
Key points:
- A recurrent spiking network model of hippocampal area CA3 was optimized to recapitulate experimentally observed network dynamics during simulated spatial exploration.
- During simulated offline rest, the network exhibited the emergent property of generating flexible forward, reverse and mixed-direction memory replay events.
- Network perturbations and analysis of model diversity and degeneracy identified associative synaptic connectivity and key features of network dynamics as important for offline sequence generation.
- Network simulations demonstrate that population over-representation of salient positions like the site of reward results in biased memory replay.
Affiliation(s)
- Aaron D. Milstein: Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA; Department of Neuroscience and Cell Biology, Robert Wood Johnson Medical School and Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, NJ
- Sarah Tran: Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Grace Ng: Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
- Ivan Soltesz: Department of Neurosurgery, Stanford University School of Medicine, Stanford, CA
12. Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. PMID: 36689488; PMCID: PMC9894562; DOI: 10.1371/journal.pcbi.1010855.
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are inter-related and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
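The mapping from structured connectivity to low-rank dynamics can be illustrated numerically. Below is a minimal sketch (our own construction, not the paper's perturbative calculation): an excitatory-inhibitory mean structure gives the connectivity a dominant eigenpair, and the corresponding rank-one approximation reproduces the low-dimensional population dynamics of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 300
# Mean E/I structure (rank one) plus weak random connectivity.
mean_row = np.concatenate([np.full(N // 2, 3.0 / N),     # E columns
                           np.full(N // 2, -1.5 / N)])   # I columns
J = mean_row[None, :] + 0.1 * rng.standard_normal((N, N)) / np.sqrt(N)

# Rank-1 approximation from the dominant eigenpair of the full matrix.
eigvals, R = np.linalg.eig(J)
k = np.argmax(eigvals.real)
L = np.linalg.inv(R)
J1 = np.real(eigvals[k] * np.outer(R[:, k], L[k]))

def run(A, x0, T=300, dt=0.1):
    x, traj = x0.copy(), []
    for _ in range(T):
        x = x + dt * (-x + A @ x)
        traj.append(x.mean())               # population-average activity
    return np.array(traj)

x0 = rng.standard_normal(N)
full, low_rank = run(J, x0), run(J1, x0)
r = np.corrcoef(full, low_rank)[0, 1]
print(f"corr(full vs rank-1 population activity) = {r:.3f}")
```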
Affiliation(s)
- Yuxiu Shao: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure—PSL Research University, Paris, France
13. Metzner C, Krauss P. Dynamics and Information Import in Recurrent Neural Networks. Front Comput Neurosci 2022; 16:876315. PMID: 35573264; PMCID: PMC9091337; DOI: 10.3389/fncom.2022.876315.
Abstract
Recurrent neural networks (RNNs) are complex dynamical systems, capable of ongoing activity without any driving input. The long-term behavior of free-running RNNs, described by periodic, chaotic and fixed point attractors, is controlled by the statistics of the neural connection weights, such as the density d of non-zero connections, or the balance b between excitatory and inhibitory connections. However, for information processing purposes, RNNs need to receive external input signals, and it is not clear which of the dynamical regimes is optimal for this information import. We use both the average correlations C and the mutual information I between the momentary input vector and the next system state vector as quantitative measures of information import and analyze their dependence on the balance and density of the network. Remarkably, both resulting phase diagrams C(b, d) and I(b, d) are highly consistent, pointing to a link between the dynamical systems and the information-processing approach to complex systems. Information import is maximal not at the "edge of chaos," which is optimally suited for computation, but surprisingly in the low-density chaotic regime and at the border between the chaotic and fixed point regime. Moreover, we find a completely new type of resonance phenomenon, which we call "Import Resonance" (IR), where the information import shows a maximum, i.e., a peak-like dependence on the coupling strength between the RNN and its external input. IR complements previously found Recurrence Resonance (RR), where correlation and mutual information of successive system states peak for a certain amplitude of noise added to the system. Both IR and RR can be exploited to optimize information processing in artificial neural networks and might also play a crucial role in biological neural systems.
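The basic measurement is straightforward to reproduce in a toy setting. The sketch below (our simplified deterministic construction, not the paper's probabilistic networks or phase diagrams) estimates the correlation between the momentary input vector and the next network state across input coupling strengths; in the paper's probabilistic setting this dependence can become peak-like ("import resonance").

```python
import numpy as np

rng = np.random.default_rng(8)
N, T = 100, 5000
W = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # autonomous chaos

def input_state_correlation(w_in):
    """Mean Pearson correlation between input u(t) and state x(t+1)."""
    x = np.zeros(N)
    cs = []
    for _ in range(T):
        u = rng.standard_normal(N)                   # momentary input
        x_next = np.tanh(W @ x + w_in * u)
        cs.append(np.corrcoef(u, x_next)[0, 1])
        x = x_next
    return float(np.mean(cs))

# Sweep the input coupling strength and tabulate information import.
for w_in in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(f"w_in={w_in:<4} C={input_state_correlation(w_in):.3f}")
```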
Affiliation(s)
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
14. Yoder JA, Anderson CB, Wang C, Izquierdo EJ. Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks. Front Comput Neurosci 2022; 16:818985. PMID: 35465269; PMCID: PMC9028035; DOI: 10.3389/fncom.2022.818985.
Abstract
Lifetime learning, the change or acquisition of behaviors during an organism's lifetime based on experience, is a hallmark of living organisms. Multiple mechanisms may be involved, but biological neural circuits have repeatedly demonstrated a vital role in the learning process. These neural circuits are recurrent, dynamic, and non-linear, and models of neural circuits employed in neuroscience and neuroethology accordingly tend to involve continuous-time, non-linear, and recurrently interconnected components. Currently, the main approach for finding configurations of dynamical recurrent neural networks that demonstrate behaviors of interest is stochastic search, such as evolutionary algorithms. In an evolutionary algorithm, these dynamic recurrent neural networks are evolved to perform the behavior over multiple generations, through selection, inheritance, and mutation, across a population of solutions. Although these systems can be evolved to exhibit lifetime learning behavior, there are no explicit rules built into these dynamic recurrent neural networks that facilitate learning during their lifetime (e.g., reward signals). In this work, we examine a biologically plausible lifetime learning mechanism for dynamical recurrent neural networks. We focus on a recently proposed reinforcement learning mechanism inspired by neuromodulatory reward signals and ongoing fluctuations in synaptic strengths. Specifically, we extend one of the best-studied and most commonly used dynamic recurrent neural networks to incorporate the reinforcement learning mechanism. First, we demonstrate that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task. Second, we compare the robustness and efficiency of the reinforcement learning rules against two baseline models, a random walk and a hill-climbing walk through parameter space. Third, we systematically study the effect of the different meta-parameters of the learning mechanism on behavioral learning performance. Finally, we report preliminary results exploring the generality and scalability of this learning mechanism for dynamical neural networks, as well as directions for future work.
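The flavor of this approach can be conveyed with a drastically simplified sketch (our own reward-modulated perturbation rule on a two-neuron continuous-time RNN, not the paper's mechanism or parameters): random synaptic fluctuations are retained whenever they improve an oscillation-based reward, gradually shaping a central pattern generator.

```python
import numpy as np

rng = np.random.default_rng(9)
N, dt = 2, 0.05

def simulate(W, bias, T=400):
    """Euler-integrate a small CTRNN; return neuron 0's output trace."""
    y = np.zeros(N)
    out = np.empty(T)
    for t in range(T):
        y += dt * (-y + W @ np.tanh(y + bias))
        out[t] = np.tanh(y[0] + bias[0])
    return out

def reward(trace):
    return float(np.std(trace[len(trace) // 2:]))   # sustained oscillation

W = rng.standard_normal((N, N))
bias = rng.standard_normal(N)
best = reward(simulate(W, bias))
for _ in range(2000):
    dW = 0.1 * rng.standard_normal((N, N))   # ongoing synaptic fluctuation
    score = reward(simulate(W + dW, bias))
    if score > best:                         # reward reinforces the change
        W, best = W + dW, score

print(f"final oscillation score: {best:.3f}")
```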
Affiliation(s)
- Jason A. Yoder (corresponding author): Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- Cooper B. Anderson: Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- Cehong Wang: Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- Eduardo J. Izquierdo: Computational Neuroethology Lab, Cognitive Science Program, Indiana University, Bloomington, IN, United States
15. Turner NL, Macrina T, Bae JA, Yang R, Wilson AM, Schneider-Mizell C, Lee K, Lu R, Wu J, Bodor AL, Bleckert AA, Brittain D, Froudarakis E, Dorkenwald S, Collman F, Kemnitz N, Ih D, Silversmith WM, Zung J, Zlateski A, Tartavull I, Yu SC, Popovych S, Mu S, Wong W, Jordan CS, Castro M, Buchanan J, Bumbarger DJ, Takeno M, Torres R, Mahalingam G, Elabbady L, Li Y, Cobos E, Zhou P, Suckow S, Becker L, Paninski L, Polleux F, Reimer J, Tolias AS, Reid RC, da Costa NM, Seung HS. Reconstruction of neocortex: Organelles, compartments, cells, circuits, and activity. Cell 2022; 185:1082-1100.e24. PMID: 35216674; PMCID: PMC9337909; DOI: 10.1016/j.cell.2022.01.023.
Abstract
We assembled a semi-automated reconstruction of L2/3 mouse primary visual cortex from ∼250 × 140 × 90 μm³ of electron microscopic images, including pyramidal and non-pyramidal neurons, astrocytes, microglia, oligodendrocytes and precursors, pericytes, vasculature, nuclei, mitochondria, and synapses. Visual responses of a subset of pyramidal cells are included. The data are publicly available, along with tools for programmatic and three-dimensional interactive access. Brief vignettes illustrate the breadth of potential applications relating structure to function in cortical circuits and neuronal cell biology. Mitochondria and synapse organization are characterized as a function of path length from the soma. Pyramidal connectivity motif frequencies are predicted accurately using a configuration model of random graphs. Pyramidal cells receiving more connections from nearby cells exhibit stronger and more reliable visual responses. Sample code shows data access and analysis.
Affiliation(s)
- Nicholas L Turner: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- Thomas Macrina: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- J Alexander Bae: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Electrical and Computer Engineering Department, Princeton University, Princeton, NJ 08544, USA
- Runzhe Yang: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- Alyssa M Wilson: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Kisuk Lee: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Brain & Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Ran Lu: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Jingpeng Wu: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Agnes L Bodor: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Emmanouil Froudarakis: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
- Sven Dorkenwald: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- Nico Kemnitz: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Dodam Ih: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Jonathan Zung: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- Aleksandar Zlateski: Electrical Engineering and Computer Science Department, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Ignacio Tartavull: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Szi-Chieh Yu: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Sergiy Popovych: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
- Shang Mu: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- William Wong: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Chris S Jordan: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Manuel Castro: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- JoAnn Buchanan: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Marc Takeno: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Russel Torres: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Leila Elabbady: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Yang Li: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Erick Cobos: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
- Pengcheng Zhou: Department of Statistics, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA
- Shelby Suckow: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Lynne Becker: Allen Institute for Brain Science, Seattle, WA 98109, USA
- Liam Paninski: Department of Statistics, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science at Columbia University, New York, NY 10027, USA
- Franck Polleux: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Department of Neuroscience, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science at Columbia University, New York, NY 10027, USA
- Jacob Reimer: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA
- Andreas S Tolias: Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- R Clay Reid: Allen Institute for Brain Science, Seattle, WA 98109, USA
- H Sebastian Seung: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Computer Science Department, Princeton University, Princeton, NJ 08544, USA
16. Jia S, Xing D, Yu Z, Liu JK. Dissecting cascade computational components in spiking neural networks. PLoS Comput Biol 2021; 17:e1009640. PMID: 34843460; PMCID: PMC8659421; DOI: 10.1371/journal.pcbi.1009640.
Abstract
Finding out the physical structure of neuronal circuits that governs neuronal responses is an important goal for brain research. With fast advances in large-scale recording techniques, identification of a neuronal circuit with multiple neurons and stages or layers has become possible and highly sought after. Although methods for mapping the connection structure of circuits have been greatly developed in recent years, they are mostly limited to simple scenarios of a few neurons in a pairwise fashion; dissecting dynamical circuits, particularly mapping out a complete functional circuit that converges onto a single neuron, remains a challenge. Here, we show that a recent method, termed spike-triggered non-negative matrix factorization (STNMF), can address these issues. By simulating different scenarios of spiking neural networks with various connections between neurons and stages, we demonstrate that STNMF is a compelling method for dissecting functional connections within a circuit. Using spiking activities recorded at neurons of the output layer, STNMF can recover a complete circuit consisting of all cascade computational components of presynaptic neurons, as well as their spiking activities. For simulated simple and complex cells of the primary visual cortex, STNMF allows us to dissect the pathway of visual computation. Taken together, these results suggest that STNMF could provide a useful approach for investigating neuronal systems from recorded functional neuronal activity.
It is well known that the computation of neuronal circuits is carried out through the staged and cascade structure of different types of neurons. Nevertheless, information, particularly sensory information, is processed in a network primarily with feedforward connections through different pathways. A prime example is the early visual system, where light is transcoded by the retinal cells, routed by the lateral geniculate nucleus, and reaches the primary visual cortex. One particular interest in recent years has been to map out these physical structures of neuronal pathways. However, most methods so far are limited to taking snapshots of a static view of connections between neurons. It remains unclear how to obtain a functional and dynamical neuronal circuit beyond simple scenarios of a few randomly sampled neurons. Using simulated spiking neural networks of visual pathways with different scenarios of multiple stages, mixed cell types, and natural image stimuli, we demonstrate that a recent computational tool, named spike-triggered non-negative matrix factorization, can resolve these issues. It enables us to recover the entire structural components of neural networks underlying the computation, together with the functional components of each individual neuron. Applying it to complex cells of the primary visual cortex allows us to reveal every underpinning of the nonlinear computation. Our results, together with other recent experimental and computational efforts, show that it is possible to systematically dissect neural circuitry with detailed structural and functional components.
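The gist of STNMF is to factorize the ensemble of stimuli that preceded spikes. Here is a toy sketch (our own one-dimensional construction and parameters, not the authors' pipeline): a model cell pools two rectified subunits, and non-negative matrix factorization of the shifted spike-triggered ensemble recovers localized modules aligned with those subunits.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(10)
D, T = 16, 20000                       # 1-D "pixels", stimulus frames
stim = rng.standard_normal((T, D))

# Ground truth: two localized subunits, rectified and summed.
sub1, sub2 = np.zeros(D), np.zeros(D)
sub1[2:6], sub2[9:13] = 1.0, 1.0
drive = np.clip(stim @ sub1, 0, None) + np.clip(stim @ sub2, 0, None)
spikes = drive > np.quantile(drive, 0.95)          # top 5% frames spike

ste = stim[spikes]                                  # spike-triggered ensemble
V = ste - ste.min()                                 # shift to nonnegative
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
weights = model.fit_transform(V)                    # per-spike module weights
modules = model.components_                         # candidate subunits

for m in modules:                # localized bumps should align with sub1/sub2
    print(np.round(m / (m.max() + 1e-12), 1))
```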
Affiliation(s)
- Shanshan Jia: Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
- Dajun Xing: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Zhaofei Yu (corresponding author): Institute for Artificial Intelligence, Department of Computer Science and Technology, Peking University, Beijing, China
- Jian K. Liu (corresponding author): School of Computing, University of Leeds, Leeds, United Kingdom
17. Hu X, Zeng Z. Bridging the Functional and Wiring Properties of V1 Neurons Through Sparse Coding. Neural Comput 2021; 34:104-137. PMID: 34758484; DOI: 10.1162/neco_a_01453.
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
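For readers who want the computational backbone, here is a compact sparse-coding sketch (a generic Olshausen-Field-style dictionary learner on synthetic data, under our own assumptions; the paper itself uses a Hebbian spiking network): sparse codes are inferred with ISTA, and the dictionary is updated with a Hebbian-like rule on the residual.

```python
import numpy as np

rng = np.random.default_rng(11)
D, K, T = 20, 30, 3000                  # input dim, dictionary size, steps

Phi_true = rng.standard_normal((D, K))
Phi_true /= np.linalg.norm(Phi_true, axis=0)

Phi = rng.standard_normal((D, K))
Phi /= np.linalg.norm(Phi, axis=0)

lam, eta = 0.1, 0.05
for _ in range(T):
    # A batch of inputs generated from sparse combinations of true atoms.
    a_true = rng.standard_normal((K, 32)) * (rng.random((K, 32)) < 0.1)
    X = Phi_true @ a_true
    # ISTA: infer sparse coefficients under the current dictionary.
    L = np.linalg.norm(Phi, 2) ** 2
    a = np.zeros((K, 32))
    for _ in range(30):
        a = a + Phi.T @ (X - Phi @ a) / L
        a = np.sign(a) * np.clip(np.abs(a) - lam / L, 0, None)
    # Hebbian-like dictionary update on the residual, then renormalize.
    Phi += eta * (X - Phi @ a) @ a.T / 32
    Phi /= np.linalg.norm(Phi, axis=0) + 1e-12

# Each learned atom should match some true atom up to sign.
match = np.abs(Phi.T @ Phi_true).max(axis=1)
print("median best-match cosine:", round(float(np.median(match)), 2))
```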
Affiliation(s)
- Xiaolin Hu: Department of Computer Science and Technology, State Key Laboratory of Intelligent Technology and Systems, BNRist, Tsinghua Laboratory of Brain and Intelligence, and IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Zhigang Zeng: School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and Key Laboratory of Image Processing and Intelligent Control, Education Ministry of China, Wuhan 430074, China
18. Chipman PH, Fung CCA, Pazo Fernandez A, Sawant A, Tedoldi A, Kawai A, Ghimire Gautam S, Kurosawa M, Abe M, Sakimura K, Fukai T, Goda Y. Astrocyte GluN2C NMDA receptors control basal synaptic strengths of hippocampal CA1 pyramidal neurons in the stratum radiatum. eLife 2021; 10:e70818. PMID: 34693906; PMCID: PMC8594917; DOI: 10.7554/elife.70818.
Abstract
Experience-dependent plasticity is a key feature of brain synapses for which neuronal N-methyl-D-aspartate receptors (NMDARs) play a major role, from developmental circuit refinement to learning and memory. Astrocytes also express NMDARs, although their exact function has remained controversial. Here, we identify in mouse hippocampus a circuit function for GluN2C NMDARs, a subtype highly expressed in astrocytes, in layer-specific tuning of synaptic strengths in CA1 pyramidal neurons. Interfering with astrocyte NMDAR or GluN2C NMDAR activity reduces the range of presynaptic strength distribution specifically in the stratum radiatum inputs, without an appreciable change in the mean presynaptic strength. Mathematical modeling shows that narrowing the width of the presynaptic release probability distribution compromises the expression of long-term synaptic plasticity. Our findings suggest a novel feedback signaling system that uses astrocyte GluN2C NMDARs to adjust the basal synaptic weight distribution of Schaffer collateral inputs, which in turn impacts computations performed by the CA1 pyramidal neuron.
Collapse
Affiliation(s)
| | - Chi Chung Alan Fung
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Japan
| | | | | | - Angelo Tedoldi
- RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
| | - Atsushi Kawai
- RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
| | | | | | - Manabu Abe
- Department of Animal Model Development, Brain Research Institute, Niigata University, Niigata, Japan
| | - Kenji Sakimura
- Department of Animal Model Development, Brain Research Institute, Niigata University, Niigata, Japan
| | - Tomoki Fukai
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Japan
| | - Yukiko Goda
- RIKEN Center for Brain Science, Wako-shi, Saitama, Japan
| |
Collapse
|
19
|
Goetz L, Roth A, Häusser M. Active dendrites enable strong but sparse inputs to determine orientation selectivity. Proc Natl Acad Sci U S A 2021; 118:e2017339118. [PMID: 34301882 PMCID: PMC8325157 DOI: 10.1073/pnas.2017339118] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The dendrites of neocortical pyramidal neurons are excitable. However, it is unknown how synaptic inputs engage nonlinear dendritic mechanisms during sensory processing in vivo, and how they in turn influence action potential output. Here, we provide a quantitative account of the relationship between synaptic inputs, nonlinear dendritic events, and action potential output. We developed a detailed pyramidal neuron model constrained by in vivo dendritic recordings. We drive this model with realistic input patterns constrained by sensory responses measured in vivo and connectivity measured in vitro. We show mechanistically that under realistic conditions, dendritic Na+ and NMDA spikes are the major determinants of neuronal output in vivo. We demonstrate that these dendritic spikes can be triggered by a surprisingly small number of strong synaptic inputs, in some cases even by single synapses. We predict that dendritic excitability allows the 1% strongest synaptic inputs of a neuron to control the tuning of its output. Active dendrites therefore allow smaller subcircuits consisting of only a few strongly connected neurons to achieve selectivity for specific sensory features.
Collapse
Affiliation(s)
- Lea Goetz
- Wolfson Institute for Biomedical Research, University College London, London WC1E 6BT, United Kingdom
| | - Arnd Roth
- Wolfson Institute for Biomedical Research, University College London, London WC1E 6BT, United Kingdom
| | - Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London WC1E 6BT, United Kingdom
| |
Collapse
|
20
|
Houben AM. Frequency Selectivity of Neural Circuits With Heterogeneous Discrete Transmission Delays. Neural Comput 2021; 33:2068-2086. [PMID: 34310671 DOI: 10.1162/neco_a_01404] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Accepted: 02/24/2021] [Indexed: 11/04/2022]
Abstract
Neurons are connected to other neurons by axons and dendrites that conduct signals with finite velocities, resulting in delays between the firing of a neuron and the arrival of the resultant impulse at other neurons. Since delays greatly complicate the analytical treatment and interpretation of models, they are usually neglected or taken to be uniform, leaving the effects of delays in neural systems poorly understood. This letter shows that heterogeneous transmission delays make small groups of neurons respond selectively to inputs with differing frequency spectra. By studying a single integrate-and-fire neuron receiving correlated time-shifted inputs, it is shown how the frequency response is linked to both the strengths and delay times of the afferent connections. The results show that incorporating delays alters the functioning of neural networks and changes the effects of neural connections and synaptic strengths.
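Editor's note: the selectivity mechanism lends itself to a compact illustration. For a linear summation stage, afferents with weights w_k and delays d_k act as a comb-like filter with gain |H(f)| = |Σ_k w_k e^{−2πi f d_k}|. The weights and delays below are invented values, not taken from the paper.

```python
import numpy as np

weights = np.array([1.0, 0.8, 0.6])        # synaptic strengths (assumed)
delays = np.array([0.000, 0.005, 0.012])   # discrete transmission delays in s (assumed)

freqs = np.linspace(0.1, 200.0, 2000)      # Hz
# Gain of the summed input for a unit sinusoid of frequency f:
# |H(f)| = |sum_k w_k * exp(-2*pi*i*f*d_k)|
H = np.abs(np.exp(-2j * np.pi * np.outer(freqs, delays)) @ weights)

print(f"gain ranges from {H.min():.2f} to {H.max():.2f}; "
      f"most suppressed input frequency is near {freqs[np.argmin(H)]:.0f} Hz")
```

The interference between delayed copies of the input is what creates the frequency-selective bands; changing either the weights or the delay times reshapes the gain profile.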
Collapse
|
21
|
Aljadeff J, Gillett M, Pereira Obilinovic U, Brunel N. From synapse to network: models of information storage and retrieval in neural circuits. Curr Opin Neurobiol 2021; 70:24-33. [PMID: 34175521 DOI: 10.1016/j.conb.2021.05.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 05/06/2021] [Accepted: 05/25/2021] [Indexed: 10/21/2022]
Abstract
The mechanisms of information storage and retrieval in brain circuits are still the subject of debate. It is widely believed that information is stored at least in part through changes in synaptic connectivity in networks that encode this information and that these changes lead in turn to modifications of network dynamics, such that the stored information can be retrieved at a later time. Here, we review recent progress in deriving synaptic plasticity rules from experimental data and in understanding how plasticity rules affect the dynamics of recurrent networks. We show that the dynamics generated by such networks exhibit a large degree of diversity, depending on parameters, similar to experimental observations in vivo during delayed response tasks.
Collapse
Affiliation(s)
- Johnatan Aljadeff
- Neurobiology Section, Division of Biological Sciences, UC San Diego, USA
| | | | | | - Nicolas Brunel
- Department of Neurobiology, Duke University, USA; Department of Physics, Duke University, USA.
| |
Collapse
|
22
|
Abstract
Giulio Tononi's Integrated Information Theory (IIT) proposes explaining consciousness by directly identifying it with integrated information. We examine the construct validity of IIT's measure of consciousness, phi (Φ), by analyzing its formal properties, its relation to key aspects of consciousness, and its co-variation with relevant empirical circumstances. Our analysis shows that IIT's identification of consciousness with the causal efficacy with which differentiated networks accomplish global information transfer (which is what Φ in fact measures) is mistaken. This misidentification has the consequence of requiring the attribution of consciousness to a range of natural systems and artifacts that include, but are not limited to, large-scale electrical power grids, gene-regulation networks, some electronic circuit boards, and social networks. Instead of treating this consequence of the theory as a disconfirmation, IIT embraces it. By regarding these systems as bearers of consciousness ex hypothesi, IIT is led towards the orbit of panpsychist ideation. This departure from science as we know it can be avoided by recognizing the functional misattribution at the heart of IIT's identity claim. We show, for example, what function is actually performed, at least in the human case, by the cortical combination of differentiation with integration that IIT identifies with consciousness. Finally, we examine what lessons may be drawn from IIT's failure to provide a credible account of consciousness for progress in the very active field of research concerned with exploring the phenomenon from formal and neural points of view.
Collapse
|
23
|
Knoll G, Lindner B. Recurrence-mediated suprathreshold stochastic resonance. J Comput Neurosci 2021; 49:407-418. [PMID: 34003421 PMCID: PMC8556192 DOI: 10.1007/s10827-021-00788-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 04/21/2021] [Accepted: 04/26/2021] [Indexed: 11/29/2022]
Abstract
It has previously been shown that the encoding of time-dependent signals by feedforward networks (FFNs) of processing units exhibits suprathreshold stochastic resonance (SSR), in which signal transmission is optimal at a finite level of independent noise in the individual units. In this study, a recurrent spiking network is simulated to demonstrate that SSR can also be caused by network noise in place of intrinsic noise. The level of autonomously generated fluctuations in the network can be controlled by the strength of synapses, and hence the coding fraction (our measure of information transmission) exhibits a maximum as a function of the synaptic coupling strength. The presence of a coding peak at an optimal coupling strength is robust over a wide range of individual, network, and signal parameters, although the optimal strength and peak magnitude depend on the parameter being varied. We also perform control simulations with an FFN, illustrating that the optimized coding fraction is due to the change in noise level and not to other effects entailed by changing the coupling strength. These results also indicate that non-white (temporally correlated) network noise generally provides an extra boost to encoding performance compared with an FFN driven by intrinsic white noise fluctuations.
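Editor's note: the SSR effect itself (the paper's feedforward control case, not its recurrent network) can be sketched in a few lines: a population of binary threshold units encodes a common Gaussian signal best at a nonzero level of independent unit noise. Population size, threshold, and noise levels are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, theta = 20000, 64, 0.0
signal = rng.normal(0, 1, T)            # suprathreshold Gaussian signal

def coding_quality(sigma):
    """Correlation between the signal and the summed binary population output."""
    noise = rng.normal(0, sigma, (T, N))         # independent noise per unit
    out = (signal[:, None] + noise > theta).sum(axis=1)
    return np.corrcoef(signal, out)[0, 1]

for sigma in [0.01, 0.3, 1.0, 3.0, 10.0]:
    print(f"unit noise sd {sigma:5.2f} -> signal-output correlation "
          f"{coding_quality(sigma):.3f}")
```

With vanishing noise all units saturate into the same one-bit code; with too much noise the output is swamped; the correlation therefore peaks at an intermediate noise level.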
Collapse
Affiliation(s)
- Gregory Knoll
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, Berlin, 10115, Germany.,Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489, Berlin, Germany.
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, Haus 2, Berlin, 10115, Germany.,Physics Department of Humboldt University Berlin, Newtonstr. 15, 12489, Berlin, Germany
| |
Collapse
|
24
|
Barz CS, Garderes PM, Ganea DA, Reischauer S, Feldmeyer D, Haiss F. Functional and Structural Properties of Highly Responsive Somatosensory Neurons in Mouse Barrel Cortex. Cereb Cortex 2021; 31:4533-4553. [PMID: 33963394 PMCID: PMC8408454 DOI: 10.1093/cercor/bhab104] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 03/12/2021] [Accepted: 03/24/2021] [Indexed: 11/14/2022] Open
Abstract
Sparse population activity is a well-known feature of supragranular sensory neurons in neocortex. The mechanisms underlying sparseness are not well understood because a direct link between the neurons activated in vivo and their cellular properties investigated in vitro has been missing. We used two-photon calcium imaging to identify a subset of neurons in layer 2/3 (L2/3) of mouse primary somatosensory cortex that are highly active following principal whisker vibrotactile stimulation. These high responders (HRs) were then tagged using photoconvertible green fluorescent protein for subsequent targeting in the brain slice using intracellular patch-clamp recordings and biocytin staining. This approach allowed us to investigate the structural and functional properties of HRs that distinguish them from less active control cells. Compared with less responsive L2/3 neurons, HRs displayed increased levels of stimulus-evoked and spontaneous activity, elevated noise and spontaneous pairwise correlations, and stronger coupling to the population response. Intrinsic excitability was reduced in HRs, while we found no evidence for differences in other electrophysiological and morphological parameters. Thus, the choice of which neurons participate in stimulus encoding may be determined largely by network connectivity rather than by cellular structure and function.
Collapse
Affiliation(s)
- C S Barz
- Institute of Neuroscience and Medicine, INM-10, Research Centre Jülich, 52425 Jülich, Germany.,Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Jülich-Aachen Research Alliance - Translational Brain Medicine, 52074 Aachen, Germany.,IZKF Aachen, Medical School, RWTH Aachen University, 52074 Aachen, Germany
| | - P M Garderes
- IZKF Aachen, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Neuropathology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Ophthalmology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Unit of Neural Circuits Dynamics and Decision Making, Institut Pasteur, 75015 Paris, France
| | - D A Ganea
- IZKF Aachen, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Neuropathology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Ophthalmology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Biomedical Department, University of Basel, 4056 Basel, Switzerland
| | - S Reischauer
- Medical Clinic I, (Cardiology/Angiology) and Campus Kerckhoff, Justus-Liebig-University Giessen, 35390 Giessen Germany.,Department of Developmental Genetics, Max Planck Institute for Heart and Lung Research, 61231 Bad Nauheim, Germany.,Cardio-Pulmonary Institute (CPI), 35392 Giessen, Germany
| | - D Feldmeyer
- Institute of Neuroscience and Medicine, INM-10, Research Centre Jülich, 52425 Jülich, Germany.,Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Jülich-Aachen Research Alliance - Translational Brain Medicine, 52074 Aachen, Germany
| | - F Haiss
- IZKF Aachen, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Neuropathology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Department of Ophthalmology, Medical School, RWTH Aachen University, 52074 Aachen, Germany.,Unit of Neural Circuits Dynamics and Decision Making, Institut Pasteur, 75015 Paris, France
| |
Collapse
|
25
|
|
26
|
Changeux JP, Goulas A, Hilgetag CC. A Connectomic Hypothesis for the Hominization of the Brain. Cereb Cortex 2021; 31:2425-2449. [PMID: 33367521 PMCID: PMC8023825 DOI: 10.1093/cercor/bhaa365] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 10/30/2020] [Accepted: 11/02/2020] [Indexed: 02/06/2023] Open
Abstract
Cognitive abilities of the human brain, including language, have expanded dramatically in the course of our recent evolution from nonhuman primates, despite only minor apparent changes at the gene level. The hypothesis we propose for this paradox relies upon fundamental features of human brain connectivity, which contribute to a characteristic anatomical, functional, and computational neural phenotype, offering a parsimonious framework for connectomic changes taking place upon the human-specific evolution of the genome. Many human connectomic features might be accounted for by substantially increased brain size within the global neural architecture of the primate brain, resulting in a larger number of neurons and areas and the sparsification, increased modularity, and laminar differentiation of cortical connections. The combination of these features with the developmental expansion of upper cortical layers, prolonged postnatal brain development, and multiplied nongenetic interactions with the physical, social, and cultural environment gives rise to categorically human-specific cognitive abilities including the recursivity of language. Thus, a small set of genetic regulatory events affecting quantitative gene expression may plausibly account for the origins of human brain connectivity and cognition.
Collapse
Affiliation(s)
- Jean-Pierre Changeux
- CNRS UMR 3571, Institut Pasteur, 75724 Paris, France
- Communications Cellulaires, Collège de France, 75005 Paris, France
| | - Alexandros Goulas
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, 20246 Hamburg, Germany
| | - Claus C Hilgetag
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, 20246 Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA 02115, USA
| |
Collapse
|
27
|
Noise in Neurons and Synapses Enables Reliable Associative Memory Storage in Local Cortical Circuits. eNeuro 2021; 8:ENEURO.0302-20.2020. [PMID: 33408153 PMCID: PMC8114874 DOI: 10.1523/eneuro.0302-20.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 12/15/2020] [Accepted: 12/16/2020] [Indexed: 12/02/2022] Open
Abstract
Neural networks in the brain can function reliably despite various sources of errors and noise present at every step of signal transmission. These sources include errors in the presynaptic inputs to the neurons, noise in synaptic transmission, and fluctuations in the neurons’ postsynaptic potentials (PSPs). Collectively they lead to errors in the neurons’ outputs which are, in turn, injected into the network. Does unreliable network activity hinder fundamental functions of the brain, such as learning and memory retrieval? To explore this question, this article examines the effects of errors and noise on the properties of model networks of inhibitory and excitatory neurons involved in associative sequence learning. The associative learning problem is solved analytically and numerically, and it is also shown how memory sequences can be loaded into the network with a biologically more plausible perceptron-type learning rule. Interestingly, the results reveal that errors and noise during learning increase the probability of memory recall. There is a trade-off between the capacity and reliability of stored memories, and noise during learning is required for optimal retrieval of stored information. What is more, networks loaded with associative memories to capacity display many structural and dynamical features observed in local cortical circuits in mammals. Based on the similarities between the associative and cortical networks, this article predicts that connections originating from more unreliable neurons or neuron classes in the cortex are more likely to be depressed or eliminated during learning, while connections onto noisier neurons or neuron classes have lower probabilities and higher weights.
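Editor's note: the claim that training noise improves recall can be sketched with a single perceptron-type neuron (a drastic simplification of the paper's network). Training on noisy presentations keeps updates going past the zero-margin solution, yielding weights that tolerate noise at retrieval. All sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 30
X = rng.choice([-1.0, 1.0], (P, N))   # input patterns (network states)
y = rng.choice([-1.0, 1.0], P)        # target output bits (next states)

def train(noise_sd, epochs=200, eta=0.05):
    w = np.zeros(N)
    for _ in range(epochs):
        for mu in range(P):
            x = X[mu] + rng.normal(0, noise_sd, N)  # noisy presentation
            if y[mu] * (w @ x) <= 0:                # perceptron-type rule
                w += eta * y[mu] * x
    return w

def recall_accuracy(w, test_sd, trials=2000):
    mus = rng.integers(0, P, trials)
    Xt = X[mus] + rng.normal(0, test_sd, (trials, N))
    return np.mean(np.sign(Xt @ w) == y[mus])

for train_sd in [0.0, 0.5, 1.0]:
    w = train(train_sd)
    print(f"training noise {train_sd:3.1f} -> recall accuracy at test noise 1.0: "
          f"{recall_accuracy(w, 1.0):.3f}")
```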
Collapse
|
28
|
Zhou J, Huang H. Weakly correlated synapses promote dimension reduction in deep neural networks. Phys Rev E 2021; 103:012315. [PMID: 33601541 DOI: 10.1103/physreve.103.012315] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 01/08/2021] [Indexed: 11/07/2022]
Abstract
By controlling synaptic and neural correlations, deep learning has achieved empirical successes in improving classification performances. How synaptic correlations affect neural correlations to produce disentangled hidden representations remains elusive. Here we propose a simplified model of dimension reduction that takes pairwise correlations among synapses into account, to reveal the mechanism by which synaptic correlations affect dimension reduction. Our theory determines the scaling of synaptic correlations requiring only mathematical self-consistency, for both binary and continuous synapses. The theory also predicts that weakly correlated synapses encourage dimension reduction compared with their orthogonal counterparts. In addition, these synapses attenuate the decorrelation process along the network depth. These two computational roles are explained by a proposed mean-field equation. The theoretical predictions are in excellent agreement with numerical simulations, and the key features are also captured by deep learning with Hebbian rules.
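Editor's note: the direction of the effect can be illustrated with a linear toy layer. Adding a weak shared (low-rank) component to otherwise independent Gaussian weights — one simple way to make synaptic entries pairwise correlated — shrinks the participation ratio of the layer's activation covariance. This is a hedged caricature of the paper's mean-field analysis, with arbitrary sizes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400  # layer width (square layer for simplicity)

def participation_ratio(lam):
    """Effective dimensionality of a covariance spectrum: (sum l)^2 / sum l^2."""
    return lam.sum() ** 2 / (lam ** 2).sum()

for c in [0.0, 0.01, 0.1]:
    G = rng.normal(0, 1 / np.sqrt(n), (n, n))   # independent synapses
    u = rng.normal(0, 1, n)
    v = rng.normal(0, 1, n)
    S = np.outer(u, v) / n                      # shared component correlating entries
    W = np.sqrt(1 - c) * G + np.sqrt(c * n) * S
    lam = np.linalg.eigvalsh(W @ W.T)           # covariance of h = W x for white x
    print(f"correlation strength {c:4.2f} -> participation ratio "
          f"{participation_ratio(lam):6.1f}")
```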
Collapse
Affiliation(s)
- Jianwen Zhou
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
| | - Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China
| |
Collapse
|
29
|
Probing the structure-function relationship with neural networks constructed by solving a system of linear equations. Sci Rep 2021; 11:3808. [PMID: 33589672 PMCID: PMC7884791 DOI: 10.1038/s41598-021-82964-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 01/27/2021] [Indexed: 11/17/2022] Open
Abstract
Neural network models are an invaluable tool to understand brain function since they allow us to connect the cellular and circuit levels with behaviour. Neural networks usually comprise a huge number of parameters, which must be chosen carefully such that networks reproduce anatomical, behavioural, and neurophysiological data. These parameters are usually fitted with off-the-shelf optimization algorithms that iteratively change network parameters and simulate the network to evaluate its performance and improve fitting. Here we propose to invert the fitting process by proceeding from the network dynamics towards network parameters. Firing state transitions are chosen according to the transition graph associated with the solution of a task. Then, a system of linear equations is constructed from the network firing states and membrane potentials, in a way that guarantees the consistency of the system. This allows us to uncouple the dynamical features of the model, like its neurons firing rate and correlation, from the structural features, and the task-solving algorithm implemented by the network. We employed our method to probe the structure–function relationship in a sequence memory task. The networks obtained showed connectivity and firing statistics that recapitulated experimental observations. We argue that the proposed method is a complementary and needed alternative to the way neural networks are constructed to model brain function.
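Editor's note: the core construction — writing down the network's desired state transitions and solving a linear system for the weights — can be sketched as follows. A random cyclic sequence of binary states stands in for the task's transition graph; the margin and sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 80, 12
S = rng.choice([-1.0, 1.0], (T, N))      # desired sequence of firing states

# For each neuron i require W_i . s_t = margin * s_{t+1,i}: a linear system in
# the incoming weights, consistent because T < N.
margin = 1.0
targets = margin * np.roll(S, -1, axis=0)            # next states, scaled
W = np.linalg.lstsq(S, targets, rcond=None)[0].T     # solve S @ W.T = targets

# Check that the sign dynamics replay the stored sequence
s = S[0].copy()
ok = True
for t in range(T):
    s = np.sign(W @ s)
    ok &= np.array_equal(s, S[(t + 1) % T])
print("sequence replayed exactly:", ok)
```

Because the weights are obtained directly from the equations rather than by iterative optimization, the dynamical features of the solution are decoupled from the fitting procedure, which is the inversion the abstract describes.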
Collapse
|
30
|
Ingrosso A. Optimal learning with excitatory and inhibitory synapses. PLoS Comput Biol 2020; 16:e1008536. [PMID: 33370266 PMCID: PMC7793294 DOI: 10.1371/journal.pcbi.1008536] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 01/08/2021] [Accepted: 11/13/2020] [Indexed: 11/22/2022] Open
Abstract
Characterizing the relation between weight structure and input/output statistics is fundamental for understanding the computational capabilities of neural circuits. In this work, I study the problem of storing associations between analog signals in the presence of correlations, using methods from statistical mechanics. I characterize the typical learning performance in terms of the power spectrum of random input and output processes. I show that optimal synaptic weight configurations reach a capacity of 0.5 for any fraction of excitatory to inhibitory weights and have a peculiar synaptic distribution with a finite fraction of silent synapses. I further provide a link between typical learning performance and principal components analysis in single cases. These results may shed light on the synaptic profile of brain circuits, such as cerebellar structures, that are thought to engage in processing time-dependent signals and performing on-line prediction.
Collapse
Affiliation(s)
- Alessandro Ingrosso
- Zuckerman Mind, Brain, Behavior Institute, Columbia University, New York, New York, United States of America
| |
Collapse
|
31
|
Ju H, Kim JZ, Beggs JM, Bassett DS. Network structure of cascading neural systems predicts stimulus propagation and recovery. J Neural Eng 2020; 17:056045. [PMID: 33036007 PMCID: PMC11191848 DOI: 10.1088/1741-2552/abbff1] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE: Many neural systems display spontaneous, spatiotemporal patterns of neural activity that are crucial for information processing. While these cascading patterns presumably arise from the underlying network of synaptic connections between neurons, the precise contribution of the network's local and global connectivity to these patterns and information processing remains largely unknown.
APPROACH: Here, we demonstrate how network structure supports information processing through network dynamics in empirical and simulated spiking neurons, using mathematical tools from linear systems theory, network control theory, and information theory.
MAIN RESULTS: In particular, we show that activity, and the information that it contains, travels through cycles in real and simulated networks.
SIGNIFICANCE: Broadly, our results demonstrate how cascading neural networks could contribute to cognitive faculties that require lasting activation of neuronal patterns, such as working memory or attention.
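Editor's note: a minimal linear-systems illustration of the "activity travels through cycles" finding: with identical edge weights, a pulse dies out in a feedforward chain (a nilpotent system) but persists when a single edge closes the chain into a cycle. Weights and sizes are arbitrary.

```python
import numpy as np

N, w, T = 10, 0.98, 60
chain = np.diag(np.full(N - 1, w), k=-1)   # feedforward chain 0 -> 1 -> ... -> 9
ring = chain.copy()
ring[0, -1] = w                            # one extra edge closes the cycle

for name, W in [("chain", chain), ("ring ", ring)]:
    x = np.zeros(N)
    x[0] = 1.0                             # stimulus pulse at node 0
    for _ in range(T):
        x = W @ x                          # linear cascade dynamics
    print(f"{name}: activity norm after {T} steps = {np.linalg.norm(x):.4f}")
```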
Collapse
Affiliation(s)
- Harang Ju
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - Jason Z Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
| | - John M Beggs
- Department of Physics, Indiana University, Bloomington, IN 47405, United States of America
| | - Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Physics & Astronomy, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA 19104, United States of America
- Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501, United States of America
| |
Collapse
|
32
|
Dalgleish HWP, Russell LE, Packer AM, Roth A, Gauld OM, Greenstreet F, Thompson EJ, Häusser M. How many neurons are sufficient for perception of cortical activity? eLife 2020; 9:e58889. [PMID: 33103656 PMCID: PMC7695456 DOI: 10.7554/elife.58889] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 10/17/2020] [Indexed: 01/12/2023] Open
Abstract
Many theories of brain function propose that activity in sparse subsets of neurons underlies perception and action. To place a lower bound on the amount of neural activity that can be perceived, we used an all-optical approach to drive behaviour with targeted two-photon optogenetic activation of small ensembles of L2/3 pyramidal neurons in mouse barrel cortex while simultaneously recording local network activity with two-photon calcium imaging. By precisely titrating the number of neurons stimulated, we demonstrate that the lower bound for perception of cortical activity is ~14 pyramidal neurons. We find a steep sigmoidal relationship between the number of activated neurons and behaviour, saturating at only ~37 neurons, and show this relationship can shift with learning. Furthermore, activation of ensembles is balanced by inhibition of neighbouring neurons. This surprising perceptual sensitivity in the face of potent network suppression supports the sparse coding hypothesis, and suggests that cortical perception balances a trade-off between minimizing the impact of noise while efficiently detecting relevant signals.
Collapse
Affiliation(s)
- Henry WP Dalgleish
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Lloyd E Russell
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Adam M Packer
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Arnd Roth
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Oliver M Gauld
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Francesca Greenstreet
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Emmett J Thompson
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| | - Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London, United Kingdom
| |
Collapse
|
33
|
Wright EAP, Goltsev AV. Statistical analysis of unidirectional and reciprocal chemical connections in the C. elegans connectome. Eur J Neurosci 2020; 52:4525-4535. [PMID: 33022789 DOI: 10.1111/ejn.14988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Revised: 09/14/2020] [Accepted: 09/15/2020] [Indexed: 11/29/2022]
Abstract
We analyze unidirectional and reciprocally connected pairs of neurons in the chemical connectomes of the male and hermaphrodite Caenorhabditis elegans, using recently published data. Our analysis reveals that reciprocal connections provide communication between most neurons with chemical synapses, and comprise on average more synapses than unidirectional connections do, and more than the connectome-wide average. We further reveal that the C. elegans connectome is wired so that afferent connections onto neurons with large numbers of presynaptic neighbors (in-degree) comprise an above-average number of synapses (synaptic multiplicity). Notably, the larger the in-degree of a neuron, the larger the synaptic multiplicity of its afferent connections. Finally, we show that the male forms half as many reciprocal connections between sex-shared neurons as the hermaphrodite, but a large number of reciprocal connections with male-specific neurons. These observations provide evidence for Hebbian structural plasticity in the C. elegans connectome.
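Editor's note: the pairwise statistics described above are easy to compute from any synaptic-multiplicity matrix. The sketch below uses a random Poisson matrix in place of the published connectome data, so the printed correlation will be near zero here, unlike in the real C. elegans wiring.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100
syn = rng.poisson(0.3, (N, N))       # synapse counts (multiplicity) between pairs
np.fill_diagonal(syn, 0)
A = syn > 0                           # binary chemical connectivity

recip = A & A.T                       # reciprocally connected ordered pairs
uni = A & ~A.T                        # unidirectional connections
print("unidirectional connections:", uni.sum())
print("reciprocal pairs          :", recip.sum() // 2)
print("mean multiplicity, unidirectional:", round(syn[uni].mean(), 2))
print("mean multiplicity, reciprocal    :", round(syn[recip].mean(), 2))

# Synaptic multiplicity of afferent connections versus the target's in-degree
indeg = A.sum(axis=0)
keep = np.flatnonzero(indeg > 0)
aff_mult = np.array([syn[A[:, j], j].mean() for j in keep])
r = np.corrcoef(indeg[keep], aff_mult)[0, 1]
print("corr(in-degree, mean afferent multiplicity):", round(r, 3))
```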
Collapse
Affiliation(s)
- Edgar A P Wright
- Department of Physics & I3N, University of Aveiro, Aveiro, Portugal
| | - Alexander V Goltsev
- Department of Physics & I3N, University of Aveiro, Aveiro, Portugal.,A.F. Ioffe Physico-Technical Institute, St. Petersburg, Russia
| |
Collapse
|
34
|
Auth JM, Nachstedt T, Tetzlaff C. The Interplay of Synaptic Plasticity and Scaling Enables Self-Organized Formation and Allocation of Multiple Memory Representations. Front Neural Circuits 2020; 14:541728. [PMID: 33117130 PMCID: PMC7575689 DOI: 10.3389/fncir.2020.541728] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 08/19/2020] [Indexed: 12/23/2022] Open
Abstract
It is commonly assumed that memories about experienced stimuli are represented by groups of highly interconnected neurons called cell assemblies. This requires allocating and storing information in the neural circuitry, which happens through synaptic weight adaptations at different types of synapses. In general, memory allocation is associated with synaptic changes at feed-forward synapses, while memory storage is linked with adaptation of recurrent connections. It remains, however, largely unknown how memory allocation and storage can be achieved, and how the adaptation of the different synapses involved can be coordinated, to allow for a faithful representation of multiple memories without disruptive interference between them. In this theoretical study, using network simulations and phase space analyses, we show that the interplay between long-term synaptic plasticity and homeostatic synaptic scaling simultaneously organizes the adaptations of feed-forward and recurrent synapses such that a new stimulus forms a new memory and different stimuli are assigned to distinct cell assemblies. The resulting dynamics can reproduce experimental in vivo data, focusing on how diverse factors, such as neuronal excitability and network connectivity, influence memory formation. Thus, the model presented here suggests that a few fundamental synaptic mechanisms may suffice to implement memory allocation and storage in neural circuitry.
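Editor's note: the interplay of the two mechanisms can be caricatured on a single postsynaptic unit. A Hebbian term grows weights from co-active inputs while multiplicative synaptic scaling holds the output near a target rate, so stimulated inputs are allocated weight at the expense of background inputs. Rates, constants, and the stimulus are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(6)
n_in = 50
w = np.full(n_in, 0.5)
eta, gamma, target = 0.01, 0.05, 1.0    # plasticity rate, scaling rate, target rate

stim = np.zeros(n_in)
stim[:10] = 1.0                          # stimulus drives the first 10 inputs

for _ in range(400):
    x = 0.1 + 0.9 * stim * (rng.random(n_in) < 0.8)  # noisy presynaptic rates
    y = max(0.0, w @ x)                  # postsynaptic rate
    w += eta * y * x                     # Hebbian long-term plasticity
    w += gamma * (target - y) * w        # homeostatic synaptic scaling
    w = np.clip(w, 0.0, None)

print("mean weight, stimulated inputs:", round(w[:10].mean(), 3))
print("mean weight, background inputs:", round(w[10:].mean(), 3))
```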
Collapse
Affiliation(s)
- Johannes Maria Auth
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
| | - Timo Nachstedt
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
| | - Christian Tetzlaff
- Department of Computational Neuroscience, Third Institute of Physics, Georg-August-Universität, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
| |
Collapse
|
35
|
Ocker GK, Buice MA. Flexible neural connectivity under constraints on total connection strength. PLoS Comput Biol 2020; 16:e1008080. [PMID: 32745134 PMCID: PMC7425997 DOI: 10.1371/journal.pcbi.1008080] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 08/13/2020] [Accepted: 06/19/2020] [Indexed: 12/23/2022] Open
Abstract
Neural computation is determined by neurons’ dynamics and circuit connectivity. Uncertain and dynamic environments may require neural hardware to adapt to different computational tasks, each requiring different connectivity configurations. At the same time, connectivity is subject to a variety of constraints, placing limits on the possible computations a given neural circuit can perform. Here we examine the hypothesis that the organization of neural circuitry favors computational flexibility: that it makes many computational solutions available, given physiological constraints. From this hypothesis, we develop models of connectivity degree distributions based on constraints on a neuron’s total synaptic weight. To test these models, we examine reconstructions of the mushroom bodies from the first instar larva and adult Drosophila melanogaster. We perform a Bayesian model comparison for two constraint models and a random wiring null model. Overall, we find that flexibility under a homeostatically fixed total synaptic weight describes Kenyon cell connectivity better than other models, suggesting a principle shaping the apparently random structure of Kenyon cell wiring. Furthermore, we find evidence that larval Kenyon cells are more flexible earlier in development, suggesting a mechanism whereby neural circuits begin as flexible systems that develop into specialized computational circuits.

High-throughput electron microscopic anatomical experiments have begun to yield detailed maps of neural circuit connectivity. Uncovering the principles that govern these circuit structures is a major challenge for systems neuroscience. Healthy neural circuits must be able to perform computational tasks while satisfying physiological constraints. Those constraints can restrict a neuron’s possible connectivity, and thus potentially restrict its computation. Here we examine simple models of constraints on total synaptic weights, and calculate the number of circuit configurations they allow: a simple measure of their computational flexibility. We propose probabilistic models of connectivity that weight the number of synaptic partners according to computational flexibility under a constraint and test them using recent wiring diagrams from a learning center, the mushroom body, in the fly brain. We compare constraints that fix or bound a neuron’s total connection strength to a simple random wiring null model. Of these models, the fixed total connection strength matched the overall connectivity best in mushroom bodies from both larval and adult flies. We also provide evidence suggesting that neural circuits are more flexible in early stages of development and lose this flexibility as they grow towards specialized function.
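Editor's note: the "flexibility" logic admits a tiny worked example. If a neuron's total synaptic weight is fixed at S discrete quanta spread over K partners (each getting at least one), the number of admissible configurations is the composition count C(S−1, K−1), and its logarithm — a crude flexibility measure — peaks at an intermediate number of partners. The discretization and numbers are assumptions for illustration, not the paper's Bayesian analysis.

```python
from math import lgamma

def log_n_configs(S, K):
    """log of C(S-1, K-1): ways to split S weight quanta among K partners,
    each receiving at least one quantum (a fixed-total-weight constraint)."""
    return lgamma(S) - lgamma(K) - lgamma(S - K + 1)

S = 100  # total synaptic weight, in quanta (illustrative)
for K in [2, 10, 25, 50, 75, 99]:
    print(f"K = {K:3d} partners: log #configurations = {log_n_configs(S, K):7.2f}")
```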
Collapse
Affiliation(s)
- Gabriel Koch Ocker
- Allen Institute for Brain Science, Seattle, Washington, United States of America
| | - Michael A. Buice
- Allen Institute for Brain Science, Seattle, Washington, United States of America
- Department of Applied Mathematics, University of Washington, Seattle, Washington, United States of America
| |
Collapse
|
36
|
Lynn MB, Lee KFH, Soares C, Naud R, Béïque JC. A Synthetic Likelihood Solution to the Silent Synapse Estimation Problem. Cell Rep 2020; 32:107916. [PMID: 32697998 DOI: 10.1016/j.celrep.2020.107916] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 05/04/2020] [Accepted: 06/25/2020] [Indexed: 11/19/2022] Open
Abstract
Functional features of synaptic populations are typically inferred from random electrophysiological sampling of small subsets of synapses. Are these samples unbiased? Here, we develop a biophysically constrained statistical framework to address this question and apply it to assess the performance of a widely used method based on a failure-rate analysis to quantify the occurrence of silent (AMPAR-lacking) synapses. We simulate this method in silico and find that it is characterized by strong and systematic biases, poor reliability, and weak statistical power. Key conclusions are validated by whole-cell recordings from hippocampal neurons. To address these shortcomings, we develop a simulator of the experimental protocol and use it to compute a synthetic likelihood. By maximizing the likelihood, we infer silent synapse fraction with no bias, low variance, and superior statistical power over alternatives. Together, this generalizable approach highlights how a simulator of experimental methodologies can substantially improve the estimation of physiological properties.
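Editor's note: the estimation problem can be mocked up in a few lines: simulate failure counts in the two-holding-potential protocol (silent, AMPAR-lacking synapses respond only at depolarized potentials), then recover the silent fraction by maximizing the likelihood over a grid using the same generative model. In this toy version the likelihood is a simple binomial, so it is exact; the paper's synthetic-likelihood machinery matters when it is not. Synapse number, release probability, and trial counts are invented.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)
n_syn, p_rel, n_trials = 4, 0.35, 100

def failure_probs(n_silent):
    """P(failure) at -70 mV (only non-silent respond) and +40 mV (all respond)."""
    return (1 - p_rel) ** (n_syn - n_silent), (1 - p_rel) ** n_syn

# "Observed" failure counts for a ground truth of 1 silent synapse out of 4
f_hyp, f_dep = failure_probs(n_silent=1)
obs_hyp = rng.binomial(n_trials, f_hyp)
obs_dep = rng.binomial(n_trials, f_dep)

# Maximum-likelihood silent count over a grid, using the generative model
lls = []
for n_silent in range(n_syn):
    f_hyp, f_dep = failure_probs(n_silent)
    lls.append(binom.logpmf(obs_hyp, n_trials, f_hyp)
               + binom.logpmf(obs_dep, n_trials, f_dep))
best = int(np.argmax(lls))
print(f"failures: {obs_hyp}/{n_trials} at -70 mV, {obs_dep}/{n_trials} at +40 mV")
print(f"maximum-likelihood estimate: {best} silent of {n_syn} "
      f"(fraction {best / n_syn:.2f})")
```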
Collapse
Affiliation(s)
- Michael B Lynn
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
| | - Kevin F H Lee
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
| | - Cary Soares
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada
| | - Richard Naud
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada; Centre for Neural Dynamics, University of Ottawa, Ottawa, ON K1H 8M5, Canada; University of Ottawa's Brain and Mind Research Institute, Ottawa, ON K1H 8M5, Canada; Department of Physics, STEM Complex, Room 336, 150 Louis Pasteur Private, University of Ottawa, Ottawa, ON K1N 6N5, Canada.
| | - Jean-Claude Béïque
- Department of Cellular and Molecular Medicine, University of Ottawa, Ottawa, ON K1H 8M5, Canada; Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON K1H 8M5, Canada; Centre for Neural Dynamics, University of Ottawa, Ottawa, ON K1H 8M5, Canada; University of Ottawa's Brain and Mind Research Institute, Ottawa, ON K1H 8M5, Canada.
| |
Collapse
|
37
|
On the boundary conditions of avoidance memory reconsolidation: An attractor network perspective. Neural Netw 2020; 127:96-109. [DOI: 10.1016/j.neunet.2020.04.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Revised: 04/09/2020] [Accepted: 04/14/2020] [Indexed: 11/21/2022]
|
38
|
Battista A, Monasson R. Spectrum of multispace Euclidean random matrices. Phys Rev E 2020; 101:052133. [PMID: 32575268 DOI: 10.1103/physreve.101.052133] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Accepted: 05/06/2020] [Indexed: 11/07/2022]
Abstract
We consider the additive superimposition of an extensive number of independent Euclidean Random Matrices in the high-density regime. The resolvent is computed with techniques from free probability theory, as well as with the replica method of statistical physics of disordered systems. Results for the spectrum and eigenmodes are shown for a few applications relevant to computational neuroscience, and are corroborated by numerical simulations.
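Editor's note: a direct numerical check of the object studied here — superimpose several independent Gaussian-kernel Euclidean random matrices (points in independent 2D boxes, as in multi-map models of place cells) and inspect the spectrum. Kernel width, map number, and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
N, n_maps, L, width = 500, 5, 1.0, 0.1

def euclidean_random_matrix():
    """Gaussian kernel of pairwise distances for N points in a periodic 2D box."""
    pts = rng.uniform(0, L, (N, 2))
    d = np.abs(pts[:, None, :] - pts[None, :, :])
    d = np.minimum(d, L - d)                      # periodic boundary conditions
    return np.exp(-(d ** 2).sum(-1) / (2 * width ** 2))

# Additive superimposition of independent maps, normalized by their number
M = sum(euclidean_random_matrix() for _ in range(n_maps)) / n_maps
lam = np.linalg.eigvalsh(M)
print("top five eigenvalues:", np.round(lam[-5:][::-1], 2))
print("smallest eigenvalue :", round(lam[0], 4))
```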
Collapse
Affiliation(s)
- Aldo Battista
- Laboratory of Physics of the Ecole Normale Supérieure, CNRS UMR 8023 & PSL Research, Sorbonne Université, 24 rue Lhomond, 75005 Paris, France
| | - Rémi Monasson
- Laboratory of Physics of the Ecole Normale Supérieure, CNRS UMR 8023 & PSL Research, Sorbonne Université, 24 rue Lhomond, 75005 Paris, France
| |
Collapse
|
39
|
Li HL, van Rossum MCW. Energy efficient synaptic plasticity. eLife 2020; 9:e50804. [DOI: 10.7554/elife.50804]
Abstract
Many aspects of the brain’s design can be understood as the result of evolutionary drive toward metabolic efficiency. In addition to the energetic costs of neural computation and transmission, experimental evidence indicates that synaptic plasticity is metabolically demanding as well. As synaptic plasticity is crucial for learning, we examine how these metabolic costs enter in learning. We find that when synaptic plasticity rules are naively implemented, training neural networks requires extremely large amounts of energy when storing many patterns. We propose that this is avoided by precisely balancing labile forms of synaptic plasticity with more stable forms. This algorithm, termed synaptic caching, boosts energy efficiency many-fold and can be used with any plasticity rule, including back-propagation. Our results yield a novel interpretation of the multiple forms of neural synaptic plasticity observed experimentally, including synaptic tagging and capture phenomena. Furthermore, our results are relevant for energy-efficient neuromorphic designs.

The brain expends a lot of energy. While the organ accounts for only about 2% of a person’s bodyweight, it is responsible for about 20% of our energy use at rest. Neurons use some of this energy to communicate with each other and to process information, but much of the energy is likely used to support learning. A study in fruit flies showed that insects that learned to associate two stimuli and then had their food supply cut off, died 20% earlier than untrained flies. This is thought to be because learning used up the insects’ energy reserves. If learning a single association requires so much energy, how does the brain manage to store vast amounts of data? Li and van Rossum offer an explanation based on a computer model of neural networks. The advantage of using such a model is that it is possible to control and measure conditions more precisely than in the living brain. Analysing the model confirmed that learning many new associations requires large amounts of energy. This is particularly true if the memories must be stored with a high degree of accuracy, and if the neural network contains many stored memories already. The reason that learning consumes so much energy is that forming long-term memories requires neurons to produce new proteins. Using the computer model, Li and van Rossum show that neural networks can overcome this limitation by storing memories initially in a transient form that does not require protein synthesis. Doing so reduces energy requirements by as much as 10-fold. Studies in living brains have shown that transient memories of this type do in fact exist. The current results hence offer a hypothesis as to how the brain can learn in a more energy efficient way. Energy consumption is thought to have placed constraints on brain evolution. It is also often a bottleneck in computers. By revealing how the brain encodes memories energy efficiently, the current findings could thus also inspire new engineering solutions.
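Editor's note: the energy argument can be made concrete with a perceptron in which only consolidation into the persistent weight costs energy (proportional to the consolidated amount), while a transient buffer is treated as free — a stripped-down version of synaptic caching without decay terms. Thresholds and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
N, P = 100, 50
X = rng.choice([-1.0, 1.0], (P, N))
y = rng.choice([-1.0, 1.0], P)

def train(threshold, epochs=100, eta=0.1):
    w_slow = np.zeros(N)   # persistent weight: changing it costs energy
    w_fast = np.zeros(N)   # transient cache: assumed metabolically cheap
    energy = 0.0
    for _ in range(epochs):
        for mu in range(P):
            if y[mu] * ((w_slow + w_fast) @ X[mu]) <= 0:
                w_fast += eta * y[mu] * X[mu]      # learn into the cache
            big = np.abs(w_fast) > threshold       # consolidate large entries only
            energy += np.abs(w_fast[big]).sum()    # pay for the consolidated amount
            w_slow[big] += w_fast[big]
            w_fast[big] = 0.0
    energy += np.abs(w_fast).sum()                 # final consolidation
    w_slow += w_fast
    errors = int(np.sum(np.sign(X @ w_slow) != y))
    return energy, errors

for th in [0.0, 0.5, 2.0]:
    e, err = train(th)
    print(f"consolidation threshold {th:3.1f}: energy {e:8.1f}, "
          f"training errors remaining {err}")
```

Raising the threshold lets opposing transient updates cancel before they are consolidated, so the energy bill drops while the learned classification is unchanged.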
Collapse
Affiliation(s)
- Ho Ling Li
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
| | - Mark Cw van Rossum
- School of Psychology, University of Nottingham, Nottingham, United Kingdom.,School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
| |
Collapse
|
40
|
Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLoS Comput Biol 2020; 16:e1007655. [PMID: 32053594 PMCID: PMC7043794 DOI: 10.1371/journal.pcbi.1007655] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/26/2020] [Accepted: 01/14/2020] [Indexed: 01/04/2023] Open
Abstract
Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are distinguished simply on the basis of the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
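Editor's note: the spectral criterion mentioned above is easy to verify numerically. A network can be stable (all eigenvalues of W with real part below 1, given unit leak) yet transiently amplifying whenever the largest eigenvalue of the symmetric part (W + Wᵀ)/2 exceeds 1. The sketch builds a random matrix plus an orthogonal rank-one feedforward motif; the gain values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)
N = 200
J = rng.normal(0, 0.5 / np.sqrt(N), (N, N))        # weak random part
u = rng.normal(0, 1, N); u /= np.linalg.norm(u)
v = rng.normal(0, 1, N); v -= (v @ u) * u; v /= np.linalg.norm(v)
W = J + 6.0 * np.outer(u, v)                       # maps input v onto output u

print("spectral abscissa of W : %.2f (< 1, so stable)"
      % np.real(np.linalg.eigvals(W)).max())
print("max eig of (W + W.T)/2 : %.2f (> 1, so amplifying)"
      % np.linalg.eigvalsh((W + W.T) / 2).max())

# Linear rate dynamics dx/dt = -x + W x, initialized along the amplified input v
dt, x, peak = 0.01, v.copy(), 1.0
for _ in range(1000):
    x = x + dt * (-x + W @ x)
    peak = max(peak, np.linalg.norm(x))
print("peak transient amplification of ||x||: %.1f-fold" % peak)
```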
Collapse
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
| |
Collapse
|
41
|
Battista A, Monasson R. Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks. PHYSICAL REVIEW LETTERS 2020; 124:048302. [PMID: 32058781 DOI: 10.1103/physrevlett.124.048302] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2019] [Indexed: 06/10/2023]
Abstract
Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^{2} pairwise interactions in an RNN with N neurons to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ε|^{-D}, where ε is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
Collapse
Affiliation(s)
- Aldo Battista
- Laboratory of Physics of the Ecole Normale Supérieure, CNRS UMR 8023 & PSL Research, 24 rue Lhomond, 75005 Paris, France
| | - Rémi Monasson
- Laboratory of Physics of the Ecole Normale Supérieure, CNRS UMR 8023 & PSL Research, 24 rue Lhomond, 75005 Paris, France
| |
Collapse
|
42
|
Constraining computational models using electron microscopy wiring diagrams. Curr Opin Neurobiol 2019; 58:94-100. [PMID: 31470252 DOI: 10.1016/j.conb.2019.07.007] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Accepted: 07/25/2019] [Indexed: 12/18/2022]
Abstract
Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.
Collapse
|
43
|
Gosti G, Folli V, Leonetti M, Ruocco G. Beyond the Maximum Storage Capacity Limit in Hopfield Recurrent Neural Networks. ENTROPY 2019; 21:e21080726. [PMID: 33267440 PMCID: PMC7515255 DOI: 10.3390/e21080726] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 07/23/2019] [Indexed: 11/16/2022]
Abstract
In a neural network, an autapse is a particular kind of synapse that links a neuron onto itself. Autapses are almost always excluded from both artificial and biological neural network models. Moreover, redundant or similar stored states tend to interact destructively. This paper shows how autapses, together with stable-state redundancy, can improve the storage capacity of a recurrent neural network. Recent research shows how, in an N-node Hopfield neural network with autapses, the number of stored patterns (P) is not limited to the well-known bound 0.14N, as it is for networks without autapses. More precisely, it describes how, as the number of stored patterns increases well over the 0.14N threshold, for P much greater than N, the retrieval error asymptotically approaches a value below unity. Consequently, the reduction of retrieval errors allows a number of stored memories that largely exceeds what was previously considered possible. Unfortunately, soon after, new results showed that, in the thermodynamic limit, given a network with autapses in this high-storage regime, the basin of attraction of the stored memories shrinks to a single state. This means that, for each stable state associated with a stored memory, even a single bit error in the initial pattern would lead the system to a stationary state associated with a different memory state. This thus limits the potential use of this kind of Hopfield network as an associative memory. This paper presents a strategy to overcome this limitation by improving the error-correcting characteristics of the Hopfield neural network. The proposed strategy allows us to form what we call an absorbing neighborhood of states surrounding each stored memory. An absorbing neighborhood is a set defined by a Hamming distance surrounding a network state, absorbing because, in the long-time limit, states inside it are absorbed by stable states in the set. We show that this strategy allows the network to store an exponential number of memory patterns, each surrounded by an absorbing neighborhood of exponentially growing size.
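Editor's note: the basin-shrinking effect of autapses described above can be demonstrated directly on a standard Hopfield network. Adding a self-coupling term β on the diagonal makes every unit tend to hold its current state, so a corrupted cue stops being corrected as β grows. Network size, load, and β values are arbitrary; this illustrates the trade-off, not the authors' absorbing-neighborhood construction.

```python
import numpy as np

rng = np.random.default_rng(11)
N, P = 200, 20
xi = rng.choice([-1.0, 1.0], (P, N))        # stored patterns
W = xi.T @ xi / N                            # Hebbian couplings
np.fill_diagonal(W, 0.0)                     # baseline: no autapses

def retrieve(W, s, steps=30):
    for _ in range(steps):
        s = np.sign(W @ s)                   # synchronous sign dynamics
    return s

cue = xi[0].copy()
cue[rng.choice(N, N // 5, replace=False)] *= -1.0   # corrupt 20% of the bits

for beta in [0.0, 0.3, 1.5]:                 # autapse strength on the diagonal
    s = retrieve(W + beta * np.eye(N), cue.copy())
    print(f"autapse strength {beta:3.1f}: overlap with stored memory = "
          f"{s @ xi[0] / N:.2f}")
```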
Collapse
Affiliation(s)
- Giorgio Gosti
- Center for Life Nanoscience, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
| | - Viola Folli
- Center for Life Nanoscience, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
| | - Marco Leonetti
- Center for Life Nanoscience, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- CNR NANOTEC-Institute of Nanotechnology c/o Campus Ecotekne, University of Salento, Via Monteroni, 73100 Lecce, Italy
| | - Giancarlo Ruocco
- Center for Life Nanoscience, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
| |
Collapse
|
44
|
Robust Associative Learning Is Sufficient to Explain the Structural and Dynamical Properties of Local Cortical Circuits. J Neurosci 2019; 39:6888-6904. [PMID: 31270161 DOI: 10.1523/jneurosci.3218-18.2019] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2019] [Revised: 05/31/2019] [Accepted: 06/24/2019] [Indexed: 11/21/2022] Open
Abstract
The ability of neural networks to associate successive states of network activity lies at the basis of many cognitive functions. Hence, we hypothesized that many ubiquitous structural and dynamical properties of local cortical networks result from associative learning. To test this hypothesis, we trained recurrent networks of excitatory and inhibitory neurons on memories composed of varying numbers of associations and compared the resulting network properties with those observed experimentally. We show that, when the network is robustly loaded with a near-maximum number of associations it can support, it develops properties that are consistent with the observed probabilities of excitatory and inhibitory connections, shapes of connection weight distributions, overexpression of specific 2- and 3-neuron motifs, distributions of connection numbers in clusters of 3-8 neurons, sustained, irregular, and asynchronous firing activity, and balance of excitation and inhibition. In addition, memories loaded into the network can be retrieved, even in the presence of noise that is comparable with the baseline variations in the postsynaptic potential. The confluence of these results suggests that many structural and dynamical properties of local cortical networks are simply a byproduct of associative learning. We predict that the overexpression of excitatory-excitatory bidirectional connections observed in many cortical systems must be accompanied by an underexpression of bidirectionally connected inhibitory-excitatory neuron pairs. SIGNIFICANCE STATEMENT: Many structural and dynamical properties of local cortical networks are ubiquitously present across areas and species. Because synaptic connectivity is shaped by experience, we wondered whether continual learning, rather than genetic control, is responsible for producing such features. To answer this question, we developed a biologically constrained recurrent network of excitatory and inhibitory neurons capable of learning predefined sequences of network states. Embedding such associative memories into the network revealed that, when individual neurons are robustly loaded with a near-maximum number of memories they can support, the network develops many properties that are consistent with experimental observations. Our findings suggest that basic structural and dynamical properties of local networks in the brain are simply a byproduct of learning and memory storage.
Collapse
|
45
|
Deger M, Seeholzer A, Gerstner W. Multicontact Co-operativity in Spike-Timing-Dependent Structural Plasticity Stabilizes Networks. Cereb Cortex 2019; 28:1396-1415. [PMID: 29300903 PMCID: PMC6041941 DOI: 10.1093/cercor/bhx339] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Accepted: 11/30/2017] [Indexed: 12/12/2022] Open
Abstract
Excitatory synaptic connections in the adult neocortex consist of multiple synaptic contacts, almost exclusively formed on dendritic spines. Changes of spine volume, a correlate of synaptic strength, can be tracked in vivo for weeks. Here, we present a combined model of structural and spike-timing–dependent plasticity that explains the multicontact configuration of synapses in adult neocortical networks under steady-state and lesion-induced conditions. Our plasticity rule with Hebbian and anti-Hebbian terms stabilizes both the postsynaptic firing rate and correlations between the pre- and postsynaptic activity at an active synaptic contact. Contacts appear spontaneously at a low rate and disappear if their strength approaches zero. Many presynaptic neurons compete to make strong synaptic connections onto a postsynaptic neuron, whereas the synaptic contacts of a given presynaptic neuron co-operate via postsynaptic firing. We find that co-operation of multiple synaptic contacts is crucial for stable, long-term synaptic memories. In simulations of a simplified network model of barrel cortex, our plasticity rule reproduces whisker-trimming–induced rewiring of thalamocortical and recurrent synaptic connectivity on realistic time scales.
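The co-operation mechanism lends itself to a compact toy simulation. The sketch below (a deliberately crude caricature under assumed dynamics and parameters, not the published plasticity rule) gives one presynaptic/postsynaptic pair several contacts that share the postsynaptic firing they evoke, prunes contacts whose weight decays toward zero, and lets new contacts appear at a low spontaneous rate:

    import numpy as np

    rng = np.random.default_rng(1)
    steps, eta, w_new = 5000, 0.01, 0.2
    prune_below, appear_rate, r_target = 0.02, 0.0005, 0.5
    contacts = [0.3, 0.25]                    # initial contact weights of one connection

    for t in range(steps):
        pre = float(rng.random() < 0.5)       # Bernoulli presynaptic spiking
        drive = pre * sum(contacts)           # every contact feeds the same postsynaptic cell
        post = float(rng.random() < min(1.0, drive))
        # Hebbian growth shared across contacts, plus a weight-dependent decay term
        contacts = [w + eta * pre * (post - r_target * w) for w in contacts]
        contacts = [w for w in contacts if w > prune_below]   # structural pruning
        if rng.random() < appear_rate:        # rare spontaneous contact formation
            contacts.append(w_new)

    print("surviving contacts:", len(contacts), " total weight: %.2f" % sum(contacts))

Because all contacts see the same postsynaptic activity, a strong multi-contact connection keeps each of its contacts above the pruning threshold, which is the co-operativity effect the abstract highlights.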
Collapse
Affiliation(s)
- Moritz Deger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
- Institute for Zoology, Faculty of Mathematics and Natural Sciences, University of Cologne, 50674 Cologne, Germany
| | - Alexander Seeholzer
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
| | - Wulfram Gerstner
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, 1015 Lausanne EPFL, Switzerland
| |
Collapse
|
46
|
Short term memory properties of sensory neural architectures. J Comput Neurosci 2019; 46:321-332. [PMID: 31104206 DOI: 10.1007/s10827-019-00720-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2019] [Revised: 05/09/2019] [Accepted: 05/12/2019] [Indexed: 10/26/2022]
Abstract
A functional role of the cerebral cortex is to form and hold representations of the sensory world for behavioral purposes. This is achieved by a sheet of neurons, organized in modules called cortical columns, that receives inputs in a peculiar manner: only a few neurons are driven by sensory inputs through thalamic projections, while the vast majority receive mainly cortical inputs. How should cortical modules be organized with respect to sensory inputs in order for the cortex to efficiently hold sensory representations in memory? To address this question we investigate the memory performance of trees of recurrent networks (TRNs), composed of recurrent networks modeling cortical columns, connected to each other through a tree-shaped feed-forward backbone of connections, with sensory stimuli injected at the root of the tree. On these sensory architectures two types of short-term memory (STM) mechanism can be implemented: STM via transient dynamics on the feed-forward tree, and STM via reverberating activity on the recurrent connectivity inside modules. We derive equations describing the dynamics of such networks, which allow us to thoroughly explore the space of possible architectures and quantify their memory performance. By varying the divergence ratio of the tree, we show that serial architectures, in which sensory inputs are successively processed in different modules, are better suited to implement STM via transient dynamics, while parallel architectures, in which sensory inputs are simultaneously processed by all modules, are better suited to implement STM via reverberating dynamics.
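A minimal numerical sketch of this comparison (an assumed tanh-unit toy with illustrative sizes and gains, not the paper's analytical model) builds a tree of recurrent modules with a chosen divergence ratio and tracks how long a pulse injected at the root survives:

    import numpy as np

    def memory_trace(divergence, n_modules=4, size=30, g_ff=1.0, g_rec=0.5, T=40, seed=2):
        # Tree of recurrent modules: module 0 is the root; module m > 0 receives
        # feed-forward input from its parent, module (m - 1) // divergence.
        rng = np.random.default_rng(seed)
        n = n_modules * size
        W = np.zeros((n, n))
        for m in range(n_modules):
            sl = slice(m * size, (m + 1) * size)
            W[sl, sl] = g_rec * rng.standard_normal((size, size)) / np.sqrt(size)
            if m > 0:
                p = (m - 1) // divergence
                W[sl, p * size:(p + 1) * size] = g_ff * rng.standard_normal((size, size)) / np.sqrt(size)
        x = np.zeros(n)
        x[:size] = rng.standard_normal(size)     # one pulse injected at the root module
        trace = []
        for _ in range(T):
            x = np.tanh(W @ x)
            trace.append(float(np.linalg.norm(x)))
        return trace

    chain = memory_trace(divergence=1)  # serial: modules form a chain (divergence ratio 1)
    star = memory_trace(divergence=3)   # parallel: the root feeds all other modules at once
    print("signal norm after 20 steps   chain: %.3f   star: %.3f" % (chain[19], star[19]))

Raising g_rec toward the edge of stability shifts the network from transient (feed-forward) storage toward reverberating storage inside modules, which is the axis the abstract explores.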
Collapse
|
47
|
Krauss P, Schuster M, Dietrich V, Schilling A, Schulze H, Metzner C. Weight statistics controls dynamics in recurrent neural networks. PLoS One 2019; 14:e0214541. [PMID: 30964879 PMCID: PMC6456246 DOI: 10.1371/journal.pone.0214541] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 03/14/2019] [Indexed: 11/19/2022] Open
Abstract
Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths wij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamics of recurrent networks of Boltzmann neurons. In particular, we study the impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with wij = wji). By computing a 'phase diagram' of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the 'edge of chaos' by ensuring a proper balance between excitatory and inhibitory neural connections.
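The construction is easy to reproduce in toy form. The sketch below (one assumed way to parameterize the three statistics, with deterministic threshold units standing in for the paper's stochastic Boltzmann neurons) draws a weight matrix with a given density, balance, and symmetry and reports the period of the attractor the dynamics settle into:

    import numpy as np

    def random_weights(n, density, balance, symmetry, rng):
        # balance in [-1, 1]: excess fraction of excitatory over inhibitory weights
        mask = rng.random((n, n)) < density
        signs = np.where(rng.random((n, n)) < (1 + balance) / 2, 1.0, -1.0)
        W = mask * signs * rng.random((n, n))
        # enforce w_ji = w_ij on a fraction `symmetry` of index pairs
        i, j = np.triu_indices(n, k=1)
        sym = rng.random(i.size) < symmetry
        W[j[sym], i[sym]] = W[i[sym], j[sym]]
        np.fill_diagonal(W, 0.0)
        return W

    def attractor_period(W, T=500, seed=3):
        rng = np.random.default_rng(seed)
        x = (rng.random(W.shape[0]) < 0.5).astype(float)
        seen = {}
        for t in range(T):
            key = x.tobytes()
            if key in seen:
                return t - seen[key]       # 1 = fixed point, 2+ = oscillation
            seen[key] = t
            x = (W @ x > 0).astype(float)  # deterministic threshold update
        return None                        # no repeat within T: irregular dynamics

    rng = np.random.default_rng(0)
    for b in (-0.6, 0.0, 0.6):
        W = random_weights(60, density=0.3, balance=b, symmetry=0.5, rng=rng)
        print("balance %+.1f -> attractor period" % b, attractor_period(W))

Sweeping balance from negative to positive in this toy traces out the same qualitative axis as the paper's phase diagram, from oscillatory through irregular to fixed-point behavior.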
Collapse
Affiliation(s)
- Patrick Krauss
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Marc Schuster
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Verena Dietrich
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Achim Schilling
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Holger Schulze
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Claus Metzner
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Biophysics Group, Department of Physics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| |
Collapse
|
48
|
Duarte R, Morrison A. Leveraging heterogeneity for neural computation with fading memory in layer 2/3 cortical microcircuits. PLoS Comput Biol 2019; 15:e1006781. [PMID: 31022182 PMCID: PMC6504118 DOI: 10.1371/journal.pcbi.1006781] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Revised: 05/07/2019] [Accepted: 01/09/2019] [Indexed: 11/24/2022] Open
Abstract
Complexity and heterogeneity are intrinsic to neurobiological systems; they are manifest in every process, at every scale, and are inextricably linked to the systems' emergent collective behaviours and function. However, the majority of studies addressing the dynamics and computational properties of biologically inspired cortical microcircuits tend to assume (often for the sake of analytical tractability) a great degree of homogeneity in both neuronal and synaptic/connectivity parameters. While simplification and reductionism are necessary to understand the brain's functional principles, models that disregard the many heterogeneities of cortical composition, which may be at the core of its computational proficiency, will inevitably fail to account for important phenomena, limiting their scope and generalizability. We address these issues by studying the individual and composite functional roles of heterogeneity in neuronal, synaptic and structural properties in a biophysically plausible layer 2/3 microcircuit model, built and constrained by multiple sources of empirical data. This approach was made possible by the emergence of large-scale, well-curated databases, as well as by the substantial improvements in experimental methodologies achieved over the last few years. Our results show that variability in single-neuron parameters is the dominant source of functional specialization, leading to highly proficient microcircuits with much higher computational power than their homogeneous counterparts. We further show that fully heterogeneous circuits, which are closest to the biophysical reality, owe their response properties to the differential contribution of different sources of heterogeneity.
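A cheap way to probe the flavor of this claim (an echo-state caricature under assumed parameters, far simpler than the paper's biophysical microcircuit) is to compare the fading-memory capacity of a reservoir with homogeneous leak time constants against one with heterogeneous ones:

    import numpy as np

    def memory_capacity(taus, n=200, T=3000, max_lag=30, seed=4):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((n, n)) / np.sqrt(n) * 0.9   # spectral radius near 0.9
        w_in = rng.standard_normal(n)
        u = rng.uniform(-1, 1, T)
        x, X = np.zeros(n), np.zeros((T, n))
        leak = 1.0 / taus                                    # per-neuron leak rate
        for t in range(T):
            x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * u[t])
            X[t] = x
        cap = 0.0
        for lag in range(1, max_lag + 1):                    # linear readout per delay
            y, Z = u[:-lag], X[lag:]
            w = np.linalg.lstsq(Z, y, rcond=None)[0]
            cap += np.corrcoef(Z @ w, y)[0, 1] ** 2          # squared correlation per lag
        return cap

    homog = memory_capacity(np.full(200, 5.0))
    heter = memory_capacity(np.random.default_rng(4).uniform(1.0, 9.0, 200))
    print("memory capacity   homogeneous: %.2f   heterogeneous: %.2f" % (homog, heter))

Whether the heterogeneous reservoir wins depends on the chosen range of time constants; the sketch only makes the homogeneous-versus-heterogeneous comparison concrete, it does not reproduce the paper's result.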
Collapse
Affiliation(s)
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Faculty of Biology, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
| | - Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (JBI-1 / INM-10), Jülich Research Centre, Jülich, Germany
- Bernstein Center Freiburg, Albert-Ludwig University of Freiburg, Freiburg im Breisgau, Germany
- Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr-University Bochum, Bochum, Germany
| |
Collapse
|
49
|
Henderson NT, Le Marchand SJ, Hruska M, Hippenmeyer S, Luo L, Dalva MB. Ephrin-B3 controls excitatory synapse density through cell-cell competition for EphBs. eLife 2019; 8:e41563. [PMID: 30789343 PMCID: PMC6384025 DOI: 10.7554/elife.41563] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Accepted: 01/31/2019] [Indexed: 11/13/2022] Open
Abstract
Cortical networks are characterized by sparse connectivity, with synapses found at only a subset of axo-dendritic contacts. Yet within these networks, neurons can exhibit high connection probabilities, suggesting that cell-intrinsic factors, not proximity, determine connectivity. Here, we identify ephrin-B3 (eB3) as a factor that determines synapse density by mediating a cell-cell competition that requires ephrin-B-EphB signaling. In a microisland culture system designed to isolate cell-cell competition, we find that eB3 determines winning and losing neurons in a contest for synapses. In a Mosaic Analysis with Double Markers (MADM) genetic mouse model in vivo, the relative levels of eB3 control spine density in layer 5 and 6 neurons. MADM cortical neurons in vitro reveal that eB3 controls synapse density independently of action-potential-driven activity. Our findings illustrate a new class of competitive mechanism, mediated by trans-synaptic organizing proteins, that controls the number of synapses neurons receive relative to neighboring neurons.
Collapse
Affiliation(s)
- Nathan T Henderson
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States
- Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
| | | | - Martin Hruska
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States
| | - Simon Hippenmeyer
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Liqun Luo
- Department of Biology, Howard Hughes Medical Institute, Stanford University, Stanford, United States
| | - Matthew B Dalva
- Department of Neuroscience, The Vickie and Jack Farber Institute for Neuroscience, Thomas Jefferson University, Philadelphia, United States
| |
Collapse
|
50
|
Function of local circuits in the hippocampal dentate gyrus-CA3 system. Neurosci Res 2018; 140:43-52. [PMID: 30408501 DOI: 10.1016/j.neures.2018.11.003] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2018] [Revised: 09/27/2018] [Accepted: 10/15/2018] [Indexed: 11/20/2022]
Abstract
Anatomical observations, theoretical work and lesioning experiments have supported the idea that the CA3 region of the hippocampus is important for the encoding, storage and retrieval of memory, while the dentate gyrus (DG) is important for pattern separation of the incoming inputs from the entorhinal cortex. Study of the DG's presumed function in pattern separation has been hampered by the lack of reliable methods to identify its different excitatory cell types. Recent studies, using more reliable methods, have identified distinct cell types in the DG of awake, behaving animals. These studies have revealed each cell type's spatial representation as well as its involvement in pattern separation. Moreover, chronic electrophysiological recordings from sleeping and waking animals have provided further insight into how the DG-CA3 system operates during memory encoding and retrieval. This article reviews the local circuit architecture and physiological properties of the DG-CA3 system and discusses how its local circuits may function, incorporating recent physiological findings.
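Pattern separation itself is easy to illustrate as a toy computation. The sketch below (a textbook sparse-expansion abstraction with winner-take-all thresholding, offered only as an illustration with assumed sizes, not as a model from this review) shows how two overlapping entorhinal-like input patterns become markedly less correlated after a DG-like stage:

    import numpy as np

    rng = np.random.default_rng(5)
    n_ec, n_dg, k_active = 100, 1000, 30     # DG expands EC ~10x and fires sparsely
    W = (rng.random((n_dg, n_ec)) < 0.1) * rng.random((n_dg, n_ec))

    def dg_code(x):
        h = W @ x
        out = np.zeros(n_dg)
        out[np.argsort(h)[-k_active:]] = 1.0  # only the top-k granule cells fire
        return out

    a = (rng.random(n_ec) < 0.3).astype(float)
    b = a.copy()
    flip = rng.choice(n_ec, 15, replace=False)  # a partially overlapping variant of a
    b[flip] = 1 - b[flip]

    corr = lambda u, v: np.corrcoef(u, v)[0, 1]
    print("input overlap: %.2f   DG overlap: %.2f" % (corr(a, b), corr(dg_code(a), dg_code(b))))

The expansion plus sparsification pushes similar inputs onto nearly disjoint granule-cell codes, the operation the DG is thought to supply upstream of CA3 storage.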
Collapse
|