1
Zajzon B, Dahmen D, Morrison A, Duarte R. Signal denoising through topographic modularity of neural circuits. eLife 2023; 12:e77009. PMID: 36700545; PMCID: PMC9981157; DOI: 10.7554/eLife.77009.
Abstract
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
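To make the denoising mechanism concrete, here is a toy rate model of my own construction, not the paper's spiking network: a chain of `depth` modules, each with `C` stimulus channels. The parameter `m` is the topographic sharpness, i.e. the fraction of feedforward weight a channel sends to its matching target channel, the remainder being spread uniformly. Each module adds fresh noise and applies unspecific subtractive inhibition with rectification, so the effect of `m` on the signal-to-noise ratio compounds with depth.

```python
# Toy sketch (assumed dynamics and parameters, not the published model):
# sharper topographic projections preserve a stimulated channel against
# noise injected at every stage of a modular feedforward chain.
import numpy as np

C, depth, trials = 10, 6, 200
rng = np.random.default_rng(1)

def snr_at_depth(m, g_inh=1.0, sigma=0.3):
    # fraction m targets the matching channel, the rest is diffuse
    W = m * np.eye(C) + (1 - m) / C * np.ones((C, C))
    out = np.zeros((trials, C))
    for t in range(trials):
        r = np.ones(C)
        r[0] += 1.0                                  # stimulus on channel 0
        for _ in range(depth):
            drive = W @ r + rng.normal(0.0, sigma, C)     # fresh noise per module
            r = np.maximum(drive - g_inh * drive.mean() + 1.0, 0.0)  # inhibition + relu
        out[t] = r
    signal = out[:, 0].mean() - out[:, 1:].mean()
    return signal / out[:, 1:].std()

for m in (0.2, 0.5, 0.8, 0.95):
    print(f"sharpness m={m:.2f}: SNR at last module ≈ {snr_at_depth(m):.2f}")
```

With diffuse projections (small `m`) the stimulus contrast shrinks geometrically per module and drowns in the per-stage noise; with sharp projections it survives to the last module, echoing the role of modularity as a control parameter.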
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
2
Tiberi L, Stapmanns J, Kühn T, Luu T, Dahmen D, Helias M. Gell-Mann-Low criticality in neural networks. Phys Rev Lett 2022; 128:168301. PMID: 35522522; DOI: 10.1103/PhysRevLett.128.168301.
Abstract
Criticality is deeply related to optimal computational capacity. The lack of a renormalized theory of critical brain dynamics, however, so far limits insights into this form of biological information processing to mean-field results. These methods neglect a key feature of critical systems: the interaction between degrees of freedom across all length scales, required for complex nonlinear computation. We present a renormalized theory of a prototypical neural field theory, the stochastic Wilson-Cowan equation. We compute the flow of couplings, which parametrize interactions on increasing length scales. Despite similarities with the Kardar-Parisi-Zhang model, the theory is of a Gell-Mann-Low type, the archetypal form of a renormalizable quantum field theory. Here, nonlinear couplings vanish, flowing towards the Gaussian fixed point, but logarithmically slowly, thus remaining effective on most scales. We show this critical structure of interactions to implement a desirable trade-off between linearity, optimal for information storage, and nonlinearity, required for computation.
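The "logarithmically slow" flow has a simple closed form worth making explicit. Below is a schematic numeric illustration with an assumed one-loop coefficient `b` and bare coupling `u0`, my own placeholder values rather than the paper's computed coefficients: a marginally irrelevant coupling obeying du/d(ln L) = -b u² decays only logarithmically with length scale L, so it remains effective over many decades even though the Gaussian fixed point u = 0 is ultimately reached.

```python
# Gell-Mann-Low-type flow, schematic one-loop form (assumed coefficients):
# du/d(ln L) = -b * u**2  has the exact solution  u(L) = u0 / (1 + b*u0*ln L),
# vanishing logarithmically slowly toward the Gaussian fixed point.
import numpy as np

b, u0 = 1.0, 0.5                     # hypothetical coefficient and bare coupling
log_L = np.linspace(0.0, 20.0, 6)    # ln of length scale, spanning many decades
u = u0 / (1.0 + b * u0 * log_L)      # exact solution of the flow equation
for l, ui in zip(log_L, u):
    print(f"ln L = {l:4.1f}  ->  u = {ui:.4f}")
```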
Affiliation(s)
- Lorenzo Tiberi
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Center for Advanced Simulation and Analytics, Forschungszentrum Jülich, 52425 Jülich, Germany
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Tobias Kühn
- Laboratoire de Physique de l'Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France
- Thomas Luu
- Center for Advanced Simulation and Analytics, Forschungszentrum Jülich, 52425 Jülich, Germany
- Institut für Kernphysik (IKP-3), Institute for Advanced Simulation (IAS-4) and Jülich Center for Hadron Physics, Jülich Research Centre, 52425 Jülich, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425 Jülich, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52425 Jülich, Germany
- Institute for Theoretical Solid State Physics, RWTH Aachen University, 52074 Aachen, Germany
- Center for Advanced Simulation and Analytics, Forschungszentrum Jülich, 52425 Jülich, Germany
3
Baggio G, Rutten V, Hennequin G, Zampieri S. Efficient communication over complex dynamical networks: the role of matrix non-normality. Sci Adv 2020; 6:eaba2282. PMID: 32518824; PMCID: PMC7253166; DOI: 10.1126/sciadv.aba2282.
Abstract
In both natural and engineered systems, communication often occurs dynamically over networks ranging from highly structured grids to largely disordered graphs. To use, or comprehend the use of, networks as efficient communication media requires understanding of how they propagate and transform information in the face of noise. Here, we develop a framework that enables us to examine how network structure, noise, and interference between consecutive packets jointly determine transmission performance in complex networks governed by linear dynamics. Mathematically normal networks, which can be decomposed into separate low-dimensional information channels, suffer greatly from readout noise. Most details of their wiring have no impact on transmission quality. Non-normal networks, however, can largely cancel the effect of noise by transiently amplifying select input dimensions while ignoring others, resulting in higher net information throughput. Our theory could inform the design of new communication networks, as well as the optimal use of existing ones.
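The normal/non-normal contrast is easy to demonstrate in two dimensions. The toy example below is my own, not the paper's general framework: two stable matrices with identical eigenvalues, one symmetric (normal) and one with a strong feedforward link (non-normal). An input pulse into the "sender" node decays monotonically in the normal system but is transiently amplified in the non-normal one, which is what can lift a signal above a fixed readout noise floor.

```python
# Minimal sketch: same spectrum, different transient behavior.
import numpy as np
from scipy.linalg import expm

A_normal = np.array([[-1.0, 0.0], [0.0, -2.0]])      # symmetric: normal
A_nonnormal = np.array([[-1.0, 8.0], [0.0, -2.0]])   # same eigenvalues, feedforward link

def peak_response(A, x0):
    ts = np.linspace(0.0, 5.0, 500)
    return max(np.linalg.norm(expm(A * t) @ x0) for t in ts)

x0 = np.array([0.0, 1.0])  # unit pulse into the sender node
print("normal     peak |x(t)| ≈", round(peak_response(A_normal, x0), 2))     # ~1.0, monotone decay
print("non-normal peak |x(t)| ≈", round(peak_response(A_nonnormal, x0), 2))  # >2: transient amplification
```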
Affiliation(s)
- Giacomo Baggio
- Department of Information Engineering, University of Padova, via Gradenigo 6/B, I-35131 Padova, Italy
- Virginia Rutten
- Gatsby Computational Neuroscience Unit, University College London, London W1T 4JG, UK
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Guillaume Hennequin
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Sandro Zampieri
- Department of Information Engineering, University of Padova, via Gradenigo 6/B, I-35131 Padova, Italy
4
Yang S, Chung J, Jin SH, Bao S, Yang S. A circuit mechanism of time-to-space conversion for perception. Hear Res 2018; 366:32-37. PMID: 29804722; DOI: 10.1016/j.heares.2018.05.008.
Abstract
Sensory information in a temporal sequence is processed as a collective unit by the nervous system. The cellular mechanisms underlying how sequential inputs are incorporated into the brain have emerged as an important subject in neuroscience. Here, we hypothesize that information-bearing (IB) signals can be entrained and amplified by a clock signal, allowing them to propagate efficiently along a feedforward circuit. IB signals can remain latent on individual dendrites of the receiving neurons until they are read out by an oscillatory clock signal. In this way, IB signals are passed on to the next neurons along a linear chain. This hypothesis identifies a cellular process of time-to-space and sound-to-map conversion in primary auditory cortex, providing insight into a mechanistic principle underlying the representation and memory of temporal sequences of information.
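A toy sketch of the gating hypothesis, with assumed dynamics and time constants of my own choosing rather than the paper's biophysics: IB inputs arriving at different times are held as slowly decaying dendritic depolarization, and a shared oscillatory clock reads each trace out near its next peak, so successive temporal slots map onto the firing of distinct neurons, i.e. onto spatial positions.

```python
# Hypothetical gating of latent dendritic traces by an oscillatory clock.
import numpy as np

dt, T, tau = 1.0, 200.0, 40.0             # ms; tau = dendritic decay constant (assumed)
t = np.arange(0.0, T, dt)
clock = np.sin(2 * np.pi * t / 50.0)      # ~20 Hz clock signal
arrivals = {0: 10.0, 1: 60.0, 2: 110.0}   # neuron index -> IB input arrival time (ms)

for neuron, t_in in arrivals.items():
    dendrite = np.where(t >= t_in, np.exp(-(t - t_in) / tau), 0.0)  # latent trace
    drive = dendrite * np.clip(clock, 0.0, None)                    # readout at clock peaks
    readout_time = t[np.argmax(drive)]
    print(f"neuron {neuron}: input at {t_in:5.1f} ms -> read out at {readout_time:5.1f} ms")
```

Each input is read out at the first clock peak after its arrival, one neuron per cycle, which is the time-to-space conversion in miniature.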
Affiliation(s)
- Sunggu Yang
- Department of Nano-bioengineering, Incheon National University, Incheon 22012, South Korea
- Jaeyong Chung
- Department of Electronics Engineering, Incheon National University, Incheon 22012, South Korea
- Sung Hun Jin
- Department of Electronics Engineering, Incheon National University, Incheon 22012, South Korea
- Shaowen Bao
- Department of Physiology, University of Arizona, Tucson, AZ 85724, USA
- Sungchil Yang
- Department of Biomedical Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
5
Marzen S. Difference between memory and prediction in linear recurrent networks. Phys Rev E 2017; 96:032308. PMID: 29346995; DOI: 10.1103/PhysRevE.96.032308.
Abstract
Recurrent networks are often trained to memorize their input better, in the hope that such training will increase the network's ability to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node networks optimized for prediction are nearly at upper bounds on predictive capacity given by Wiener filters and are roughly equivalent in performance to randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity and that optimizing recurrent weights can decrease reservoir size by half an order of magnitude.
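A small numeric check in the spirit of the result, using a setup of my own choosing: a one-node linear reservoir x[t] = a·x[t-1] + u[t-1] driven by an AR(1) input, with an optimal least-squares scalar readout. Scanning the feedback weight `a` shows that memory (reconstructing u[t-3]) and prediction (estimating u[t+3]) are distinct objectives; in this grid, memory is best at an intermediate `a` while prediction is best with no feedback at all.

```python
# Memory vs. prediction for a one-node linear recurrent network (toy setup).
import numpy as np

rng = np.random.default_rng(2)
T, r = 100_000, 0.9
u = np.zeros(T)
for t in range(1, T):                        # AR(1) input with correlation r
    u[t] = r * u[t - 1] + rng.normal()

def readout_r2(a, shift):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + u[t - 1]       # one-node linear reservoir
    target = np.roll(u, -shift)              # shift<0: past input, shift>0: future input
    sl = slice(10, -10)                      # trim roll wraparound
    xs, ys = x[sl], target[sl]
    w = xs @ ys / (xs @ xs)                  # optimal scalar readout
    return 1.0 - ((ys - w * xs).var() / ys.var())

for a in (0.0, 0.5, 0.9):
    print(f"a={a}: memory R^2 for u[t-3] = {readout_r2(a, -3):.3f}, "
          f"prediction R^2 for u[t+3] = {readout_r2(a, 3):.3f}")
```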
Affiliation(s)
- Sarah Marzen
- Department of Physics, Physics of Living Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
6
Inubushi M, Yoshimura K. Reservoir computing beyond memory-nonlinearity trade-off. Sci Rep 2017; 7:10199. PMID: 28860513; PMCID: PMC5579006; DOI: 10.1038/s41598-017-10257-6.
Abstract
Reservoir computing is a brain-inspired machine learning framework that employs a signal-driven dynamical system, in particular harnessing common-signal-induced synchronization, a widely observed nonlinear phenomenon. A basic understanding of the working principles of reservoir computing can be expected to shed light on how information is stored and processed in nonlinear dynamical systems, potentially leading to progress in a broad range of nonlinear sciences. As a first step toward this goal, from the viewpoint of nonlinear physics and information theory, we study the memory-nonlinearity trade-off uncovered by Dambre et al. (2012). Focusing on a variational equation, we clarify the dynamical mechanism behind the trade-off, which illustrates why nonlinear dynamics generally degrades memory stored in a dynamical system. Moreover, based on the trade-off, we propose a mixture reservoir endowed with both linear and nonlinear dynamics and show that it improves the performance of information processing. Interestingly, for some tasks, significant improvements are observed when a few linear dynamical elements are added to the nonlinear dynamical system. Using the echo state network model, the effect of the mixture reservoir is numerically verified for a simple function approximation task and for more complex tasks.
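A compact echo state network sketch of the mixture-reservoir idea, a minimal implementation of my own rather than the authors' code: a fraction of reservoir units is kept linear while the rest are tanh, and a ridge readout is trained on a task that needs both memory and nonlinearity, here y[t] = u[t-5]². Scanning the linear fraction lets one compare mixtures against the fully nonlinear and fully linear extremes; the fully linear reservoir necessarily fails, since no linear readout of a linear state can represent a squared input.

```python
# Mixture reservoir sketch: linear units for memory, tanh units for nonlinearity.
import numpy as np

rng = np.random.default_rng(3)
N, T, wash = 100, 5000, 200
u = rng.uniform(-1, 1, T)
y = np.roll(u, 5) ** 2                            # target: squared 5-step-old input

W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # spectral radius 0.9 (echo state)
w_in = rng.normal(0, 0.5, N)

def nmse(frac_linear):
    lin = np.arange(N) < int(frac_linear * N)     # which units stay linear
    x, X = np.zeros(N), np.zeros((T, N))
    for t in range(T):
        pre = W @ x + w_in * u[t]
        x = np.where(lin, pre, np.tanh(pre))      # mixed linear/tanh update
        X[t] = x
    A = np.hstack([X[wash:], np.ones((T - wash, 1))])   # states + bias column
    b = y[wash:]
    w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N + 1), A.T @ b)  # ridge readout
    return np.mean((A @ w_out - b) ** 2) / b.var()

for f in (0.0, 0.2, 0.5, 1.0):
    print(f"linear fraction {f:.1f}: NMSE = {nmse(f):.3f}")
```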
Affiliation(s)
- Masanobu Inubushi
- NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi-shi, Kanagawa 243-0198, Japan
- Kazuyuki Yoshimura
- Department of Information and Electronics, Graduate School of Engineering, Tottori University, 4-101 Koyama-Minami, Tottori 680-8552, Japan
7
Barak O, Tsodyks M. Working models of working memory. Curr Opin Neurobiol 2014; 25:20-24. PMID: 24709596; DOI: 10.1016/j.conb.2013.10.008.
Affiliation(s)
- Omri Barak
- Faculty of Medicine, Technion - Israel Institute of Technology, 1 Efron St., Haifa 31096, Israel
- Misha Tsodyks
- Department of Neurobiology, Weizmann Institute of Science, Herzl St., Rehovot 76100, Israel
8
Cayco-Gajic NA, Shea-Brown E. Neutral stability, rate propagation, and critical branching in feedforward networks. Neural Comput 2013; 25:1768-1806. PMID: 23607560; DOI: 10.1162/neco_a_00461.
Abstract
Recent experimental and computational evidence suggests that several dynamical properties may characterize the operating point of functioning neural networks: critical branching, neutral stability, and production of a wide range of firing patterns. We seek the simplest setting in which these properties emerge, clarifying their origin and relationship in random, feedforward networks of McCulloch-Pitts neurons. Two key parameters are the thresholds at which neurons fire spikes and the overall level of feedforward connectivity. When neurons have low thresholds, we show that there is always a connectivity for which the properties in question all occur, that is, these networks preserve overall firing rates from layer to layer and produce broad distributions of activity in each layer. This fails to occur, however, when neurons have high thresholds. A key tool in explaining this difference is the eigenstructure of the resulting mean-field Markov chain, as this reveals which activity modes will be preserved from layer to layer. We extend our analysis from purely excitatory networks to more complex models that include inhibition and local noise, and find that both of these features extend the parameter ranges over which networks produce the properties of interest.
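The mean-field picture is compact enough to reproduce in a few lines. The sketch below is my own reduction with hypothetical parameter values: in a random feedforward network of McCulloch-Pitts units where each neuron receives K inputs and fires when at least theta of them are active, the expected active fraction evolves layer to layer as p' = P[Binomial(K, p) ≥ theta]. Iterating this map shows rate propagation saturating, persisting, or dying out depending on the threshold; neutral rate propagation corresponds to the map having slope 1 at its fixed point, the critical-branching condition.

```python
# Mean-field layer-to-layer map for a feedforward McCulloch-Pitts network.
from scipy.stats import binom

def layer_map(p, K, theta):
    # probability that at least theta of K inputs are active
    return binom.sf(theta - 1, K, p)

K = 10
for theta in (1, 2, 4):
    p, traj = 0.1, [0.1]
    for _ in range(8):                   # propagate across 8 layers
        p = layer_map(p, K, theta)
        traj.append(p)
    print(f"theta={theta}:", " -> ".join(f"{q:.3f}" for q in traj))
```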
Affiliation(s)
- N Alex Cayco-Gajic
- Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA
9
Schultze-Kraft M, Diesmann M, Grün S, Helias M. Noise suppression and surplus synchrony by coincidence detection. PLoS Comput Biol 2013; 9:e1002904. PMID: 23592953; PMCID: PMC3617020; DOI: 10.1371/journal.pcbi.1002904.
Abstract
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic, due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow spike synchrony to be modeled, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: in the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises: as the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks.

Whether spike timing conveys information in cortical networks or whether the firing rate alone is sufficient is a matter of controversial debate, touching the fundamental question of how the brain processes, stores, and conveys information. If the firing rate alone is the decisive signal used in the brain, correlations between action potentials are just an epiphenomenon of cortical connectivity, where pairs of neurons share a considerable fraction of common afferents. Due to membrane leakage, small synaptic amplitudes, and the non-linear threshold, nerve cells exhibit lossy transmission of correlation originating from shared synaptic inputs. However, the membrane potential of cortical neurons often displays non-Gaussian fluctuations, caused by synchronized synaptic inputs. Moreover, synchronously active neurons have been found to reflect behavior in primates. In this work we therefore contrast the transmission of correlation due to shared afferents and due to synchronously arriving synaptic impulses for leaky neuron models. We not only find that neurons are highly sensitive to synchronous afferents, but that they can suppress noise on signals transmitted by synchrony, a computational advantage over rate signals.
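A back-of-the-envelope simulation in the spirit of the study, with simplified dynamics and parameters of my own choosing rather than the paper's: two leaky integrate-and-fire neurons share a fraction c of their excitatory Poisson input, and the spike-count correlation of their outputs is compared with the input correlation c. In this shared-afferent setting the output correlation typically falls below c, illustrating the lossy transmission regime described above.

```python
# Correlation transmission by a pair of LIF neurons with shared Poisson input.
import numpy as np

rng = np.random.default_rng(4)
dt, T = 1e-4, 100.0                    # time step and duration (s)
tau_m, v_th = 0.02, 1.0                # membrane time constant (s), threshold
rate_in, w = 8000.0, 0.006             # summed input rate (Hz), PSP amplitude (assumed)

def output_correlation(c):
    n = int(T / dt)
    shared = rng.poisson(c * rate_in * dt, n)           # spikes seen by both neurons
    private = rng.poisson((1 - c) * rate_in * dt, (2, n))
    v = np.zeros(2)
    spikes = np.zeros((2, n))
    for i in range(n):
        v += -v * dt / tau_m + w * (shared[i] + private[:, i])
        fired = v >= v_th
        v[fired] = 0.0                                   # reset after spike
        spikes[:, i] = fired
    win = int(0.05 / dt)                                 # 50 ms counting windows
    m = (n // win) * win
    counts = spikes[:, :m].reshape(2, -1, win).sum(axis=2)
    return np.corrcoef(counts[0], counts[1])[0, 1]

for c in (0.1, 0.3, 0.5):
    print(f"shared-input fraction c={c}: output count correlation ≈ "
          f"{output_correlation(c):.2f}")
```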