1
Gozel O, Doiron B. Between-area communication through the lens of within-area neuronal dynamics. Sci Adv 2024; 10:eadl6120. [PMID: 39413191] [PMCID: PMC11482330] [DOI: 10.1126/sciadv.adl6120]
Abstract
A core problem in systems and circuits neuroscience is deciphering the origin of shared dynamics in neuronal activity: Do they emerge through local network interactions, or are they inherited from external sources? We explore this question with large-scale networks of spatially ordered spiking neuron models in which a downstream receiver network receives input from an upstream sender network. We show that linear measures of the communication between the sender and receiver networks can discriminate between emergent and inherited population dynamics. A match in the dimensionality of the sender and receiver population activities promotes faithful communication. In contrast, a nonlinear mapping from sender to receiver activity, for example, through downstream emergent population-wide fluctuations, can impair linear communication. Our work exposes the benefits and limitations of linear measures when analyzing between-area communication in circuits with rich population-wide neuronal dynamics.
Affiliation(s)
- Olivia Gozel
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
- Brent Doiron
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL 60637, USA
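The contrast drawn in this abstract between linear and nonlinear sender-to-receiver mappings can be illustrated with a toy simulation (a minimal sketch under assumed Gaussian activity, not the paper's spiking model; all names and parameters are illustrative): a linear readout recovers a linearly inherited receiver signal almost perfectly, but explains much less variance when the inter-area mapping is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d = 2000, 5                            # time points, latent dimensionality
sender = rng.standard_normal((T, d))      # "sender" population activity
W = rng.standard_normal((d, d))           # inter-area mapping

receiver_lin = sender @ W                      # inherited via a linear mapping
receiver_nonlin = np.tanh(3.0 * sender @ W)    # inherited via a nonlinear mapping

def linear_r2(X, Y):
    """Fraction of variance of Y explained by the best linear map from X."""
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ B
    return 1.0 - resid.var() / Y.var()

r2_lin = linear_r2(sender, receiver_lin)
r2_nonlin = linear_r2(sender, receiver_nonlin)
print(f"linear mapping R^2 = {r2_lin:.3f}, nonlinear mapping R^2 = {r2_nonlin:.3f}")
```

The saturating nonlinearity leaves a substantial fraction of receiver variance invisible to any linear communication measure, which is the failure mode the abstract describes.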
2
Zheng T, Sugino M, Jimbo Y, Ermentrout GB, Kotani K. Analyzing top-down visual attention in the context of gamma oscillations: a layer-dependent network-of-networks approach. Front Comput Neurosci 2024; 18:1439632. [PMID: 39376575] [PMCID: PMC11456483] [DOI: 10.3389/fncom.2024.1439632]
Abstract
Top-down visual attention is a fundamental cognitive process that allows individuals to selectively attend to salient visual stimuli in the environment. Recent empirical findings have revealed that gamma oscillations participate in the modulation of visual attention. However, computational studies face challenges when analyzing the attentional process in the context of gamma oscillations due to their unstable nature and the complexity induced by the layered structure of the visual cortex. In this study, we propose a layer-dependent network-of-networks approach to analyze attention in the presence of gamma oscillations. The model is validated by reproducing empirical findings on orientation preference and the enhancement of neuronal responses by top-down attention. We perform a parameter-plane analysis to classify neuronal responses into several patterns and find that the responses to sensory and attention signals are modulated by the heterogeneity of the neuronal population. Furthermore, we reveal a counter-intuitive scenario in which the excitatory populations in layer 2/3 and layer 5 exhibit opposite responses to the attentional input. By modifying the original model, we confirmed that layer 6 plays an indispensable role in such cases. Our findings uncover layer-dependent dynamics in the cortical processing of visual attention and open up new possibilities for further research on layer-dependent properties of the cerebral cortex.
Affiliation(s)
- Tianyi Zheng
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Masato Sugino
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- Yasuhiko Jimbo
- Department of Precision Engineering, The University of Tokyo, Tokyo, Japan
- G. Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, United States
- Kiyoshi Kotani
- Department of Human and Engineered Environmental Studies, The University of Tokyo, Chiba, Japan
3
Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. [PMID: 38553340] [DOI: 10.1016/j.tics.2024.03.003]
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
4
Podlaski WF, Machens CK. Approximating Nonlinear Functions With Latent Boundaries in Low-Rank Excitatory-Inhibitory Spiking Networks. Neural Comput 2024; 36:803-857. [PMID: 38658028] [DOI: 10.1162/neco_a_01658]
Abstract
Deep feedforward and recurrent neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
Affiliation(s)
- William F Podlaski
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Christian K Machens
- Champalimaud Neuroscience Programme, Champalimaud Foundation, 1400-038 Lisbon, Portugal
5
Jarne C, Laje R. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci 2023; 51:407-431. [PMID: 37561278] [DOI: 10.1007/s10827-023-00857-9]
Abstract
Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and that performance degrades gracefully as network size is decreased, interval duration is increased, or connectivity damage is induced. For the tasks considered, we explored how robust the trained networks are to changes in task parameterization. In the process, we developed a framework that can be used to parameterize other tasks of interest in computational neuroscience. Our results help quantify different aspects of these models, which are normally used as black boxes but need to be understood in order to model the biological responses of cortical areas.
Affiliation(s)
- Cecilia Jarne
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina
- CONICET, Buenos Aires, Argentina
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Rodrigo Laje
- Universidad Nacional de Quilmes, Departamento de Ciencia y Tecnología, Bernal, Buenos Aires, Argentina
- CONICET, Buenos Aires, Argentina
6
Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. [PMID: 37794121] [DOI: 10.1038/s41583-023-00740-7]
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
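The core idea of dynamical system reconstruction in this Perspective, fitting a surrogate model directly to measured trajectories, can be sketched in its simplest linear form (an illustrative stand-in for the RNN-based methods the review covers, not any specific method from it): recover the dynamics matrix of x_{t+1} = A x_t + noise by one-step least squares, then check that the fitted system inherits the ground-truth eigenvalue spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear dynamical system: one decaying oscillation + two decaying modes.
d, T = 4, 5000
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
A_true = np.zeros((d, d))
A_true[:2, :2] = 0.95 * rot            # slowly decaying oscillatory pair
A_true[2:, 2:] = np.diag([0.8, 0.5])   # two purely decaying modes

# Simulate a noisy trajectory from the system.
x = np.zeros((T, d))
for t in range(T - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(d)

# One-step least squares: solve x_{t+1} ≈ A x_t for A.
B, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_fit = B.T

# The reconstructed system should reproduce the eigenvalue moduli (timescales).
err = np.abs(np.sort(np.abs(np.linalg.eigvals(A_fit))) -
             np.sort(np.abs(np.linalg.eigvals(A_true)))).max()
print(f"max eigenvalue-modulus error: {err:.4f}")
```

Real neural data require the nonlinear RNN surrogates discussed in the article, but the validation logic, comparing invariants of the fitted and true systems, is the same.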
7
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. [PMID: 37549194] [PMCID: PMC10461857] [DOI: 10.1371/journal.pcbi.1011315]
Abstract
Recurrent network models are instrumental in investigating how behaviorally-relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models however lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
8
Wu T, Cai Y, Zhang R, Wang Z, Tao L, Xiao ZC. Multi-band oscillations emerge from a simple spiking network. Chaos 2023; 33:043121. [PMID: 37097932] [DOI: 10.1063/5.0106884]
Abstract
In the brain, coherent neuronal activities often appear simultaneously in multiple frequency bands, e.g., as combinations of alpha (8-12 Hz), beta (12.5-30 Hz), and gamma (30-120 Hz) oscillations, among others. These rhythms are believed to underlie information processing and cognitive functions and have been subjected to intense experimental and theoretical scrutiny. Computational modeling has provided a framework for the emergence of network-level oscillatory behavior from the interaction of spiking neurons. However, due to the strong nonlinear interactions between highly recurrent spiking populations, the interplay between cortical rhythms in multiple frequency bands has rarely been theoretically investigated. Many studies invoke multiple physiological timescales (e.g., various ion channels or multiple types of inhibitory neurons) or oscillatory inputs to produce rhythms in multiple bands. Here, we demonstrate the emergence of multi-band oscillations in a simple network consisting of one excitatory and one inhibitory neuronal population driven by constant input. First, we construct a data-driven, Poincaré section theory for robust numerical observations of single-frequency oscillations bifurcating into multiple bands. Then, we develop model reductions of the stochastic, nonlinear, high-dimensional neuronal network to capture the appearance of multi-band dynamics and the underlying bifurcations theoretically. Furthermore, when viewed within the reduced state space, our analysis reveals conserved geometrical features of the bifurcations on low-dimensional dynamical manifolds. These results suggest a simple geometric mechanism behind the emergence of multi-band oscillations without appealing to oscillatory inputs or multiple synaptic or neuronal timescales. Thus, our work points to unexplored regimes of stochastic competition between excitation and inhibition behind the generation of dynamic, patterned neuronal activities.
Affiliation(s)
- Tianyi Wu
- School of Mathematical Sciences, Peking University, Beijing 100871, China
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10003, USA
- Yuhang Cai
- Department of Mathematics, University of California, Berkeley, Berkeley, California 94720, USA
- Ruilin Zhang
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Yuanpei College, Peking University, Beijing 100871, China
- Zhongyi Wang
- School of Mathematical Sciences, Peking University, Beijing 100871, China
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Louis Tao
- Center for Bioinformatics, National Laboratory of Protein Engineering and Plant Genetic Engineering, School of Life Sciences, Peking University, Beijing 100871, China
- Center for Quantitative Biology, Peking University, Beijing 100871, China
- Zhuo-Cheng Xiao
- Courant Institute of Mathematical Sciences, New York University, New York, New York 10003, USA
9
Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks. Cogn Neurodyn 2023; 17:257-275. [PMID: 35469119] [PMCID: PMC9020562] [DOI: 10.1007/s11571-022-09802-5]
Abstract
Different brain areas, such as the cortex and, more specifically, the prefrontal cortex, show great recurrence in their connections, even in early sensory areas. Several approaches and methods based on trained networks have been proposed to model and describe these regions. It is essential to understand the dynamics behind such models because they are used to build hypotheses about the functioning of brain areas and to explain experimental results. The main contribution here is a description of these dynamics through the classification and interpretation of a set of numerical simulations. This study sheds light on the multiplicity of solutions obtained for the same tasks and shows the link between the spectra of the linearized trained networks and their dynamics. The patterns in the eigenvalue distribution of the recurrent weight matrix were studied and related to the dynamics of each task.
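The link between the eigenspectrum of a recurrent weight matrix and the resulting dynamics can be illustrated with a standard random-matrix result (a textbook sketch, not this paper's trained networks): for an N x N matrix with i.i.d. N(0, g²/N) entries, the eigenvalues fill a disk of radius ≈ g in the complex plane, so the gain g determines whether linearized dynamics decay (g < 1) or can self-sustain (g > 1).

```python
import numpy as np

rng = np.random.default_rng(2)

# Eigenvalues of a Gaussian random recurrent matrix obey the circular law:
# they fill a disk of radius ~ g, where entries are drawn N(0, g^2/N).
N = 400
for g in (0.5, 1.5):
    J = (g / np.sqrt(N)) * rng.standard_normal((N, N))
    radius = np.abs(np.linalg.eigvals(J)).max()
    regime = "decaying" if radius < 1.0 else "potentially self-sustained"
    print(f"g = {g}: spectral radius ≈ {radius:.2f} ({regime})")
```

Training reshapes this spectrum, which is why, as the abstract notes, the eigenvalue patterns of trained networks can be read as signatures of the task dynamics.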
10
Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. PLoS Comput Biol 2023; 19:e1010855. [PMID: 36689488] [PMCID: PMC9894562] [DOI: 10.1371/journal.pcbi.1010855]
Abstract
How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is however currently missing. Specifically, it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are interrelated and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics, and statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity. Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
Affiliation(s)
- Yuxiu Shao
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, Paris, France
11
Shi YL, Zeraati R, Levina A, Engel TA. Spatial and temporal correlations in neural networks with structured connectivity. Phys Rev Research 2023; 5:013005. [PMID: 38938692] [PMCID: PMC11210526] [DOI: 10.1103/physrevresearch.5.013005]
Abstract
Correlated fluctuations in the activity of neural populations reflect the network's dynamics and connectivity. The temporal and spatial dimensions of neural correlations are interdependent. However, prior theoretical work mainly analyzed correlations in either spatial or temporal domains, oblivious to their interplay. We show that the network dynamics and connectivity jointly define the spatiotemporal profile of neural correlations. We derive analytical expressions for pairwise correlations in networks of binary units with spatially arranged connectivity in one and two dimensions. We find that spatial interactions among units generate multiple timescales in auto- and cross-correlations. Each timescale is associated with fluctuations at a particular spatial frequency, making a hierarchical contribution to the correlations. External inputs can modulate the correlation timescales when spatial interactions are nonlinear, and the modulation effect depends on the operating regime of network dynamics. These theoretical results open new ways to relate connectivity and dynamics in cortical networks via measurements of spatiotemporal neural correlations.
Affiliation(s)
- Yan-Liang Shi
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
- Roxana Zeraati
- International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Anna Levina
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Computer Science, University of Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience Tübingen, Tübingen, Germany
- Tatiana A Engel
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
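The abstract's claim that spatial interactions generate multiple timescales has a simple linear-rate analogue (an illustrative sketch, not the paper's binary-unit model; kernel width and coupling strength are assumed values): with translation-invariant connectivity on a ring, each spatial Fourier mode k relaxes with its own timescale τ/(1 − λ_k), where λ_k are the connectivity eigenvalues, so a single network exhibits a hierarchy of timescales indexed by spatial frequency.

```python
import numpy as np

# Linear rate dynamics tau * dx/dt = -x + W x on a ring of N units,
# with a Gaussian connectivity kernel (circulant matrix W).
N, tau = 100, 1.0
dist = np.minimum(np.arange(N), N - np.arange(N))   # distances on the ring
kernel = np.exp(-dist**2 / (2 * 5.0**2))
kernel *= 0.8 / kernel.sum()                        # total coupling < 1 => stable

# Eigenvalues of a circulant matrix are the Fourier transform of its kernel;
# the kernel is symmetric, so they are real.
lam = np.real(np.fft.fft(kernel))
timescales = tau / (1.0 - lam)                      # relaxation time of mode k

print("timescales of the first four spatial modes:", np.round(timescales[:4], 3))
```

Low spatial frequencies see the strongest recurrent feedback and therefore relax slowest, which is the hierarchical contribution to auto- and cross-correlations described in the abstract.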
12
Mosheiff N, Ermentrout B, Huang C. Chaotic dynamics in spatially distributed neuronal networks generate population-wide shared variability. PLoS Comput Biol 2023; 19:e1010843. [PMID: 36626362] [PMCID: PMC9870129] [DOI: 10.1371/journal.pcbi.1010843]
Abstract
Neural activity in the cortex is highly variable in response to repeated stimuli. Population recordings across the cortex demonstrate that the variability of neuronal responses is shared among large groups of neurons and concentrates in a low dimensional space. However, the source of the population-wide shared variability is unknown. In this work, we analyzed the dynamical regimes of spatially distributed networks of excitatory and inhibitory neurons. We found chaotic spatiotemporal dynamics in networks with similar excitatory and inhibitory projection widths, an anatomical feature of the cortex. The chaotic solutions contain broadband frequency power in rate variability and have distance-dependent and low-dimensional correlations, in agreement with experimental findings. In addition, rate chaos can be induced by globally correlated noisy inputs. These results suggest that spatiotemporal chaos in cortical networks can explain the shared variability observed in neuronal population responses.
Affiliation(s)
- Noga Mosheiff
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Chengcheng Huang
- Department of Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Center for the Neural Basis of Cognition, Pittsburgh, Pennsylvania, United States of America
- Department of Mathematics, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
13
Fukai T. Computational models of Idling brain activity for memory processing. Neurosci Res 2022; 189:75-82. [PMID: 36592825] [DOI: 10.1016/j.neures.2022.12.024]
Abstract
Studying the neural mechanisms underlying the cognitive functions of the brain is one of the central questions in modern biology, and it has significantly impacted the development of novel technologies in artificial intelligence. Spontaneous activity is a unique feature of the brain and is currently lacking in many artificially constructed intelligent machines. Spontaneous activity may represent the brain's idling states, which are internally driven by neuronal networks and possibly participate in offline processing during awake, sleep, and resting states. Evidence is accumulating that the brain's spontaneous activity is not mere noise but part of the mechanisms that process information about previous experiences. A large body of literature has shown how previous sensory and behavioral experiences influence subsequent patterns of brain activity, using various methods in various animals. However, the patterns of neural activity and their computational roles seem to differ significantly from area to area and from function to function. In this article, I review the various forms of the brain's spontaneous activity, especially those observed during memory processing, and some attempts to model the generation mechanisms and computational roles of such activities.
Affiliation(s)
- Tomoki Fukai
- Okinawa Institute of Science and Technology, Tancha 1919-1, Onna-son, Okinawa 904-0495, Japan
14
Timcheck J, Kadmon J, Boahen K, Ganguli S. Optimal noise level for coding with tightly balanced networks of spiking neurons in the presence of transmission delays. PLoS Comput Biol 2022; 18:e1010593. [PMID: 36251693] [PMCID: PMC9576105] [DOI: 10.1371/journal.pcbi.1010593]
Abstract
Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly-coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power-laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the buildup of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
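The classical 1/√N averaging baseline contrasted in this abstract can be checked numerically. A minimal sketch under assumed conditions (unit Gaussian noise, simple averaging decoder; this is the baseline only, not the paper's predictive-coding spiking network, whose 1/N scaling requires the full model):

```python
import numpy as np

# Decode a constant signal by averaging N independent noisy "neurons".
# The mean absolute decoding error should shrink roughly as 1/sqrt(N),
# i.e. by a factor near sqrt(10) ~ 3.16 per decade of population size.
rng = np.random.default_rng(0)
signal, trials = 1.0, 2000

def mean_abs_error(n_neurons):
    # Each trial: average n_neurons unit-noise readings of the signal.
    est = signal + rng.normal(0.0, 1.0, size=(trials, n_neurons)).mean(axis=1)
    return np.abs(est - signal).mean()

errors = {n: mean_abs_error(n) for n in (10, 100, 1000)}
ratio = errors[10] / errors[100]   # expected near sqrt(10)
```

Tightly balanced predictive-coding networks beat this baseline because each spike corrects the population readout, rather than being averaged away.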
Collapse
Affiliation(s)
- Jonathan Timcheck
- Department of Physics, Stanford University, Stanford, California, United States of America
| | - Jonathan Kadmon
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
| | - Kwabena Boahen
- Department of Bioengineering, Stanford University, Stanford, California, United States of America
| | - Surya Ganguli
- Department of Applied Physics, Stanford University, Stanford, California, United States of America
| |
Collapse
|
15
|
A chaotic neural network model for biceps muscle based on Rossler stimulation equation and bifurcation diagram. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103852] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
16
|
Herbert E, Ostojic S. The impact of sparsity in low-rank recurrent neural networks. PLoS Comput Biol 2022; 18:e1010426. [PMID: 35944030 PMCID: PMC9390915 DOI: 10.1371/journal.pcbi.1010426] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/19/2022] [Accepted: 07/22/2022] [Indexed: 11/18/2022] Open
Abstract
Neural population dynamics are often highly coordinated, allowing task-related computations to be understood as neural trajectories through low-dimensional subspaces. How the network connectivity and input structure give rise to such activity can be investigated with the aid of low-rank recurrent neural networks, a recently-developed class of computational models which offer a rich theoretical framework linking the underlying connectivity structure to emergent low-dimensional dynamics. This framework has so far relied on the assumption of all-to-all connectivity, yet cortical networks are known to be highly sparse. Here we investigate the dynamics of low-rank recurrent networks in which the connections are randomly sparsified, which makes the network connectivity formally full-rank. We first analyse the impact of sparsity on the eigenvalue spectrum of low-rank connectivity matrices, and use this to examine the implications for the dynamics. We find that in the presence of sparsity, the eigenspectra in the complex plane consist of a continuous bulk and isolated outliers, a form analogous to the eigenspectra of connectivity matrices composed of a low-rank and a full-rank random component. This analogy allows us to characterise distinct dynamical regimes of the sparsified low-rank network as a function of key network parameters. Altogether, we find that the low-dimensional dynamics induced by low-rank connectivity structure are preserved even at high levels of sparsity, and can therefore support rich and robust computations even in networks sparsified to a biologically-realistic extent.
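The "continuous bulk plus isolated outliers" spectral picture described above can be illustrated with a rank-one toy example (an assumed parametrization for illustration, not the paper's exact model):

```python
import numpy as np

# Rank-one connectivity J = m n^T / N has a single outlier eigenvalue near
# (m . n) / N. Random sparsification makes the matrix formally full-rank and
# adds a continuous bulk of eigenvalues, while the outlier survives.
rng = np.random.default_rng(1)
N = 400
m = rng.normal(1.0, 1.0, N)
n = rng.normal(1.0, 1.0, N)
J = np.outer(m, n) / N

p = 0.2                                  # fraction of connections kept
mask = rng.random((N, N)) < p
J_sparse = mask * J / p                  # rescale to preserve the mean weight

eig = np.linalg.eigvals(J_sparse)
idx = np.argsort(np.abs(eig))
outlier = eig[idx[-1]]                   # isolated outlier, near (m @ n) / N
bulk_radius = np.abs(eig[idx[-2]])       # edge of the continuous bulk
```

The outlier, which carries the low-dimensional dynamics, stands well clear of the bulk even at 80% sparsity, consistent with the robustness result in the abstract.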
Collapse
Affiliation(s)
- Elizabeth Herbert
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’Études Cognitives, INSERM U960, École Normale Supérieure - PSL University, Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, Département d’Études Cognitives, INSERM U960, École Normale Supérieure - PSL University, Paris, France
| |
Collapse
|
17
|
Valente A, Ostojic S, Pillow J. Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models. Neural Comput 2022; 34:1871-1892. [PMID: 35896161 DOI: 10.1162/neco_a_01522] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 04/15/2022] [Indexed: 11/04/2022]
Abstract
A large body of work has suggested that neural populations exhibit low-dimensional dynamics during behavior. However, there are a variety of different approaches for modeling low-dimensional neural population activity. One approach involves latent linear dynamical system (LDS) models, in which population activity is described by a projection of low-dimensional latent variables with linear dynamics. A second approach involves low-rank recurrent neural networks (RNNs), in which population activity arises directly from a low-dimensional projection of past activity. Although these two modeling approaches have strong similarities, they arise in different contexts and tend to have different domains of application. Here we examine the precise relationship between latent LDS models and linear low-rank RNNs. When can one model class be converted to the other, and vice versa? We show that latent LDS models can only be converted to RNNs in specific limit cases, due to the non-Markovian property of latent LDS models. Conversely, we show that linear RNNs can be mapped onto LDS models, with latent dimensionality at most twice the rank of the RNN. A surprising consequence of our results is that a partially observed RNN is better represented by an LDS model than by an RNN consisting of only observed units.
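The mapping discussed above can be made concrete for the noiseless, fully observed case. A hedged sketch with an assumed toy setup: a linear RNN with rank-2 connectivity J = L Rᵀ is reproduced exactly by a 2-dimensional latent LDS (here the latent dimensionality equals the rank; the "at most twice the rank" bound covers the general case):

```python
import numpy as np

# Linear RNN: x_{t+1} = J x_t with J = L R^T of rank 2.
# Equivalent latent LDS: z_{t+1} = (R^T L) z_t, readout x_t = L z_t.
rng = np.random.default_rng(2)
N, rank, T = 50, 2, 10
L = rng.normal(size=(N, rank))
R = rng.normal(size=(N, rank)) / N       # scaling keeps the dynamics bounded
J = L @ R.T

x = x0 = rng.normal(size=N)
X_rnn = np.empty((T, N))
for t in range(T):
    x = J @ x                            # full N-dimensional recurrence
    X_rnn[t] = x

z = R.T @ x0                             # latent state, dimension = rank
A = R.T @ L                              # latent dynamics matrix (rank x rank)
X_lds = np.empty((T, N))
for t in range(T):
    X_lds[t] = L @ z                     # readout through the column space of L
    z = A @ z
```

After one step the RNN trajectory is confined to the rank-dimensional column space of L, which is why the low-dimensional latent description loses nothing here.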
Collapse
Affiliation(s)
- Adrian Valente
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
| | - Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure-PSL Research University, 75005 Paris, France
| | - Jonathan Pillow
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, U.S.A.
| |
Collapse
|
18
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
19
|
Urai AE, Doiron B, Leifer AM, Churchland AK. Large-scale neural recordings call for new insights to link brain and behavior. Nat Neurosci 2022; 25:11-19. [PMID: 34980926 DOI: 10.1038/s41593-021-00980-9] [Citation(s) in RCA: 90] [Impact Index Per Article: 45.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 11/08/2021] [Indexed: 12/17/2022]
Abstract
Neuroscientists today can measure activity from more neurons than ever before, and are facing the challenge of connecting these brain-wide neural recordings to computation and behavior. In the present review, we first describe emerging tools and technologies being used to probe large-scale brain activity and new approaches to characterize behavior in the context of such measurements. We next highlight insights obtained from large-scale neural recordings in diverse model systems, and argue that some of these pose a challenge to traditional theoretical frameworks. Finally, we elaborate on existing modeling frameworks to interpret these data, and argue that the interpretation of brain-wide neural recordings calls for new theoretical approaches that may depend on the desired level of understanding. These advances in both neural recordings and theory development will pave the way for critical advances in our understanding of the brain.
Collapse
Affiliation(s)
- Anne E Urai
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA; Cognitive Psychology Unit, Leiden University, Leiden, The Netherlands
| | | | | | - Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA; University of California Los Angeles, Los Angeles, CA, USA.
| |
Collapse
|
20
|
Bi H, di Volo M, Torcini A. Asynchronous and Coherent Dynamics in Balanced Excitatory-Inhibitory Spiking Networks. Front Syst Neurosci 2021; 15:752261. [PMID: 34955768 PMCID: PMC8702645 DOI: 10.3389/fnsys.2021.752261] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Accepted: 10/27/2021] [Indexed: 01/14/2023] Open
Abstract
Dynamic excitatory-inhibitory (E-I) balance is a paradigmatic mechanism invoked to explain the irregular low firing activity observed in the cortex. However, we will show that the E-I balance can be at the origin of other regimes observable in the brain. The analysis is performed by combining extensive simulations of sparse E-I networks composed of N spiking neurons with analytical investigations of low dimensional neural mass models. The bifurcation diagrams, derived for the neural mass model, allow us to classify the possible asynchronous and coherent behaviors emerging in balanced E-I networks with structural heterogeneity for any finite in-degree K. Analytic mean-field (MF) results show that both supra and sub-threshold balanced asynchronous regimes are observable in our system in the limit N >> K >> 1. Due to the heterogeneity, the asynchronous states are characterized at the microscopic level by the splitting of the neurons into three groups: silent, fluctuation, and mean driven. These features are consistent with experimental observations reported for heterogeneous neural circuits. The coherent rhythms observed in our system can range from periodic and quasi-periodic collective oscillations (COs) to coherent chaos. These rhythms are characterized by regular or irregular temporal fluctuations joined to spatial coherence, somewhat similar to coherent fluctuations observed in the cortex over multiple spatial scales. The COs can emerge due to two different mechanisms. The first mechanism is analogous to the pyramidal-interneuron gamma (PING) mechanism usually invoked for the emergence of γ-oscillations. The second mechanism is intimately related to the presence of current fluctuations, which sustain COs characterized by an essentially simultaneous bursting of the two populations. We observe period-doubling cascades involving the PING-like COs finally leading to the appearance of coherent chaos. Fluctuation driven COs are usually observable in our system as quasi-periodic collective motions characterized by two incommensurate frequencies. However, for sufficiently strong current fluctuations these collective rhythms can lock. This represents a novel mechanism of frequency locking in neural populations promoted by intrinsic fluctuations. COs are observable for any finite in-degree K; however, their existence in the limit N >> K >> 1 appears uncertain.
Collapse
Affiliation(s)
- Hongjie Bi
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
| | - Matteo di Volo
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
| | - Alessandro Torcini
- CY Cergy Paris Université, Laboratoire de Physique Théorique et Modélisation, CNRS, UMR 8089, Cergy-Pontoise, France
- CNR-Consiglio Nazionale delle Ricerche, Istituto dei Sistemi Complessi, Sesto Fiorentino, Italy
| |
Collapse
|
21
|
Usher M. Refuting the unfolding-argument on the irrelevance of causal structure to consciousness. Conscious Cogn 2021; 95:103212. [PMID: 34627098 DOI: 10.1016/j.concog.2021.103212] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 09/15/2021] [Accepted: 09/16/2021] [Indexed: 11/28/2022]
Abstract
The unfolding argument (UA) was advanced as a refutation of prominent theories that posit that phenomenal experience is determined by patterns of neural activation in a recurrent (neural) network (RN) structure. The argument is based on the statement that any input-output function of an RN can be approximated by an "equivalent" feedforward network (FFN). According to UA, if consciousness depends on causal structure, its presence is unfalsifiable (thus non-scientific), as an equivalent FFN structure is behaviorally indistinguishable with regard to any behavioral test. Here I refute UA by appealing to computational theory and cognitive-neuroscience. I argue that a robust functional equivalence between FFN and RN is not supported by the mathematical work on the Universal Approximator theorem, and is also unlikely to hold, as a conjecture, given data in cognitive neuroscience; I argue that an equivalence of RN and FFN can only apply to static functions between input/output layers and not to the temporal patterns or to the network's reactions to structural perturbations. Finally, I review data indicating that consciousness has functional characteristics, such as a flexible control of behavior, and that cognitive/brain dynamics reveal interacting top-down and bottom-up processes, which are necessary for the mediation of such control processes.
Collapse
|
22
|
Huang C. Modulation of the dynamical state in cortical network models. Curr Opin Neurobiol 2021; 70:43-50. [PMID: 34403890 PMCID: PMC8688204 DOI: 10.1016/j.conb.2021.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 05/18/2021] [Accepted: 07/14/2021] [Indexed: 11/29/2022]
Abstract
Cortical neural responses can be modulated by various factors, such as stimulus inputs and the behavior state of the animal. Understanding the circuit mechanisms underlying modulations of network dynamics is important to understand the flexibility of circuit computations. Identifying the dynamical state of a network is an important first step to predict network responses to external stimulus and top-down modulatory inputs. Models in stable or unstable dynamical regimes require different analytic tools to estimate the network responses to inputs and the structure of neural variability. In this article, I review recent cortical models of state-dependent responses and their predictions about the underlying modulatory mechanisms.
Collapse
Affiliation(s)
- Chengcheng Huang
- Departments of Neuroscience and Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
| |
Collapse
|
23
|
Logiaco L, Abbott LF, Escola S. Thalamic control of cortical dynamics in a model of flexible motor sequencing. Cell Rep 2021; 35:109090. [PMID: 34077721 PMCID: PMC8449509 DOI: 10.1016/j.celrep.2021.109090] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Revised: 03/04/2021] [Accepted: 04/16/2021] [Indexed: 12/26/2022] Open
Abstract
The neural mechanisms that generate an extensible library of motor motifs and flexibly string them into arbitrary sequences are unclear. We developed a model in which inhibitory basal ganglia output neurons project to thalamic units that are themselves bidirectionally connected to a recurrent cortical network. We model the basal ganglia inhibitory patterns as silencing some thalamic neurons while leaving others disinhibited and free to interact with cortex during specific motifs. We show that a small number of disinhibited thalamic neurons can control cortical dynamics to generate specific motor output in a noise-robust way. Additionally, a single "preparatory" thalamocortical network can produce fast cortical dynamics that support rapid transitions between any pair of learned motifs. If the thalamic units associated with each sequence component are segregated, many motor outputs can be learned without interference and then combined in arbitrary orders for the flexible production of long and complex motor sequences.
Collapse
Affiliation(s)
- Laureline Logiaco
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Institute, Department of Psychiatry, Columbia University, New York, NY 10027, USA.
| | - L F Abbott
- Zuckerman Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
| | - Sean Escola
- Zuckerman Institute, Department of Psychiatry, Columbia University, New York, NY 10027, USA
| |
Collapse
|
24
|
Akil AE, Rosenbaum R, Josić K. Balanced networks under spike-time dependent plasticity. PLoS Comput Biol 2021; 17:e1008958. [PMID: 33979336 PMCID: PMC8143429 DOI: 10.1371/journal.pcbi.1008958] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Revised: 05/24/2021] [Accepted: 04/12/2021] [Indexed: 11/28/2022] Open
Abstract
The dynamics of local cortical networks are irregular, but correlated. Dynamic excitatory-inhibitory balance is a plausible mechanism that generates such irregular activity, but it remains unclear how balance is achieved and maintained in plastic neural networks. In particular, it is not fully understood how plasticity-induced changes in the network affect balance, and in turn, how correlated, balanced activity impacts learning. How do the dynamics of balanced networks change under different plasticity rules? How does correlated spiking activity in recurrent networks change the evolution of weights, their eventual magnitude, and structure across the network? To address these questions, we develop a theory of spike-timing-dependent plasticity in balanced networks. We show that balance can be attained and maintained under plasticity-induced weight changes. We find that correlations in the input mildly affect the evolution of synaptic weights. Under certain plasticity rules, we find an emergence of correlations between firing rates and synaptic weights. Under these rules, synaptic weights converge to a stable manifold in weight space with their final configuration dependent on the initial state of the network. Lastly, we show that our framework can also describe the dynamics of plastic balanced networks when subsets of neurons receive targeted optogenetic input.
Animals are able to learn complex tasks through changes in individual synapses between cells. Such changes lead to the coevolution of neural activity patterns and the structure of neural connectivity, but the consequences of these interactions are not fully understood. We consider plasticity in model neural networks which achieve an average balance between the excitatory and inhibitory synaptic inputs to different cells, and display cortical-like, irregular activity. We extend the theory of balanced networks to account for synaptic plasticity and show which rules can maintain balance, and which will drive the network into a different state. This theory of plasticity can provide insights into the relationship between stimuli, network dynamics, and synaptic circuitry.
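The class of plasticity rules analyzed above includes the standard pair-based STDP window. A generic textbook form, assumed here for illustration (the paper studies several such rules and their interaction with balance; parameter values are arbitrary):

```python
import numpy as np

# Pair-based STDP window. dt = t_post - t_pre:
# causal pairings (pre before post, dt >= 0) potentiate the synapse,
# acausal pairings depress it, each with its own amplitude and time constant.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dw_causal = stdp_dw(10.0)    # pre leads post by 10 ms: potentiation
dw_acausal = stdp_dw(-10.0)  # post leads pre by 10 ms: depression
```

With a_minus * tau_minus > a_plus * tau_plus, the window is depression-dominated on average, a common choice for keeping recurrent weights from running away; how such rules interact with correlated balanced activity is exactly the question the paper addresses.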
Collapse
Affiliation(s)
- Alan Eric Akil
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
| | - Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, Indiana, United States of America
- Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, Indiana, United States of America
| | - Krešimir Josić
- Department of Mathematics, University of Houston, Houston, Texas, United States of America
- Department of Biology and Biochemistry, University of Houston, Houston, Texas, United States of America
| |
Collapse
|
25
|
Unpredictable Oscillations for Hopfield-Type Neural Networks with Delayed and Advanced Arguments. MATHEMATICS 2021. [DOI: 10.3390/math9050571] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This is the first time that the method for the investigation of unpredictable solutions of differential equations has been extended to unpredictable oscillations of neural networks with a generalized piecewise constant argument, which is delayed and advanced. The existence and exponential stability of the unique unpredictable oscillation are proven. According to the theory, the presence of unpredictable oscillations is strong evidence for Poincaré chaos. Consequently, the paper is a contribution to chaos applications in neuroscience. The model is inspired by chaotic time-varying stimuli, which allow studying the distribution of chaotic signals in neural networks. Unpredictable inputs create an excitation wave of neurons that transmit chaotic signals. The technique of analysis includes the ideas used for differential equations with a piecewise constant argument. The results are illustrated by examples and simulations. They are carried out in MATLAB Simulink to demonstrate the simplicity of the diagrammatic approaches.
Collapse
|
26
|
Dwyer DS. Genomic Chaos Begets Psychiatric Disorder. Complex Psychiatry 2020; 6:20-29. [PMID: 34883501 PMCID: PMC7673594 DOI: 10.1159/000507988] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Accepted: 04/06/2020] [Indexed: 12/21/2022] Open
Abstract
The processes that created the primordial genome are inextricably linked to current day vulnerability to developing a psychiatric disorder as summarized in this review article. Chaos and dynamic forces including duplication, transposition, and recombination generated the protogenome. To survive early stages of genome evolution, self-organization emerged to curb chaos. Eventually, the human genome evolved through a delicate balance of chaos/instability and organization/stability. However, recombination coldspots, silencing of transposable elements, and other measures to limit chaos also led to retention of variants that increase risk for disease. Moreover, ongoing dynamics in the genome creates various new mutations that determine liability for psychiatric disorders. Homologous recombination, long-range gene regulation, and gene interactions were all guided by spooky action-at-a-distance, which increased variability in the system. A probabilistic system of life was required to deal with a changing environment. This ensured the generation of outliers in the population, which enhanced the probability that some members would survive unfavorable environmental impacts. Some of the outliers produced through this process in man are ill suited to cope with the complex demands of modern life. Genomic chaos and mental distress from the psychological challenges of modern living will inevitably converge to produce psychiatric disorders in man.
Collapse
Affiliation(s)
- Donard S. Dwyer
- Departments of Psychiatry and Behavioral Medicine and Pharmacology, Toxicology and Neuroscience, LSU Health Shreveport, Shreveport, Louisiana, USA
| |
Collapse
|
27
|
Krauss P, Schuster M, Dietrich V, Schilling A, Schulze H, Metzner C. Weight statistics controls dynamics in recurrent neural networks. PLoS One 2019; 14:e0214541. [PMID: 30964879 PMCID: PMC6456246 DOI: 10.1371/journal.pone.0214541] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 03/14/2019] [Indexed: 11/19/2022] Open
Abstract
Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths wij between the individual neurons. However, little is known about the extent to which network dynamics are tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamics of recurrent networks of Boltzmann neurons. In particular we study the impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with wij = wji). By computing a 'phase diagram' of network dynamics, we find that balance is the essential control parameter: Its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the 'edge of chaos' by ensuring a proper balance between excitatory and inhibitory neural connections.
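Generating weight matrices with prescribed coarse-grained statistics is straightforward to sketch. A hedged illustration (assumed parametrization: here "balance" is the excitatory-minus-inhibitory fraction among nonzero weights, mapped from the excitatory/inhibitory ratio the paper uses; symmetry is omitted for brevity):

```python
import numpy as np

# Draw an n x n weight matrix with a given density d (fraction of nonzero
# entries) and balance b (excitatory fraction minus inhibitory fraction
# among the nonzero entries), using +/-1 weights for simplicity.
def random_weights(n, density, balance, rng):
    w = np.zeros((n, n))
    mask = rng.random((n, n)) < density
    # P(excitatory | nonzero) = (1 + balance) / 2, else inhibitory.
    exc = rng.random((n, n)) < (1 + balance) / 2
    w[mask & exc] = 1.0
    w[mask & ~exc] = -1.0
    return w

rng = np.random.default_rng(3)
w = random_weights(500, density=0.3, balance=0.2, rng=rng)
nonzero = w != 0
realized_density = nonzero.mean()
realized_balance = (w > 0).sum() / nonzero.sum() - (w < 0).sum() / nonzero.sum()
```

Sweeping `balance` over a grid of such matrices and classifying the resulting attractor dynamics is the kind of procedure behind the 'phase diagram' described in the abstract.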
Collapse
Affiliation(s)
- Patrick Krauss
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Marc Schuster
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Verena Dietrich
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Achim Schilling
- Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Department of English and American Studies, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Holger Schulze
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Claus Metzner
- Experimental Otolaryngology, Neuroscience Group, University Hospital Erlangen, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
- Biophysics Group, Department of Physics, Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| |
Collapse
|
28
|
Huang C, Ruff DA, Pyle R, Rosenbaum R, Cohen MR, Doiron B. Circuit Models of Low-Dimensional Shared Variability in Cortical Networks. Neuron 2018; 101:337-348.e4. [PMID: 30581012 DOI: 10.1016/j.neuron.2018.11.034] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Revised: 10/25/2018] [Accepted: 11/19/2018] [Indexed: 12/19/2022]
Abstract
Trial-to-trial variability is a reflection of the circuitry and cellular physiology that make up a neuronal network. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional. Previous model cortical networks cannot explain this global variability, and rather assume it is from external sources. We show that if the spatial and temporal scales of inhibitory coupling match known physiology, networks of model spiking neurons internally generate low-dimensional shared variability that captures population activity recorded in vivo. Shifting spatial attention into the receptive field of visual neurons has been shown to differentially modulate shared variability within and between brain areas. A top-down modulation of inhibitory neurons in our network provides a parsimonious mechanism for this attentional modulation. Our work provides a critical link between observed cortical circuit structure and realistic shared neuronal variability and its modulation.
Collapse
Affiliation(s)
- Chengcheng Huang
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
| | - Douglas A Ruff
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
| | - Ryan Pyle
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA
| | - Robert Rosenbaum
- Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, USA; Interdisciplinary Center for Network Science and Applications, University of Notre Dame, Notre Dame, IN, USA
| | - Marlene R Cohen
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA; Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
| | - Brent Doiron
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
| |
Collapse
|