1. Chen R, Singh M, Braver TS, Ching S. Dynamical models reveal anatomically reliable attractor landscapes embedded in resting state brain networks. bioRxiv 2024:2024.01.15.575745. PMID: 38293124; PMCID: PMC10827065; DOI: 10.1101/2024.01.15.575745
Abstract
Analyses of functional connectivity (FC) in resting-state brain networks (RSNs) have generated many insights into cognition. However, the mechanistic underpinnings of FC and RSNs are still not well-understood. It remains debated whether resting state activity is best characterized as noise-driven fluctuations around a single stable state, or instead, as a nonlinear dynamical system with nontrivial attractors embedded in the RSNs. Here, we provide evidence for the latter, by constructing whole-brain dynamical systems models from individual resting-state fMRI (rfMRI) recordings, using the Mesoscale Individualized NeuroDynamic (MINDy) platform. The MINDy models consist of hundreds of neural masses representing brain parcels, connected by fully trainable, individualized weights. We found that our models manifested a diverse taxonomy of nontrivial attractor landscapes including multiple equilibria and limit cycles. However, when projected into anatomical space, these attractors mapped onto a limited set of canonical RSNs, including the default mode network (DMN) and frontoparietal control network (FPN), which were reliable at the individual level. Further, by creating convex combinations of models, bifurcations were induced that recapitulated the full spectrum of dynamics found via fitting. These findings suggest that the resting brain traverses a diverse set of dynamics, which generates several distinct but anatomically overlapping attractor landscapes. Treating rfMRI as a unimodal stationary process (i.e., conventional FC) may miss critical attractor properties and structure within the resting brain. Instead, these may be better captured through neural dynamical modeling and analytic approaches. The results provide new insights into the generative mechanisms and intrinsic spatiotemporal organization of brain networks.
Affiliation(s)
- Ruiqi Chen: Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, MO 63108
- Matthew Singh: Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63108
- Todd S. Braver: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63108
- ShiNung Ching: Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63108
2. Kurikawa T, Kaneko K. Multiple-Timescale Neural Networks: Generation of History-Dependent Sequences and Inference Through Autonomous Bifurcations. Front Comput Neurosci 2021; 15:743537. PMID: 34955798; PMCID: PMC8702558; DOI: 10.3389/fncom.2021.743537
Abstract
Sequential transitions between metastable states are ubiquitously observed in neural systems and underlie various cognitive functions, such as perception and decision making. Although a number of studies with asymmetric Hebbian connectivity have investigated how such sequences are generated, the sequences considered have been simple Markov ones. Recurrent neural networks trained with supervised machine-learning methods, on the other hand, can generate complex non-Markov sequences, but these sequences are vulnerable to perturbations, and such learning methods are biologically implausible. How stable, complex sequences are generated in neural systems thus remains unclear. We have developed a neural network with fast and slow dynamics, inspired by the hierarchy of timescales of neural activity in the cortex. The slow dynamics store the history of inputs and outputs and affect the fast dynamics depending on the stored history. We show that a learning rule requiring only local information can form a network that generates complex, robust sequences in the fast dynamics. The slow dynamics act as bifurcation parameters for the fast dynamics: depending on the previous patterns, they stabilize the next pattern of the sequence before the current pattern is destabilized. This co-existence period leads to a stable transition between the current and next patterns in the non-Markov sequence. We further find that the balance of timescales is critical to the co-existence period. Our study provides a novel mechanism for generating robust, complex sequences with multiple timescales. Given that multiple timescales are widely observed in neural activity, this mechanism advances our understanding of temporal processing in the neural system.
Affiliation(s)
- Tomoki Kurikawa: Department of Physics, Kansai Medical University, Hirakata, Japan
- Kunihiko Kaneko: Department of Basic Science, Graduate School of Arts and Sciences, University of Tokyo, Tokyo, Japan; Center for Complex Systems Biology, Universal Biology Institute, University of Tokyo, Tokyo, Japan
3. Inoue K, Nakajima K, Kuniyoshi Y. Designing spontaneous behavioral switching via chaotic itinerancy. Sci Adv 2020; 6(46):eabb3989. PMID: 33177080; PMCID: PMC7673744; DOI: 10.1126/sciadv.abb3989
Abstract
Chaotic itinerancy is a frequently observed phenomenon in high-dimensional nonlinear dynamical systems and is characterized by itinerant transitions among multiple quasi-attractors. Several studies have pointed out that high-dimensional activity in animal brains exhibits chaotic itinerancy, which is considered to play a critical role in the generation of animals' spontaneous behavior. How to design desired chaotic itinerancy is thus a topic of great interest, particularly for neurorobotics researchers who wish to understand and implement autonomous behavioral control. However, it is generally difficult to gain control over high-dimensional nonlinear dynamical systems. In this study, we propose a method for implementing chaotic itinerancy reproducibly in a high-dimensional chaotic neural network. We demonstrate that our method makes it possible to design both the trajectories of quasi-attractors and the transition rules among them, simply by adjusting a limited number of system parameters and by exploiting the intrinsic high-dimensional chaos.
Affiliation(s)
- Katsuma Inoue: Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Kohei Nakajima: Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Yasuo Kuniyoshi: Graduate School of Information Science and Technology, The University of Tokyo, Engineering Building 2, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
4. Szilágyi A, Szabó P, Santos M, Szathmáry E. Phenotypes to remember: Evolutionary developmental memory capacity and robustness. PLoS Comput Biol 2020; 16:e1008425. PMID: 33253184; PMCID: PMC7703877; DOI: 10.1371/journal.pcbi.1008425
Abstract
There is increased awareness of the possibility of developmental memories resulting from evolutionary learning. Genetic regulatory networks and neural networks can be modelled by an analogous formalism, raising the important question of productive analogies in principles, processes, and performance. We investigate the formation and persistence of various developmental memories of past phenotypes, asking how the number of remembered past phenotypes scales with network size, to what extent stored memories form by Hebbian-like rules, and how robust these developmental "devo-engrams" are against network perturbations (graceful degradation). The analogy between neural and genetic regulatory networks is not superficial, in that it allows knowledge transfer between fields that used to develop separately from each other. Known examples of spectacular phenotypic radiations could partly be accounted for in such terms.
Affiliation(s)
- András Szilágyi: Institute of Evolution, Centre for Ecological Research, Tihany, Hungary; Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Budapest, Hungary; Center for the Conceptual Foundations of Science, Parmenides Foundation, Pullach/Munich, Germany
- Péter Szabó: Institute of Evolution, Centre for Ecological Research, Tihany, Hungary; Department of Ecology, Institute for Biology, University of Veterinary Medicine Budapest, Budapest, Hungary
- Mauro Santos: Institute of Evolution, Centre for Ecological Research, Tihany, Hungary; Departament de Genètica i de Microbiologia, Grup de Genòmica, Bioinformàtica i Biologia Evolutiva (GBBE), Universitat Autònoma de Barcelona, Barcelona, Spain
- Eörs Szathmáry: Institute of Evolution, Centre for Ecological Research, Tihany, Hungary; Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Budapest, Hungary; Center for the Conceptual Foundations of Science, Parmenides Foundation, Pullach/Munich, Germany
5. Susman L, Brenner N, Barak O. Stable memory with unstable synapses. Nat Commun 2019; 10:4441. PMID: 31570719; PMCID: PMC6768856; DOI: 10.1038/s41467-019-12306-2
Abstract
What is the physiological basis of long-term memory? The prevailing view in neuroscience attributes memory acquisition to changes in synaptic efficacy, implying that stable memories correspond to stable connectivity patterns. However, an increasing body of experimental evidence points to significant, activity-independent fluctuations in synaptic strengths. How memories can survive these fluctuations, and the accompanying stabilizing homeostatic mechanisms, is a fundamental open question. Here we explore the possibility of memory storage within a global component of network connectivity, while individual connections fluctuate. We find that homeostatic stabilization of fluctuations affects different aspects of network connectivity differently. Specifically, memories stored as time-varying attractors of neural dynamics are more resilient to erosion than fixed points. Such dynamic attractors can be learned by biologically plausible learning rules and support associative retrieval. Our results suggest a link between the properties of learning rules and those of network-level memory representations, and point at experimentally measurable signatures. How are stable memories maintained in the brain despite significant ongoing fluctuations in synaptic strengths? Here, the authors show that a model consistent with fluctuations, homeostasis, and biologically plausible learning rules naturally leads to memories implemented as dynamic attractors.
Affiliation(s)
- Lee Susman: Interdisciplinary Program in Applied Mathematics, Technion Israel Institute of Technology, Haifa, 32000, Israel; Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel
- Naama Brenner: Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Department of Chemical Engineering, Technion Israel Institute of Technology, Haifa, 32000, Israel
- Omri Barak: Network Biology Research Laboratories, Technion Israel Institute of Technology, Haifa, 32000, Israel; Rappaport Faculty of Medicine, Technion Israel Institute of Technology, Haifa, 32000, Israel
6. Kurikawa T, Kaneko K. Dynamic Organization of Hierarchical Memories. PLoS One 2016; 11:e0162640. PMID: 27618549; PMCID: PMC5019405; DOI: 10.1371/journal.pone.0162640
Abstract
In the brain, external objects are categorized hierarchically. Although it is widely accepted that objects are represented as static attractors in neural state space, this view does not take into account the interaction between intrinsic neural dynamics and external input, which is essential for understanding how the neural system responds to inputs. Indeed, structured spontaneous neural activity is known to exist in the absence of external inputs, and its relationship to evoked activities has been discussed. How categorical representations are embedded in spontaneous and evoked activities therefore remains to be uncovered. To address this question, we studied the bifurcation process under increasing input after hierarchically clustered associative memories are learned. We found a "dynamic categorization": without input, neural activity wanders globally over a region of state space that includes all memories. As the input strength increases, this diffuse representation of the higher category transitions to focused representations specific to each object. The hierarchy of memories is embedded in the transition probabilities from one memory to another during the spontaneous dynamics. With stronger input, neural activity wanders over a narrower region of state space containing a smaller set of memories, exhibiting the more specific category or memory corresponding to the applied input. Moreover, such coarse-to-fine transitions are also observed temporally during the transient process under constant input, which agrees with experimental findings in the temporal cortex. These results suggest that the hierarchy emerging through interaction with an external input underlies the hierarchy seen during transient processes, as well as in the spontaneous activity.
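The associative-memory setting this abstract builds on can be illustrated with a classic Hopfield-style network: patterns stored via a Hebbian rule become attractors that noisy initial states converge to. This is a minimal sketch of that standard setup, not the authors' actual model (which adds hierarchical clustering of patterns and input-dependent bifurcations); all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                      # neurons, stored patterns (well below capacity)

# Hebbian storage of random binary (+1/-1) patterns
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N    # Hebbian weight matrix
np.fill_diagonal(W, 0)             # no self-connections

def recall(x, steps=20):
    """Deterministic synchronous sign updates; settles onto an attractor."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1              # break ties consistently
    return x

# Start from a corrupted copy of pattern 0 and let the dynamics settle
probe = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
probe[flip] *= -1                  # flip 10% of the bits

recovered = recall(probe)
overlap = (recovered @ patterns[0]) / N
print(f"overlap with stored pattern: {overlap:.2f}")  # close to 1.0
```

With only 3 patterns in 200 units, the corrupted probe reliably falls back into the stored attractor; the paper's point is what happens beyond this static picture when such memories are clustered hierarchically and driven by input.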
Affiliation(s)
- Tomoki Kurikawa: Lab for Neural Circuit Theory, RIKEN Brain Science Institute, Wako, Saitama, 351-0198, Japan; Research Fellow of the Japan Society for the Promotion of Science, Chiyoda, Tokyo, 102-0083, Japan
- Kunihiko Kaneko: Research Center for Complex Systems Biology, University of Tokyo, Meguro, Tokyo, 153-8902, Japan
7. Kaneko K. From globally coupled maps to complex-systems biology. Chaos 2015; 25:097608. PMID: 26428561; DOI: 10.1063/1.4916925
Abstract
Studies of globally coupled maps, introduced as networks of chaotic dynamics, are briefly reviewed with an emphasis on the novel concepts therein, which are universal in high-dimensional dynamical systems. They include clustering of synchronized oscillations, hierarchical clustering, chimeras of synchronization and desynchronization, partition complexity, prevalence of Milnor attractors, chaotic itinerancy, and collective chaos. The number of degrees of freedom necessary for high dimensionality is proposed to be that at which combinatorial growth exceeds exponential growth. Future analysis of high-dimensional dynamical systems with regard to complex-systems biology is briefly discussed.
Affiliation(s)
- Kunihiko Kaneko: Research Center for Complex Systems Biology, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan
8. Rabinovich MI, Tristan I, Varona P. Hierarchical nonlinear dynamics of human attention. Neurosci Biobehav Rev 2015; 55:18-35. DOI: 10.1016/j.neubiorev.2015.04.001
9. Tsuda I, Yamaguchi Y, Hashimoto T, Okuda J, Kawasaki M, Nagasaka Y. Study of the neural dynamics for understanding communication in terms of complex hetero systems. Neurosci Res 2014; 90:51-5. PMID: 25455742; DOI: 10.1016/j.neures.2014.10.007
Abstract
The purpose of this research project was to establish a new research area, named "neural information science for communication," by elucidating the neural mechanisms of communication. The research was performed in collaboration between applied mathematicians in complex-systems science and experimental researchers in neuroscience. The project included measurements of brain activity during communication with or without language, together with analyses performed with the help of extended theories of dynamical and stochastic systems. The communication paradigm was extended to interactions between human and human, human and animal, human and robot, human and materials, and even animal and animal.
Affiliation(s)
- Ichiro Tsuda: Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan; Research Center for Integrative Mathematics, Hokkaido University, Sapporo, Japan
- Yoko Yamaguchi: Neuroinformatics Japan Center, RIKEN BSI, Hirosawa 2-1, Wako, Saitama, Japan
- Takashi Hashimoto: School of Knowledge Science, Japan Advanced Institute of Science and Technology (JAIST), Nomi, Japan
- Jiro Okuda: Department of Intelligent Systems, Faculty of Computer Science and Engineering, Kyoto Sangyo University, Kyoto, Japan
- Masahiro Kawasaki: RIKEN BSI-Toyota Collaboration Center, Hirosawa 2-1, Wako, Saitama, Japan; Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Yasuo Nagasaka: Laboratory for Adaptive Intelligence, Brain Science Institute, RIKEN, Saitama, Japan
10. Tsuda I. Chaotic itinerancy and its roles in cognitive neurodynamics. Curr Opin Neurobiol 2014; 31:67-71. PMID: 25217808; DOI: 10.1016/j.conb.2014.08.011
Abstract
Chaotic itinerancy is an autonomously excited trajectory through the high-dimensional state space of cortical neural activity that gives rise to a temporal sequence of quasi-attractors. A quasi-attractor is a local region of weakly convergent flows representing ordered activity, connected to divergent flows representing disordered, chaotic activity between such regions. From a cognitive neurodynamic perspective, quasi-attractors represent perceptions, thoughts, and memories; the chaotic trajectories between them correspond to intelligent searches, such as history-dependent trial-and-error exploration; and the itinerancy yields history-dependent sequences in thinking, speaking, and writing.
Affiliation(s)
- Ichiro Tsuda: Research Institute for Electronic Science, Hokkaido University, Kita-12, Nishi-7, Kita-ku, Sapporo, Hokkaido 060-0012, Japan
11. Kurikawa T, Kaneko K. Memories as bifurcations: realization by collective dynamics of spiking neurons under stochastic inputs. Neural Netw 2014; 62:25-31. PMID: 25124069; DOI: 10.1016/j.neunet.2014.07.005
Abstract
How the neural system proceeds from sensory stimuli to appropriate behaviors is a basic question that has not yet been fully answered. In contrast to the conventional viewpoint, in which the external stimulus dominantly drives the response behavior, recent studies have revealed that intrinsic neural dynamics, and not only external stimuli, contribute to the generation of response behavior. In particular, spontaneous activity, i.e., neural activity in the absence of extensive external stimuli, has been found to exhibit, from time to time, patterns similar to those evoked by external inputs. To further understand the role of this spontaneous activity in the response, we propose a viewpoint, memories-as-bifurcations, that differs from the traditional memories-as-attractors viewpoint. According to this viewpoint, memory is recalled when spontaneous neural activity is changed into an appropriate output activity upon the application of an input. After reviewing a previous rate-coding model embodying this viewpoint, we employ a model of a spiking neuron network that can embed input/output associations, and we study the dynamics of its collective neural activity. Organized neural activity matching the target pattern is shown to be generated even under stochastic input, while the spontaneous activity, which appears noisy, is found to exhibit selectively higher similarity to the evoked activities corresponding to the embedded target patterns. These results suggest that such intrinsic structure in the spontaneous activity might play a role in shaping the evoked response. The relevance of these results to biological neural processing is also discussed.
Affiliation(s)
- Tomoki Kurikawa: Brain Science Institute, RIKEN, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan
- Kunihiko Kaneko: Graduate School of Arts and Sciences, University of Tokyo, Komaba 3-8-1, Meguro-ku, Tokyo, Japan