1.
Medrano J, Friston K, Zeidman P. Linking fast and slow: The case for generative models. Netw Neurosci 2024; 8:24-43. PMID: 38562283; PMCID: PMC10861163; DOI: 10.1162/netn_a_00343.
Abstract
A pervasive challenge in neuroscience is testing whether neuronal connectivity changes over time due to specific causes, such as stimuli, events, or clinical interventions. Recent hardware innovations and falling data storage costs enable longer, more naturalistic neuronal recordings. The implicit opportunity for understanding the self-organised brain calls for new analysis methods that link temporal scales: from the order of milliseconds over which neuronal dynamics evolve, to the order of minutes, days, or even years over which experimental observations unfold. This review article demonstrates how hierarchical generative models and Bayesian inference help to characterise neuronal activity across different time scales. Crucially, these methods go beyond describing statistical associations among observations and enable inference about underlying mechanisms. We offer an overview of fundamental concepts in state-space modeling and suggest a taxonomy for these methods. Additionally, we introduce key mathematical principles that underscore a separation of temporal scales, such as the slaving principle, and review Bayesian methods that are being used to test hypotheses about the brain with multiscale data. We hope that this review will serve as a useful primer for experimental and computational neuroscientists on the state of the art and current directions of travel in the complex systems modelling literature.
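The review's theme of linking temporal scales can be illustrated with a toy two-timescale simulation (the model below is invented for illustration, not taken from the paper): fast AR(1) dynamics whose coupling coefficient drifts on a far slower timescale, with the slow trajectory recovered by windowed least-squares fits to the fast data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-timescale system: x_t = a_t * x_{t-1} + noise, where the
# coefficient a_t drifts ~40x more slowly than one fitting window.
T, W = 200_000, 5_000                                     # fast samples, window
a_slow = 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(T) / T)  # slow drift of a_t
noise = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_slow[t] * x[t - 1] + noise[t]

est, truth = [], []
for s in range(0, T - W + 1, W):
    xs = x[s:s + W]
    # least-squares AR(1) coefficient within the window
    est.append(float(xs[1:] @ xs[:-1] / (xs[:-1] @ xs[:-1])))
    truth.append(float(a_slow[s:s + W].mean()))

corr = float(np.corrcoef(est, truth)[0, 1])
```

In this caricature the slow variable is recovered almost perfectly; the hierarchical generative models discussed in the paper formalize exactly this kind of inference, with the slow level parameterizing the fast level's dynamics.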
Affiliation(s)
- Johan Medrano
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Karl Friston
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Peter Zeidman
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
2.
Manicom G, Kirk V, Postlethwaite C. Non-Markovian processes on heteroclinic networks. Chaos 2024; 34:033120. PMID: 38470263; DOI: 10.1063/5.0176205.
Abstract
Sets of saddle equilibria connected by trajectories are known as heteroclinic networks. Trajectories near a heteroclinic network typically spend a long period of time near one of the saddles before rapidly transitioning to the neighborhood of a different saddle. The sequence of saddles visited by a trajectory can be considered a stochastic sequence of states. In the presence of small-amplitude noise, this sequence may be either Markovian or non-Markovian, depending on the appearance of a phenomenon called lift-off at one or more saddles of the network. In this paper, we investigate how lift-off occurring at one saddle affects the dynamics near the next saddle visited, how we might determine the order of the associated Markov chain of states, and how we might calculate the transition probabilities of that Markov chain. We first review methods developed by Bakhtin to determine the map describing the dynamics near a linear saddle in the presence of noise and extend the results to include three different initial probability distributions. Using Bakhtin's map, we determine conditions under which the effect of lift-off persists as the trajectory moves past a subsequent saddle. We then propose a method for finding a lower bound for the order of this Markov chain. Many of the theoretical results in this paper are only valid in the limit of small noise, and we numerically investigate how close simulated results get to the theoretical predictions over a range of noise amplitudes and parameter values.
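The idea of determining the order of the chain of visited saddles can be sketched numerically (this is an illustrative procedure of my own, not the authors' method): fit first- and second-order Markov models to a symbol sequence and compare penalized likelihoods. The sequence below is built with a genuine two-step memory, loosely mimicking lift-off, in that after visiting saddles (a, b) the trajectory avoids returning to a.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(0)

def gen_sequence(n, n_states=4):
    """Second-order sequence: next state avoids both of the last two saddles."""
    seq = [0, 1]
    for _ in range(n - 2):
        a, b = seq[-2], seq[-1]
        seq.append(int(rng.choice([s for s in range(n_states) if s not in (a, b)])))
    return seq

def fit_loglik(seq, order):
    """Maximized log-likelihood of an order-k Markov chain fit."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[tuple(seq[i - order:i])][seq[i]] += 1
    return sum(k * np.log(k / sum(c.values()))
               for c in counts.values() for k in c.values())

seq = gen_sequence(5_000)
n_states = 4
# AIC = 2 * (number of free parameters) - 2 * log-likelihood
aic1 = 2 * n_states * (n_states - 1) - 2 * fit_loglik(seq, 1)
aic2 = 2 * n_states ** 2 * (n_states - 1) - 2 * fit_loglik(seq, 2)
```

The second-order fit wins decisively here, giving a lower bound of two on the chain's order, in the spirit of the bound proposed in the paper.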
Affiliation(s)
- Gray Manicom
- Department of Mathematics, The University of Auckland, Auckland 1142, New Zealand
- Vivien Kirk
- Department of Mathematics, The University of Auckland, Auckland 1142, New Zealand
- Claire Postlethwaite
- Department of Mathematics, The University of Auckland, Auckland 1142, New Zealand
3.
Ji X, Elmoznino E, Deane G, Constant A, Dumas G, Lajoie G, Simon J, Bengio Y. Sources of richness and ineffability for phenomenally conscious states. Neurosci Conscious 2024; 2024:niae001. PMID: 38487679; PMCID: PMC10939345; DOI: 10.1093/nc/niae001.
Abstract
Conscious states (states such that there is something it is like to be in them) seem both rich, or full of detail, and ineffable, or hard to fully describe or recall. The problem of ineffability, in particular, is a longstanding issue in philosophy that partly motivates the explanatory gap: the belief that consciousness cannot be reduced to underlying physical processes. Here, we provide an information-theoretic, dynamical systems perspective on the richness and ineffability of consciousness. In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state, and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other. While our model may not settle all questions relating to the explanatory gap, it makes progress toward a fully physicalist explanation of the richness and ineffability of conscious experience: two important aspects that seem to be part of what makes qualitative character so puzzling.
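One claim in this abstract, that discrete symbols cannot carry the information in a rich continuous state, has a simple numeric rendering (the setup below is mine, not the paper's model). Because the "report" is a deterministic function of the state, the information it conveys equals its entropy, which is capped by the vocabulary size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a rich, continuous experiential state
x = rng.standard_normal(200_000)

def entropy_bits(symbols):
    """Empirical Shannon entropy (bits) of a discrete symbol stream."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def report_entropy(k):
    """Entropy of a k-symbol 'verbal report': quantile bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])
    return entropy_bits(np.searchsorted(edges, x))

h2, h16 = report_entropy(2), report_entropy(16)   # 2-word vs 16-word reports
```

A two-symbol report carries about 1 bit and a sixteen-symbol report about 4 bits, while a continuous state has unbounded differential information content: the residual is one operationalization of ineffability.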
Affiliation(s)
- Xu Ji
- Mila - Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Department of Computer Science and Operations Research, University of Montreal, Pavillon André-Aisenstadt 2920, chemin de la Tour, Montreal, Quebec H3T 1J4, Canada
- Eric Elmoznino
- Mila - Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Department of Computer Science and Operations Research, University of Montreal, Pavillon André-Aisenstadt 2920, chemin de la Tour, Montreal, Quebec H3T 1J4, Canada
- George Deane
- Department of Philosophy, University of Montreal, Pavillon 2910, boul. Édouard-Montpetit, Montreal, Quebec H3C 3J7, Canada
- Axel Constant
- School of Engineering and Informatics, University of Sussex, Sussex House, Falmer, East Sussex BN1 9RH, United Kingdom
- Guillaume Dumas
- Mila - Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Department of Psychiatry and Addiction, University of Montreal, Pavillon Roger-Gaudry 2900, boul. Édouard-Montpetit, Montreal, Quebec H3T 1J4, Canada
- Guillaume Lajoie
- Mila - Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Department of Mathematics and Statistics, University of Montreal, Pavillon André-Aisenstadt (AA-5190) 2920, chemin de la Tour, Montreal, Quebec H3T 1J4, Canada
- Jonathan Simon
- Department of Philosophy, University of Montreal, Pavillon 2910, boul. Édouard-Montpetit, Montreal, Quebec H3C 3J7, Canada
- Yoshua Bengio
- Mila - Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Department of Computer Science and Operations Research, University of Montreal, Pavillon André-Aisenstadt 2920, chemin de la Tour, Montreal, Quebec H3T 1J4, Canada
- CIFAR - Canadian Institute for Advanced Research, MaRS Centre, West Tower 661 University Ave., Suite 505, Toronto, Ontario M5G 1M1, Canada
4.
Munn BR, Müller EJ, Aru J, Whyte CJ, Gidon A, Larkum ME, Shine JM. A thalamocortical substrate for integrated information via critical synchronous bursting. Proc Natl Acad Sci U S A 2023; 120:e2308670120. PMID: 37939085; PMCID: PMC10655573; DOI: 10.1073/pnas.2308670120.
Abstract
Understanding the neurobiological mechanisms underlying consciousness remains a significant challenge. Recent evidence suggests that the coupling between distal-apical and basal-somatic dendrites in thick-tufted layer 5 pyramidal neurons (L5PN), regulated by the nonspecific-projecting thalamus, is crucial for consciousness. Yet, it is uncertain whether this thalamocortical mechanism can support emergent signatures of consciousness, such as integrated information. To address this question, we constructed a biophysical network of dual-compartment thick-tufted L5PN, with dendrosomatic coupling controlled by thalamic inputs. Our findings demonstrate that integrated information is maximized when nonspecific thalamic inputs drive the system into a regime of time-varying synchronous bursting. Here, the system exhibits variable spiking dynamics with broad pairwise correlations, supporting the enhanced integrated information. Further, the observed peak in integrated information aligns with criticality signatures and empirically observed layer 5 pyramidal bursting rates. These results suggest that the thalamocortical core of the mammalian brain may be evolutionarily configured to optimize effective information processing, providing a potential neuronal mechanism that integrates microscale theories with macroscale signatures of consciousness.
Affiliation(s)
- Brandon R. Munn
- Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
- Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Eli J. Müller
- Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
- Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu 51009, Estonia
- Christopher J. Whyte
- Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
- Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
- Albert Gidon
- Institute of Biology, Humboldt University of Berlin, Berlin 10099, Germany
- NeuroCure Center of Excellence, Charité Universitätsmedizin Berlin, Berlin 10099, Germany
- Matthew E. Larkum
- Institute of Biology, Humboldt University of Berlin, Berlin 10099, Germany
- NeuroCure Center of Excellence, Charité Universitätsmedizin Berlin, Berlin 10099, Germany
- James M. Shine
- Brain and Mind Centre, School of Medical Sciences, Faculty of Medicine and Health, University of Sydney, Sydney 2050, Australia
- Complex Systems, School of Physics, Faculty of Science, University of Sydney, Sydney 2050, Australia
5.
Meyer-Ortmanns H. Heteroclinic networks for brain dynamics. Front Netw Physiol 2023; 3:1276401. PMID: 38020242; PMCID: PMC10663269; DOI: 10.3389/fnetp.2023.1276401.
Abstract
Heteroclinic networks are a mathematical concept in dynamical systems theory that is well suited to describing metastable states and switching events in brain dynamics. The framework is sensitive to external input and, at the same time, reproducible and robust against perturbations. Solutions of the corresponding differential equations are spatiotemporal patterns that are supposed to encode information both in space and time coordinates. We focus on the concept of winnerless competition as realized in generalized Lotka-Volterra equations and report on results for binding and chunking dynamics, synchronization on spatial grids, and entrainment to heteroclinic motion. We summarize proposals of how to design heteroclinic networks as desired in view of reproducing experimental observations from neuronal networks and discuss the subtle role of noise. The review is on a phenomenological level with possible applications to brain dynamics, while we refer to the literature for a rigorous mathematical treatment. We conclude with promising perspectives for future research.
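Winnerless competition in generalized Lotka-Volterra equations can be demonstrated in a few lines (the parameter values below are my own choices, not from the review): three populations with asymmetric mutual inhibition form an attracting heteroclinic cycle, so each population transiently "wins" before ceding dominance to the next.

```python
import numpy as np

# rho[i, j]: inhibition of population i by population j (asymmetric, cyclic)
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def simulate_glv(x0, rho, dt=0.01, steps=60_000):
    """Euler-integrate dx_i/dt = x_i * (1 - (rho @ x)_i)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, len(x)))
    for t in range(steps):
        x = x + dt * x * (1.0 - rho @ x)
        x = np.clip(x, 1e-12, None)   # tiny floor plays the role of noise,
        traj[t] = x                   # keeping the cycle switching forever
    return traj

traj = simulate_glv([0.6, 0.2, 0.1], rho)
winners = traj.argmax(axis=1)          # dominant population at each step
switches = int(np.count_nonzero(np.diff(winners)))
```

Without the floor, dwell times near each saddle would grow without bound; with it (or with genuine noise, as the review discusses), the sequence of winners becomes a reproducible spatiotemporal pattern.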
Affiliation(s)
- Hildegard Meyer-Ortmanns
- School of Science, Constructor University, Bremen, Germany
- Complexity Science Hub Vienna, Vienna, Austria
6.
Jaeger H, Noheda B, van der Wiel WG. Toward a formal theory for computing machines made out of whatever physics offers. Nat Commun 2023; 14:4911. PMID: 37587135; PMCID: PMC10432384; DOI: 10.1038/s41467-023-40533-1.
Abstract
Approaching limitations of digital computing technologies have spurred research in neuromorphic and other unconventional approaches to computing. Here we argue that if we want to engineer unconventional computing systems in a systematic way, we need guidance from a formal theory that is different from the classical symbolic-algorithmic Turing machine theory. We propose a general strategy for developing such a theory, and within that general view, a specific approach that we call fluent computing. In contrast to Turing, who modeled computing processes from a top-down perspective as symbolic reasoning, we adopt the scientific paradigm of physics and model physical computing systems bottom-up by formalizing what can ultimately be measured in a physical computing system. This leads to an understanding of computing as the structuring of processes, while classical models of computing systems describe the processing of structures.
Affiliation(s)
- Herbert Jaeger
- Bernoulli Institute, University of Groningen, 9700 AB, Groningen, The Netherlands
- Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, The Netherlands
- Beatriz Noheda
- Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, The Netherlands
- Zernike Institute for Advanced Materials, University of Groningen, 9700 AB, Groningen, The Netherlands
- Wilfred G van der Wiel
- BRAINS Center for Brain-Inspired Nano Systems, University of Twente, 7500 AE, Enschede, The Netherlands
- MESA+ Institute for Nanotechnology, University of Twente, 7500 AE, Enschede, The Netherlands
- Institute of Physics, Westfälische Wilhelms-Universität Münster, Münster, Germany
7.
Valle-Lisboa JC, Pomi A, Mizraji E. Multiplicative processing in the modeling of cognitive activities in large neural networks. Biophys Rev 2023; 15:767-785. PMID: 37681105; PMCID: PMC10480136; DOI: 10.1007/s12551-023-01074-5.
Abstract
Explaining how cognitive abilities arise from the processing of information by neural systems has been a goal of biophysics since the pioneering work of McCulloch and Pitts within the biophysics school of Chicago in the 1940s and the interdisciplinary cybernetics meetings of the 1950s, inseparable from the birth of computing and artificial intelligence. Since then, neural network models have traveled a long path in both the biophysical and the computational disciplines. The biological, neurocomputational aspect reached its representational maturity with the distributed associative memory models developed in the early 1970s. In this framework, the inclusion of signal-signal multiplication within neural network models was presented as a necessity for providing matrix associative memories with adaptive, context-sensitive associations, while greatly enhancing their computational capabilities. In this review, we show that several of the most successful neural network models use a form of multiplication of signals. We present several classical models that included this kind of multiplication and the computational reasons for its inclusion. We then turn to the different proposals about the possible biophysical implementation that underlies these computational capacities. We pinpoint the important ideas put forth by different theoretical models using a tensor product representation and show that these models endow memories with the context-dependent adaptive capabilities necessary to allow for evolutionary adaptation to changing and unpredictable environments. Finally, we show how the powerful abilities of contemporary deep-learning models, inspired by neural networks, also depend on multiplications, and we discuss some perspectives in view of the wide panorama unfolded. The computational relevance of multiplication calls for the development of new avenues of research that uncover the mechanisms our nervous system uses to achieve it.
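The tensor-product idea behind context-sensitive matrix memories can be sketched concretely (dimensions and vectors below are invented for illustration): storing associations under keys of the form kron(stimulus, context), a signal-signal multiplication, lets the same stimulus retrieve different outputs in different contexts.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

def unit(v):
    return v / np.linalg.norm(v)

s = unit(rng.standard_normal(d))                                     # stimulus
c1, c2 = unit(rng.standard_normal(d)), unit(rng.standard_normal(d))  # contexts
o1, o2 = rng.standard_normal(d), rng.standard_normal(d)              # outputs

# matrix memory: sum of outer products, output x (stimulus ⊗ context)
M = np.outer(o1, np.kron(s, c1)) + np.outer(o2, np.kron(s, c2))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

r1 = M @ np.kron(s, c1)   # recall stimulus s under context 1
r2 = M @ np.kron(s, c2)   # same stimulus, context 2
```

Because random high-dimensional contexts are nearly orthogonal, crosstalk between the two stored associations is small, which is the adaptive, context-dependent behavior the review attributes to multiplicative processing.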
Affiliation(s)
- Juan C. Valle-Lisboa
- Group of Cognitive Systems Modeling, Biophysics and Systems Biology Section, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay
- Centro Interdisciplinario en Cognición para la Enseñanza y el Aprendizaje (CICEA), Universidad de la República, Espacio Interdisciplinario, 11200 Montevideo, Uruguay
- Andrés Pomi
- Group of Cognitive Systems Modeling, Biophysics and Systems Biology Section, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay
- Eduardo Mizraji
- Group of Cognitive Systems Modeling, Biophysics and Systems Biology Section, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay
8.
Wright JJ, Bourke PD. The mesoanatomy of the cortex, minimization of free energy, and generative cognition. Front Comput Neurosci 2023; 17:1169772. PMID: 37251599; PMCID: PMC10213520; DOI: 10.3389/fncom.2023.1169772.
Abstract
Capacity for generativity and unlimited association is the defining characteristic of sentience, and this capacity somehow arises from neuronal self-organization in the cortex. We have previously argued that, consistent with the free energy principle, cortical development is driven by synaptic and cellular selection maximizing synchrony, with effects manifesting in a wide range of features of mesoscopic cortical anatomy. Here, we further argue that in the postnatal stage, as more structured inputs reach the cortex, the same principles of self-organization continue to operate at multitudes of local cortical sites. The unitary ultra-small world structures that emerged antenatally can represent sequences of spatiotemporal images. Local shifts of presynapses from excitatory to inhibitory cells result in the local coupling of spatial eigenmodes and the development of Markov blankets, minimizing prediction errors in each unit's interactions with surrounding neurons. In response to the superposition of inputs exchanged between cortical areas, more complicated, potentially cognitive structures are competitively selected by the merging of units and the elimination of redundant connections that result from the minimization of variational free energy and the elimination of redundant degrees of freedom. The trajectory along which free energy is minimized is shaped by interaction with sensorimotor, limbic, and brainstem mechanisms, providing a basis for creative and unlimited associative learning.
Affiliation(s)
- James Joseph Wright
- Centre for Brain Research, and Department of Psychological Medicine, School of Medicine, University of Auckland, Auckland, New Zealand
- Paul David Bourke
- School of Social Sciences, Faculty of Arts, Business, Law and Education, University of Western Australia, Perth, WA, Australia
9.
Deep Intelligence: What AI Should Learn from Nature’s Imagination. Cognit Comput 2023. DOI: 10.1007/s12559-023-10124-9.
10.
Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. Neuron 2023; 111:739-753.e8. PMID: 36640766; PMCID: PMC9992137; DOI: 10.1016/j.neuron.2022.12.016.
Abstract
Biological brains possess an unparalleled ability to adapt behavioral responses to changing stimuli and environments. How neural processes enable this capacity is a fundamental open question. Previous works have identified two candidate mechanisms: a low-dimensional organization of neural activity and a modulation by contextual inputs. We hypothesized that combining the two might facilitate generalization and adaptation in complex tasks. We tested this hypothesis in flexible timing tasks where dynamics play a key role. Examining trained recurrent neural networks, we found that confining the dynamics to a low-dimensional subspace allowed tonic inputs to parametrically control the overall input-output transform, enabling generalization to novel inputs and adaptation to changing conditions. Reverse-engineering and theoretical analyses demonstrated that this parametric control relies on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds while preserving their geometry. Comparisons with data from behaving monkeys confirmed the behavioral and neural signatures of this mechanism.
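A deliberately stripped-down cartoon of parametric temporal control (this is not the paper's trained recurrent networks, just the core intuition): a tonic input u sets the speed of a neural ramp toward a fixed threshold, so the produced interval scales as 1/u and a single scalar input rescales the dynamics in time.

```python
def time_to_threshold(u, dt=1e-3, theta=1.0):
    """Interval produced by a ramp r' = u integrated to threshold theta."""
    r, t = 0.0, 0.0
    while r < theta:
        r += dt * u          # ramp speed is set by the tonic input
        t += dt
    return t

t_fast = time_to_threshold(2.0)   # strong tonic input, short interval
t_slow = time_to_threshold(0.5)   # weak tonic input, long interval
```

In the paper the analogous mechanism is higher-dimensional: tonic inputs translate the trajectory along a nonlinear manifold, modulating speed while preserving the trajectory's geometry.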
Affiliation(s)
- Manuel Beiran
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Nicolas Meirhaeghe
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Institut de Neurosciences de la Timone (INT), UMR 7289, CNRS, Aix-Marseille Université, Marseille 13005, France
- Hansem Sohn
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL University, 75005 Paris, France
11.
Brecelj T, Petrič T. Stable Heteroclinic Channel Networks for Physical Human-Humanoid Robot Collaboration. Sensors (Basel) 2023; 23:1396. PMID: 36772433; PMCID: PMC9921709; DOI: 10.3390/s23031396.
Abstract
Human-robot collaboration is one of the most challenging fields in robotics, as robots must understand human intentions and cooperate with them suitably in the given circumstances. Although this is one of the most investigated research areas in robotics, it is still in its infancy. In this paper, human-robot collaboration is addressed by applying a phase-state system, guided by stable heteroclinic channel networks, to a humanoid robot. The underlying mathematical model is first defined and illustrated on a simple three-state system. An eight-state system is then applied to a humanoid robot, guiding it to perform different movements according to the forces exerted on its grippers. The movements presented in this paper are squatting, standing up, and walking forward and backward, with the motion velocity depending on the magnitude of the applied forces. The method presented here proves to be a suitable way of controlling robots through physical human-robot interaction. As both the phase-state system and the robot movements can be extended to make the robot execute many other tasks, the proposed method provides a promising basis for further investigation and realization of physical human-robot interaction.
12.
Zajzon B, Dahmen D, Morrison A, Duarte R. Signal denoising through topographic modularity of neural circuits. eLife 2023; 12:77009. PMID: 36700545; PMCID: PMC9981157; DOI: 10.7554/elife.77009.
Abstract
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
13.
Controllable branching of robust response patterns in nonlinear mechanical resonators. Nat Commun 2023; 14:161. PMID: 36631442; PMCID: PMC9834403; DOI: 10.1038/s41467-022-35685-5.
Abstract
In lieu of continuous time active feedback control in complex systems, nonlinear dynamics offers a means to generate desired long-term responses using short-time control signals. This type of control has been proposed for use in resonators that exhibit a plethora of complex dynamic behaviors resulting from energy exchange between modes. However, the dynamic response and, ultimately, the ability to control the response of these systems remains poorly understood. Here, we show that a micromechanical resonator can generate diverse, robust dynamical responses that occur on a timescale five orders of magnitude larger than the external harmonic driving and these responses can be selected by inserting small pulses at specific branching points. We develop a theoretical model and experimentally show the ability to control these response patterns. Hence, these mechanical resonators may represent a simple physical platform for the development of springboard concepts for nonlinear, flexible, yet robust dynamics found in other areas of physics, chemistry, and biology.
14.
Pietras B, Schmutz V, Schwalger T. Mesoscopic description of hippocampal replay and metastability in spiking neural networks with short-term plasticity. PLoS Comput Biol 2022; 18:e1010809. PMID: 36548392; PMCID: PMC9822116; DOI: 10.1371/journal.pcbi.1010809.
Abstract
Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of functional activity patterns is the propagating bursts of place-cell activity called hippocampal replay, which is critical for memory consolidation. The sudden and repeated occurrence of these burst states during ongoing neural activity suggests metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model that accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to stochastic spiking and network size, and fatigue is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie to obtain a "chemical Langevin equation", which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down-state dynamics) by means of phase-plane analysis. An extension of the Langevin equation for small network sizes is also presented. The stochastic neural mass model constitutes the basic component of our mesoscopic model for replay. We show that the mesoscopic model faithfully captures the statistical structure of individual replayed trajectories in microscopic simulations and in previously reported experimental data. Moreover, compared to the deterministic Romani-Tsodyks model of place-cell dynamics, it exhibits greater variability in the order, direction, and timing of replayed trajectories, which seems biologically more plausible and could be functionally desirable. This variability is the product of a new dynamical regime in which metastability emerges from a complex interplay between finite-size fluctuations and local fatigue.
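The "chemical Langevin equation" above is derived analytically in the paper; as a generic illustration of why such equations are computationally inexpensive to simulate, the sketch below integrates a one-population stochastic rate equation with the Euler-Maruyama scheme, with finite-size noise scaling as 1/√N. The transfer function, time constant, and noise form are illustrative assumptions, not the paper's derived model.

```python
import math
import random

def euler_maruyama(f, g, x0, dt, n_steps, seed=0):
    """Integrate dx = f(x) dt + g(x) dW with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = x + f(x) * dt + g(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Illustrative one-population stochastic rate model (NOT the paper's exact
# derivation): the drift relaxes the rate toward a sigmoidal transfer
# function, and the noise amplitude shrinks as 1/sqrt(N) with network size N.
N = 500                                   # hypothetical network size
tau = 0.02                                # population time constant (s)
phi = lambda x: 1.0 / (1.0 + math.exp(-4.0 * (x - 0.5)))  # transfer function
drift = lambda r: (phi(r) - r) / tau
diffusion = lambda r: math.sqrt(max(phi(r), 0.0) / N) / tau

rates = euler_maruyama(drift, diffusion, x0=0.1, dt=1e-4, n_steps=5000)
```

Because the drift pulls the rate toward a fixed point of the transfer function while the 1/√N noise perturbs it, larger networks produce visibly smoother trajectories, which is the qualitative point of such mesoscopic descriptions.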
Affiliation(s)
- Bastian Pietras
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Valentin Schmutz
- Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tilo Schwalger
- Institute for Mathematics, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience, Berlin, Germany
15
Morrison M, Young LS. Chaotic heteroclinic networks as models of switching behavior in biological systems. CHAOS (WOODBURY, N.Y.) 2022; 32:123102. [PMID: 36587320 DOI: 10.1063/5.0122184] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 11/04/2022] [Indexed: 06/17/2023]
Abstract
Key features of biological activity can often be captured by transitions between a finite number of semi-stable states that correspond to behaviors or decisions. We present here a broad class of dynamical systems that are ideal for modeling such activity. The models we propose are chaotic heteroclinic networks with nontrivial intersections of stable and unstable manifolds. Due to the sensitive dependence on initial conditions, transitions between states are seemingly random. Dwell times, exit distributions, and other transition statistics can be built into the model through geometric design and can be controlled by tunable parameters. To test our model's ability to simulate realistic biological phenomena, we turned to one of the most studied organisms, C. elegans, well known for its limited behavioral states. We reconstructed experimental data from two laboratories, demonstrating the model's ability to quantitatively reproduce dwell times and transition statistics under a variety of conditions. Stochastic switching between dominant states in complex dynamical systems has been extensively studied and is often modeled as Markov chains. As an alternative, we propose here a new paradigm, namely, chaotic heteroclinic networks generated by deterministic rules (without the necessity for noise). Chaotic heteroclinic networks can be used to model systems with arbitrary architecture and size without a commensurate increase in phase dimension. They are highly flexible and able to capture a wide range of transition characteristics that can be adjusted through control parameters.
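Stochastic switching between dominant states "is often modeled as Markov chains", the baseline that the authors' deterministic chaotic heteroclinic networks are proposed to replace. For orientation, here is a minimal stdlib-Python sketch of that Markov baseline: sampling a behavioural state sequence and extracting its dwell times (the state names and transition matrix are hypothetical, not taken from the C. elegans data).

```python
import random

def simulate_markov(P, states, n_steps, seed=0):
    """Sample a state sequence from a discrete-time Markov chain.

    P[i][j] is the probability of moving from states[i] to states[j].
    """
    rng = random.Random(seed)
    seq = [0]
    for _ in range(n_steps - 1):
        r, cum, nxt = rng.random(), 0.0, len(states) - 1
        for j, p in enumerate(P[seq[-1]]):
            cum += p
            if r < cum:
                nxt = j
                break
        seq.append(nxt)
    return [states[i] for i in seq]

def dwell_times(seq):
    """Lengths of consecutive runs of the same state."""
    runs, count = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

# Hypothetical 3-state behavioural chain; each state's self-transition
# probability sets its (geometric) dwell-time distribution.
states = ["forward", "reverse", "turn"]
P = [[0.95, 0.03, 0.02],
     [0.05, 0.90, 0.05],
     [0.10, 0.10, 0.80]]
seq = simulate_markov(P, states, n_steps=10_000)
runs = dwell_times(seq)
mean_dwell = sum(runs) / len(runs)
```

In a Markov chain the dwell times are memoryless by construction; the heteroclinic-network alternative generates such statistics deterministically, with dwell times shaped by the geometry of the network rather than by noise.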
Affiliation(s)
- Megan Morrison
- Courant Institute, New York University, New York, New York 10012, USA
- Lai-Sang Young
- Courant Institute, New York University, New York, New York 10012, USA
16
Shine JM. Adaptively navigating affordance landscapes: How interactions between the superior colliculus and thalamus coordinate complex, adaptive behaviour. Neurosci Biobehav Rev 2022; 143:104921. [DOI: 10.1016/j.neubiorev.2022.104921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 09/08/2022] [Accepted: 09/08/2022] [Indexed: 11/06/2022]
17
Coppola P, Allanson J, Naci L, Adapa R, Finoia P, Williams GB, Pickard JD, Owen AM, Menon DK, Stamatakis EA. The complexity of the stream of consciousness. Commun Biol 2022; 5:1173. [PMID: 36329176 PMCID: PMC9633704 DOI: 10.1038/s42003-022-04109-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2021] [Accepted: 10/10/2022] [Indexed: 11/06/2022] Open
Abstract
Typical consciousness can be defined as an individual-specific stream of experiences. Modern consciousness research on dynamic functional connectivity uses clustering techniques to create common bases on which to compare different individuals. We propose an alternative approach by combining modern theories of consciousness with insights arising from phenomenology and dynamical systems theory. This approach enables a representation of an individual's connectivity dynamics in an intrinsically defined, individual-specific landscape. Given the wealth of evidence relating functional connectivity to experiential states, we assume this landscape is a proxy measure of an individual's stream of consciousness. By investigating the properties of this landscape in individuals in different states of consciousness, we show that consciousness is associated with short-term transitions that are less predictable, quicker, but, on average, more constant. We also show that temporally specific connectivity states are less easily described by network patterns that are distant in time, suggesting a richer space of possible states. We show that the cortex, cerebellum, and subcortex all display consciousness-relevant dynamics, and we discuss the implications of our results in forming a point of contact between dynamical systems interpretations and phenomenology.
Affiliation(s)
- Peter Coppola
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Judith Allanson
- Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Department of Neurosciences, Cambridge University Hospitals NHS Foundation, Addenbrooke's Hospital, Cambridge, UK
- Lorina Naci
- Trinity College Institute of Neuroscience, School of Psychology, Lloyd Building, Trinity College Dublin, Dublin, Ireland
- Ram Adapa
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Paola Finoia
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Division of Neurosurgery, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Guy B Williams
- Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- John D Pickard
- Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Division of Neurosurgery, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- Adrian M Owen
- The Brain and Mind Institute, Western Interdisciplinary Research Building, N6A 5B7 University of Western Ontario, London, ON, Canada
- David K Menon
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK
- Wolfson Brain Imaging Centre, University of Cambridge, Cambridge, UK
- Emmanuel A Stamatakis
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK.
- Department of Clinical Neurosciences, School of Clinical Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge, UK.
18
Wright JJ, Bourke PD. Unification of free energy minimization, spatiotemporal energy, and dimension reduction models of V1 organization: Postnatal learning on an antenatal scaffold. Front Comput Neurosci 2022; 16:869268. [PMID: 36313813 PMCID: PMC9614369 DOI: 10.3389/fncom.2022.869268] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 09/27/2022] [Indexed: 11/23/2022] Open
Abstract
Developmental selection of neurons and synapses so as to maximize pulse synchrony has recently been used to explain antenatal cortical development. Consequences of the same selection process—an application of the Free Energy Principle—are here followed into the postnatal phase in V1, and the implications for cognitive function are considered. Structured inputs transformed via lag relay in superficial patch connections lead to the generation of circumferential synaptic connectivity superimposed upon the antenatal, radial, “like-to-like” connectivity surrounding each singularity. The spatiotemporal energy and dimension reduction models of cortical feature preferences are accounted for and unified within the expanded model, and relationships of orientation preference (OP), space frequency preference (SFP), and temporal frequency preference (TFP) are resolved. The emergent anatomy provides a basis for “active inference” that includes interpolative modification of synapses so as to anticipate future inputs, as well as learn directly from present stimuli. Neurodynamic properties are those of heteroclinic networks with coupled spatial eigenmodes.
Affiliation(s)
- James Joseph Wright
- Centre for Brain Research, University of Auckland, Auckland, New Zealand
- Department of Psychological Medicine, School of Medicine, University of Auckland, Auckland, New Zealand
- Paul David Bourke
- Faculty of Arts, Business, Law and Education, School of Social Sciences, University of Western Australia, Perth, WA, Australia
19
John YJ, Sawyer KS, Srinivasan K, Müller EJ, Munn BR, Shine JM. It's about time: Linking dynamical systems with human neuroimaging to understand the brain. Netw Neurosci 2022; 6:960-979. [PMID: 36875012 PMCID: PMC9976648 DOI: 10.1162/netn_a_00230] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 01/04/2022] [Indexed: 11/04/2022] Open
Abstract
Most human neuroscience research to date has focused on statistical approaches that describe stationary patterns of localized neural activity or blood flow. While these patterns are often interpreted in light of dynamic, information-processing concepts, the static, local, and inferential nature of the statistical approach makes it challenging to directly link neuroimaging results to plausible underlying neural mechanisms. Here, we argue that dynamical systems theory provides the crucial mechanistic framework for characterizing both the brain's time-varying quality and its partial stability in the face of perturbations, and hence, that this perspective can have a profound impact on the interpretation of human neuroimaging results and their relationship with behavior. After briefly reviewing some key terminology, we identify three key ways in which neuroimaging analyses can embrace a dynamical systems perspective: by shifting from a local to a more global perspective, by focusing on dynamics instead of static snapshots of neural activity, and by embracing modeling approaches that map neural dynamics using "forward" models. Through this approach, we envisage ample opportunities for neuroimaging researchers to enrich their understanding of the dynamic neural mechanisms that support a wide array of brain functions, both in health and in the setting of psychopathology.
Affiliation(s)
- Yohan J. John
- Neural Systems Laboratory, Department of Health Sciences, Boston University, Boston, MA, USA
- Kayle S. Sawyer
- Departments of Anatomy and Neurobiology, Boston University, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Boston VA Healthcare System, Boston, MA, USA
- Sawyer Scientific, LLC, Boston, MA, USA
- Karthik Srinivasan
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Eli J. Müller
- Brain and Mind Center, University of Sydney, Sydney, NSW, Australia
- Brandon R. Munn
- Brain and Mind Center, University of Sydney, Sydney, NSW, Australia
- James M. Shine
- Brain and Mind Center, University of Sydney, Sydney, NSW, Australia
20
Lara-González E, Padilla-Orozco M, Fuentes-Serrano A, Bargas J, Duhne M. Translational neuronal ensembles: Neuronal microcircuits in psychology, physiology, pharmacology and pathology. Front Syst Neurosci 2022; 16:979680. [PMID: 36090187 PMCID: PMC9449457 DOI: 10.3389/fnsys.2022.979680] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 07/27/2022] [Indexed: 11/23/2022] Open
Abstract
Multi-recording techniques show evidence that neurons coordinate their firing to form ensembles, and that brain networks are made of connections between ensembles. While “canonical” microcircuits are composed of interconnected principal neurons and interneurons, it is not clear how they participate in recorded neuronal ensembles: “groups of neurons that show spatiotemporal co-activation”. Understanding synapses and their plasticity has become complex, making it hard to consider all the details needed to bridge the cellular-synaptic and circuit levels. Two assumptions therefore became necessary: first, whatever the nature of the synapses, they may be simplified as “functional connections”; second, whatever the mechanisms of synaptic potentiation or depression, the resulting synaptic weights are relatively stable. Both assumptions have the experimental basis cited in this review, and tools to analyze neuronal populations are being developed on that basis. Microcircuit processing followed with multi-recording techniques shows temporal sequences of neuronal ensembles resembling computational routines. These sequences can be aligned with the steps of behavioral tasks, and behavior can be modified by manipulating them, supporting the hypothesis that they are memory traces. In vitro recordings show that these temporal sequences can be contained in isolated tissue of histological scale. Sequences found in control conditions differ from those recorded in pathological tissue obtained from animal disease models, and from those recorded after the action of clinically useful drugs, setting the basis for new bioassays to test drugs with potential clinical use. These findings make the neuronal-ensembles theoretical framework a dynamic neuroscience paradigm.
Affiliation(s)
- Esther Lara-González
- División Neurociencias, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Montserrat Padilla-Orozco
- División Neurociencias, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Alejandra Fuentes-Serrano
- División Neurociencias, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- José Bargas
- División Neurociencias, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Mariana Duhne
- División Neurociencias, Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Department of Neurology, University of California, San Francisco, San Francisco, CA, United States
21
Metastable spiking networks in the replica-mean-field limit. PLoS Comput Biol 2022; 18:e1010215. [PMID: 35714155 PMCID: PMC9246178 DOI: 10.1371/journal.pcbi.1010215] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 06/30/2022] [Accepted: 05/16/2022] [Indexed: 11/19/2022] Open
Abstract
Characterizing metastable neural dynamics in finite-size spiking networks remains a daunting challenge. We propose to address this challenge in the recently introduced replica-mean-field (RMF) limit. In this limit, networks are made of infinitely many replicas of the finite network of interest, but with randomized interactions across replicas. Such randomization renders certain excitatory networks fully tractable at the cost of neglecting activity correlations, but with explicit dependence on the finite size of the neural constituents. However, metastable dynamics typically unfold in networks with mixed inhibition and excitation. Here, we extend the RMF computational framework to point-process-based neural network models with exponential stochastic intensities, allowing for mixed excitation and inhibition. Within this setting, we show that metastable finite-size networks admit multistable RMF limits, which are fully characterized by stationary firing rates. Technically, these stationary rates are determined as the solutions of a set of delayed differential equations under certain regularity conditions that any physical solutions shall satisfy. We solve this original problem by combining the resolvent formalism and singular-perturbation theory. Importantly, we find that these rates specify probabilistic pseudo-equilibria which accurately capture the neural variability observed in the original finite-size network. We also discuss the emergence of metastability as a stochastic bifurcation, which can be interpreted as a static phase transition in the RMF limits. In turn, we expect to leverage the static picture of RMF limits to infer purely dynamical features of metastable finite-size networks, such as the transition rates between pseudo-equilibria.
22
Hancock F, Rosas FE, Mediano PAM, Luppi AI, Cabral J, Dipasquale O, Turkheimer FE. May the 4C's be with you: an overview of complexity-inspired frameworks for analysing resting-state neuroimaging data. J R Soc Interface 2022; 19:20220214. [PMID: 35765805 PMCID: PMC9240685 DOI: 10.1098/rsif.2022.0214] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 06/09/2022] [Indexed: 11/12/2022] Open
Abstract
Competing and complementary models of resting-state brain dynamics contribute to our phenomenological and mechanistic understanding of whole-brain coordination and communication, and provide potential evidence for differential brain functioning associated with normal and pathological behaviour. These neuroscientific theories stem from the perspectives of physics, engineering, mathematics and psychology, and create a complicated landscape of domain-specific terminology and meaning, which, when used outside of that domain, may lead to incorrect assumptions and conclusions within the neuroscience community. Here, we review and clarify the key concepts of connectivity, computation, criticality and coherence (the 4C's), and outline a potential role for metastability as a common denominator across these propositions. We analyse and synthesize whole-brain neuroimaging research, examined through functional magnetic resonance imaging, to demonstrate that complexity science offers a principled and integrated approach to describe, and potentially understand, macroscale spontaneous brain functioning.
Affiliation(s)
- Fran Hancock
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Fernando E. Rosas
- Centre for Psychedelic Research, Department of Brain Science, Imperial College London, London SW7 2DD, UK
- Data Science Institute, Imperial College London, London SW7 2AZ, UK
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
- Pedro A. M. Mediano
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
- Department of Psychology, Queen Mary University of London, London E1 4NS, UK
- Andrea I. Luppi
- Division of Anaesthesia, School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Alan Turing Institute, London, UK
- Joana Cabral
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal
- Department of Psychiatry, University of Oxford, Oxford, UK
- Ottavia Dipasquale
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Federico E. Turkheimer
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
23
Levin M. Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Front Syst Neurosci 2022; 16:768201. [PMID: 35401131 PMCID: PMC8988303 DOI: 10.3389/fnsys.2022.768201] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 01/24/2022] [Indexed: 12/11/2022] Open
Abstract
Synthetic biology and bioengineering provide the opportunity to create novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software. These advances are disrupting familiar concepts in the philosophy of mind and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species. In this Perspective, I introduce TAME (Technological Approach to Mind Everywhere), a framework for understanding and manipulating cognition in unconventional substrates. TAME formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency. TAME provides a natural way to think about animal sentience as an instance of collective intelligence of cell groups, arising from dynamics that manifest in similar ways in numerous other substrates. When applied to regenerating/developmental systems, TAME suggests a perspective on morphogenesis as an example of basal cognition. The deep symmetry between problem-solving in anatomical, physiological, transcriptional, and 3D (traditional behavioral) spaces drives specific hypotheses by which cognitive capacities can increase during evolution. An important medium exploited by evolution for joining active subunits into greater agents is developmental bioelectricity, implemented by pre-neural use of ion channels and gap junctions to scale up cell-level feedback loops into anatomical homeostasis. This architecture of multi-scale competency of biological systems has important implications for the plasticity of bodies and minds, greatly potentiating evolvability. Considering classical and recent data from the perspectives of computational science, evolutionary biology, and basal cognition reveals a rich research program with many implications for cognitive science, evolutionary biology, regenerative medicine, and artificial intelligence.
Affiliation(s)
- Michael Levin
- Allen Discovery Center at Tufts University, Medford, MA, United States
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Cambridge, MA, United States
24
Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. APPLIED PHYSICS REVIEWS 2022; 9:011313. [PMID: 35284030 PMCID: PMC8900181 DOI: 10.1063/5.0062603] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 01/31/2022] [Indexed: 05/14/2023]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
Affiliation(s)
- H. Yan
- State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang
- G. La Camera
25
Thome J, Steinbach R, Grosskreutz J, Durstewitz D, Koppe G. Classification of amyotrophic lateral sclerosis by brain volume, connectivity, and network dynamics. Hum Brain Mapp 2022; 43:681-699. [PMID: 34655259 PMCID: PMC8720197 DOI: 10.1002/hbm.25679] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 09/27/2021] [Indexed: 12/19/2022] Open
Abstract
Emerging studies corroborate the importance of neuroimaging biomarkers and machine learning to improve diagnostic classification of amyotrophic lateral sclerosis (ALS). While most studies focus on structural data, recent studies assessing functional connectivity between brain regions by linear methods highlight the role of brain function. These studies have yet to be combined with brain structure and nonlinear functional features. We investigate the role of linear and nonlinear functional brain features, and the benefit of combining brain structure and function for ALS classification. ALS patients (N = 97) and healthy controls (N = 59) underwent structural and functional resting state magnetic resonance imaging. Based on key hubs of resting state networks, we defined three feature sets comprising brain volume, resting state functional connectivity (rsFC), as well as (nonlinear) resting state dynamics assessed via recurrent neural networks. Unimodal and multimodal random forest classifiers were built to classify ALS. Out-of-sample prediction errors were assessed via five-fold cross-validation. Unimodal classifiers achieved a classification accuracy of 56.35-61.66%. Multimodal classifiers outperformed unimodal classifiers, achieving accuracies of 62.85-66.82%. Evaluating the ranking of individual features' importance scores across all classifiers revealed that rsFC features were most dominant in classification. While univariate analyses revealed reduced rsFC in ALS patients, functional features more generally indicated deficits in information integration across resting state brain networks in ALS. The present work underscores that combining brain structure and function provides an additional benefit to diagnostic classification, as indicated by multimodal classifiers, while emphasizing the importance of capturing both linear and nonlinear functional brain properties to identify discriminative biomarkers of ALS.
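The classifiers above are random forests whose out-of-sample error is assessed via five-fold cross-validation. The forest itself is omitted here, but the cross-validation logic can be sketched in plain Python with a stand-in nearest-centroid classifier on synthetic two-class data (all feature values below are fabricated for illustration and are not the study's data).

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle indices 0..n-1 and split them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_centroid_cv_accuracy(X, y, k=5):
    """Out-of-sample accuracy of a nearest-centroid classifier via k-fold CV."""
    folds = kfold_indices(len(X), k)
    correct = 0
    for f in range(k):
        test = set(folds[f])
        train = [i for i in range(len(X)) if i not in test]
        # Class centroids are estimated from the training folds only,
        # so each prediction is genuinely out-of-sample.
        cents = {}
        for c in set(y[i] for i in train):
            members = [X[i] for i in train if y[i] == c]
            cents[c] = [sum(col) / len(members) for col in zip(*members)]
        for i in test:
            pred = min(cents, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(X[i], cents[c])))
            correct += pred == y[i]
    return correct / len(X)

# Synthetic two-class, two-feature data standing in for imaging features.
rng = random.Random(1)
X = [[rng.gauss(c, 1.0), rng.gauss(-c, 1.0)] for c in (0, 2) for _ in range(50)]
y = [0] * 50 + [1] * 50
acc = nearest_centroid_cv_accuracy(X, y)
```

Averaging accuracy over folds in this way is what yields an estimate of generalization performance rather than training fit, which is the point of the study's five-fold protocol.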
Affiliation(s)
- Janine Thome
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Robert Steinbach
- Hans Berger Department of Neurology, Jena University Hospital, Jena, Germany
- Julian Grosskreutz
- Precision Neurology, Department of Neurology, University of Luebeck, Luebeck, Germany
- Daniel Durstewitz
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Georgia Koppe
- Department of Theoretical Neuroscience, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
- Clinic for Psychiatry and Psychotherapy, Central Institute of Mental Health Mannheim, Medical Faculty Mannheim, Heidelberg University, Germany
26
Schirner M, Kong X, Yeo BTT, Deco G, Ritter P. Dynamic primitives of brain network interaction Special Issue "Advances in Mapping the Connectome". Neuroimage 2022; 250:118928. [PMID: 35101596 DOI: 10.1016/j.neuroimage.2022.118928] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 12/03/2021] [Accepted: 01/20/2022] [Indexed: 01/04/2023] Open
Abstract
What dynamic processes underlie functional brain networks? Functional connectivity (FC) and functional connectivity dynamics (FCD) are used to represent the patterns and dynamics of functional brain networks. FC(D) is related to the synchrony of brain activity: when brain areas oscillate in a coordinated manner, this yields a high correlation between their signal time series. To explain the processes underlying FC(D), we review how synchronized oscillations emerge from coupled neural populations in brain network models (BNMs). From detailed spiking networks to more abstract population models, there is strong support for the idea that the brain operates near critical instabilities that give rise to multistable or metastable dynamics, which in turn lead to the intermittently synchronized slow oscillations underlying FC(D). We explore further consequences of these fundamental mechanisms and how they fit with reality. We conclude by highlighting the need for integrative brain models that connect separate mechanisms across levels of description and spatiotemporal scales and link them with cognitive function.
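The claim that coupled neural populations produce intermittently synchronized oscillations is often illustrated with the Kuramoto model, the canonical minimal model of coupled-oscillator synchrony (used here as a generic illustration, not as any specific BNM from this review). A stdlib-Python sketch showing the order parameter r rising with coupling strength K:

```python
import cmath
import math
import random

def kuramoto_order(K, n=100, dt=0.01, n_steps=2000, seed=0):
    """Simulate n Kuramoto oscillators with coupling K and return the final
    order parameter r in [0, 1]; r -> 1 means full phase synchrony."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]          # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(n_steps):
        # Mean field: r * e^{i psi} = (1/n) * sum_j e^{i theta_j}
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        # Euler step of d theta_i/dt = omega_i + K * r * sin(psi - theta_i)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return abs(z)

# Below the critical coupling the population stays incoherent; above it,
# a synchronized cluster forms (K_c is about 1.6 for this frequency spread).
r_weak, r_strong = kuramoto_order(K=0.5), kuramoto_order(K=5.0)
```

The transition from low to high r as K crosses its critical value is a toy version of the "critical instability" picture: near the transition, synchrony is intermittent, which is the regime the review links to FCD.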
Affiliation(s)
- Michael Schirner
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Neurology with Experimental Neurology, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany; Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117 Berlin, Germany; Einstein Center Digital Future, Wilhelmstraße 67, 10117 Berlin, Germany.
- Xiaolu Kong
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore; Centre for Sleep & Cognition & Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, Singapore; N.1 Institute for Health & Institute for Digital Medicine, National University of Singapore, Singapore
- B T Thomas Yeo
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore; Centre for Sleep & Cognition & Centre for Translational Magnetic Resonance Research, Yong Loo Lin School of Medicine, Singapore; N.1 Institute for Health & Institute for Digital Medicine, National University of Singapore, Singapore; Integrative Sciences and Engineering Programme (ISEP), National University of Singapore, Singapore, Singapore; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, USA
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de la Recerca i Estudis Avançats, Barcelona, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; School of Psychological Sciences, Turner Institute for Brain and Mental Health, Monash University, Melbourne, Clayton, Australia
- Petra Ritter
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt Universität zu Berlin, Department of Neurology with Experimental Neurology, Charitéplatz 1, 10117 Berlin, Germany; Bernstein Focus State Dependencies of Learning & Bernstein Center for Computational Neuroscience, Berlin, Germany; Einstein Center for Neuroscience Berlin, Charitéplatz 1, 10117 Berlin, Germany; Einstein Center Digital Future, Wilhelmstraße 67, 10117 Berlin, Germany.
27
Hipólito I. Cognition Without Neural Representation: Dynamics of a Complex System. Front Psychol 2022; 12:643276. [PMID: 35095629 PMCID: PMC8789682 DOI: 10.3389/fpsyg.2021.643276] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 10/31/2021] [Indexed: 12/26/2022] Open
Abstract
This paper proposes an account of neurocognitive activity without leveraging the notion of neural representation. Neural representation is a concept that results from assuming that the properties of the models used in computational cognitive neuroscience (e.g., information, representation, etc.) must literally exist in the system being modelled (e.g., the brain). Computational models are important tools for testing a theory about how the collected data (e.g., behavioural or neuroimaging) have been generated. While the usefulness of computational models is unquestionable, it does not follow that neurocognitive activity should literally entail the properties construed in the model (e.g., information, representation). While this assumption is present in computationalist accounts, it is not held across the board in neuroscience. In the last section, the paper offers a dynamical account of neurocognitive activity with Dynamic Causal Modelling (DCM) that combines dynamical systems theory (DST) mathematical formalisms with the theoretical contextualisation provided by Embodied and Enactive Cognitive Science (EECS).
Affiliation(s)
- Inês Hipólito
- Berlin School of Mind and Brain, Institut für Philosophie, Humboldt-Universität zu Berlin, Berlin, Germany
28
Mediano PAM, Rosas FE, Farah JC, Shanahan M, Bor D, Barrett AB. Integrated information as a common signature of dynamical and information-processing complexity. CHAOS (WOODBURY, N.Y.) 2022; 32:013115. [PMID: 35105139 DOI: 10.1063/5.0063384] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Accepted: 12/03/2021] [Indexed: 06/14/2023]
Abstract
The apparent dichotomy between information-processing and dynamical approaches to complexity science forces researchers to choose between two diverging sets of tools and explanations, creating conflict and often hindering scientific progress. Nonetheless, given the shared theoretical goals between both approaches, it is reasonable to conjecture the existence of underlying common signatures that capture interesting behavior in both dynamical and information-processing systems. Here, we argue that a pragmatic use of integrated information theory (IIT), originally conceived in theoretical neuroscience, can provide a potential unifying framework to study complexity in general multivariate systems. By leveraging metrics put forward by the integrated information decomposition framework, our results reveal that integrated information can effectively capture surprisingly heterogeneous signatures of complexity, including metastability and criticality in networks of coupled oscillators as well as distributed computation and emergent stable particles in cellular automata, without relying on idiosyncratic, ad hoc criteria. These results show how an agnostic use of IIT can provide important steps toward bridging the gap between informational and dynamical approaches to complex systems.
Affiliation(s)
- Pedro A M Mediano
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Fernando E Rosas
- Centre for Psychedelic Research, Department of Brain Science, Imperial College London, London SW7 2DD, United Kingdom
- Juan Carlos Farah
- School of Engineering, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Murray Shanahan
- Department of Computing, Imperial College London, London SW7 2RH, United Kingdom
- Daniel Bor
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Adam B Barrett
- Sackler Center for Consciousness Science, Department of Informatics, University of Sussex, Brighton BN1 9RH, United Kingdom
29
Psychiatric Illnesses as Disorders of Network Dynamics. BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2021; 6:865-876. [DOI: 10.1016/j.bpsc.2020.01.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Accepted: 01/06/2020] [Indexed: 01/05/2023]
30
Konno D, Nishimoto S, Suzuki T, Ikegaya Y, Matsumoto N. Multiple states in ongoing neural activity in the rat visual cortex. PLoS One 2021; 16:e0256791. [PMID: 34437630 PMCID: PMC8389421 DOI: 10.1371/journal.pone.0256791] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 08/16/2021] [Indexed: 01/04/2023] Open
Abstract
The brain continuously produces internal activity in the absence of afferently salient sensory input. Spontaneous neural activity is intrinsically defined by circuit structures and associated with the mode of information processing and behavioral responses. However, the spatiotemporal dynamics of spontaneous activity in the visual cortices of behaving animals remain largely elusive. Using a custom-made electrode array, we recorded 32-site electrocorticograms in the primary and secondary visual cortex of freely behaving rats and determined the propagation patterns of spontaneous neural activity. Nonlinear dimensionality reduction and unsupervised clustering revealed multiple discrete states of the activity patterns. The activity remained stable in one state and suddenly jumped to another state. The diversity and dynamics of these internally switching cortical states imply flexibility of neural responses to various external inputs.
Affiliation(s)
- Daichi Konno
- Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo, Japan
- Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita City, Osaka, Japan
- Takafumi Suzuki
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita City, Osaka, Japan
- Yuji Ikegaya
- Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita City, Osaka, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
- Nobuyoshi Matsumoto
- Graduate School of Pharmaceutical Sciences, The University of Tokyo, Tokyo, Japan
31
Hurwitz C, Kudryashova N, Onken A, Hennig MH. Building population models for large-scale neural recordings: Opportunities and pitfalls. Curr Opin Neurobiol 2021; 70:64-73. [PMID: 34411907 DOI: 10.1016/j.conb.2021.07.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 06/11/2021] [Accepted: 07/14/2021] [Indexed: 11/15/2022]
Abstract
Modern recording technologies now enable simultaneous recording from large numbers of neurons. This has driven the development of new statistical models for analyzing and interpreting neural population activity. Here, we provide a broad overview of recent developments in this area. We compare and contrast different approaches, highlight strengths and limitations, and discuss biological and mechanistic insights that these methods provide.
Affiliation(s)
- Cole Hurwitz
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Nina Kudryashova
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Arno Onken
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
- Matthias H Hennig
- University of Edinburgh, Institute for Adaptive and Neural Computation, Edinburgh EH8 9AB, United Kingdom
32
Kimmel M, Hristova D. The Micro-genesis of Improvisational Co-creation. CREATIVITY RESEARCH JOURNAL 2021. [DOI: 10.1080/10400419.2021.1922197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
33
Wright JJ, Bourke PD. Combining inter-areal, mesoscopic, and neurodynamic models of cortical function: Response to Commentary on "The growth of cognition: Free energy minimization and the embryogenesis of cortical computation". Phys Life Rev 2021; 39:88-95. [PMID: 34393081 DOI: 10.1016/j.plrev.2021.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Accepted: 07/22/2021] [Indexed: 10/20/2022]
Affiliation(s)
- J J Wright
- Centre for Brain Research, and Department of Psychological Medicine, School of Medicine, University of Auckland, Auckland, New Zealand.
- P D Bourke
- School of Social Sciences, Faculty of Arts, Business, Law and Education, University of Western Australia, Perth, Australia
34
Yang L, Sun W, Turcotte M. Coexistence of Hopf-born rotation and heteroclinic cycling in a time-delayed three-gene auto-regulated and mutually-repressed core genetic regulation network. J Theor Biol 2021; 527:110813. [PMID: 34144050 DOI: 10.1016/j.jtbi.2021.110813] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 05/28/2021] [Accepted: 06/10/2021] [Indexed: 11/28/2022]
Abstract
In this work, we study the behavior of a time-delayed, mutually repressive, auto-activating three-gene system. Delays are introduced to account for the location difference between DNA transcription, which leads to production of messenger RNA, and its translation, which results in protein synthesis. We study the dynamics of the system using numerical simulations, computational bifurcation analysis and mathematical analysis. We find Hopf bifurcations leading to stable and unstable rotation in the system, and we study the rotational behavior as a function of cyclic mutual repression parameter asymmetry between each gene pair in the network. We focus on how rotation co-exists with a stable heteroclinic flow linking the three saddles in the system. We find that this coexistence allows for a transition between two markedly different types of rotation leading to strikingly different phenotypes. One type arises from the Hopf bifurcation, while the other belongs to heteroclinic cycling between the three saddle nodes in the system. We discuss the evolutionary and biological implications of our findings.
Affiliation(s)
- Lei Yang
- Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Weigang Sun
- Hangzhou Dianzi University, Hangzhou, Zhejiang, China
- Marc Turcotte
- Hangzhou Dianzi University, Hangzhou, Zhejiang, China.
35
Kang J, Jeong S, Pae C, Park H. Bayesian estimation of maximum entropy model for individualized energy landscape analysis of brain state dynamics. Hum Brain Mapp 2021; 42:3411-3428. [PMID: 33934421 PMCID: PMC8249903 DOI: 10.1002/hbm.25442] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 03/25/2021] [Accepted: 03/29/2021] [Indexed: 11/24/2022] Open
Abstract
The pairwise maximum entropy model (MEM) for resting state functional MRI (rsfMRI) has been used to generate energy landscapes of brain states and to explore nonlinear brain state dynamics. Research using MEM, however, has mostly been restricted to fixed‐effect group‐level analyses, using concatenated time series across individuals, due to the need for large samples in the parameter estimation of MEM. To mitigate the small sample problem in analyzing energy landscapes for individuals, we propose a Bayesian estimation of individual MEM using variational Bayes approximation (BMEM). We evaluated the performance of BMEM with respect to sample sizes and prior information using simulation. BMEM showed advantages over conventional maximum likelihood estimation in reliably estimating model parameters for individuals with small sample data, particularly when utilizing empirical priors derived from group data. We then analyzed individual rsfMRI data from the Human Connectome Project to show the usefulness of MEM in differentiating individuals and in exploring neural correlates of human behavior. MEM and its energy landscape properties showed high subject specificity comparable to that of functional connectivity. Canonical correlation analysis identified canonical variables for MEM highly associated with cognitive scores. Inter‐individual variations of cognitive scores were also reflected in energy landscape properties such as energies, occupation times, and basin sizes at local minima. We conclude that BMEM provides an efficient method to characterize dynamic properties of individuals using energy landscape analysis of individual brain states.
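The pairwise MEM at the heart of this abstract assigns every binarised brain state an energy, and the Boltzmann distribution over those energies defines the energy landscape. A minimal sketch follows; the parameters `h` and `J` here are hypothetical toy values, whereas in the paper they are estimated from rsfMRI (by variational Bayes in BMEM).

```python
import numpy as np

def mem_energy(state, h, J):
    """Energy of a binarised brain state under a pairwise maximum entropy model.

    state : vector of +/-1 activity indicators, one per region
    h     : first-order parameters (regional biases)
    J     : second-order parameters (pairwise couplings; symmetric, zero diagonal)
    """
    return -state @ h - 0.5 * state @ J @ state

# Hypothetical parameters for 3 regions (fitted to data in practice)
h = np.array([0.1, -0.2, 0.0])
J = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0, 0.2],
              [-0.3, 0.2, 0.0]])

# Enumerate all 2^3 states; the Boltzmann distribution over their energies
# defines the landscape whose local minima and basins anchor the analysis
states = np.array([[2 * int(b) - 1 for b in f"{i:03b}"] for i in range(8)])
energies = np.array([mem_energy(s, h, J) for s in states])
probs = np.exp(-energies)
probs /= probs.sum()
```

Exhaustive enumeration only works for small networks; with more regions the landscape is explored via local search over single-spin flips, which is where occupation times and basin sizes come from.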
Affiliation(s)
- Jiyoung Kang
- Center for Systems and Translational Brain Science, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, South Korea
- Department of Nuclear Medicine, Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
- Seok‐Oh Jeong
- Department of Statistics, Hankuk University of Foreign Studies, Yong‐In, Seoul, South Korea
- Chongwon Pae
- Center for Systems and Translational Brain Science, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, South Korea
- Department of Nuclear Medicine, Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
- Hae‐Jeong Park
- Center for Systems and Translational Brain Science, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, South Korea
- Department of Nuclear Medicine, Psychiatry, Yonsei University College of Medicine, Seoul, South Korea
- Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, Seoul, South Korea
36
Pezzulo G, LaPalme J, Durant F, Levin M. Bistability of somatic pattern memories: stochastic outcomes in bioelectric circuits underlying regeneration. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190765. [PMID: 33550952 PMCID: PMC7935058 DOI: 10.1098/rstb.2019.0765] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/06/2020] [Indexed: 02/06/2023] Open
Abstract
Nervous systems' computational abilities are an evolutionary innovation, specializing and speed-optimizing ancient biophysical dynamics. Bioelectric signalling originated in cells' communication with the outside world and with each other, enabling cooperation towards adaptive construction and repair of multicellular bodies. Here, we review the emerging field of developmental bioelectricity, which links the field of basal cognition to state-of-the-art questions in regenerative medicine, synthetic bioengineering and even artificial intelligence. One of the predictions of this view is that regeneration and regulative development can restore correct large-scale anatomies from diverse starting states because, like the brain, they exploit bioelectric encoding of distributed goal states, in this case, pattern memories. We propose a new interpretation of recent stochastic regenerative phenotypes in planaria, by appealing to computational models of memory representation and processing in the brain. Moreover, we discuss novel findings showing that bioelectric changes induced in planaria can be stored in tissue for over a week, thus revealing that somatic bioelectric circuits in vivo can implement a long-term, re-writable memory medium. A consideration of the mechanisms, evolution and functionality of basal cognition makes novel predictions and provides an integrative perspective on the evolution, physiology and biomedicine of information processing in vivo. This article is part of the theme issue 'Basal cognition: multicellularity, neurons and the cognitive lens'.
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Joshua LaPalme
- Allen Discovery Center, Tufts University, Medford, MA, USA
- Fallon Durant
- Allen Discovery Center, Tufts University, Medford, MA, USA
- Michael Levin
- Allen Discovery Center, Tufts University, Medford, MA, USA
37
Creaser J, Ashwin P, Postlethwaite C, Britz J. Noisy network attractor models for transitions between EEG microstates. JOURNAL OF MATHEMATICAL NEUROSCIENCE 2021; 11:1. [PMID: 33394133 PMCID: PMC7782644 DOI: 10.1186/s13408-020-00100-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Accepted: 12/08/2020] [Indexed: 06/12/2023]
Abstract
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition, and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Electroencephalogram (EEG) microstates are brief periods of stable scalp topography that have been identified as the electrophysiological correlate of functional magnetic resonance imaging defined resting-state networks. Spatiotemporal microstate sequences maintain high temporal resolution and have been shown to be scale-free with long-range temporal correlations. Previous attempts to model EEG microstate sequences have failed to capture this crucial property and so cannot fully capture the dynamics; this paper answers the call for more sophisticated modeling approaches. We present a dynamical model that exhibits a noisy network attractor between nodes that represent the microstates. Using an excitable network between four nodes, we can reproduce the transition probabilities between microstates but not the heavy tailed residence time distributions. We present two extensions to this model: first, an additional hidden node at each state; second, an additional layer that controls the switching frequency in the original network. Introducing either extension to the network gives the flexibility to capture these heavy tails. We compare the model generated sequences to microstate sequences from EEG data collected from healthy subjects at rest. For the first extension, we show that the hidden nodes 'trap' the trajectories allowing the control of residence times at each node. For the second extension, we show that two nodes in the controlling layer are sufficient to model the long residence times. 
Finally, we show that in addition to capturing the residence time distributions and transition probabilities of the sequences, these two models capture additional properties of the sequences including having interspersed long and short residence times and long range temporal correlations in line with the data as measured by the Hurst exponent.
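For intuition about why heavy-tailed residence times are hard to capture, one can contrast the paper's noisy network attractor with the simplest surrogate: a memoryless Markov chain over four microstate labels. This sketch uses a hypothetical transition matrix `P`, not parameters fitted to EEG.

```python
import numpy as np

def simulate_microstates(P, steps=10000, seed=0):
    """Sample a microstate label sequence from transition matrix P (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    seq = np.empty(steps, dtype=int)
    seq[0] = 0
    for t in range(1, steps):
        seq[t] = rng.choice(len(P), p=P[seq[t - 1]])
    return seq

def residence_times(seq):
    """Lengths of consecutive runs of the same label."""
    change = np.flatnonzero(np.diff(seq)) + 1
    return np.diff(np.concatenate(([0], change, [len(seq)])))

# Hypothetical 4-state transition matrix with a 'sticky' diagonal
P = np.full((4, 4), 0.05) + 0.80 * np.eye(4)
seq = simulate_microstates(P)
runs = residence_times(seq)
# A memoryless chain yields geometrically distributed (light-tailed) runs,
# with mean 1 / (1 - 0.85) ~ 6.7 samples here
```

Such a chain can match the empirical transition probabilities but its residence times are necessarily geometric, which is precisely the limitation that the hidden-node and control-layer extensions above are designed to overcome.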
Affiliation(s)
- Jennifer Creaser
- Department of Mathematics and EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK.
- Peter Ashwin
- Department of Mathematics and EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter, Exeter, UK
- Juliane Britz
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Neurology Unit, Medicine Section, Faculty of Science and Medicine, University of Fribourg, Fribourg, Switzerland
38
Woo JH, Honey CJ, Moon JY. Phase and amplitude dynamics of coupled oscillator systems on complex networks. CHAOS (WOODBURY, N.Y.) 2020; 30:121102. [PMID: 33380037 PMCID: PMC7714526 DOI: 10.1063/5.0031031] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Accepted: 11/05/2020] [Indexed: 06/12/2023]
Abstract
We investigated locking behaviors of coupled limit-cycle oscillators with phase and amplitude dynamics. We focused on how the dynamics are affected by inhomogeneous coupling strength and by angular and radial shifts in coupling functions. We performed mean-field analyses of oscillator systems with inhomogeneous coupling strength, testing Gaussian, power-law, and brain-like degree distributions. Even for oscillators with identical intrinsic frequencies and intrinsic amplitudes, we found that the coupling strength distribution and the coupling function generated a wide repertoire of phase and amplitude dynamics. These included fully and partially locked states in which high-degree or low-degree nodes would phase-lead the network. The mean-field analytical findings were confirmed via numerical simulations. The results suggest that, in oscillator systems in which individual nodes can independently vary their amplitude over time, qualitatively different dynamics can be produced via shifts in the coupling strength distribution and the coupling form. Of particular relevance to information flows in oscillator networks, changes in the non-specific drive to individual nodes can make high-degree nodes phase-lag or phase-lead the rest of the network.
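A minimal numerical sketch of coupled limit-cycle oscillators with both phase and amplitude dynamics can be written in the Stuart-Landau normal form. The parameter values (`a`, `K`), the ring topology, and the Euler scheme are illustrative assumptions, not the paper's setup or analysis.

```python
import numpy as np

def simulate_stuart_landau(A, omega, a=0.5, K=0.1, dt=0.01, steps=5000, seed=0):
    """Euler integration of diffusively coupled Stuart-Landau oscillators.

    Each node carries phase *and* amplitude dynamics:
        dz_j/dt = (a + i*omega_j - |z_j|**2) * z_j + K * sum_k A[j, k] * (z_k - z_j)
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(len(omega)) + 1j * rng.standard_normal(len(omega))
    for _ in range(steps):
        coupling = K * (A @ z - A.sum(axis=1) * z)  # diffusive coupling term
        z = z + dt * ((a + 1j * omega - np.abs(z) ** 2) * z + coupling)
    return z

# Hypothetical example: 10 identical oscillators coupled on a ring
n = 10
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
omega = np.full(n, 2.0 * np.pi)          # identical intrinsic frequencies
z = simulate_stuart_landau(A, omega)
order = np.abs(np.mean(z / np.abs(z)))   # Kuramoto order parameter in [0, 1]
```

Replacing `A` with a heterogeneous degree distribution (Gaussian, power-law, or brain-like, as in the abstract) and shifting the coupling function is what generates the repertoire of partially locked, phase-leading, and phase-lagging states described above.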
Affiliation(s)
- Jae Hyung Woo
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland 21218, USA
- Christopher J. Honey
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland 21218, USA
- Joon-Young Moon
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland 21218, USA
39
Torres JJ, Baroni F, Latorre R, Varona P. Temporal discrimination from the interaction between dynamic synapses and intrinsic subthreshold oscillations. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.031] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
40
Dynamical mesoscale model of absence seizures in genetic models. PLoS One 2020; 15:e0239125. [PMID: 32991590 PMCID: PMC7524004 DOI: 10.1371/journal.pone.0239125] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2020] [Accepted: 08/31/2020] [Indexed: 12/20/2022] Open
Abstract
A mesoscale network model is proposed for the development of spike and wave discharges (SWDs) in the cortico-thalamo-cortical (C-T-C) circuit. It is based on experimental findings in two genetic models of childhood absence epilepsy: rats of the WAG/Rij and GAERS strains. The model is organized hierarchically into two levels (brain structures and individual neurons) and composed of compartments representing the somatosensory cortex and the reticular and ventroposteriomedial thalamic nuclei. The cortex and the two thalamic compartments contain excitatory and inhibitory connections between four populations of neurons. Two connected subnetworks, both including relevant parts of a C-T-C network responsible for SWD generation, are modelled: a smaller subnetwork for the focal area in which SWD generation can take place, and a larger subnetwork for surrounding areas which can only be passively involved in SWDs, but which are mostly responsible for normal brain activity. This assumption allows modeling of both normal and SWD activity as a dynamical system (no noise is necessary), providing reproducibility of results and allowing future analysis by means of dynamical systems theory. The model is able to reproduce most time-frequency changes in EEG activity accompanying the transition from normal to epileptiform activity and back. Three different mechanisms of SWD initiation reported previously in experimental studies were successfully reproduced in the model. The model also incorporates a separate mechanism for the maintenance of SWDs, based on coupling analysis of experimental data. Finally, the model reproduces the possibility of stopping ongoing SWDs with high-frequency electrical stimulation, as described in the literature.
41
Binding brain dynamics building up heteroclinic networks: Comment on "The growth of cognition: Free energy minimization and the embryogenesis of cortical computation" by J.J. Wright and P.D. Bourke. Phys Life Rev 2020; 36:33-34. [PMID: 32883600 DOI: 10.1016/j.plrev.2020.08.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 08/19/2020] [Indexed: 01/06/2023]
42
Alderson TH, Bokde ALW, Kelso JAS, Maguire L, Coyle D. Metastable neural dynamics underlies cognitive performance across multiple behavioural paradigms. Hum Brain Mapp 2020; 41:3212-3234. [PMID: 32301561 PMCID: PMC7375112 DOI: 10.1002/hbm.25009] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 01/20/2020] [Accepted: 03/31/2020] [Indexed: 12/24/2022] Open
Abstract
Despite resting state networks being associated with a variety of cognitive abilities, it remains unclear how these local areas act in concert to express particular cognitive operations. Theoretical and empirical accounts indicate that large-scale resting state networks reconcile dual tendencies towards integration and segregation by operating in a metastable regime of their coordination dynamics. Metastability may confer important behavioural qualities by binding distributed local areas into large-scale neurocognitive networks. We tested this hypothesis by analysing fMRI data in a large cohort of healthy individuals (N = 566) and comparing the metastability of the brain's large-scale resting network architecture at rest and during the performance of several tasks. Metastability was estimated using a well-defined collective variable capturing the level of 'phase-locking' between large-scale networks over time. Task-based reasoning was principally characterised by high metastability in cognitive control networks and low metastability in sensory processing areas. Although metastability between resting state networks increased during task performance, cognitive ability was more closely linked to spontaneous activity. High metastability in the intrinsic connectivity of cognitive control networks was linked to novel problem solving or fluid intelligence, but was less important in tasks relying on previous experience or crystallised intelligence. Crucially, subjects with resting architectures similar or 'pre-configured' to a task-general arrangement demonstrated superior cognitive performance. Taken together, our findings support a key linkage between the spontaneous metastability of large-scale networks in the cerebral cortex and cognition.
Affiliation(s)
- Thomas H. Alderson
- Intelligent Systems Research Centre, Ulster University, Antrim, United Kingdom
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana‐Champaign, Urbana, Illinois, United States
- Arun L. W. Bokde
- Trinity College Institute of Neuroscience and Cognitive Systems Group, Discipline of Psychiatry, School of Medicine, Trinity College Dublin, Dublin, Ireland
- J. A. Scott Kelso
- Intelligent Systems Research Centre, Ulster University, Antrim, United Kingdom
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States
- Liam Maguire
- Intelligent Systems Research Centre, Ulster University, Antrim, United Kingdom
- Damien Coyle
- Intelligent Systems Research Centre, Ulster University, Antrim, United Kingdom
43
Voit M, Veneziale S, Meyer-Ortmanns H. Coupled heteroclinic networks in disguise. CHAOS (WOODBURY, N.Y.) 2020; 30:083113. [PMID: 32872836 DOI: 10.1063/5.0006720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Accepted: 07/17/2020] [Indexed: 06/11/2023]
Abstract
We consider diffusively coupled heteroclinic networks, ranging from two coupled heteroclinic cycles to small numbers of heteroclinic networks, each composed of two connected heteroclinic cycles. In these systems, we analyze patterns of synchronization as a function of the coupling strength. We find synchronized limit cycles, slowing-down states, quasiperiodic motion of rotating tori solutions, transient chaos, and chaos, generally accompanied by multistable behavior. This means that coupled heteroclinic networks easily come in disguise even when they constitute the main building blocks of the dynamics. The generated spatial patterns are rotating waves with on-site limit cycles and perturbed traveling waves from on-site quasiperiodic behavior. The bifurcation diagrams of these simple systems are in general quite intricate.
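A minimal sketch of two diffusively coupled heteroclinic cycles can be built from May-Leonard (generalised Lotka-Volterra) systems, each of which supports a heteroclinic cycle among three saddle equilibria. The parameters and integration scheme below are illustrative, not those of the paper:

```python
import numpy as np

def simulate_coupled_glv(k=0.1, steps=20000, dt=1e-3, seed=0):
    """Euler-integrate two May-Leonard systems x and y, coupled
    diffusively with strength k. Each uncoupled system has a
    heteroclinic cycle among the saddles (1,0,0), (0,1,0), (0,0,1)."""
    rng = np.random.default_rng(seed)
    A = np.array([[1.0, 1.5, 0.5],
                  [0.5, 1.0, 1.5],
                  [1.5, 0.5, 1.0]])  # cyclic competition matrix
    x = rng.uniform(0.1, 0.5, 3)
    y = rng.uniform(0.1, 0.5, 3)
    traj = np.empty((steps, 6))
    for t in range(steps):
        dx = x * (1.0 - A @ x) + k * (y - x)  # GLV flow + diffusive coupling
        dy = y * (1.0 - A @ y) + k * (x - y)
        x, y = x + dt * dx, y + dt * dy
        traj[t, :3], traj[t, 3:] = x, y
    return traj
```

Sweeping `k` in such a sketch is the kind of experiment the abstract describes: the coupled system can settle into synchronized limit cycles, quasiperiodic motion, or chaos, disguising the underlying heteroclinic structure.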
Affiliation(s)
- Maximilian Voit
- Department of Physics and Earth Sciences, Jacobs-University Bremen, 28759 Bremen, Germany
- Sara Veneziale
- Mathematics Institute, University of Warwick, Coventry CV4 7AL, United Kingdom
44
Dulov E. Evaluation of Decision-Making Chains and their Fractal Dimensions. Integr Psychol Behav Sci 2020; 55:386-429. [PMID: 32666328 DOI: 10.1007/s12124-020-09566-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
A decision-making process is part of decision-making theory, which places a major research interest on how the process is conducted and what affects it. Naturally, the process is perceived as a sequence of steps moving little by little towards a settled goal. Analysis can be done beforehand (planning), during the process (control and adaptation), or afterwards (evaluation). We may also study someone else's decision process first, mainly to avoid repeating their mistakes. Either way, making decisions, or merely observing and studying them, is part of life. Each approach assumes an evaluation of the current situation and of the expected outcomes, assigning each decision some "quality" according to a fixed set of criteria (e.g., probabilistic ones) or flexible ones (various heuristics). Thus, from the mathematical and philosophical points of view, we face three principal questions applicable to any particular decision-making theory: (1) How many criteria do we need? (2) How well are they defined and described? (3) Are there relations between them, or can we consider them independent? Any admissible theory must also address efficiency (at least, not over-complicating and postponing the decision-making process), the possibility of tracking and securing major and intermediate goals, and so on. Theoretical research, and even ad-hoc hypotheses, rely on reasonable assumptions about criteria selection and quantity, whether pure or context-oriented; here we consider the problem without the restrictions of any specific theory, domain, or context, using common sense and the analogies between the exact and human sciences identified in the twentieth century and later.
We therefore formed a hypothesis about how many evaluation criteria are really needed to operate inside an abstract decision domain, regardless of the nature of the criteria and their relations with real-world processes. It was not a great surprise that the answer turned out to be related to the concepts of fractals, chaos, and the notion of fractal dimension. Their presence has recently been discovered in many social and biological sciences, so the investigation continued beyond finding "deep" arguments for our postulates: recent results in mathematics and physics also show that most dynamic processes can be described differently depending on whether one analyses the current situation, the short-term run, or the long-term run. Hence, the nature and the quantity of the involved criteria may vary (they could be implicitly time-dependent), and this kind of relation must be studied as well.
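The fractal dimension invoked here is typically estimated numerically. A box-counting sketch for a one-dimensional point set (an illustration of the concept, not the author's construction):

```python
import numpy as np

def boxcount_dimension(points, eps_list):
    """Box-counting estimate of the fractal dimension of a 1-D point set:
    count the occupied boxes N(eps) at each box size eps, then fit the
    slope of log N(eps) against log(1/eps)."""
    counts = [len(np.unique(np.floor(np.asarray(points) / eps)))
              for eps in eps_list]
    slope, _ = np.polyfit(np.log(1.0 / np.array(eps_list)), np.log(counts), 1)
    return slope
```

For points filling an interval the estimate approaches 1; for a Cantor-like subset it would fall strictly between 0 and 1, which is the sense in which a "fractional" number of effective criteria can arise.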
45
Venkadesh S, Barreto E, Ascoli GA. Itinerant complexity in networks of intrinsically bursting neurons. CHAOS (WOODBURY, N.Y.) 2020; 30:061106. [PMID: 32611128 PMCID: PMC7311180 DOI: 10.1063/5.0010334] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 05/29/2020] [Indexed: 06/11/2023]
Abstract
Active neurons can be broadly classified by their intrinsic oscillation patterns into two classes characterized by spiking or bursting. Here, we show that networks of identical bursting neurons with inhibitory pulsatory coupling exhibit itinerant dynamics. Using the relative phases of bursts between neurons, we numerically demonstrate that the network exhibits endogenous transitions between multiple modes of transient synchrony. This is true even for bursts consisting of two spikes. In contrast, our simulations reveal that networks of identical singlet-spiking neurons do not exhibit such complexity. These results suggest a role for bursting dynamics in realizing itinerant complexity in neural circuits.
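Intrinsically bursting dynamics of the kind simulated here can be sketched with the Izhikevich model; the "intrinsically bursting" parameter set below (c = -55 mV, d = 4) is a standard choice, though the paper's own neuron model and network details may differ:

```python
def izhikevich(a=0.02, b=0.2, c=-55.0, d=4.0, I=10.0, T=1000.0, dt=0.5):
    """Euler-integrate a single Izhikevich neuron for T ms with constant
    input current I; returns spike times in ms. With c=-55 and d=4 the
    neuron fires in bursts rather than single spikes."""
    v, u = -70.0, b * -70.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record time and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes
```

Clustered inter-spike intervals (short within a burst, long between bursts) distinguish this regime from the singlet-spiking networks the abstract contrasts it with.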
46
Mäs M, Helbing D. Random Deviations Improve Micro-Macro Predictions: An Empirical Test. SOCIOLOGICAL METHODS & RESEARCH 2020; 49:387-417. [PMID: 32655202 PMCID: PMC7324148 DOI: 10.1177/0049124117729708] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Many sociological theories make critically different macro-predictions when their micro-assumptions are implemented stochastically rather than deterministically. Deviations from individuals' behavioral patterns described by micro-theories can spark cascades that change macro-outcomes, even when deviations are rare and random. With two experiments, we empirically tested whether macro-phenomena can be critically shaped by random deviations. Ninety-six percent of participants' decisions were in line with a deterministic theory of bounded rationality. Despite this impressive micro-level accuracy, the deterministic model failed to predict the observed macro-outcomes. However, a stochastic version of the same micro-theory largely improved macro-predictions. The stochastic model also correctly predicted the conditions under which deviations mattered. Results also supported the hypothesis that nonrandom deviations can result in fundamentally different macro-outcomes than random deviations. In conclusion, we echo the warning that deterministic micro-theories can be misleading. Our findings show that taking deviations into account in sociological theories can improve explanations and predictions.
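The deterministic/stochastic contrast tested here can be sketched with a logit choice rule, in which the deterministic best response is recovered as the precision parameter grows large, while finite precision permits rare random deviations (an illustration, not the authors' exact model):

```python
import numpy as np

def choose(utilities, beta, rng):
    """Logit (softmax) choice over options: P(i) proportional to
    exp(beta * u_i). beta -> infinity gives the deterministic best
    response; small beta injects random deviations."""
    u = np.asarray(utilities, dtype=float)
    p = np.exp(beta * (u - u.max()))  # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(u), p=p)
```

Simulating a population of such agents under high versus moderate `beta` is the kind of exercise that reveals how rare micro-deviations can cascade into different macro-outcomes.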
Affiliation(s)
- Michael Mäs
- Department of Sociology/ICS, University of Groningen, Groningen, the Netherlands
- Dirk Helbing
- ETH Zurich, Zürich, Switzerland
- Delft University of Technology, Delft, the Netherlands
47
Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLoS Comput Biol 2020; 16:e1007655. [PMID: 32053594 PMCID: PMC7043794 DOI: 10.1371/journal.pcbi.1007655] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 02/26/2020] [Accepted: 01/14/2020] [Indexed: 01/04/2023] Open
Abstract
Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
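The spectral criterion reviewed in the abstract is simple to state in code: for stable linear dynamics dx/dt = (-I + W)x, some input is transiently amplified exactly when the largest eigenvalue of the symmetric part of W exceeds 1. A sketch:

```python
import numpy as np

def amplifies_transients(W):
    """True iff the stable linear network dx/dt = (-I + W) x transiently
    amplifies some input, i.e. iff the largest eigenvalue of the
    symmetric part (W + W.T) / 2 exceeds 1."""
    return bool(np.linalg.eigvalsh((W + W.T) / 2.0).max() > 1.0)
```

For example, a purely feedforward (hence non-normal) coupling W = [[0, w], [0, 0]] satisfies the criterion whenever w > 2, even though W itself has only zero eigenvalues.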
Affiliation(s)
- Giulio Bondanelli
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d'Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationelles, Département d'Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
48
Dynamical Emergence Theory (DET): A Computational Account of Phenomenal Consciousness. Minds Mach (Dordr) 2020. [DOI: 10.1007/s11023-020-09516-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
49
Zarghami TS, Friston KJ. Dynamic effective connectivity. Neuroimage 2019; 207:116453. [PMID: 31821868 DOI: 10.1016/j.neuroimage.2019.116453] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 10/29/2019] [Accepted: 12/06/2019] [Indexed: 01/17/2023] Open
Abstract
Metastability is a key source of itinerant dynamics in the brain; namely, spontaneous spatiotemporal reorganization of neuronal activity. This itinerancy has been the focus of numerous dynamic functional connectivity (DFC) analyses - developed to characterize the formation and dissolution of distributed functional patterns over time, using resting state fMRI. However, aside from technical and practical controversies, these approaches cannot recover the neuronal mechanisms that underwrite itinerant (e.g., metastable) dynamics-due to their descriptive, model-free nature. We argue that effective connectivity (EC) analyses are more apt for investigating the neuronal basis of metastability. To this end, we appeal to biologically-grounded models (i.e., dynamic causal modelling, DCM) and dynamical systems theory (i.e., heteroclinic sequential dynamics) to create a probabilistic, generative model of haemodynamic fluctuations. This model generates trajectories in the parametric space of EC modes (i.e., states of connectivity) that characterize functional brain architectures. In brief, it extends an established spectral DCM, to generate functional connectivity data features that change over time. This foundational paper tries to establish the model's face validity by simulating non-stationary fMRI time series and recovering key model parameters (i.e., transition probabilities among connectivity states and the parametric nature of these states) using variational Bayes. These data are further characterized using Bayesian model comparison (within and between subjects). Finally, we consider practical issues that attend applications and extensions of this scheme. Importantly, the scheme operates within a generic Bayesian framework - that can be adapted to study metastability and itinerant dynamics in any non-stationary time series.
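The generative model couples fast haemodynamic dynamics to a slow Markov chain over connectivity states. The slow level can be sketched as sampling from a transition matrix among discrete modes (hypothetical values, not the paper's estimated parameters):

```python
import numpy as np

def sample_state_sequence(P, n_steps, rng, s0=0):
    """Sample a sequence of discrete connectivity states from a Markov
    transition matrix P (rows sum to 1). In the full generative model,
    each state would index one mode of effective connectivity that
    parameterises the fast haemodynamic dynamics."""
    states = [s0]
    for _ in range(n_steps - 1):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states
```

Inverting such a model with variational Bayes, as the paper does, means recovering both the transition probabilities P and the connectivity parameters of each state from the observed time series.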
Affiliation(s)
- Tahereh S Zarghami
- Bio-Electric Department, School of Electrical and Computer Engineering, University of Tehran, Amirabad, Tehran, Iran.
- Karl J Friston
- The Wellcome Centre for Human Neuroimaging, University College London, Queen Square, London, WC1N 3AR, UK.
50
Kang J, Pae C, Park HJ. Graph-theoretical analysis for energy landscape reveals the organization of state transitions in the resting-state human cerebral cortex. PLoS One 2019; 14:e0222161. [PMID: 31498822 PMCID: PMC6733463 DOI: 10.1371/journal.pone.0222161] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Accepted: 08/22/2019] [Indexed: 11/19/2022] Open
Abstract
The resting-state brain is often considered a nonlinear dynamic system transitioning among multiple coexisting stable states. Despite the increasing number of studies on the multistability of the brain system, the processes of state transitions have rarely been systematically explored. Thus, we investigated the state transition processes of the human cerebral cortex system at rest by introducing a graph-theoretical analysis of the state transition network. The energy landscape analysis of brain state occurrences, estimated using the pairwise maximum entropy model for resting-state fMRI data, identified multiple local minima, some of which mediate multi-step transitions toward the global minimum. The state transition among local minima is clustered into two groups according to state transition rates and most inter-group state transitions were mediated by a hub transition state. The distance to the hub transition state determined the path length of the inter-group transition. The cortical system appeared to have redundancy in inter-group transitions when the hub transition state was removed. Such a hub-like organization of transition processes disappeared when the connectivity of the cortical system was altered from the resting-state configuration. In the state transition, the default mode network acts as a transition hub, while coactivation of the prefrontal cortex and default mode network is captured as the global minimum. In summary, the resting-state cerebral cortex has a well-organized architecture of state transitions among stable states, when evaluated by a graph-theoretical analysis of the nonlinear state transition network of the brain.
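The energy landscape in this analysis derives from a pairwise maximum entropy (Ising-like) model fitted to binarised activity. For small systems, its local minima can be found by exhaustive single-flip comparison, as sketched below with hypothetical parameters:

```python
import numpy as np
from itertools import product

def energy(s, h, J):
    """Energy of a binary (+/-1) state under a pairwise maximum entropy
    model: E(s) = -h.s - 0.5 * s.J.s (J symmetric, zero diagonal)."""
    return -h @ s - 0.5 * s @ J @ s

def local_minima(h, J):
    """Enumerate all states of an n-node system and keep those whose
    energy is below that of every single-flip neighbour: the local
    minima of the energy landscape."""
    n = len(h)
    minima = []
    for bits in product([-1, 1], repeat=n):
        s = np.array(bits, dtype=float)
        e = energy(s, h, J)
        if all(e < energy(np.where(np.arange(n) == i, -s, s), h, J)
               for i in range(n)):
            minima.append(bits)
    return minima
```

The state transition network analysed in the paper is built on top of such minima: edges connect minima via the saddle-like transition states that mediate multi-step paths toward the global minimum.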
Affiliation(s)
- Jiyoung Kang
- Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Chongwon Pae
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hae-Jeong Park
- Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- BK21 PLUS Project for Medical Science, Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Cognitive Science, Yonsei University, Seoul, Republic of Korea