1. Koch D, Nandan A, Ramesan G, Koseska A. Biological computations: Limitations of attractor-based formalisms and the need for transients. Biochem Biophys Res Commun 2024; 720:150069. [PMID: 38754165] [DOI: 10.1016/j.bbrc.2024.150069]
Abstract
Living systems, from single cells to higher vertebrates, receive a continuous stream of non-stationary inputs that they sense, e.g., via cell-surface receptors or sensory organs. By integrating this time-varying, multi-sensory, and often noisy information with memory, using complex molecular or neuronal networks, they generate a variety of responses beyond simple stimulus-response associations, including avoidance behavior, life-long learning, or social interactions. In a broad sense, these processes can be understood as a type of biological computation. Taking as a basis generic features of biological computations, such as real-time responsiveness or the robustness and flexibility of the computation, we highlight the limitations of the current attractor-based framework for understanding computations in biological systems. We argue that frameworks based on transient dynamics away from attractors are better suited to describe the computations performed by neuronal and signaling networks. In particular, we discuss how quasi-stable transient dynamics arising from ghost states that emerge at criticality hold promising potential for developing an integrated framework of computation that can help us understand how living systems actively process information and learn from their continuously changing environment.
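To make the notion of a ghost state concrete, the sketch below (a generic textbook-style illustration, not code from the paper) integrates the one-dimensional saddle-node normal form just past criticality, dx/dt = ε + x². The fixed points have disappeared, yet trajectories linger for a long, stereotyped time near x = 0, the "ghost" of the annihilated attractor; all parameter values are illustrative assumptions.

```python
import numpy as np

def ghost_transient(eps, x0=-1.0, dt=1e-3, x_escape=1.0, slow_band=0.1, t_max=1e4):
    """Integrate dx/dt = eps + x**2, the saddle-node normal form just past criticality.

    For small eps > 0 the fixed points have vanished, but trajectories still linger
    near x = 0 (the 'ghost' of the annihilated attractor) for a quasi-stable
    transient whose duration grows like 1/sqrt(eps) before escaping to x_escape.
    Returns (total escape time, time spent inside |x| < slow_band).
    """
    x, t, dwell = x0, 0.0, 0.0
    while x < x_escape and t < t_max:
        if abs(x) < slow_band:
            dwell += dt                    # time accumulated near the ghost
        x += dt * (eps + x * x)            # explicit Euler step
        t += dt
    return t, dwell

for eps in (1e-2, 1e-3, 1e-4):
    total, dwell = ghost_transient(eps)
    print(f"eps={eps:.0e}: escape after {total:7.1f} time units, "
          f"{dwell:7.1f} of them spent within |x| < 0.1 (near the ghost)")
```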
Affiliation(s)
- Daniel Koch: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Akhilesh Nandan: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Gayathri Ramesan: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Aneta Koseska: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
2. Weber J, Solbakk AK, Blenkmann AO, Llorens A, Funderud I, Leske S, Larsson PG, Ivanovic J, Knight RT, Endestad T, Helfrich RF. Ramping dynamics and theta oscillations reflect dissociable signatures during rule-guided human behavior. Nat Commun 2024; 15:637. [PMID: 38245516] [PMCID: PMC10799948] [DOI: 10.1038/s41467-023-44571-7]
Abstract
Contextual cues and prior evidence guide human goal-directed behavior. The neurophysiological mechanisms that implement contextual priors to guide subsequent actions in the human brain remain unclear. Using intracranial electroencephalography (iEEG), we demonstrate that increasing uncertainty introduces a shift from a purely oscillatory to a mixed processing regime with an additional ramping component. Oscillatory and ramping dynamics reflect dissociable signatures, which likely differentially contribute to the encoding and transfer of different cognitive variables in a cue-guided motor task. The results support the idea that prefrontal activity encodes rules and ensuing actions in distinct coding subspaces, while theta oscillations synchronize the prefrontal-motor network, possibly to guide action execution. Collectively, our results reveal how two key features of large-scale neural population activity, namely continuous ramping dynamics and oscillatory synchrony, jointly support rule-guided human behavior.
Affiliation(s)
- Jan Weber: Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany; International Max Planck Research School for the Mechanisms of Mental Function and Dysfunction, University of Tübingen, Tübingen, Germany
- Anne-Kristin Solbakk: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Neurosurgery, Oslo University Hospital, Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Alejandro O Blenkmann: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Anais Llorens: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA
- Ingrid Funderud: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway
- Sabine Leske: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Musicology, University of Oslo, Oslo, Norway
- Robert T Knight: Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA, USA; Department of Psychology, UC Berkeley, Berkeley, CA, USA
- Tor Endestad: Department of Psychology, University of Oslo, Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Randolph F Helfrich: Hertie Institute for Clinical Brain Research, Center for Neurology, University Medical Center Tübingen, Tübingen, Germany
3. Alexander P, Parastesh F, Hamarash II, Karthikeyan A, Jafari S, He S. Effect of the electromagnetic induction on a modified memristive neural map model. Math Biosci Eng 2023; 20:17849-17865. [PMID: 38052539] [DOI: 10.3934/mbe.2023793]
Abstract
The significance of discrete neural models lies in their mathematical simplicity and computational ease. This research focuses on enhancing a neural map model by incorporating a hyperbolic tangent-based memristor. The study extensively explores the impact of magnetic induction strength on the model's dynamics, analyzing bifurcation diagrams and the presence of multistability. Moreover, the investigation extends to the collective behavior of coupled memristive neural maps with electrical, chemical, and magnetic connections. The synchronization of these coupled memristive maps is examined, revealing that chemical coupling exhibits a broader synchronization area. Additionally, diverse chimera states and cluster synchronized states are identified and discussed.
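As a rough illustration of the kind of analysis described here, the sketch below iterates a generic discrete neuron map with a hyperbolic-tangent memristive feedback term and scans the magnetic induction strength k to build a crude bifurcation summary. The map equations, parameters, and the interpretation of k are illustrative assumptions for demonstration only, not the model studied in the paper.

```python
import numpy as np

def memristive_map(k, a=3.0, c=0.5, n_steps=3000, n_keep=100, x0=0.1, phi0=0.0):
    """Iterate an illustrative 2D map with a tanh memristive feedback term.

        x[n+1]   = tanh(a * x[n]) + k * tanh(phi[n]) * x[n]
        phi[n+1] = phi[n] + x[n] - c * phi[n]

    phi plays the role of magnetic flux and k is the induction strength.
    Returns the last n_keep values of x (post-transient) for a bifurcation scan.
    """
    x, phi = x0, phi0
    xs = []
    for n in range(n_steps):
        x, phi = np.tanh(a * x) + k * np.tanh(phi) * x, phi + x - c * phi
        if n >= n_steps - n_keep:
            xs.append(x)
    return np.array(xs)

# crude bifurcation scan: count the distinct asymptotic x values for each k
for k in np.linspace(0.0, 0.9, 10):
    tail = memristive_map(k)
    print(f"k = {k:4.2f}: {len(np.unique(np.round(tail, 4))):3d} distinct states, "
          f"x in [{tail.min():+.3f}, {tail.max():+.3f}]")
```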
Affiliation(s)
- Prasina Alexander: Centre for Nonlinear Systems, Chennai Institute of Technology, Chennai, India
- Fatemeh Parastesh: Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Iran
- Ibrahim Ismael Hamarash: Electrical Engineering Department, Salahaddin University-Erbil, Kirkuk Rd., Erbil, Kurdistan, Iraq; School of Computer Science and Engineering, University of Kurdistan Hewler, 40m St., Erbil, Kurdistan, Iraq
- Anitha Karthikeyan: Department of Electronics and Communication Engineering, Vemu Institute of Technology, Chithoor, India; Department of Electronics and Communications Engineering and University Centre for Research & Development, Chandigarh University, Mohali-140413, Punjab
- Sajad Jafari: Department of Biomedical Engineering, Amirkabir University of Technology (Tehran Polytechnic), Iran; Health Technology Research Institute, Amirkabir University of Technology (Tehran Polytechnic), Iran
- Shaobo He: School of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
4. Matzel LD, Sauce B. A multi-faceted role of dual-state dopamine signaling in working memory, attentional control, and intelligence. Front Behav Neurosci 2023; 17:1060786. [PMID: 36873775] [PMCID: PMC9978119] [DOI: 10.3389/fnbeh.2023.1060786]
Abstract
Genetic evidence strongly suggests that individual differences in intelligence will not be reducible to a single dominant cause. However, some of those variations/changes may be traced to tractable, cohesive mechanisms. One such mechanism may be the balance of dopamine D1 (D1R) and D2 (D2R) receptors, which regulate intrinsic currents and synaptic transmission in frontal cortical regions. Here, we review evidence from human, animal, and computational studies that suggest that this balance (in density, activity state, and/or availability) is critical to the implementation of executive functions such as attention and working memory, both of which are principal contributors to variations in intelligence. D1 receptors dominate neural responding during stable periods of short-term memory maintenance (requiring attentional focus), while D2 receptors play a more specific role during periods of instability such as changing environmental or memory states (requiring attentional disengagement). Here we bridge these observations with known properties of human intelligence. Starting from theories of intelligence that place executive functions (e.g., working memory and attentional control) at its center, we propose that dual-state dopamine signaling might be a causal contributor to at least some of the variation in intelligence across individuals and its change by experiences/training. Although it is unlikely that such a mechanism can account for more than a modest portion of the total variance in intelligence, our proposal is consistent with an array of available evidence and has a high degree of explanatory value. We suggest future directions and specific empirical tests that can further elucidate these relationships.
Affiliation(s)
- Louis D Matzel: Department of Psychology, Rutgers University, Piscataway, NJ, United States
- Bruno Sauce: Department of Biological Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
5. Pietras B, Schmutz V, Schwalger T. Mesoscopic description of hippocampal replay and metastability in spiking neural networks with short-term plasticity. PLoS Comput Biol 2022; 18:e1010809. [PMID: 36548392] [PMCID: PMC9822116] [DOI: 10.1371/journal.pcbi.1010809]
Abstract
Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of functional activity patterns are propagating bursts of place-cell activities called hippocampal replay, which is critical for memory consolidation. The sudden and repeated occurrences of these burst states during ongoing neural activity suggest metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model which accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to stochastic spiking and network size, and fatigue is explicitly linked to synaptic dynamics. To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie to obtain a "chemical Langevin equation", which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down-states dynamics) by means of phase-plane analysis. An extension of the Langevin equation for small network sizes is also presented. The stochastic neural mass model constitutes the basic component of our mesoscopic model for replay. We show that the mesoscopic model faithfully captures the statistical structure of individual replayed trajectories in microscopic simulations and in previously reported experimental data. Moreover, compared to the deterministic Romani-Tsodyks model of place-cell dynamics, it exhibits a higher level of variability regarding order, direction and timing of replayed trajectories, which seems biologically more plausible and could be functionally desirable. This variability is the product of a new dynamical regime where metastability emerges from a complex interplay between finite-size fluctuations and local fatigue.
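For readers unfamiliar with the modeling approach, the sketch below integrates a minimal stochastic population-rate ("neural-mass-like") model with short-term synaptic depression using the Euler-Maruyama method. The equations, transfer function, and all parameters are generic placeholders chosen for illustration, not the chemical Langevin equation derived in the paper.

```python
import numpy as np

def simulate_stochastic_mass(T=5.0, dt=1e-3, N=1000, w=8.0, tau=0.02,
                             tau_d=0.5, U=0.2, I_ext=0.5, seed=0):
    """Euler-Maruyama integration of a toy stochastic rate model with depression.

        tau * dA/dt = -A + f(w * x * A + I_ext) + finite-size noise
              dx/dt = (1 - x) / tau_d - U * x * A

    A is the population activity, x the fraction of available synaptic resources,
    and the noise amplitude scales as sqrt(A / N), so larger networks fluctuate
    less -- a loose stand-in for the finite-size fluctuations discussed above.
    """
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    A, x = np.zeros(steps), np.ones(steps)
    f = lambda h: 1.0 / (1.0 + np.exp(-(h - 1.0)))   # sigmoidal transfer function
    for t in range(steps - 1):
        drift_A = (-A[t] + f(w * x[t] * A[t] + I_ext)) / tau
        noise_A = np.sqrt(max(A[t], 0.0) / (N * tau)) * rng.normal()
        A[t + 1] = max(A[t] + dt * drift_A + np.sqrt(dt) * noise_A, 0.0)
        x[t + 1] = np.clip(x[t] + dt * ((1.0 - x[t]) / tau_d - U * x[t] * A[t]), 0.0, 1.0)
    return A, x

A, x = simulate_stochastic_mass()
print(f"mean activity {A.mean():.3f}, activity std {A.std():.3f}, mean resources {x.mean():.3f}")
```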
Affiliation(s)
- Bastian Pietras: Institute for Mathematics, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany; Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Valentin Schmutz: Brain Mind Institute, School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Tilo Schwalger: Institute for Mathematics, Technische Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
6. Lee JH, Tsunada J, Vijayan S, Cohen YE. Cortical circuit-based lossless neural integrator for perceptual decision-making: A computational modeling study. Front Comput Neurosci 2022; 16:979830. [DOI: 10.3389/fncom.2022.979830]
Abstract
The intrinsic uncertainty of sensory information (i.e., evidence) does not necessarily deter an observer from making a reliable decision. Indeed, uncertainty can be reduced by integrating (accumulating) incoming sensory evidence. It is widely thought that this accumulation is instantiated via recurrent rate-code neural networks. Yet, these networks do not fully explain important aspects of perceptual decision-making, such as a subject’s ability to retain accumulated evidence during temporal gaps in the sensory evidence. Here, we utilized computational models to show that cortical circuits can switch flexibly between “retention” and “integration” modes during perceptual decision-making. Further, we found that, depending on how the sensory evidence was read out, we could simulate “stepping” and “ramping” activity patterns, which may be analogous to those seen in different studies of decision-making in the primate parietal cortex. This finding may reconcile these previous empirical studies because it suggests these two activity patterns emerge from the same mechanism.
7. Brinkman BAW, Yan H, Maffei A, Park IM, Fontanini A, Wang J, La Camera G. Metastable dynamics of neural circuits and networks. Appl Phys Rev 2022; 9:011313. [PMID: 35284030] [PMCID: PMC8900181] [DOI: 10.1063/5.0062603]
Abstract
Cortical neurons emit seemingly erratic trains of action potentials or "spikes," and neural network dynamics emerge from the coordinated spiking activity within neural circuits. These rich dynamics manifest themselves in a variety of patterns, which emerge spontaneously or in response to incoming activity produced by sensory inputs. In this Review, we focus on neural dynamics that is best understood as a sequence of repeated activations of a number of discrete hidden states. These transiently occupied states are termed "metastable" and have been linked to important sensory and cognitive functions. In the rodent gustatory cortex, for instance, metastable dynamics have been associated with stimulus coding, with states of expectation, and with decision making. In frontal, parietal, and motor areas of macaques, metastable activity has been related to behavioral performance, choice behavior, task difficulty, and attention. In this article, we review the experimental evidence for neural metastable dynamics together with theoretical approaches to the study of metastable activity in neural circuits. These approaches include (i) a theoretical framework based on non-equilibrium statistical physics for network dynamics; (ii) statistical approaches to extract information about metastable states from a variety of neural signals; and (iii) recent neural network approaches, informed by experimental results, to model the emergence of metastable dynamics. By discussing these topics, we aim to provide a cohesive view of how transitions between different states of activity may provide the neural underpinnings for essential functions such as perception, memory, expectation, or decision making, and more generally, how the study of metastable neural activity may advance our understanding of neural circuit function in health and disease.
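The "sequence of repeated activations of discrete hidden states" described here can be made concrete with a small simulation. The sketch below (a generic illustration, not code from the review) generates population spike counts whose underlying firing rates switch among a few hidden states according to a Markov chain; this is the kind of metastable data that hidden-Markov-model analyses are then used to segment. State count, rates, and dwell times are assumed values.

```python
import numpy as np

def simulate_metastable_counts(n_states=3, n_neurons=20, n_bins=2000,
                               bin_size=0.05, dwell_mean=0.5, seed=1):
    """Simulate population spike counts driven by Markovian hidden-state switching.

    Each hidden state assigns every neuron a fixed firing rate; the state
    switches with probability bin_size / dwell_mean per bin, producing the
    transiently occupied ('metastable') states described in the review.
    """
    rng = np.random.default_rng(seed)
    rates = rng.uniform(2.0, 20.0, size=(n_states, n_neurons))  # Hz per state
    p_switch = bin_size / dwell_mean
    states = np.empty(n_bins, dtype=int)
    s = 0
    for t in range(n_bins):
        if rng.random() < p_switch:                 # leave the current state
            s = rng.choice([k for k in range(n_states) if k != s])
        states[t] = s
    counts = rng.poisson(rates[states] * bin_size)  # (n_bins, n_neurons) spike counts
    return counts, states

counts, states = simulate_metastable_counts()
dwells = np.diff(np.flatnonzero(np.diff(states) != 0))
print(f"{len(np.unique(states))} states visited; "
      f"mean dwell ≈ {dwells.mean() * 0.05:.2f} s over {len(dwells)} episodes")
```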
Affiliation(s)
- H. Yan: State Key Laboratory of Electroanalytical Chemistry, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, Jilin 130022, People's Republic of China
- J. Wang (corresponding author)
- G. La Camera (corresponding author)
8. Mollick JA, Hazy TE, Krueger KA, Nair A, Mackie P, Herd SA, O'Reilly RC. A systems-neuroscience model of phasic dopamine. Psychol Rev 2020; 127:972-1021. [PMID: 32525345] [PMCID: PMC8453660] [DOI: 10.1037/rev0000199]
Abstract
We describe a neurobiologically informed computational model of phasic dopamine signaling to account for a wide range of findings, including many considered inconsistent with the simple reward prediction error (RPE) formalism. The central feature of this PVLV framework is a distinction between a primary value (PV) system for anticipating primary rewards (Unconditioned Stimuli [USs]), and a learned value (LV) system for learning about stimuli associated with such rewards (CSs). The LV system represents the amygdala, which drives phasic bursting in midbrain dopamine areas, while the PV system represents the ventral striatum, which drives shunting inhibition of dopamine for expected USs (via direct inhibitory projections) and phasic pausing for expected USs (via the lateral habenula). Our model accounts for data supporting the separability of these systems, including individual differences in CS-based (sign-tracking) versus US-based learning (goal-tracking). Both systems use competing opponent-processing pathways representing evidence for and against specific USs, which can explain data dissociating the processes involved in acquisition versus extinction conditioning. Further, opponent processing proved critical in accounting for the full range of conditioned inhibition phenomena, and the closely related paradigm of second-order conditioning. Finally, we show how additional separable pathways representing aversive USs, largely mirroring those for appetitive USs, also have important differences from the positive valence case, allowing the model to account for several important phenomena in aversive conditioning. Overall, accounting for all of these phenomena strongly constrains the model, thus providing a well-validated framework for understanding phasic dopamine signaling.
Affiliation(s)
- Jessica A Mollick: Department of Psychology and Neuroscience, University of Colorado Boulder
- Thomas E Hazy: Department of Psychology and Neuroscience, University of Colorado Boulder
- Kai A Krueger: Department of Psychology and Neuroscience, University of Colorado Boulder
- Ananta Nair: Department of Psychology and Neuroscience, University of Colorado Boulder
- Prescott Mackie: Department of Psychology and Neuroscience, University of Colorado Boulder
- Seth A Herd: Department of Psychology and Neuroscience, University of Colorado Boulder
- Randall C O'Reilly: Department of Psychology and Neuroscience, University of Colorado Boulder
9. Marcos E, Tsujimoto S, Mattia M, Genovesio A. A Network Activity Reconfiguration Underlies the Transition from Goal to Action. Cell Rep 2020; 27:2909-2920.e4. [PMID: 31167137] [DOI: 10.1016/j.celrep.2019.05.021]
Abstract
Neurons in prefrontal cortex (PF) represent mnemonic information about current goals until the action can be selected and executed. However, the neuronal dynamics underlying the transition from goal into specific actions are poorly understood. Here, we show that the goal-coding PF network is dynamically reconfigured from mnemonic to action selection states and that such reconfiguration is mediated by cell assemblies with heterogeneous excitability. We recorded neuronal activity from PF while monkeys selected their actions on the basis of memorized goals. Many PF neurons encoded the goal, but only a minority of them did so across both memory retention and action selection stages. Interestingly, about half of this minority of neurons switched their goal preference across the goal-action transition. Our computational model led us to propose a PF network composed of heterogeneous cell assemblies with single-state and bistable local dynamics able to produce both dynamical stability and input susceptibility simultaneously.
Affiliation(s)
- Encarni Marcos: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy; Instituto de Neurociencias de Alicante, Consejo Superior de Investigaciones Científicas-Universidad Miguel Hernández de Elche, San Juan de Alicante, Spain
- Satoshi Tsujimoto: Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan; The Nielsen Company Pte. Ltd., Singapore, Singapore
- Aldo Genovesio: Department of Physiology and Pharmacology, Sapienza University of Rome, Rome, Italy
10. Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLoS Comput Biol 2020; 16:e1007655. [PMID: 32053594] [PMCID: PMC7043794] [DOI: 10.1371/journal.pcbi.1007655]
Abstract
Following a stimulus, the neural response typically strongly varies in time and across neurons before settling to a steady-state. While classical population coding theory disregards the temporal dimension, recent works have argued that trajectories of transient activity can be particularly informative about stimulus identity and may form the basis of computations through dynamics. Yet the dynamical mechanisms needed to generate a population code based on transient trajectories have not been fully elucidated. Here we examine transient coding in a broad class of high-dimensional linear networks of recurrently connected units. We start by reviewing a well-known result that leads to a distinction between two classes of networks: networks in which all inputs lead to weak, decaying transients, and networks in which specific inputs elicit amplified transient responses and are mapped onto output states during the dynamics. These two classes are simply distinguished based on the spectrum of the symmetric part of the connectivity matrix. For the second class of networks, which is a sub-class of non-normal networks, we provide a procedure to identify transiently amplified inputs and the corresponding readouts. We first apply these results to standard randomly-connected and two-population networks. We then build minimal, low-rank networks that robustly implement trajectories mapping a specific input onto a specific orthogonal output state. Finally, we demonstrate that the capacity of the obtained networks increases proportionally with their size.
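The criterion mentioned here, the spectrum of the symmetric part of the connectivity matrix, is easy to check numerically. The sketch below is a generic illustration under the common linear rate model dx/dt = -x + Wx + input: it tests whether a given connectivity matrix W can transiently amplify some input, i.e., whether the largest eigenvalue of (W + W^T)/2 exceeds 1 while the network remains stable, and contrasts a near-normal random matrix with a strongly non-normal feedforward one. The example matrices are assumptions, not those analyzed in the paper.

```python
import numpy as np

def can_amplify(W):
    """Return (amplifying, lambda_max) for the linear network dx/dt = -x + W x.

    The network transiently amplifies some input exactly when the largest
    eigenvalue of the symmetric part (W + W.T) / 2 exceeds 1, provided the
    network itself is stable (all eigenvalues of W have real part below 1).
    """
    sym_eig_max = np.linalg.eigvalsh((W + W.T) / 2).max()
    stable = np.linalg.eigvals(W).real.max() < 1
    return stable and sym_eig_max > 1, sym_eig_max

rng = np.random.default_rng(0)
N = 200

# (a) random Gaussian connectivity, scaled to be stable and close to normal
W_rand = rng.normal(0, 0.5 / np.sqrt(N), size=(N, N))

# (b) strongly non-normal connectivity: a stable feedforward chain with large weights
W_ff = np.diag(3.0 * np.ones(N - 1), k=-1)

for name, W in [("random", W_rand), ("feedforward", W_ff)]:
    amp, lam = can_amplify(W)
    print(f"{name:11s}: lambda_max of symmetric part = {lam:5.2f} -> "
          f"{'amplifying' if amp else 'non-amplifying'}")
```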
Affiliation(s)
- Giulio Bondanelli: Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic: Laboratoire de Neurosciences Cognitives et Computationelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
11. A model for the peak-interval task based on neural oscillation-delimited states. Behav Processes 2019; 168:103941. [PMID: 31550668] [DOI: 10.1016/j.beproc.2019.103941]
Abstract
Specific mechanisms underlying how the brain keeps track of time are largely unknown. Several existing computational models of timing reproduce behavioral results obtained with experimental psychophysical tasks, but only a few tackle the underlying biological mechanisms, such as the synchronized neural activity that occurs throughout brain areas. In this paper, we introduce a model for the peak-interval task based on neuronal network properties. We consider that Local Field Potential (LFP) oscillation cycles specify a sequence of states, represented as neuronal ensembles. Repeated presentation of time intervals during training reinforces the connections of specific ensembles to downstream networks - sets of neurons connected to the sequence of states. Later, during the peak-interval procedure, these downstream networks are reactivated by previously experienced neuronal ensembles, triggering behavioral responses at the learned time intervals. The model reproduces experimental response patterns from individual rats in the peak-interval procedure, satisfying relevant properties such as the Weber law. Finally, we provide a biological interpretation of the parameters of the model.
12. Hardy NF, Goudar V, Romero-Sosa JL, Buonomano DV. A model of temporal scaling correctly predicts that motor timing improves with speed. Nat Commun 2018; 9:4732. [PMID: 30413692] [PMCID: PMC6226482] [DOI: 10.1038/s41467-018-07161-6]
Abstract
Timing is fundamental to complex motor behaviors: from tying a knot to playing the piano. A general feature of motor timing is temporal scaling: the ability to produce motor patterns at different speeds. One theory of temporal processing proposes that the brain encodes time in dynamic patterns of neural activity (population clocks). Here we first examine whether recurrent neural network (RNN) models can account for temporal scaling. Appropriately trained RNNs exhibit temporal scaling over a range similar to that of humans and capture a signature of motor timing, Weber's law, but predict that temporal precision improves at faster speeds. Human psychophysics experiments confirm this prediction: the variability of responses in absolute time is lower at faster speeds. These results establish that RNNs can account for temporal scaling and suggest a novel psychophysical principle: the Weber-Speed effect.
Affiliation(s)
- Nicholas F Hardy: Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, 90095, USA; Departments of Neurobiology, University of California Los Angeles, Los Angeles, CA, 90095, USA
- Vishwa Goudar: Departments of Neurobiology, University of California Los Angeles, Los Angeles, CA, 90095, USA
- Juan L Romero-Sosa: Departments of Neurobiology, University of California Los Angeles, Los Angeles, CA, 90095, USA
- Dean V Buonomano: Neuroscience Interdepartmental Program, University of California Los Angeles, Los Angeles, CA, 90095, USA; Departments of Neurobiology, University of California Los Angeles, Los Angeles, CA, 90095, USA; Departments of Psychology, University of California Los Angeles, Los Angeles, CA, 90095, USA
13. Farashahi S, Ting CC, Kao CH, Wu SW, Soltani A. Dynamic combination of sensory and reward information under time pressure. PLoS Comput Biol 2018; 14:e1006070. [PMID: 29584717] [PMCID: PMC5889192] [DOI: 10.1371/journal.pcbi.1006070]
Abstract
When making choices, collecting more information is beneficial but comes at the cost of sacrificing time that could be allocated to making other potentially rewarding decisions. To investigate how the brain balances these costs and benefits, we conducted a series of novel experiments in humans and simulated various computational models. Under six levels of time pressure, subjects made decisions either by integrating sensory information over time or by dynamically combining sensory and reward information over time. We found that during sensory integration, time pressure reduced performance as the deadline approached, and choice was more strongly influenced by the most recent sensory evidence. By fitting performance and reaction time with various models we found that our experimental results are more compatible with leaky integration of sensory information with an urgency signal or a decision process based on stochastic transitions between discrete states modulated by an urgency signal. When combining sensory and reward information, subjects spent less time on integration than optimally prescribed when reward decreased slowly over time, and the most recent evidence did not have the maximal influence on choice. The suboptimal pattern of reaction time was partially mitigated in an equivalent control experiment in which sensory integration over time was not required, indicating that the suboptimal response time was influenced by the perception of imperfect sensory integration. Meanwhile, during combination of sensory and reward information, performance did not drop as the deadline approached, and response time was not different between correct and incorrect trials. These results indicate a decision process different from what is involved in the integration of sensory information over time. Together, our results not only reveal limitations in sensory integration over time but also illustrate how these limitations influence dynamic combination of sensory and reward information.

Collecting more information seems beneficial for making most of the decisions we face in daily life. However, the benefit of collecting more information critically depends on how well we can integrate that information over time and how costly time is. Here we investigate how humans determine the amount of time to spend on collecting sensory information in order to make a perceptual decision when the reward for making a correct choice decreases over time. We show that sensory integration over time is not perfect and further deteriorates with time pressure. However, we also find evidence that when the cost of time has to be considered, decision processes are influenced by limitations in sensory integration.
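As background for the model comparison described in this abstract, the sketch below implements a generic leaky evidence accumulator whose decision bound is effectively lowered by a growing urgency signal, so that late in the trial even weak evidence triggers a choice. It is a textbook-style illustration of that model class under assumed parameters, not the authors' fitted model.

```python
import numpy as np

def leaky_urgency_trial(drift=0.4, leak=2.0, noise=1.0, bound=1.5,
                        urgency_rate=0.8, dt=0.01, t_max=3.0, rng=None):
    """Simulate one trial of leaky evidence accumulation with an urgency signal.

        dx = (drift - leak * x) dt + noise * sqrt(dt) * N(0, 1)
        decide when |x| * (1 + urgency_rate * t) crosses the bound.

    Returns (choice, reaction_time); choice is +1/-1, or 0 if the deadline hits.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        x += dt * (drift - leak * x) + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if abs(x) * (1.0 + urgency_rate * t) >= bound:
            return (1 if x > 0 else -1), t
    return 0, t_max

rng = np.random.default_rng(2)
results = [leaky_urgency_trial(rng=rng) for _ in range(2000)]
choices = np.array([r[0] for r in results])
rts = np.array([r[1] for r in results])
print(f"accuracy (choice = +1): {(choices == 1).mean():.2f}, "
      f"mean RT: {rts[choices != 0].mean():.2f} s, lapses: {(choices == 0).mean():.2%}")
```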
Affiliation(s)
- Shiva Farashahi: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States of America
- Chih-Chung Ting: CREED, Amsterdam School of Economics, Universiteit van Amsterdam, Amsterdam, the Netherlands
- Chang-Hao Kao: Department of Psychology, University of Pennsylvania, Philadelphia, PA, United States of America
- Shih-Wei Wu: Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan
- Alireza Soltani: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, United States of America
14. Dechery JB, MacLean JN. Emergent cortical circuit dynamics contain dense, interwoven ensembles of spike sequences. J Neurophysiol 2017; 118:1914-1925. [PMID: 28724786] [DOI: 10.1152/jn.00394.2017]
Abstract
Temporal codes are theoretically powerful encoding schemes, but their precise form in the neocortex remains unknown in part because of the large number of possible codes and the difficulty in disambiguating informative spikes from statistical noise. A biologically plausible and computationally powerful temporal coding scheme is the Hebbian assembly phase sequence (APS), which predicts reliable propagation of spikes between functionally related assemblies of neurons. Here, we sought to measure the inherent capacity of neocortical networks to produce reliable sequences of spikes, as would be predicted by an APS code. To record microcircuit activity, the scale at which computation is implemented, we used two-photon calcium imaging to densely sample spontaneous activity in murine neocortical networks ex vivo. We show that the population spike histogram is sufficient to produce a spatiotemporal progression of activity across the population. To more comprehensively evaluate the capacity for sequential spiking that cannot be explained by the overall population spiking, we identify statistically significant spike sequences. We found a large repertoire of sequence spikes that collectively comprise the majority of spiking in the circuit. Sequences manifest probabilistically and share neuron membership, resulting in unique ensembles of interwoven sequences characterizing individual spatiotemporal progressions of activity. Distillation of population dynamics into its constituent sequences provides a way to capture trial-to-trial variability and may prove to be a powerful decoding substrate in vivo. Informed by these data, we suggest that the Hebbian APS be reformulated as interwoven sequences with flexible assembly membership due to shared overlapping neurons.

NEW & NOTEWORTHY: Neocortical computation occurs largely within microcircuits comprised of individual neurons and their connections within small volumes (<500 μm³). We found evidence for a long-postulated temporal code, the Hebbian assembly phase sequence, by identifying repeated and co-occurring sequences of spikes. Variance in population activity across trials was explained in part by the ensemble of active sequences. The presence of interwoven sequences suggests that neuronal assembly structure can be variable and is determined by previous activity.
Affiliation(s)
- Joseph B Dechery: Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois
- Jason N MacLean: Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois; Department of Neurobiology, University of Chicago, Illinois
15. Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach. Psychon Bull Rev 2017; 25:302-321. [DOI: 10.3758/s13423-017-1280-1]
16. Gu S, Betzel RF, Mattar MG, Cieslak M, Delio PR, Grafton ST, Pasqualetti F, Bassett DS. Optimal trajectories of brain state transitions. Neuroimage 2017; 148:305-317. [PMID: 28088484] [PMCID: PMC5489344] [DOI: 10.1016/j.neuroimage.2017.01.003]
Abstract
The complexity of neural dynamics stems in part from the complexity of the underlying anatomy. Yet how white matter structure constrains how the brain transitions from one cognitive state to another remains unknown. Here we address this question by drawing on recent advances in network control theory to model the underlying mechanisms of brain state transitions as elicited by the collective control of region sets. We find that previously identified attention and executive control systems are poised to affect a broad array of state transitions that cannot easily be classified by traditional engineering-based notions of control. This theoretical versatility comes with a vulnerability to injury. In patients with mild traumatic brain injury, we observe a loss of specificity in putative control processes, suggesting greater susceptibility to neurophysiological noise. These results offer fundamental insights into the mechanisms driving brain state transitions in healthy cognition and their alteration following injury.
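For orientation, network control theory of the kind used here typically treats brain activity as a linear system dx/dt = Ax + Bu, with A the structural connectivity and B selecting the control regions. The sketch below computes the classical minimum-energy input that steers such a system between two states via the finite-horizon controllability Gramian. It is a generic illustration on random matrices with assumed sizes and horizon, not the paper's pipeline or data.

```python
import numpy as np
from scipy.linalg import expm

def min_energy_control(A, B, x0, xT, T=1.0, n_steps=200):
    """Minimum-energy control for dx/dt = A x + B u steering x0 -> xT in time T.

    Uses u(t) = B^T exp(A^T (T - t)) W^{-1} (xT - exp(A T) x0), where
    W = int_0^T exp(A s) B B^T exp(A^T s) ds is the controllability Gramian,
    approximated here with a simple Riemann sum. Returns the control energy.
    """
    ts, dt = np.linspace(0.0, T, n_steps), T / (n_steps - 1)
    W = sum(expm(A * s) @ B @ B.T @ expm(A.T * s) for s in ts) * dt
    v = np.linalg.solve(W, xT - expm(A * T) @ x0)
    u = np.array([B.T @ expm(A.T * (T - t)) @ v for t in ts])   # control time course
    return float(np.sum(u ** 2) * dt)                           # energy = int ||u||^2 dt

rng = np.random.default_rng(3)
n, m = 8, 3                                     # 8 brain regions, 3 control regions
A = rng.normal(0, 1 / np.sqrt(n), (n, n))
A -= (np.linalg.eigvals(A).real.max() + 0.1) * np.eye(n)   # shift spectrum so A is stable
B = np.zeros((n, m))
B[:m, :m] = np.eye(m)                           # inputs enter the first three regions
x0, xT = np.zeros(n), rng.normal(size=n)        # initial and target brain states
print(f"minimum control energy: {min_energy_control(A, B, x0, xT):.2f}")
```

In practice the Gramian can be badly conditioned when few regions are controlled, which is one reason empirical studies report energies on regularized or normalized systems rather than raw solves like this one.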
Affiliation(s)
- Shi Gu: Applied Mathematics and Computational Science, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Richard F Betzel: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcelo G Mattar: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Matthew Cieslak: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA
- Philip R Delio: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA; Neurology Associates of Santa Barbara, Santa Barbara, CA 93105, USA
- Scott T Grafton: Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA
- Fabio Pasqualetti: Department of Mechanical Engineering, University of California, Riverside, CA 92521, USA
- Danielle S Bassett: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
17. Kato A, Morita K. Forgetting in Reinforcement Learning Links Sustained Dopamine Signals to Motivation. PLoS Comput Biol 2016; 12:e1005145. [PMID: 27736881] [PMCID: PMC5063413] [DOI: 10.1371/journal.pcbi.1005145]
Abstract
It has been suggested that dopamine (DA) represents reward-prediction-error (RPE) defined in reinforcement learning and therefore DA responds to unpredicted but not predicted reward. However, recent studies have found DA response sustained towards predictable reward in tasks involving self-paced behavior, and suggested that this response represents a motivational signal. We have previously shown that RPE can be sustained if there is decay/forgetting of learned-values, which can be implemented as decay of synaptic strengths storing learned-values. This account, however, did not explain the suggested link between tonic/sustained DA and motivation. In the present work, we explored the motivational effects of the value-decay in self-paced approach behavior, modeled as a series of ‘Go’ or ‘No-Go’ selections towards a goal. Through simulations, we found that the value-decay can enhance motivation, specifically, facilitate fast goal-reaching, albeit counterintuitively. Mathematical analyses revealed that underlying potential mechanisms are twofold: (1) decay-induced sustained RPE creates a gradient of ‘Go’ values towards a goal, and (2) value-contrasts between ‘Go’ and ‘No-Go’ are generated because while chosen values are continually updated, unchosen values simply decay. Our model provides potential explanations for the key experimental findings that suggest DA's roles in motivation: (i) slowdown of behavior by post-training blockade of DA signaling, (ii) observations that DA blockade severely impairs effortful actions to obtain rewards while largely sparing seeking of easily obtainable rewards, and (iii) relationships between the reward amount, the level of motivation reflected in the speed of behavior, and the average level of DA. These results indicate that reinforcement learning with value-decay, or forgetting, provides a parsimonious mechanistic account for DA's roles in value-learning and motivation. Our results also suggest that when biological systems for value-learning are active even though learning has apparently converged, the systems might be in a state of dynamic equilibrium, where learning and forgetting are balanced.

Dopamine (DA) has been suggested to have two reward-related roles: (1) representing reward-prediction-error (RPE), and (2) providing motivational drive. Role(1) is based on the physiological results that DA responds to unpredicted but not predicted reward, whereas role(2) is supported by the pharmacological results that blockade of DA signaling causes motivational impairments such as slowdown of self-paced behavior. So far, these two roles are considered to be played by two different temporal patterns of DA signals: role(1) by phasic signals and role(2) by tonic/sustained signals. However, recent studies have found sustained DA signals with features indicative of both roles (1) and (2), complicating this picture. Meanwhile, whereas synaptic/circuit mechanisms for role(1), i.e., how RPE is calculated in the upstream of DA neurons and how RPE-dependent update of learned-values occurs through DA-dependent synaptic plasticity, have now become clarified, mechanisms for role(2) remain unclear. In this work, we modeled self-paced behavior by a series of ‘Go’ or ‘No-Go’ selections in the framework of reinforcement learning, assuming DA's role(1), and demonstrated that incorporation of decay/forgetting of learned-values, which is presumably implemented as decay of synaptic strengths storing learned-values, provides a potential unified mechanistic account for DA's two roles, together with its various temporal patterns.
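To illustrate the core mechanism discussed here, decay/forgetting of learned values producing a sustained prediction error, the sketch below runs tabular temporal-difference learning on a simple linear track of 'Go' steps toward a terminal reward, with every stored value decaying slightly toward zero on each step. It is a minimal illustration under assumed parameters, not the authors' model.

```python
import numpy as np

def td_with_forgetting(n_states=8, episodes=2000, alpha=0.3, gamma=0.97,
                       decay=0.01, reward=1.0):
    """Tabular TD(0) on a linear track (state 0 -> ... -> goal) with value decay.

    After each update every stored value decays toward zero by a factor
    (1 - decay), mimicking forgetting of learned values. With decay > 0 the
    values never fully converge, so a positive prediction error (the model's
    stand-in for a DA signal) is sustained even for a fully predictable reward.
    """
    V = np.zeros(n_states + 1)                    # V[n_states] is the terminal state
    rpe_trace = []
    for _ in range(episodes):
        for s in range(n_states):
            r = reward if s == n_states - 1 else 0.0
            rpe = r + gamma * V[s + 1] - V[s]     # TD error at each 'Go' step
            V[s] += alpha * rpe
            rpe_trace.append(rpe)
            V *= (1.0 - decay)                    # forgetting: all values decay
    return V, np.array(rpe_trace)

for decay in (0.0, 0.02):
    V, rpe = td_with_forgetting(decay=decay)
    late = rpe[-8:]                               # TD errors along the final episode
    print(f"decay={decay:4.2f}: late-episode TD errors "
          f"{np.array2string(late, precision=2)}")
```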
Affiliation(s)
- Ayaka Kato: Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
- Kenji Morita: Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
18. Corticostriatal circuit mechanisms of value-based action selection: Implementation of reinforcement learning algorithms and beyond. Behav Brain Res 2016; 311:110-121. [DOI: 10.1016/j.bbr.2016.05.017]
19. Raz G, Shpigelman L, Jacob Y, Gonen T, Benjamini Y, Hendler T. Psychophysiological whole-brain network clustering based on connectivity dynamics analysis in naturalistic conditions. Hum Brain Mapp 2016; 37:4654-4672. [PMID: 27477592] [DOI: 10.1002/hbm.23335]
Abstract
We introduce a novel method for delineating context-dependent functional brain networks whose connectivity dynamics are synchronized with the occurrence of a specific psychophysiological process of interest. In this method of context-related network dynamics analysis (CRNDA), a continuous psychophysiological index serves as a reference for clustering the whole-brain into functional networks. We applied CRNDA to fMRI data recorded during the viewing of a sadness-inducing film clip. The method reliably demarcated networks in which temporal patterns of connectivity related to the time series of reported emotional intensity. Our work successfully replicated the link between network connectivity and emotion rating in an independent sample group for seven of the networks. The demarcated networks have clear common functional denominators. Three of these networks overlap with distinct empathy-related networks, previously identified in distinct sets of studies. The other networks are related to sensorimotor processing, language, attention, and working memory. The results indicate that CRNDA, a data-driven method for network clustering that is sensitive to transient connectivity patterns, can productively and reliably demarcate networks that follow psychologically meaningful processes.
Affiliation(s)
- Gal Raz: Tel Aviv Center for Brain Functions, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; The Steve Tisch School of Film and Television, Tel Aviv University, Tel Aviv, Israel; IBM Research, Haifa, Israel
- Yael Jacob: Tel Aviv Center for Brain Functions, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Tal Gonen: Tel Aviv Center for Brain Functions, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Yoav Benjamini: Department of Statistics and Operations Research, Tel Aviv University, Tel Aviv, Israel
- Talma Hendler: Tel Aviv Center for Brain Functions, Wohl Institute for Advanced Imaging, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
20. Alegre-Cortés J, Soto-Sánchez C, Pizá ÁG, Albarracín AL, Farfán FD, Felice CJ, Fernández E. Time-frequency analysis of neuronal populations with instantaneous resolution based on noise-assisted multivariate empirical mode decomposition. J Neurosci Methods 2016; 267:35-44. [PMID: 27044801] [DOI: 10.1016/j.jneumeth.2016.03.018]
Abstract
BACKGROUND: Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuronal responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural populations under sensory stimulation have usually been analyzed with linear approaches.
NEW METHOD: In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven, template-free algorithm, plus the Hilbert transform as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution.
RESULTS: The proposed approach was able to extract oscillatory information from neurophysiological data (deep vibrissal nerve and visual cortex multiunit recordings) that was not evidenced using linear approaches with fixed bases such as Fourier analysis.
COMPARISON WITH EXISTING METHODS: Texture discrimination performance increased when NA-MEMD plus the Hilbert transform was implemented, compared to linear techniques, and cortical oscillatory population activity was analyzed with greater time-frequency resolution.
CONCLUSIONS: Noise-Assisted Multivariate Empirical Mode Decomposition plus the Hilbert transform is an improved method for analyzing neuronal population oscillatory dynamics, overcoming the linearity and stationarity assumptions of classical methods.
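The Hilbert-transform step of this pipeline is easy to reproduce with standard tools. The sketch below computes the analytic signal, instantaneous amplitude, and instantaneous frequency of a frequency-modulated test oscillation with scipy.signal.hilbert; the empirical mode decomposition stage itself is omitted, and the test signal is an arbitrary stand-in for one intrinsic mode function.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)

# frequency-modulated test oscillation: 10 Hz carrier, +/- 2 Hz slow modulation
inst_freq_true = 10.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)
phase = 2 * np.pi * np.cumsum(inst_freq_true) / fs
signal = np.sin(phase) + 0.05 * np.random.default_rng(0).normal(size=t.size)

analytic = hilbert(signal)                    # analytic signal via Hilbert transform
amplitude = np.abs(analytic)                  # instantaneous amplitude envelope
inst_phase = np.unwrap(np.angle(analytic))    # unwrapped instantaneous phase
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)

print(f"estimated IF: mean {inst_freq.mean():.2f} Hz over a true range "
      f"[{inst_freq_true.min():.1f}, {inst_freq_true.max():.1f}] Hz; "
      f"mean envelope {amplitude.mean():.2f}")
```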
Affiliation(s)
- J Alegre-Cortés: Bioengineering Institute, Miguel Hernández University (UMH), Alicante, Spain
- C Soto-Sánchez: Bioengineering Institute, Miguel Hernández University (UMH), Alicante, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Zaragoza, Spain
- Á G Pizá: Laboratorio de Medios e Interfases (LAMEIN), Departamento de Bioingeniería, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán, Tucumán, Argentina; Instituto Superior de Investigaciones Biológicas (INSIBIO), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Tucumán, Argentina
- A L Albarracín: Laboratorio de Medios e Interfases (LAMEIN), Departamento de Bioingeniería, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán, Tucumán, Argentina; Instituto Superior de Investigaciones Biológicas (INSIBIO), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Tucumán, Argentina
- F D Farfán: Laboratorio de Medios e Interfases (LAMEIN), Departamento de Bioingeniería, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán, Tucumán, Argentina; Instituto Superior de Investigaciones Biológicas (INSIBIO), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Tucumán, Argentina
- C J Felice: Laboratorio de Medios e Interfases (LAMEIN), Departamento de Bioingeniería, Facultad de Ciencias Exactas y Tecnología, Universidad Nacional de Tucumán, Tucumán, Argentina; Instituto Superior de Investigaciones Biológicas (INSIBIO), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Tucumán, Argentina
- E Fernández: Bioengineering Institute, Miguel Hernández University (UMH), Alicante, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Zaragoza, Spain
21. Cao R, Pastukhov A, Mattia M, Braun J. Collective Activity of Many Bistable Assemblies Reproduces Characteristic Dynamics of Multistable Perception. J Neurosci 2016; 36:6957-72. [PMID: 27358454] [PMCID: PMC6604901] [DOI: 10.1523/jneurosci.4626-15.2016]
Abstract
The timing of perceptual decisions depends on both deterministic and stochastic factors, as the gradual accumulation of sensory evidence (deterministic) is contaminated by sensory and/or internal noise (stochastic). When human observers view multistable visual displays, successive episodes of stochastic accumulation culminate in repeated reversals of visual appearance. Treating reversal timing as a "first-passage time" problem, we ask how the observed timing densities constrain the underlying stochastic accumulation. Importantly, mean reversal times (i.e., deterministic factors) differ enormously between displays/observers/stimulation levels, whereas the variance and skewness of reversal times (i.e., stochastic factors) keep characteristic proportions of the mean. What sort of stochastic process could reproduce this highly consistent "scaling property"? Here we show that the collective activity of a finite population of bistable units (i.e., a generalized Ehrenfest process) quantitatively reproduces all aspects of the scaling property of multistable phenomena, in contrast to other processes under consideration (Poisson, Wiener, or Ornstein-Uhlenbeck process). The postulated units express the spontaneous dynamics of attractor assemblies transitioning between distinct activity states. Plausible candidates are cortical columns, or clusters of columns, as they are preferentially connected and spontaneously explore a restricted repertoire of activity states. Our findings suggest that perceptual representations are granular, probabilistic, and operate far from equilibrium, thereby offering a suitable substrate for statistical inference.

SIGNIFICANCE STATEMENT: Spontaneous reversals of high-level perception, so-called multistable perception, conform to highly consistent and characteristic statistics, constraining plausible neural representations. We show that the observed perceptual dynamics would be reproduced quantitatively by a finite population of distinct neural assemblies, each with locally bistable activity, operating far from the collective equilibrium (a generalized Ehrenfest process). Such a representation would be consistent with the intrinsic stochastic dynamics of neocortical activity, which is dominated by preferentially connected assemblies, such as cortical columns or clusters of columns. We predict that local neuron assemblies will express bistable dynamics, with spontaneous active-inactive transitions, whenever they contribute to high-level perception.
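To give a feel for this class of process, the sketch below simulates a finite pool of independent bistable units that flip stochastically between on and off, and measures the first-passage time at which the number of active units reaches a threshold. The parameters and the threshold rule are illustrative assumptions rather than the fitted model; the point is only that the mean, coefficient of variation, and skewness of passage times can be read off such a simulation and compared across conditions.

```python
import numpy as np

def first_passage_times(n_units=50, p_on=0.02, p_off=0.04, threshold=24,
                        n_trials=500, max_steps=50000, seed=4):
    """First-passage times of collective activity in a pool of bistable units.

    Each unit switches off->on with probability p_on and on->off with p_off per
    time step (a discrete-time, Ehrenfest-like birth-death process). A 'reversal'
    is scored when the number of active units first reaches `threshold`.
    """
    rng = np.random.default_rng(seed)
    times = np.empty(n_trials)
    for trial in range(n_trials):
        n_active, step = 0, 0
        while n_active < threshold and step < max_steps:
            switched_on = rng.binomial(n_units - n_active, p_on)
            switched_off = rng.binomial(n_active, p_off)
            n_active += switched_on - switched_off
            step += 1
        times[trial] = step
    return times

t = first_passage_times()
mean, std = t.mean(), t.std()
skew = np.mean(((t - mean) / std) ** 3)
print(f"mean = {mean:.0f} steps, CV = {std / mean:.2f}, skewness = {skew:.2f}")
```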
Affiliation(s)
- Robin Cao: Institute of Biology, Otto-von-Guericke University, 39120 Magdeburg, Germany; Istituto Superiore di Sanità, 00161 Rome, Italy
- Jochen Braun: Institute of Biology, Otto-von-Guericke University, 39120 Magdeburg, Germany; Center for Behavioral Brain Sciences, 39120 Magdeburg, Germany
22. Hass J, Durstewitz D. Time at the center, or time at the side? Assessing current models of time perception. Curr Opin Behav Sci 2016. [DOI: 10.1016/j.cobeha.2016.02.030]
23. Yada Y, Kanzaki R, Takahashi H. State-Dependent Propagation of Neuronal Sub-Population in Spontaneous Synchronized Bursts. Front Syst Neurosci 2016; 10:28. [PMID: 27065820] [PMCID: PMC4815764] [DOI: 10.3389/fnsys.2016.00028]
Abstract
Repeating stable spatiotemporal patterns emerge in synchronized spontaneous activity in neuronal networks. The repertoire of such patterns can serve as memory, or a reservoir of information, in a neuronal network; moreover, the variety of patterns may represent the network memory capacity. However, a neuronal substrate for producing a repertoire of patterns in synchronization remains elusive. We herein hypothesize that state-dependent propagation of a neuronal sub-population is the key mechanism. By combining high-resolution measurement with a 4096-channel complementary metal-oxide semiconductor (CMOS) microelectrode array (MEA) and dimensionality reduction with non-negative matrix factorization (NMF), we investigated synchronized bursts of dissociated rat cortical neurons at approximately 3 weeks in vitro. We found that bursts had a repertoire of repeating spatiotemporal patterns, and different patterns shared a partially similar sequence of sub-populations, supporting the idea of a sequential structure of neuronal sub-populations during synchronized activity. We additionally found that similar spatiotemporal patterns tended to appear successively and periodically, suggesting a state-dependent fluctuation of propagation, which has been overlooked in the existing literature. Thus, such a state-dependent property within the sequential sub-population structure is a plausible neural substrate for performing a repertoire of stable patterns during synchronized activity.
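To make the dimensionality-reduction step concrete, the sketch below factorizes a synthetic channels-by-time spike-count matrix with non-negative matrix factorization, recovering spatial sub-population footprints and their activation time courses. The synthetic data, the module count, and the use of scikit-learn's NMF are stand-in assumptions, not the authors' 4096-channel pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Synthetic stand-in for array-wide burst data: channels x time bins of non-negative
# spike counts generated by a few latent spatial modules with staggered activations.
n_channels, n_bins, n_modules = 64, 400, 4
spatial = rng.gamma(2.0, 1.0, size=(n_channels, n_modules))   # module footprints
temporal = np.zeros((n_modules, n_bins))
for m in range(n_modules):
    for start in range(20 * m, n_bins - 30, 120):              # staggered 30-bin bumps
        temporal[m, start:start + 30] += np.hanning(30)
counts = rng.poisson(spatial @ temporal)

# NMF: counts ~ W @ H, with W the spatial footprints and H the activation time courses.
model = NMF(n_components=n_modules, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(counts)
H = model.components_

# Ordering modules by their activation peaks exposes the sequential structure.
order = np.argsort(H.argmax(axis=1))
print("module activation peak bins (sorted):", H[order].argmax(axis=1))
print("reconstruction error:", round(model.reconstruction_err_, 2))
```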
Collapse
Affiliation(s)
- Yuichiro Yada
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
| | - Ryohei Kanzaki
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
| | - Hirokazu Takahashi
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan; Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
24
|
Hedrick MS, Moon IJ, Woo J, Won JH. Effects of Physiological Internal Noise on Model Predictions of Concurrent Vowel Identification for Normal-Hearing Listeners. PLoS One 2016; 11:e0149128. [PMID: 26866811 PMCID: PMC4750862 DOI: 10.1371/journal.pone.0149128] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2015] [Accepted: 01/27/2016] [Indexed: 11/18/2022] Open
Abstract
Previous studies have shown that concurrent vowel identification improves with increasing temporal onset asynchrony of the vowels, even if the vowels have the same fundamental frequency. The current study investigated the possible underlying neural processing involved in concurrent vowel perception. The individual vowel stimuli from a previously published study were used as inputs for a phenomenological auditory-nerve (AN) model. Spectrotemporal representations of simulated neural excitation patterns were constructed (i.e., neurograms) and then matched quantitatively with the neurograms of the single vowels using the Neurogram Similarity Index Measure (NSIM). A novel computational decision model was used to predict concurrent vowel identification. To facilitate optimum matches between the model predictions and the behavioral human data, internal noise was added at either neurogram generation or neurogram matching using the NSIM procedure. The best fit to the behavioral data was achieved with a signal-to-noise ratio (SNR) of 8 dB for internal noise added at the neurogram but with a much smaller amount of internal noise (SNR of 60 dB) for internal noise added at the level of the NSIM computations. The results suggest that accurate modeling of concurrent vowel data from listeners with normal hearing may partly depend on internal noise and where internal noise is hypothesized to occur during the concurrent vowel identification process.
Collapse
Affiliation(s)
- Mark S. Hedrick
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States of America
| | - Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University, School of Medicine, Seoul, Korea
| | - Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, Korea
- * E-mail:
| | - Jong Ho Won
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States of America
| |
Collapse
|
25
|
Wieland S, Bernardi D, Schwalger T, Lindner B. Slow fluctuations in recurrent networks of spiking neurons. PHYSICAL REVIEW. E, STATISTICAL, NONLINEAR, AND SOFT MATTER PHYSICS 2015; 92:040901. [PMID: 26565154 DOI: 10.1103/physreve.92.040901] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2015] [Indexed: 06/05/2023]
Abstract
Networks of fast nonlinear elements may display slow fluctuations if interactions are strong. We find a transition in the long-term variability of a sparse recurrent network of perfect integrate-and-fire neurons at which the Fano factor switches from zero to infinity and the correlation time is minimized. This corresponds to a bifurcation in a linear map arising from the self-consistency of temporal input and output statistics. More realistic neural dynamics with a leak current and refractory period lead to smoothed transitions and modified critical couplings that can be theoretically predicted.
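The two quantities behind this result, the long-window Fano factor and the correlation time of the spike count, can be estimated as in the sketch below. The surrogate spike train with a slowly modulated (Ornstein-Uhlenbeck) rate is only a stand-in for the recurrent-network simulations, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Surrogate spike train: inhomogeneous Poisson spikes whose rate is slowly modulated.
dt, T = 1e-3, 600.0
n = int(T / dt)
rate = np.empty(n)
rate[0], tau, mu, sigma = 5.0, 20.0, 5.0, 2.0                 # OU rate with tau = 20 s
noise = rng.standard_normal(n)
for i in range(1, n):
    rate[i] = rate[i-1] + dt * (mu - rate[i-1]) / tau + sigma * np.sqrt(2 * dt / tau) * noise[i]
rate = np.clip(rate, 0.0, None)
spikes = rng.random(n) < rate * dt                             # Bernoulli approximation

def fano_factor(spikes, window, dt):
    """Variance/mean of spike counts in non-overlapping windows of a given length."""
    m = int(window / dt)
    counts = spikes[: len(spikes) // m * m].reshape(-1, m).sum(axis=1)
    return counts.var() / counts.mean()

for w in (0.1, 1.0, 10.0):
    print(f"window {w:5.1f} s : Fano factor = {fano_factor(spikes, w, dt):.2f}")

# Crude correlation-time estimate: sum of the normalized autocovariance of 1-s counts.
counts = spikes.reshape(-1, int(1.0 / dt)).sum(axis=1).astype(float)
counts -= counts.mean()
ac = np.correlate(counts, counts, mode="full")[len(counts) - 1:]
ac /= ac[0]
print("correlation time of 1-s counts ~", round(ac[:150].sum(), 1), "s")
```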
Collapse
Affiliation(s)
- Stefan Wieland
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| | - Davide Bernardi
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| | - Tilo Schwalger
- School of Computer and Communication Sciences and School of Life Sciences, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Station 15, 1015 Lausanne EPFL, Switzerland
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, Philippstrasse 13, Haus 2, 10115 Berlin, Germany
- Physics Department of Humboldt University Berlin, Newtonstrasse 15, 12489 Berlin, Germany
| |
Collapse
|
26
|
Latimer KW, Yates JL, Meister MLR, Huk AC, Pillow JW. NEURONAL MODELING. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 2015; 349:184-7. [PMID: 26160947 PMCID: PMC4799998 DOI: 10.1126/science.aaa4056] [Citation(s) in RCA: 171] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Neurons in the macaque lateral intraparietal (LIP) area exhibit firing rates that appear to ramp upward or downward during decision-making. These ramps are commonly assumed to reflect the gradual accumulation of evidence toward a decision threshold. However, the ramping in trial-averaged responses could instead arise from instantaneous jumps at different times on different trials. We examined single-trial responses in LIP using statistical methods for fitting and comparing latent dynamical spike-train models. We compared models with latent spike rates governed by either continuous diffusion-to-bound dynamics or discrete "stepping" dynamics. Roughly three-quarters of the choice-selective neurons we recorded were better described by the stepping model. Moreover, the inferred steps carried more information about the animal's choice than spike counts.
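The ambiguity that motivates the single-trial analysis, namely that trial-averaged ramps are consistent with both continuous diffusion-to-bound and discrete stepping dynamics, can be reproduced with the toy simulation below; the latent-rate models, step-time distribution, and noise levels are invented for illustration and are not the fitted models of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, T, n_trials = 1e-3, 1.0, 300
t = np.arange(0, T, dt)

def ramping_trial(r0=10.0, drift=30.0, noise=15.0, bound=40.0):
    """Diffusion-to-bound latent rate: drift plus noise, clipped at a bound."""
    x = r0 + np.cumsum(drift * dt + noise * np.sqrt(dt) * rng.standard_normal(len(t)))
    return np.clip(x, 0.0, bound)

def stepping_trial(r_low=10.0, r_high=40.0):
    """Stepping latent rate: an instantaneous jump at a random time on each trial."""
    step_time = rng.exponential(0.4)
    return np.where(t < step_time, r_low, r_high)

def spikes_from_rate(rate):
    return rng.random(len(t)) < rate * dt

psth_ramp = np.mean([spikes_from_rate(ramping_trial()) for _ in range(n_trials)], axis=0) / dt
psth_step = np.mean([spikes_from_rate(stepping_trial()) for _ in range(n_trials)], axis=0) / dt

# Both trial-averaged responses ramp upward even though single-trial dynamics differ.
for label, psth in (("diffusion", psth_ramp), ("stepping", psth_step)):
    print(f"{label:9s}  mean rate 0-0.1 s: {psth[:100].mean():5.1f} Hz"
          f"   0.9-1.0 s: {psth[-100:].mean():5.1f} Hz")
```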
Collapse
Affiliation(s)
- Kenneth W Latimer
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA. Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
| | - Jacob L Yates
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA. Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
| | - Miriam L R Meister
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA. Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195, USA
| | - Alexander C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA. Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA. Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA. Department of Psychology, The University of Texas at Austin, Austin, TX 78712, USA
| | - Jonathan W Pillow
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX 78712, USA. Institute for Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA. Department of Psychology, The University of Texas at Austin, Austin, TX 78712, USA. Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA.
| |
Collapse
|
27
|
Cao R, Braun J, Mattia M. Stochastic accumulation by cortical columns may explain the scalar property of multistable perception. PHYSICAL REVIEW LETTERS 2014; 113:098103. [PMID: 25216009 DOI: 10.1103/physrevlett.113.098103] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Indexed: 06/03/2023]
Abstract
The timing of certain mental events is thought to reflect random walks performed by underlying neural dynamics. One class of such events--stochastic reversals of multistable perceptions--exhibits a unique scalar property: even though timing densities vary widely, higher moments stay in particular proportions to the mean. We show that stochastic accumulation of activity in a finite number of idealized cortical columns--realizing a generalized Ehrenfest urn model--may explain these observations. Modeling stochastic reversals as the first-passage time of a threshold number of active columns, we obtain higher moments of the first-passage time density. We derive analytical expressions for noninteracting columns and generalize the results to interacting columns in simulations. The scalar property of multistable perception is reproduced by a dynamic regime with a fixed, low threshold, in which the activation of a few additional columns suffices for a reversal.
Collapse
Affiliation(s)
- Robin Cao
- Cognitive Biology, Center for Behavioral Brain Sciences, Otto von Guericke University, 39106 Magdeburg, Germany and Department of Technologies and Health, Istituto Superiore di Sanità, 00161 Roma, Italy
| | - Jochen Braun
- Cognitive Biology, Center for Behavioral Brain Sciences, Otto von Guericke University, 39106 Magdeburg, Germany
| | - Maurizio Mattia
- Department of Technologies and Health, Istituto Superiore di Sanità, 00161 Roma, Italy
| |
Collapse
|
28
|
Wilkinson NM, Metta G. Capture of fixation by rotational flow; a deterministic hypothesis regarding scaling and stochasticity in fixational eye movements. Front Syst Neurosci 2014; 8:29. [PMID: 24616670 PMCID: PMC3935396 DOI: 10.3389/fnsys.2014.00029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2013] [Accepted: 02/09/2014] [Indexed: 11/13/2022] Open
Abstract
Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short-term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modeled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavor of transient neurodynamics: toric traveling waves which rotate around a central phase singularity. Spiral waves play an organizational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales.
Collapse
Affiliation(s)
| | - Giorgio Metta
- iCub Facility, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Centre for Robotics and Neural Systems, School of Computing and Mathematics, University of Plymouth, Plymouth, UK
| |
Collapse
|
29
|
Allman MJ, Teki S, Griffiths TD, Meck WH. Properties of the Internal Clock: First- and Second-Order Principles of Subjective Time. Annu Rev Psychol 2014; 65:743-71. [DOI: 10.1146/annurev-psych-010213-115117] [Citation(s) in RCA: 231] [Impact Index Per Article: 23.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Melissa J. Allman
- Department of Psychology, Michigan State University, East Lansing, Michigan 48823;
| | - Sundeep Teki
- Wellcome Trust Center for Neuroimaging, University College London, London, WC1N 3BG United Kingdom;
| | - Timothy D. Griffiths
- Wellcome Trust Center for Neuroimaging, University College London, London, WC1N 3BG United Kingdom;
- Institute of Neuroscience, The Medical School, Newcastle University, Newcastle-upon-Tyne, NE2 4HH United Kingdom;
| | - Warren H. Meck
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina 27701;
| |
Collapse
|
30
|
Neurocomputational Models of Time Perception. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2014; 829:49-71. [DOI: 10.1007/978-1-4939-1782-2_4] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
31
|
Sakamoto K, Katori Y, Saito N, Yoshida S, Aihara K, Mushiake H. Increased firing irregularity as an emergent property of neural-state transition in monkey prefrontal cortex. PLoS One 2013; 8:e80906. [PMID: 24349020 PMCID: PMC3857743 DOI: 10.1371/journal.pone.0080906] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2013] [Accepted: 10/18/2013] [Indexed: 11/30/2022] Open
Abstract
Flexible behaviors are organized by complex neural networks in the prefrontal cortex. Recent studies have suggested that such networks exhibit multiple dynamical states, and can switch rapidly from one state to another. In many complex systems such as the brain, the early-warning signals that may predict whether a critical threshold for state transitions is approaching are extremely difficult to detect. We hypothesized that increases in firing irregularity are a crucial measure for predicting state transitions in the underlying neuronal circuits of the prefrontal cortex. We used both experimental and theoretical approaches to test this hypothesis. Experimentally, we analyzed activities of neurons in the prefrontal cortex while monkeys performed a maze task that required them to perform actions to reach a goal. We observed increased firing irregularity before the activity changed to encode goal-to-action information. Theoretically, we constructed generic neural network models and demonstrated that changes in neuronal gain on functional connectivity resulted in a loss of stability and an altered state of the networks, accompanied by increased firing irregularity. These results suggest that assessing the temporal pattern of neuronal fluctuations provides important clues regarding the state stability of the prefrontal network. We also introduce a novel scheme in which the prefrontal cortex functions in a metastable state near a critical bifurcation point. According to this scheme, firing irregularity in the prefrontal cortex indicates that the system is about to change its state and redirect the flow of information in a flexible manner, which is essential for executive functions. This metastable and/or critical dynamical state of the prefrontal cortex may account for distractibility and loss of flexibility in the prefrontal cortex in major mental illnesses such as schizophrenia.
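A standard way to quantify the local firing irregularity proposed here as an early-warning signal is the CV2 measure of interspike-interval variability; the sketch below computes it for renewal spike trains of graded regularity. The gamma-process surrogates and parameters are illustrative assumptions, not the analysis pipeline of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def cv2(spike_times):
    """Local irregularity measure: mean of 2|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i);
    close to 0 for regular firing and roughly 1 for Poisson-like firing."""
    isi = np.diff(np.sort(spike_times))
    return np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1]))

def gamma_train(rate, shape, n_spikes=2000):
    """Renewal train with gamma-distributed ISIs: same mean rate, different regularity."""
    isi = rng.gamma(shape, 1.0 / (rate * shape), size=n_spikes)
    return np.cumsum(isi)

# Lower gamma shape -> more irregular firing; the proposed early-warning signal is a
# rise of such local irregularity before a network state transition.
for shape in (8.0, 2.0, 1.0):
    print(f"gamma shape {shape:4.1f}  ->  CV2 = {cv2(gamma_train(10.0, shape)):.2f}")
```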
Collapse
Affiliation(s)
- Kazuhiro Sakamoto
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- * E-mail:
| | - Yuichi Katori
- Institute of Industrial Science, University of Tokyo, Tokyo, Japan
- Funding Program for World-Leading Innovative Research and Development on Science and Technology, Aihara Innovative Mathematical Modelling Project, Japan Science and Technology Agency, Tokyo, Japan
| | - Naohiro Saito
- Department of Physiology, Tohoku University School of Medicine, Sendai, Japan
| | - Shun Yoshida
- Department of Physiology, Tohoku University School of Medicine, Sendai, Japan
| | - Kazuyuki Aihara
- Institute of Industrial Science, University of Tokyo, Tokyo, Japan
- Funding Program for World-Leading Innovative Research and Development on Science and Technology, Aihara Innovative Mathematical Modelling Project, Japan Science and Technology Agency, Tokyo, Japan
| | - Hajime Mushiake
- Department of Physiology, Tohoku University School of Medicine, Sendai, Japan
- The Core Research for Evolutional Science and Technology Program, Japan Science and Technology Agency, Tokyo, Japan
| |
Collapse
|
32
|
Stochastic computations in cortical microcircuit models. PLoS Comput Biol 2013; 9:e1003311. [PMID: 24244126 PMCID: PMC3828141 DOI: 10.1371/journal.pcbi.1003311] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2013] [Accepted: 08/22/2013] [Indexed: 12/30/2022] Open
Abstract
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
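The central mathematical claim, exponentially fast convergence to a stationary distribution over network states even when the dynamics are not reversible, can be illustrated with a toy Markov chain; the four-state transition matrix below is an invented stand-in for the detailed microcircuit models.

```python
import numpy as np

# Toy stand-in for a stochastic network: a Markov chain over four discrete "network
# states" with an irreducible, aperiodic, non-reversible transition matrix.
P = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.05, 0.70, 0.20, 0.05],
              [0.05, 0.05, 0.70, 0.20],
              [0.20, 0.05, 0.05, 0.70]])

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print("stationary distribution:", np.round(pi, 3))

# Convergence from any initial state: the total-variation distance to pi shrinks
# geometrically with the number of steps.
for name, p0 in (("state 0", np.array([1.0, 0.0, 0.0, 0.0])),
                 ("state 2", np.array([0.0, 0.0, 1.0, 0.0]))):
    p, dists = p0.copy(), []
    for _ in range(30):
        p = p @ P
        dists.append(0.5 * np.abs(p - pi).sum())
    print(f"start in {name}: TV distance after 5/15/30 steps =",
          [round(dists[i], 5) for i in (4, 14, 29)])
```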
Collapse
|
33
|
Spanagel R, Durstewitz D, Hansson A, Heinz A, Kiefer F, Köhr G, Matthäus F, Nöthen MM, Noori HR, Obermayer K, Rietschel M, Schloss P, Scholz H, Schumann G, Smolka M, Sommer W, Vengeliene V, Walter H, Wurst W, Zimmermann US, Stringer S, Smits Y, Derks EM. A systems medicine research approach for studying alcohol addiction. Addict Biol 2013; 18:883-96. [PMID: 24283978 DOI: 10.1111/adb.12109] [Citation(s) in RCA: 72] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
According to the World Health Organization, about 2 billion people drink alcohol. Excessive alcohol consumption can result in alcohol addiction, which is one of the most prevalent neuropsychiatric diseases afflicting our society today. Prevention and intervention of alcohol binging in adolescents and treatment of alcoholism are major unmet challenges affecting our health-care system and society alike. Our newly formed German SysMedAlcoholism consortium is using a new systems medicine approach and intends (1) to define individual neurobehavioral risk profiles in adolescents that are predictive of alcohol use disorders later in life and (2) to identify new pharmacological targets and molecules for the treatment of alcoholism. To achieve these goals, we will use omics information from epigenomics, genetics, transcriptomics, neurodynamics, global neurochemical connectomes and neuroimaging (IMAGEN; Schumann et al.) to feed mathematical prediction modules provided by two Bernstein Centers for Computational Neurosciences (Berlin and Heidelberg/Mannheim), the results of which will subsequently be functionally validated in independent clinical samples and appropriate animal models. This approach will lead to new early intervention strategies and identify innovative molecules for relapse prevention that will be tested in experimental human studies. This research program will ultimately help in consolidating addiction research clusters in Germany that can effectively conduct large clinical trials, implement early intervention strategies and impact political and healthcare decision makers.
Collapse
Affiliation(s)
- Rainer Spanagel
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | - Daniel Durstewitz
- Bernstein Center for Computational Neuroscience; Central Institute of Mental Health; Germany
| | - Anita Hansson
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | - Andreas Heinz
- Department of Addictive Behaviour and Addiction Medicine; Central Institute of Mental Health; Germany
| | - Falk Kiefer
- Department of Genetic Epidemiology in Psychiatry; Central Institute of Mental Health; Germany
| | - Georg Köhr
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | | | - Markus M. Nöthen
- Department of Psychiatry; Charité University Medical Center; Germany
| | - Hamid R. Noori
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | - Klaus Obermayer
- Institute of Applied Mathematics; University of Heidelberg; Germany
| | - Marcella Rietschel
- Department of Genomics, Life & Brain Centre; University of Bonn; Germany
| | - Patrick Schloss
- Neural Information Processing Group; Technical University of Berlin; Germany
| | - Henrike Scholz
- Behavioral Neurogenetics, Zoological Institute; University of Cologne; Germany
| | - Gunter Schumann
- MRC-SGDP Centre; Institute of Psychiatry; King's College; UK
| | - Michael Smolka
- Department of Psychiatry and Psychotherapy; Technical University Dresden; Germany
| | - Wolfgang Sommer
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | - Valentina Vengeliene
- Institute of Psychopharmacology; Central Institute of Mental Health; Medical Faculty Mannheim; University of Heidelberg; Germany
| | - Henrik Walter
- Department of Addictive Behaviour and Addiction Medicine; Central Institute of Mental Health; Germany
| | - Wolfgang Wurst
- Institute of Developmental Genetics; Helmholtz Center Munich; Germany
| | - Uli S. Zimmermann
- Department of Psychiatry and Psychotherapy; Technical University Dresden; Germany
| | - Sven Stringer
- Psychiatry Department; Academic Medical Center; The Netherlands
- Brain Center Rudolf Magnus; University Medical Center; The Netherlands
| | - Yannick Smits
- Psychiatry Department; Academic Medical Center; The Netherlands
| | - Eske M. Derks
- Psychiatry Department; Academic Medical Center; The Netherlands
| | | |
Collapse
|
34
|
Abstract
Cognitive functions like motor planning rely on the concerted activity of multiple neuronal assemblies underlying still elusive computational strategies. During reaching tasks, we observed stereotyped sudden transitions (STs) between low and high multiunit activity of monkey dorsal premotor cortex (PMd) predicting forthcoming actions on a single-trial basis. STs were observed even when movement was delayed or successfully canceled after a stop signal, ruling out that they merely reflect motor execution. An attractor model accounts for upward STs and high-frequency modulations of field potentials, indicative of local synaptic reverberation. We found compelling in vivo evidence that motor plans in PMd emerge from the coactivation of such attractor modules, heterogeneous in the strength of local synaptic self-excitation. Modules with strong coupling reacted early, with variable latencies, to weak inputs, priming a chain reaction of both upward and downward STs in other modules. Such a web of "flip-flops" rapidly converged to a stereotyped distributed representation of the motor program, as prescribed by the long-standing theory of associative networks.
Collapse
|
35
|
Cao R, Braun J, Mattia M. Dynamical features of stimulus integration by interacting cortical columns. BMC Neurosci 2013. [PMCID: PMC3704319 DOI: 10.1186/1471-2202-14-s1-p268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
|
36
|
Laje R, Buonomano DV. Robust timing and motor patterns by taming chaos in recurrent neural networks. Nat Neurosci 2013; 16:925-33. [PMID: 23708144 PMCID: PMC3753043 DOI: 10.1038/nn.3405] [Citation(s) in RCA: 248] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2012] [Accepted: 04/20/2013] [Indexed: 12/14/2022]
Abstract
The brain's ability to tell time and produce complex spatiotemporal motor patterns is critical for anticipating the next ring of a telephone or playing a musical instrument. One class of models proposes that these abilities emerge from dynamically changing patterns of neural activity generated in recurrent neural networks. However, the relevant dynamic regimes of recurrent networks are highly sensitive to noise; that is, chaotic. We developed a firing rate model that tells time on the order of seconds and generates complex spatiotemporal patterns in the presence of high levels of noise. This is achieved through the tuning of the recurrent connections. The network operates in a dynamic regime that exhibits coexisting chaotic and locally stable trajectories. These stable patterns function as 'dynamic attractors' and provide a feature that is characteristic of biological systems: the ability to 'return' to the pattern being generated in the face of perturbations.
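A minimal sketch of the underlying problem: a random recurrent firing-rate network with gain above one is chaotic, so nearby trajectories diverge rapidly. The generic rate model below (arbitrary size, gain, and time constant) shows only this perturbation sensitivity; it does not implement the paper's tuning of the recurrent connections that produces locally stable 'dynamic attractor' trajectories.

```python
import numpy as np

rng = np.random.default_rng(6)

# Random recurrent rate network: tau dx/dt = -x + g * J * tanh(x), chaotic for g > 1.
N, g, tau, dt, T = 400, 1.5, 0.01, 1e-3, 2.0
J = rng.standard_normal((N, N)) / np.sqrt(N)

def simulate(x0):
    x, traj = x0.copy(), []
    for _ in range(int(T / dt)):
        x = x + dt / tau * (-x + g * J @ np.tanh(x))   # forward-Euler step
        traj.append(x.copy())
    return np.array(traj)

x0 = 0.5 * rng.standard_normal(N)
reference = simulate(x0)
perturbed = simulate(x0 + 1e-6 * rng.standard_normal(N))   # tiny initial perturbation

separation = np.linalg.norm(reference - perturbed, axis=1)
print(f"separation at t=0.0 s: {separation[0]:.2e}")
print(f"separation at t=0.5 s: {separation[int(0.5/dt) - 1]:.2e}")
print(f"separation at t=2.0 s: {separation[-1]:.2e}")       # grows by orders of magnitude
```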
Collapse
Affiliation(s)
- Rodrigo Laje
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA, USA
| | - Dean V. Buonomano
- Departments of Neurobiology and Psychology, Brain Research Institute, and Integrative Center for Learning and Memory, University of California, Los Angeles, CA, USA
| |
Collapse
|
37
|
Standage D, You H, Wang DH, Dorris MC. Trading speed and accuracy by coding time: a coupled-circuit cortical model. PLoS Comput Biol 2013; 9:e1003021. [PMID: 23592967 PMCID: PMC3617027 DOI: 10.1371/journal.pcbi.1003021] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2011] [Accepted: 02/21/2013] [Indexed: 11/19/2022] Open
Abstract
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. Studies in neuroscience have characterized how the brain represents objects in space and how these objects are selected for detailed perceptual processing. The selection process entails a decision about which object is favoured by the available evidence over time. This period of time is typically in the range of hundreds of milliseconds and is widely believed to be crucial for decisions, allowing neurons to filter noise in the evidence. Despite the widespread belief that time plays this role in decisions and the growing recognition that the brain estimates elapsed time during perceptual tasks, few studies have considered how the encoding of time affects decision making. We propose that neurons encode time in this range by the same general mechanisms used to select objects for detailed processing, and that these temporal representations determine how long evidence is filtered. To this end, we simulate a perceptual decision by coupling two instances of a neural network widely used to simulate localized regions of the cerebral cortex. One network encodes the passage of time and the other makes decisions based on noisy evidence. The former influences the performance of the latter, reproducing signature characteristics of temporal estimates and perceptual decisions.
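The control principle proposed here, a temporal code setting the gain of evidence integration and thereby trading speed for accuracy, can be caricatured by a bounded accumulator whose input is multiplied by a gain term. The diffusion readout and all parameter values below are assumptions for illustration, not the coupled spiking-circuit model of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def speed_accuracy(gain, coherence=0.1, noise=1.0, bound=1.0, dt=2e-3, t_max=8.0, n_trials=1500):
    """Bounded accumulation of noisy evidence whose input is scaled by `gain`
    (standing in for the spatially non-selective drive from a timing network)."""
    steps = int(t_max / dt)
    increments = gain * (coherence * dt + noise * np.sqrt(dt)
                         * rng.standard_normal((n_trials, steps)))
    x = np.cumsum(increments, axis=1)
    crossed = np.abs(x) >= bound
    done = crossed.any(axis=1)                      # trials that reached a bound in time
    first = crossed.argmax(axis=1)[done]            # index of the first bound crossing
    endpoint = x[np.flatnonzero(done), first]
    return np.mean(endpoint > 0), np.mean((first + 1) * dt)

# Higher gain -> faster but less accurate decisions (speed-accuracy trade-off).
for gain in (0.5, 1.0, 2.0):
    acc, rt = speed_accuracy(gain)
    print(f"gain {gain:3.1f}:  accuracy = {acc:.2f}   mean decision time = {rt:.2f} s")
```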
Collapse
Affiliation(s)
- Dominic Standage
- Department of Biomedical and Molecular Sciences and Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- * E-mail: (DS); (DHW)
| | - Hongzhi You
- Department of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
| | - Da-Hui Wang
- Department of Systems Science and National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- * E-mail: (DS); (DHW)
| | - Michael C. Dorris
- Department of Biomedical and Molecular Sciences and Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Institute of Neuroscience, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China
| |
Collapse
|
38
|
Orpwood R. Qualia could arise from information processing in local cortical networks. Front Psychol 2013; 4:121. [PMID: 23504586 PMCID: PMC3596736 DOI: 10.3389/fpsyg.2013.00121] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2012] [Accepted: 02/25/2013] [Indexed: 12/02/2022] Open
Abstract
Re-entrant feedback, either within sensory cortex or arising from prefrontal areas, has been strongly linked to the emergence of consciousness, both in theoretical and experimental work. This idea, together with evidence for local micro-consciousness, suggests that the generation of qualia could in some way result from local network activity under re-entrant activation. This paper explores this possibility by examining the processing of information by local cortical networks. It highlights the difference between the information structure (how the information is physically embodied) and the information message (what the information is about). It focuses on the network’s ability to recognize information structures amongst its inputs under conditions of extensive local feedback, and to then assign information messages to those structures. It is shown that if the re-entrant feedback enables the network to achieve an attractor state, then the message assigned in any given pass of information through the network is a representation of the message assigned in the previous pass-through of information. Based on this ability, the paper argues that as information is repeatedly cycled through the network, the information message that is assigned evolves from a recognition of what the input structure is, to what it is like, to how it appears, to how it seems. It could enable individual networks to be the site of qualia generation. The paper goes on to show that networks in cortical layers 2/3 and 5a have the connectivity required for the proposed behavior, and reviews some evidence for a link between such local cortical cyclic activity and conscious percepts. It concludes with some predictions based on the theory discussed.
Collapse
Affiliation(s)
- Roger Orpwood
- Centre for Pain Research, Department for Health, University of Bath, Bath, UK
| |
Collapse
|
39
|
Davis B, Jovicich J, Iacovella V, Hasson U. Functional and developmental significance of amplitude variance asymmetry in the BOLD resting-state signal. Cereb Cortex 2013; 24:1332-50. [PMID: 23329729 DOI: 10.1093/cercor/bhs416] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
It is known that the brain's resting-state activity (RSA) is organized in low-frequency oscillations that drive network connectivity. Recent research has also shown that elements of RSA described by high-frequency and nonoscillatory properties are non-random and functionally relevant. Motivated by this research, we investigated nonoscillatory aspects of the blood-oxygen-level-dependent (BOLD) RSA using a novel method for characterizing subtle fluctuation dynamics. The metric that we develop quantifies the relative variance of the amplitude of local-maxima and local-minima in a BOLD time course (amplitude variance asymmetry; AVA). This metric reveals new properties of RSA, without relying on connectivity as a descriptive tool. We applied the AVA analysis to data from 3 different participant groups (2 adult, 1 child) collected from 3 different centers. The analyses show that AVA patterns a) identify 3 types of RSA profiles in adults' sensory systems, b) differ in topology and pattern of dynamics in adults and children, and c) are stable across magnetic resonance scanners. Furthermore, children with higher IQ demonstrated more adult-like AVA patterns. These findings indicate that AVA reflects important and novel dimensions of brain development and RSA.
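A simplified reading of the AVA metric, the variance of local-maxima amplitudes relative to the variance of local-minima amplitudes of a time course, is sketched below. The peak/trough detection, the surrogate "BOLD-like" signals, and the ratio normalization are assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(8)

def ava(signal):
    """Amplitude variance asymmetry: variance of local-maxima amplitudes divided by
    the variance of local-minima amplitudes (a simplified reading of the metric)."""
    d = np.diff(signal)
    peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    troughs = np.where((d[:-1] < 0) & (d[1:] >= 0))[0] + 1
    return np.var(signal[peaks]) / np.var(signal[troughs])

# Surrogate time courses: a symmetric smoothed-noise signal (AVA near 1) versus one
# whose excursions above baseline are more variable than those below (AVA above 1).
n = 2000
base = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
peak_variable = base + 0.8 * np.clip(base, 0.0, None) ** 2
print("AVA, symmetric signal:    ", round(ava(base), 2))
print("AVA, peak-variable signal:", round(ava(peak_variable), 2))
```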
Collapse
Affiliation(s)
- Ben Davis
- Center for Mind/Brain Sciences (CIMeC), University of Trento, I-38060 Mattarello (TN), Italy
| | | | | | | |
Collapse
|
40
|
Goel A, Buonomano DV. Chronic electrical stimulation homeostatically decreases spontaneous activity, but paradoxically increases evoked network activity. J Neurophysiol 2013; 109:1824-36. [PMID: 23324317 DOI: 10.1152/jn.00612.2012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Neural dynamics generated within cortical networks play a fundamental role in brain function. However, the learning rules that allow recurrent networks to generate functional dynamic regimes, and the degree to which these regimes are themselves plastic, are not known. In this study we examined plasticity of network dynamics in cortical organotypic slices in response to chronic changes in activity. Studies have typically manipulated network activity pharmacologically; we used chronic electrical stimulation to increase activity in in vitro cortical circuits in a more physiological manner. Slices were stimulated with "implanted" electrodes for 4 days. Chronic electrical stimulation or treatment with bicuculline decreased spontaneous activity as predicted by homeostatic learning rules. Paradoxically, however, whereas bicuculline decreased evoked network activity, chronic stimulation actually increased the likelihood that evoked stimulation elicited polysynaptic activity, despite a decrease in evoked monosynaptic strength. Furthermore, there was an inverse correlation between spontaneous and evoked activity, suggesting a homeostatic tradeoff between spontaneous and evoked activity. Within-slice experiments revealed that cells close to the stimulated electrode exhibited more evoked polysynaptic activity and less spontaneous activity than cells close to a control electrode. Collectively, our results establish that chronic stimulation changes the dynamic regimes of networks. In vitro studies of homeostatic plasticity typically lack any external input, and thus neurons must rely on "spontaneous" activity to reach homeostatic "set points." However, in the presence of external input we propose that homeostatic learning rules seem to shift networks from spontaneous to evoked regimes.
Collapse
Affiliation(s)
- Anubhuti Goel
- Dept. of Neurobiology and Psychology, Integrative Center for Learning and Memory, Univ. of California, Los Angeles, Los Angeles, CA 90095, USA
| | | |
Collapse
|
41
|
Neural dynamics of choice: single-trial analysis of decision-related activity in parietal cortex. J Neurosci 2012; 32:12684-701. [PMID: 22972993 DOI: 10.1523/jneurosci.5752-11.2012] [Citation(s) in RCA: 67] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Previous neurophysiological studies of perceptual decision-making have focused on single-unit activity, providing insufficient information about how individual decisions are accomplished. For the first time, we recorded simultaneously from multiple decision-related neurons in parietal cortex of monkeys performing a perceptual decision task and used these recordings to analyze the neural dynamics during single trials. We demonstrate that decision-related lateral intraparietal area neurons typically undergo gradual changes in firing rate during individual decisions, as predicted by mechanisms based on continuous integration of sensory evidence. Furthermore, we identify individual decisions that can be described as a change of mind: the decision circuitry was transiently in a state associated with a different choice before transitioning into a state associated with the final choice. These changes of mind reflected in monkey neural activity share similarities with previously reported changes of mind reflected in human behavior.
Collapse
|
42
|
Giulioni M, Camilleri P, Mattia M, Dante V, Braun J, Del Giudice P. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI. Front Neurosci 2012; 5:149. [PMID: 22347151 PMCID: PMC3270576 DOI: 10.3389/fnins.2011.00149] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2011] [Accepted: 12/29/2011] [Indexed: 11/29/2022] Open
Abstract
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low”-firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
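The dynamical core of this demonstration, a self-excitatory population with coexisting low- and high-firing states and noise-driven transitions between them, can be sketched with a one-dimensional rate model; the transfer function, coupling, noise level, and state-detection thresholds below are arbitrary choices, not the VLSI parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

def phi(x, gain=0.3, thresh=20.0, rmax=40.0):
    """Sigmoidal transfer function (parameters chosen so the model is bistable)."""
    return rmax / (1.0 + np.exp(-gain * (x - thresh)))

# tau dr/dt = -r + phi(w*r + I) + noise: strong self-excitation w yields a low-rate and
# a high-rate stable state; the noise term drives spontaneous transitions between them.
tau, w, I, sigma, dt, T = 0.02, 1.0, 2.0, 10.0, 1e-3, 60.0
steps = int(T / dt)
r = np.empty(steps)
r[0] = 1.0
for i in range(1, steps):
    r[i] = (r[i-1]
            + dt / tau * (-r[i-1] + phi(w * r[i-1] + I))
            + sigma * np.sqrt(dt / tau) * rng.standard_normal())

r_smooth = np.convolve(r, np.ones(50) / 50, mode="same")      # 50 ms boxcar
state = np.zeros(steps, dtype=int)                            # 0 = low, 1 = high
for i in range(1, steps):                                     # hysteresis thresholds
    state[i] = 1 if r_smooth[i] > 30.0 else (0 if r_smooth[i] < 10.0 else state[i-1])

print(f"fraction of time in the 'high' state: {state.mean():.2f}")
print(f"spontaneous transitions in {T:.0f} s: {int(np.abs(np.diff(state)).sum())}")
```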
Collapse
|
43
|
Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cogn Neurodyn 2011; 6:239-50. [PMID: 23730355 DOI: 10.1007/s11571-011-9179-4] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2011] [Revised: 10/11/2011] [Accepted: 10/17/2011] [Indexed: 10/16/2022] Open
Abstract
Slow (<1 Hz) rhythms of alternating Up and Down states occur during slow-wave sleep, under deep anaesthesia and in cortical slices of mammals maintained in vitro. Such spontaneous oscillations result from the interplay between network reverberations nonlinearly sustained by strong synaptic coupling and a fatigue mechanism that inhibits neuronal firing in an activity-dependent manner. By pharmacologically varying the excitability level of brain slices, we probe the network dynamics underlying slow rhythms, uncovering an intrinsic anticorrelation between Up and Down state durations. In addition, a non-monotonic change of Down state duration is observed, which shrinks the distribution of accessible frequencies of the slow rhythms. Attractor dynamics with activity-dependent self-inhibition predict a similar trend even when the system excitability is reduced, owing to a loss of stability of the Up and Down states. Hence, such cortical rhythms tend to display a maximal spread of the distribution of Up/Down frequencies, suggesting that the system dynamics are located on a critical boundary of the parameter space. This would be an optimal solution for the system to display a wide spectrum of dynamical regimes and timescales.
Collapse
|
44
|
Balaguer-Ballester E, Lapish CC, Seamans JK, Durstewitz D. Attracting dynamics of frontal cortex ensembles during memory-guided decision-making. PLoS Comput Biol 2011; 7:e1002057. [PMID: 21625577 PMCID: PMC3098221 DOI: 10.1371/journal.pcbi.1002057] [Citation(s) in RCA: 66] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2010] [Accepted: 03/31/2011] [Indexed: 11/18/2022] Open
Abstract
A common theoretical view is that attractor-like properties of neuronal dynamics underlie cognitive processing. However, although often proposed theoretically, direct experimental support for the convergence of neural activity to stable population patterns as a signature of attracting states has been sparse so far, especially in higher cortical areas. Combining state space reconstruction theorems and statistical learning techniques, we were able to resolve details of anterior cingulate cortex (ACC) multiple single-unit activity (MSUA) ensemble dynamics during a higher cognitive task which were not accessible previously. The approach worked by constructing high-dimensional state spaces from delays of the original single-unit firing rate variables and the interactions among them, which were then statistically analyzed using kernel methods. We observed cognitive-epoch-specific neural ensemble states in ACC which were stable across many trials (in the sense of being predictive) and depended on behavioral performance. More interestingly, attracting properties of these cognitively defined ensemble states became apparent in high-dimensional expansions of the MSUA spaces due to a proper unfolding of the neural activity flow, with properties common across different animals. These results therefore suggest that ACC networks may process different subcomponents of higher cognitive tasks by transiting among different attracting states.
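The first step of this approach, expanding measured firing rates into a high-dimensional delay-coordinate state space before applying kernel-based statistics, can be sketched as follows. The toy two-unit signal stands in for the multiple single-unit recordings, the embedding order and lag are arbitrary, and the full method additionally includes interaction terms, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(10)

def delay_embed(x, order, lag):
    """Delay-coordinate expansion: each time point is represented by the current rates
    of all units together with `order - 1` lagged copies, giving the kind of expanded
    state space in which attracting structure can unfold."""
    x = np.atleast_2d(x)                         # units x time
    T = x.shape[1] - (order - 1) * lag
    cols = [x[:, i * lag: i * lag + T] for i in range(order)]
    return np.vstack(cols)                       # (units * order) x T

# Toy example: two "units" driven by the same underlying rotation, observed with noise.
t = np.linspace(0, 20 * np.pi, 4000)
rates = np.vstack([np.sin(t), np.sin(t + 1.0)]) + 0.05 * rng.standard_normal((2, len(t)))
X = delay_embed(rates, order=3, lag=25)
print("original space:", rates.shape, " delay-embedded space:", X.shape)
```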
Collapse
Affiliation(s)
- Emili Balaguer-Ballester
- Bernstein-Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Christopher C. Lapish
- Department of Psychology, Indiana University-Purdue University, Indianapolis, Indiana, United States of America
| | - Jeremy K. Seamans
- Brain Research Center & Department of Psychiatry, University of British Columbia, Vancouver, Canada
| | - Daniel Durstewitz
- Bernstein-Center for Computational Neuroscience Heidelberg-Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
Collapse
|
45
|
Buonomano DV, Laje R. Population clocks: motor timing with neural dynamics. Trends Cogn Sci 2011; 14:520-7. [PMID: 20889368 DOI: 10.1016/j.tics.2010.09.002] [Citation(s) in RCA: 107] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2010] [Revised: 08/31/2010] [Accepted: 09/01/2010] [Indexed: 01/06/2023]
Abstract
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing.
Collapse
Affiliation(s)
- Dean V Buonomano
- Department of Neurobiology, University of California, Los Angeles, Box 951761, Los Angeles, CA 90095, USA.
| | | |
Collapse
|
46
|
Tschacher W, Schildt M, Sander K. Brain connectivity in listening to affective stimuli: a functional magnetic resonance imaging (fMRI) study and implications for psychotherapy. Psychother Res 2011; 20:576-88. [PMID: 20845228 DOI: 10.1080/10503307.2010.493538] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
To investigate the functional connectivity among amygdala, insula, and auditory cortex during affective auditory stimulation and its relevance for psychotherapy, the authors recorded, using functional magnetic resonance imaging (fMRI), the blood oxygenation level-dependent (BOLD) responses of these brain regions in 20 healthy adults while listening to affective sounds (laughing and crying). Their connectivity was analyzed by time-series panel analysis. The authors found significant positive associations among brain regions, with time-lagged associations generally directed from the right to the left hemisphere. Associations between amygdalar and cortical regions, however, were negative; specifically, activations of the left auditory cortex preceded decreases of the right amygdala. This suggested that affect regulation using cognitive control may have been achieved through active inhibition of amygdalar structures by the cortex. The authors discuss the implications of the findings for the change mechanisms inherent in psychotherapy.
Collapse
Affiliation(s)
- Wolfgang Tschacher
- University Hospital of Psychiatry, University of Bern, Bern, Switzerland.
| | | | | |
Collapse
|
47
|
History-dependent excitability as a single-cell substrate of transient memory for information discrimination. PLoS One 2010; 5:e15023. [PMID: 21203387 PMCID: PMC3010997 DOI: 10.1371/journal.pone.0015023] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2010] [Accepted: 10/08/2010] [Indexed: 11/19/2022] Open
Abstract
Neurons react differently to incoming stimuli depending upon their previous history of stimulation. This property can be considered as a single-cell substrate for transient memory, or context-dependent information processing: depending upon the current context that the neuron "sees" through the subset of the network impinging on it in the immediate past, the same synaptic event can evoke a postsynaptic spike or just a subthreshold depolarization. We propose a formal definition of History-Dependent Excitability (HDE) as a measure of the propensity to firing in any moment in time, linking the subthreshold history-dependent dynamics with spike generation. This definition allows the quantitative assessment of the intrinsic memory for different single-neuron dynamics and input statistics. We illustrate the concept of HDE by considering two general dynamical mechanisms: the passive behavior of an Integrate and Fire (IF) neuron, and the inductive behavior of a Generalized Integrate and Fire (GIF) neuron with subthreshold damped oscillations. This framework allows us to characterize the sensitivity of different model neurons to the detailed temporal structure of incoming stimuli. While a neuron with intrinsic oscillations discriminates equally well between input trains with the same or different frequency, a passive neuron discriminates better between inputs with different frequencies. This suggests that passive neurons are better suited to rate-based computation, while neurons with subthreshold oscillations are advantageous in a temporal coding scheme. We also address the influence of intrinsic properties in single-cell processing as a function of input statistics, and show that intrinsic oscillations enhance discrimination sensitivity at high input rates. Finally, we discuss how the recognition of these cell-specific discrimination properties might further our understanding of neuronal network computations and their relationships to the distribution and functional connectivity of different neuronal types.
Collapse
|
48
|
Durstewitz D, Vittoz NM, Floresco SB, Seamans JK. Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning. Neuron 2010; 66:438-48. [PMID: 20471356 DOI: 10.1016/j.neuron.2010.03.029] [Citation(s) in RCA: 229] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/22/2010] [Indexed: 11/28/2022]
Abstract
One of the most intriguing aspects of adaptive behavior involves the inference of regularities and rules in ever-changing environments. Rules are often deduced through evidence-based learning which relies on the prefrontal cortex (PFC). This is a highly dynamic process, evolving trial by trial and therefore may not be adequately captured by averaging single-unit responses over numerous repetitions. Here, we employed advanced statistical techniques to visualize the trajectories of ensembles of simultaneously recorded medial PFC neurons on a trial-by-trial basis as rats deduced a novel rule in a set-shifting task. Neural populations formed clearly distinct and lasting representations of familiar and novel rules by entering unique network states. During rule acquisition, the recorded ensembles often exhibited abrupt transitions, rather than evolving continuously, in tight temporal relation to behavioral performance shifts. These results support the idea that rule learning is an evidence-based decision process, perhaps accompanied by moments of sudden insight.
Collapse
Affiliation(s)
- Daniel Durstewitz
- RG Computational Neuroscience, Central Institute of Mental Health and Interdisciplinary Center for Neurosciences, University of Heidelberg, J 5, 68159 Mannheim, Germany.
| | | | | | | |
Collapse
|
49
|
Abstract
Noise, which is ubiquitous in the nervous system, causes trial-to-trial variability in the neural responses to stimuli. This neural variability is in turn a likely source of behavioral variability. Using Hidden Markov modeling, a method of analysis that can make use of such trial-to-trial response variability, we have uncovered sequences of discrete states of neural activity in gustatory cortex during taste processing. Here, we advance our understanding of these patterns in two ways. First, we reproduce the experimental findings in a formal model, describing a network whose discrete states are deterministically stable and whose sharp transitions between them arise only given sufficient noise in the network; as in the empirical data, the transitions occur at variable times across trials, but the stimulus-specific sequence is itself reliable. Second, we demonstrate that such noise-induced transitions between discrete states can be computationally advantageous in a reduced, decision-making network. The reduced network produces binary outputs, which represent classification of ingested substances as palatable or nonpalatable, and the corresponding behavioral responses of "spit" or "swallow". We evaluate the performance of the network by measuring how reliably its outputs follow small biases in the strengths of its inputs. We compare two modes of operation: deterministic integration ("ramping") versus stochastic decision-making ("jumping"), the latter of which relies on state-to-state transitions. We find that the stochastic mode of operation can be optimal under typical levels of internal noise and that, within this mode, addition of random noise to each input can improve optimal performance when decisions must be made in limited time.
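The comparison described at the end of this abstract, deterministic integration ("ramping") versus noise-driven transitions between discrete states ("jumping"), can be set up as in the sketch below, which measures how often each mode's output follows a small input bias. The double-well and pure-diffusion models, noise level, and time limit are arbitrary illustrative choices, so the ranking they happen to produce should not be read as the paper's quantitative result.

```python
import numpy as np

rng = np.random.default_rng(11)

def follows_bias(mode, bias=0.05, sigma=0.5, dt=1e-2, t_max=5.0, n_trials=5000):
    """Fraction of trials whose final sign matches a small input bias.
    'ramping': pure integration, dx = bias*dt + sigma*dW.
    'jumping': double-well, dx = (x - x**3 + bias)*dt + sigma*dW (two discrete states)."""
    steps = int(t_max / dt)
    x = np.zeros(n_trials)
    for _ in range(steps):
        drift = bias if mode == "ramping" else x - x ** 3 + bias
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_trials)
    return np.mean(x > 0)

for mode in ("ramping", "jumping"):
    print(f"{mode:8s}: output follows the input bias on {follows_bias(mode):.2f} of trials")
```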
Collapse
|
50
|
Braun J, Mattia M. Attractors and noise: twin drivers of decisions and multistability. Neuroimage 2010; 52:740-51. [PMID: 20083212 DOI: 10.1016/j.neuroimage.2009.12.126] [Citation(s) in RCA: 72] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2009] [Accepted: 12/12/2009] [Indexed: 11/17/2022] Open
Abstract
Perceptual decisions are made not only during goal-directed behavior such as choice tasks, but also occur spontaneously while multistable stimuli are being viewed. In both contexts, the formation of a perceptual decision is best captured by noisy attractor dynamics. Noise-driven attractor transitions can accommodate a wide range of timescales and a hierarchical arrangement with "nested attractors" harbors even more dynamical possibilities. The attractor framework seems particularly promising for understanding higher-level mental states that combine heterogeneous information from a distributed set of brain areas.
Collapse
Affiliation(s)
- Jochen Braun
- Cognitive Biology Lab, University of Magdeburg, Germany.
| | | |
Collapse
|