1
Zheng C, Tang E. A topological mechanism for robust and efficient global oscillations in biological networks. Nat Commun 2024; 15:6453. PMID: 39085205; PMCID: PMC11291491; DOI: 10.1038/s41467-024-50510-x.
Abstract
Long and stable timescales are often observed in complex biochemical networks, such as in emergent oscillations. How these robust dynamics persist remains unclear, given the many stochastic reactions and the shorter timescales of the underlying components. We propose a topological model that produces long oscillations around the network boundary, reducing the system dynamics to a lower-dimensional current in a robust manner. Using this framework to model KaiC, which regulates the circadian rhythm in cyanobacteria, we compare the coherence of its oscillations to that of other KaiC models. Our topological model localizes currents on the system edge, with an efficient regime of simultaneously increased precision and decreased cost. Further, we introduce a new predictor of coherence based on the analysis of spectral gaps, and show that our model saturates a global thermodynamic bound. Our work presents a new mechanism and a parsimonious description for robust emergent oscillations in complex biological networks.
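The spectral-gap predictor of coherence mentioned above can be illustrated on a generic biased-ring Markov model. The sketch below is not the authors' topological model: the ring size and hopping rates are arbitrary, and coherence is estimated by the standard ratio of imaginary to real parts of the slowest-decaying oscillatory eigenvalue of the rate matrix.

```python
import numpy as np

def ring_rate_matrix(n_states=12, k_plus=1.0, k_minus=0.1):
    """Rate matrix W for a biased ring of states; W[i, j] is the rate from state j to i.
    Generic illustration only; rates are arbitrary."""
    W = np.zeros((n_states, n_states))
    for j in range(n_states):
        W[(j + 1) % n_states, j] = k_plus   # forward hop
        W[(j - 1) % n_states, j] = k_minus  # backward hop
    return W - np.diag(W.sum(axis=0))       # make columns sum to zero (probability conserved)

eigvals = np.linalg.eigvals(ring_rate_matrix())
osc = eigvals[np.abs(eigvals.imag) > 1e-9]   # oscillatory (complex) modes
lead = osc[np.argmax(osc.real)]              # slowest-decaying oscillatory mode
print(f"coherence R = |Im/Re| of leading mode: {abs(lead.imag / lead.real):.2f}")
```

Larger R corresponds to more coherent cycles completed before the oscillation decays; the gap structure of the spectrum is what such a predictor reads out.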
Affiliation(s)
- Chongbin Zheng
- Center for Theoretical Biological Physics, Rice University, Houston, TX, 77005, USA
- Department of Physics and Astronomy, Rice University, Houston, TX, 77005, USA
- Evelyn Tang
- Center for Theoretical Biological Physics, Rice University, Houston, TX, 77005, USA.
- Department of Physics and Astronomy, Rice University, Houston, TX, 77005, USA.
2
Bhasin BJ, Raymond JL, Goldman MS. Synaptic weight dynamics underlying memory consolidation: implications for learning rules, circuit organization, and circuit function. bioRxiv [Preprint] 2024:2024.03.20.586036. PMID: 38585936; PMCID: PMC10996481; DOI: 10.1101/2024.03.20.586036.
Abstract
Systems consolidation is a common feature of learning and memory systems, in which a long-term memory initially stored in one brain region becomes persistently stored in another region. We studied the dynamics of systems consolidation in simple circuit architectures with two sites of plasticity, one in an early-learning and one in a late-learning brain area. We show that the synaptic dynamics of the circuit during consolidation of an analog memory can be understood as a temporal integration process, by which transient changes in activity driven by plasticity in the early-learning area are accumulated into persistent synaptic changes at the late-learning site. This simple principle naturally leads to a speed-accuracy tradeoff in systems consolidation and provides insight into how the circuit mitigates the stability-plasticity dilemma of storing new memories while preserving core features of older ones. Furthermore, it imposes two constraints on the circuit. First, the plasticity rule at the late-learning site must stably support a continuum of possible outputs for a given input. We show that this is readily achieved by heterosynaptic but not standard Hebbian rules. Second, to turn off the consolidation process and prevent erroneous changes at the late-learning site, neural activity in the early-learning area must be reset to its baseline activity. We propose two biologically plausible implementations for this reset that suggest novel roles for core elements of the cerebellar circuit.
Significance Statement: How are memories transformed over time? We propose a simple organizing principle for how long-term memories are moved from an initial to a final site of storage. We show that successful transfer occurs when the late site of memory storage is endowed with synaptic plasticity rules that stably accumulate changes in activity occurring at the early site of memory storage. We instantiate this principle in a simple computational model that is representative of brain circuits underlying a variety of behaviors. The model suggests how a neural circuit can store new memories while preserving core features of older ones, and suggests novel roles for core elements of the cerebellar circuit.
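The central idea, that the late-learning site temporally integrates the transient, plasticity-driven activity changes produced at the early-learning site, can be caricatured with a two-weight toy model. This is a hedged sketch of the general principle, not the authors' circuit model; all learning and decay rates below are arbitrary.

```python
import numpy as np

# Toy systems-consolidation model (illustrative parameters): a fast early weight
# learns an analog target and then fades, while a slow late weight integrates the
# early weight's transient contribution and ends up holding the memory.
target = 1.0
w_early, w_late = 0.0, 0.0
lr_early, lr_late, decay_early = 0.3, 0.02, 0.05

for t in range(400):
    output = w_early + w_late            # circuit output for a unit input
    error = target - output
    w_early += lr_early * error          # fast plasticity at the early-learning site
    w_late += lr_late * w_early          # late site integrates the early-site signal
    w_early *= 1.0 - decay_early         # the early memory fades over time

print(f"early weight: {w_early:.3f}, late weight: {w_late:.3f}")
# Expected outcome: w_early returns near baseline while w_late ends near the target,
# i.e., the memory has been consolidated to the late site.
```

Here the speed-accuracy tradeoff mentioned above corresponds to the choice of the late-site integration rate: faster accumulation transfers the memory sooner but amplifies fluctuations in the early-site signal.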
3
Kong LW, Brewer GA, Lai YC. Reservoir-computing based associative memory and itinerancy for complex dynamical attractors. Nat Commun 2024; 15:4840. PMID: 38844437; PMCID: PMC11156990; DOI: 10.1038/s41467-024-49190-4.
Abstract
Traditional neural network models of associative memories were used to store and retrieve static patterns. We develop reservoir-computing based memories for complex dynamical attractors, under two recall scenarios common in neuropsychology: location-addressable, with an index channel, and content-addressable, without such a channel. We demonstrate that, for location-addressable retrieval, a single reservoir computing machine can memorize a large number of periodic and chaotic attractors, each retrievable with a specific index value. We articulate control strategies to achieve successful switching among the attractors, unveil the mechanism behind failed switching, and uncover various scaling behaviors between the number of stored attractors and the reservoir network size. For content-addressable retrieval, we exploit multistability with cue signals, where the stored attractors coexist in the high-dimensional phase space of the reservoir network. As the length of the cue signal increases through a critical value, a high success rate can be achieved. The work provides foundational insights into developing long-term memories and itinerancy for complex dynamical patterns.
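A minimal echo-state-network sketch of the general approach, storing one dynamical pattern in a reservoir and recalling it in closed loop from a cue, is given below. It is illustrative only: it memorizes a single sine-wave attractor rather than the many periodic and chaotic attractors considered in the paper, and the reservoir size, spectral radius, and ridge parameter are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 300                                            # reservoir size (arbitrary)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, N)

def step(x, u):
    return np.tanh(W @ x + W_in * u)               # reservoir state update

# "Store" one attractor: a sine wave, learned by next-step prediction (ridge regression)
signal = np.sin(2 * np.pi * np.arange(3000) / 50)
x, states = np.zeros(N), []
for u in signal[:-1]:
    x = step(x, u)
    states.append(x.copy())
X, Y = np.array(states)[200:], signal[1:][200:]    # drop the washout transient
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

# Recall: cue with a short segment of the signal, then let the network run autonomously
x = np.zeros(N)
for u in signal[:100]:
    x = step(x, u)
u, recalled = signal[100], []
for _ in range(300):
    x = step(x, u)
    u = x @ W_out                                  # feed the prediction back as input
    recalled.append(u)
print("amplitude of the self-generated oscillation:", round(float(np.max(recalled[100:])), 2))
```

Storing many attractors in a single reservoir, switching among them with an index channel, and the capacity scalings reported above all go beyond this single-pattern sketch.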
Affiliation(s)
- Ling-Wei Kong
- Department of Computational Biology, Cornell University, Ithaca, New York, USA
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona, USA
- Gene A Brewer
- Department of Psychology, Arizona State University, Tempe, Arizona, USA
- Ying-Cheng Lai
- School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, Arizona, USA.
- Department of Physics, Arizona State University, Tempe, Arizona, USA.
4
Penny W. Stochastic attractor models of visual working memory. PLoS One 2024; 19:e0301039. PMID: 38568927; PMCID: PMC10990203; DOI: 10.1371/journal.pone.0301039.
Abstract
This paper investigates models of working memory in which memory traces evolve according to stochastic attractor dynamics. These models have previously been shown to account for response biases that are manifest across multiple trials of a visual working memory task. Here we adapt this approach by making the stable fixed points correspond to the multiple items to be remembered within a single trial, in accordance with standard dynamical perspectives of memory, and find evidence that this multi-item model can provide a better account of behavioural data from continuous-report tasks. Additionally, the multi-item model proposes a simple mechanism by which swap errors arise: memory traces diffuse away from their initial state and are captured by the attractors of other items. Swap-error curves reveal the evolution of this process as a continuous function of time throughout the maintenance interval and can be inferred from experimental data. Consistent with previous findings, we find that empirical memory performance is not well characterised by a purely diffusive process but rather by a stochastic process that also embodies error-correcting dynamics.
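The swap-error mechanism described above, a memory trace diffusing away from its item and being captured by another item's attractor, can be sketched with a one-dimensional Langevin simulation on a circular feature space. This is a generic illustration rather than the paper's fitted model; the attractor positions, drift strength, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

items = np.array([0.0, 2.0, 4.0])          # remembered feature values (radians); arbitrary
kappa, sigma, dt = 0.5, 0.35, 0.01          # drift strength, noise level, time step (arbitrary)

def circ_dist(a, b):
    return np.angle(np.exp(1j * (a - b)))   # signed circular difference

def simulate_trial(target, delay=2.0):
    """Memory trace drifting toward the nearest item attractor while diffusing."""
    x = target
    for _ in range(int(delay / dt)):
        drift = -kappa * np.sum(np.sin(x - items))        # an attractor at each item location
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return items[np.argmin(np.abs(circ_dist(x, items)))]  # item whose basin captured the trace

recalled = np.array([simulate_trial(items[0]) for _ in range(2000)])
print(f"swap-error rate after a 2 s delay: {np.mean(recalled != items[0]):.2%}")
```

Running the same simulation with longer delays gives the trace more time to cross a basin boundary, so the swap-error rate grows with the maintenance interval, which is the kind of time-resolved swap-error curve discussed above.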
Affiliation(s)
- W. Penny
- School of Psychology, University of East Anglia, Norwich, United Kingdom
5
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang (Camb) 2024; 5:225-247. PMID: 38645618; PMCID: PMC11025648; DOI: 10.1162/nol_a_00133.
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
6
Cotteret M, Greatorex H, Ziegler M, Chicca E. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput 2024; 36:549-595. PMID: 38457766; DOI: 10.1162/neco_a_01638.
Abstract
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of the implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors. We show that the model is robust to imprecise and noisy weights, and is therefore a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs could exist as a distributed computational primitive in biological neural networks.
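The vector-symbolic side of the construction, with states and stimuli as high-dimensional random vectors and transitions retrieved by binding followed by clean-up, can be illustrated without the attractor network itself. The toy turnstile FSM below is a hedged sketch of generic hyperdimensional computing, not the paper's construction rules or capacity analysis; the dimensionality and the FSM are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2000                                             # hypervector dimensionality (arbitrary)

def hv():
    return rng.choice([-1, 1], size=D)               # random dense bipolar hypervector

# A toy two-state FSM (a turnstile): states and stimuli as random hypervectors
states = {"locked": hv(), "unlocked": hv()}
stimuli = {"coin": hv(), "push": hv()}
transitions = [("locked", "coin", "unlocked"), ("locked", "push", "locked"),
               ("unlocked", "push", "locked"), ("unlocked", "coin", "unlocked")]

# Superimpose bound (state * stimulus * next-state) triples into one memory vector
M = np.zeros(D)
for s, a, s_next in transitions:
    M += states[s] * stimuli[a] * states[s_next]     # elementwise binding (self-inverse)

def step(state_name, stim_name):
    noisy_next = M * states[state_name] * stimuli[stim_name]  # unbind current state and stimulus
    sims = {name: noisy_next @ vec for name, vec in states.items()}
    return max(sims, key=sims.get)                   # clean-up to the nearest stored state

state = "locked"
for stim in ["push", "coin", "push", "push"]:
    state = step(state, stim)
    print(stim, "->", state)                         # expected: locked, unlocked, locked, locked
```

In the paper, the clean-up step is what the attractor network's dynamics provide, so each retrieved next state is itself a stable pattern of the network rather than an explicit dictionary lookup.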
Affiliation(s)
- Madison Cotteret
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Hugh Greatorex
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Martin Ziegler
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Elisabetta Chicca
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
7
Yoder L. Neural flip-flops I: Short-term memory. PLoS One 2024; 19:e0300534. PMID: 38489250; PMCID: PMC10942071; DOI: 10.1371/journal.pone.0300534.
Abstract
The networks proposed here show how neurons can be connected to form flip-flops, the basic building blocks in sequential logic systems. The novel neural flip-flops (NFFs) are explicit, dynamic, and can generate known phenomena of short-term memory. For each network design, all neurons, connections, and types of synapses are shown explicitly. The neurons' operation depends only on explicitly stated, minimal properties of excitation and inhibition. This operation is dynamic in the sense that the level of neuron activity is the only cellular change, making the NFFs' operation consistent with the speed of most brain functions. Memory tests have shown that certain neurons fire continuously at a high frequency while information is held in short-term memory. These neurons exhibit seven characteristics associated with memory formation, retention, retrieval, termination, and errors. One of the neurons in each of the NFFs produces all of the characteristics. This neuron and a second neighboring neuron together predict eight unknown phenomena. These predictions can be tested by the same methods that led to the discovery of the first seven phenomena. NFFs, together with a decoder from a previous paper, suggest a resolution to the longstanding controversy of whether short-term memory depends on neurons firing persistently or in brief, coordinated bursts. Two novel NFFs are composed of two and four neurons. Their designs follow directly from a standard electronic flip-flop design by moving each negation symbol from one end of the connection to the other. This does not affect the logic of the network, but it changes the logic of each component to a logic function that can be implemented by a single neuron. This transformation is reversible and is apparently new to engineering as well as neuroscience.
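As a generic point of comparison rather than the paper's specific two- and four-neuron NFF designs, the sketch below shows the textbook way two rate neurons with self-excitation and mutual inhibition can latch one bit: a brief Set pulse leaves one neuron firing persistently, and a brief Reset pulse hands the activity to the other. All weights, time constants, and pulse amplitudes are arbitrary.

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-10 * (x - 0.5)))      # steep sigmoid activation

dt, tau = 0.01, 0.1
w_self, w_inh = 1.2, 1.5                               # self-excitation and cross-inhibition (arbitrary)

def run(r, set_pulse, reset_pulse, steps=300):
    """Simulate the two-unit latch; inputs are applied during the first 50 steps only."""
    r = r.copy()
    for k in range(steps):
        I = np.array([set_pulse, reset_pulse]) * (1.0 if k < 50 else 0.0)
        drive = w_self * r - w_inh * r[::-1] + I       # each unit inhibits the other
        r += dt / tau * (-r + f(drive))
    return r

r = np.zeros(2)                                        # unit 0 active encodes "1", unit 1 active encodes "0"
r = run(r, 2.5, 0.0); print("after SET  :", np.round(r, 2))   # unit 0 becomes persistently active
r = run(r, 0.0, 0.0); print("after hold :", np.round(r, 2))   # the state persists without input
r = run(r, 0.0, 2.5); print("after RESET:", np.round(r, 2))   # unit 1 takes over; the bit is cleared
```

The persistent high-rate firing during maintenance and the set/reset behavior mirror the short-term-memory phenomena the abstract attributes to NFFs, although the paper derives its circuits from electronic flip-flop logic rather than from this generic cross-inhibition motif.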
Affiliation(s)
- Lane Yoder
- Department of Science and Mathematics, University of Hawaii, Honolulu, Hawaii, United States of America
8
Chang YJ, Chen YI, Yeh HC, Santacruz SR. Neurobiologically realistic neural network enables cross-scale modeling of neural dynamics. Sci Rep 2024; 14:5145. PMID: 38429297; PMCID: PMC10907713; DOI: 10.1038/s41598-024-54593-w.
Abstract
Fundamental principles underlying computation in multi-scale brain networks illustrate how multiple brain areas and their coordinated activity give rise to complex cognitive functions. Whereas brain activity has been studied at the micro- to meso-scale to reveal the connections between the dynamical patterns and the behaviors, investigations of neural population dynamics are mainly limited to single-scale analysis. Our goal is to develop a cross-scale dynamical model for the collective activity of neuronal populations. Here we introduce a bio-inspired deep learning approach, termed NeuroBondGraph Network (NBGNet), to capture cross-scale dynamics that can infer and map the neural data from multiple scales. Our model not only exhibits more than an 11-fold improvement in reconstruction accuracy, but also predicts synchronous neural activity and preserves correlated low-dimensional latent dynamics. We also show that the NBGNet robustly predicts held-out data across a long time scale (2 weeks) without retraining. We further validate the effective connectivity defined from our model by demonstrating that neural connectivity during motor behaviour agrees with the established neuroanatomical hierarchy of motor control in the literature. The NBGNet approach opens the door to revealing a comprehensive understanding of brain computation, where network mechanisms of multi-scale activity are critical.
Affiliation(s)
- Yin-Jui Chang
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Yuan-I Chen
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Hsin-Chih Yeh
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Texas Materials Institute, The University of Texas at Austin, Austin, TX, USA
- Samantha R Santacruz
- Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA.
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA.
- Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA.
9
Zhang T, Di Carlo D, Lim CT, Zhou T, Tian G, Tang T, Shen AQ, Li W, Li M, Yang Y, Goda K, Yan R, Lei C, Hosokawa Y, Yalikun Y. Passive microfluidic devices for cell separation. Biotechnol Adv 2024; 71:108317. PMID: 38220118; DOI: 10.1016/j.biotechadv.2024.108317.
Abstract
The separation of specific cell populations is instrumental in gaining insights into cellular processes, elucidating disease mechanisms, and advancing applications in tissue engineering, regenerative medicine, diagnostics, and cell therapies. Microfluidic methods for cell separation have propelled the field forward, benefitting from miniaturization, advanced fabrication technologies, a profound understanding of fluid dynamics governing particle separation mechanisms, and a surge in interdisciplinary investigations focused on diverse applications. Cell separation methodologies can be categorized according to their underlying separation mechanisms. Passive microfluidic separation systems rely on channel structures and fluidic rheology, obviating the necessity for external force fields to facilitate label-free cell separation. These passive approaches offer a compelling combination of cost-effectiveness and scalability when compared to active methods that depend on external fields to manipulate cells. This review delves into the extensive utilization of passive microfluidic techniques for cell separation, encompassing various strategies such as filtration, sedimentation, adhesion-based techniques, pinched flow fractionation (PFF), deterministic lateral displacement (DLD), inertial microfluidics, hydrophoresis, viscoelastic microfluidics, and hybrid microfluidics. Besides, the review provides an in-depth discussion concerning cell types, separation markers, and the commercialization of these technologies. Subsequently, it outlines the current challenges faced in the field and presents a forward-looking perspective on potential future developments. This work hopes to aid in facilitating the dissemination of knowledge in cell separation, guiding future research, and informing practical applications across diverse scientific disciplines.
Affiliation(s)
- Tianlong Zhang
- College of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Dino Di Carlo
- Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
- Chwee Teck Lim
- Department of Biomedical Engineering, National University of Singapore, Singapore 117583, Singapore
- Tianyuan Zhou
- College of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Guizhong Tian
- College of Mechanical Engineering, Jiangsu University of Science and Technology, Zhenjiang 212100, China.
- Tao Tang
- Department of Biomedical Engineering, National University of Singapore, Singapore 117583, Singapore
- Amy Q Shen
- Micro/Bio/Nanofluidics Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan
- Weihua Li
- School of Mechanical, Materials, Mechatronic and Biomedical Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
- Ming Li
- School of Mechanical and Manufacturing Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Yang Yang
- Institute of Deep-Sea Science and Engineering, Chinese Academy of Sciences, Sanya, Hainan 572000, China
- Keisuke Goda
- Department of Bioengineering, University of California, Los Angeles, CA 90095, USA; Department of Chemistry, University of Tokyo, Tokyo 113-0033, Japan; The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Ruopeng Yan
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Cheng Lei
- The Institute of Technological Sciences, Wuhan University, Wuhan 430072, China
- Yoichiroh Hosokawa
- Division of Materials Science, Nara Institute of Science and Technology, Nara 630-0192, Japan
- Yaxiaer Yalikun
- Division of Materials Science, Nara Institute of Science and Technology, Nara 630-0192, Japan.
10
Pang R, Baker C, Murthy M, Pillow J. Inferring neural dynamics of memory during naturalistic social communication. bioRxiv [Preprint] 2024:2024.01.26.577404. PMID: 38328156; PMCID: PMC10849655; DOI: 10.1101/2024.01.26.577404.
Abstract
Memory processes in complex behaviors like social communication require forming representations of the past that grow with time. The neural mechanisms that support such continually growing memory remain unknown. We address this gap in the context of fly courtship, a natural social behavior involving the production and perception of long, complex song sequences. To study female memory for male song history in unrestrained courtship, we present 'Natural Continuation' (NC)-a general, simulation-based model comparison procedure to evaluate candidate neural codes for complex stimuli using naturalistic behavioral data. Applying NC to fly courtship revealed strong evidence for an adaptive population mechanism for how female auditory neural dynamics could convert long song histories into a rich mnemonic format. Song temporal patterning is continually transformed by heterogeneous nonlinear adaptation dynamics, then integrated into persistent activity, enabling common neural mechanisms to retain continuously unfolding information over long periods and yielding state-of-the-art predictions of female courtship behavior. At a population level this coding model produces multi-dimensional advection-diffusion-like responses that separate songs over a continuum of timescales and can be linearly transformed into flexible output signals, illustrating its potential to create a generic, scalable mnemonic format for extended input signals poised to drive complex behavioral responses. This work thus shows how naturalistic behavior can directly inform neural population coding models, revealing here a novel process for memory formation.
Affiliation(s)
- Rich Pang
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Center for the Physics of Biological Function, Princeton, NJ and New York, NY, USA
- Christa Baker
- Princeton Neuroscience Institute, Princeton, NJ, USA
- Present address: Department of Biological Sciences, North Carolina State University, Raleigh, NC, USA
- Mala Murthy
- Princeton Neuroscience Institute, Princeton, NJ, USA
11
Karbowski J, Urban P. Cooperativity, Information Gain, and Energy Cost During Early LTP in Dendritic Spines. Neural Comput 2024; 36:271-311. PMID: 38101326; DOI: 10.1162/neco_a_01632.
Abstract
We investigate the mutual relationship between information and energy during the early phase of LTP induction and maintenance in a large-scale system of mutually coupled dendritic spines, with discrete internal states and probabilistic dynamics, within the framework of nonequilibrium stochastic thermodynamics. In order to analyze this computationally intractable stochastic multidimensional system, we introduce a pair approximation, which allows us to reduce the spine dynamics to a lower-dimensional, manageable system of closed equations. We find that the rates of information gain and energy use attain their maximal values during an initial period of LTP (i.e., during stimulation), and after that they recover to their baseline low values, as opposed to a memory trace that lasts much longer. This suggests that the learning phase is much more energy demanding than the memory phase. We show that positive correlations between neighboring spines increase both the duration of the memory trace and the energy cost during LTP, but the memory time per invested energy increases dramatically for very strong, positive synaptic cooperativity, suggesting a beneficial role of synaptic clustering on memory duration. In contrast, information gain after LTP is largest for negative correlations, and the energy efficiency of that information generally declines with increasing synaptic cooperativity. We also find that dendritic spines can use sparse representations for encoding long-term information, as both the energetic and structural efficiencies of retained information and its lifetime exhibit maxima for low fractions of stimulated synapses during LTP. Moreover, we find that such efficiencies drop significantly with an increasing number of spines. In general, our stochastic thermodynamics approach provides a unifying framework for studying, from first principles, information encoding and its energy cost during learning and memory in stochastic systems of interacting synapses.
Affiliation(s)
- Jan Karbowski
- Institute of Applied Mathematics and Mechanics, University of Warsaw, Warsaw 02-097, Poland
- Paulina Urban
- College of Inter-Faculty Individual Studies in Mathematics and Natural Sciences and Laboratory of Functional and Structural Genomics, Centre of New Technologies, University of Warsaw, Warsaw 02-097, Poland
- Laboratory of Databases and Business Analytics, National Information Processing Institute, National Research Institute, Warsaw 00-608, Poland
12
Bhattacharyya S, Bhattarai N, Pfannenstiel DM, Wilkins B, Singh A, Harshey RM. A heritable iron memory enables decision-making in Escherichia coli. Proc Natl Acad Sci U S A 2023; 120:e2309082120. PMID: 37988472; PMCID: PMC10691332; DOI: 10.1073/pnas.2309082120.
Abstract
The importance of memory in bacterial decision-making is relatively unexplored. We show here that a prior experience of swarming is remembered when Escherichia coli encounters a new surface, improving its future swarming efficiency. We conducted >10,000 single-cell swarm assays to discover that cells store memory in the form of cellular iron levels. This "iron" memory preexists in planktonic cells, but the act of swarming reinforces it. A cell with low iron initiates swarming early and is a better swarmer, while the opposite is true for a cell with high iron. The swarming potential of a mother cell, which tracks with its iron memory, is passed down to its fourth-generation daughter cells. This memory is naturally lost by the seventh generation, but artificially manipulating iron levels allows it to persist much longer. A mathematical model with a time-delay component faithfully recreates the observed dynamic interconversions between different swarming potentials. We demonstrate that cellular iron levels also track with biofilm formation and antibiotic tolerance, suggesting that iron memory may impact other physiologies.
Affiliation(s)
- Souvik Bhattacharyya
- Department of Molecular Biosciences, University of Texas at Austin, Austin, TX 78712
- LaMontagne Center for Infectious Diseases, University of Texas at Austin, Austin, TX 78712
- Nabin Bhattarai
- Department of Molecular Biosciences, University of Texas at Austin, Austin, TX 78712
- LaMontagne Center for Infectious Diseases, University of Texas at Austin, Austin, TX 78712
- Dylan M. Pfannenstiel
- Department of Molecular Biosciences, University of Texas at Austin, Austin, TX 78712
- LaMontagne Center for Infectious Diseases, University of Texas at Austin, Austin, TX 78712
- Brady Wilkins
- Department of Molecular Biosciences, University of Texas at Austin, Austin, TX 78712
- LaMontagne Center for Infectious Diseases, University of Texas at Austin, Austin, TX 78712
- Abhyudai Singh
- Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716
- Rasika M. Harshey
- Department of Molecular Biosciences, University of Texas at Austin, Austin, TX 78712
- LaMontagne Center for Infectious Diseases, University of Texas at Austin, Austin, TX 78712
13
Apostel A, Panichello M, Buschman TJ, Rose J. Corvids optimize working memory by categorizing continuous stimuli. Commun Biol 2023; 6:1122. PMID: 37932494; PMCID: PMC10628182; DOI: 10.1038/s42003-023-05442-5.
Abstract
Working memory (WM) is a crucial element of the higher cognition of primates and corvid songbirds. Despite its importance, WM has a severely limited capacity and is vulnerable to noise. In primates, attractor dynamics mitigate the effect of noise by discretizing continuous information. Yet, it remains unclear whether similar dynamics are seen in avian brains. Here, we show that jackdaws (Corvus monedula) exhibit behavioral biases similar to those of humans; memories are less precise and more biased as memory demands increase. Model-based analysis reveals that discrete attractors are evenly spread across the stimulus space. Altogether, our comparative approach suggests that attractor dynamics in primates and corvids mitigate the effect of noise by systematically drifting towards specific attractors. By demonstrating this effect in an evolutionarily distant species, our results strengthen the case for attractor dynamics as a general, adaptive biological principle for the efficient use of WM.
Affiliation(s)
- Aylin Apostel
- Neural Basis of Learning, Department of Psychology, Ruhr University Bochum, Bochum, Germany.
- Timothy J Buschman
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ, USA
- Jonas Rose
- Neural Basis of Learning, Department of Psychology, Ruhr University Bochum, Bochum, Germany.
14
Brennan C, Proekt A. Attractor dynamics with activity-dependent plasticity capture human working memory across time scales. Commun Psychol 2023; 1:28. PMID: 38764555; PMCID: PMC11101211; DOI: 10.1038/s44271-023-00027-8.
Abstract
Most cognitive functions require the brain to maintain immediately preceding stimuli in working memory. Here, using a human working memory task with multiple delays, we test the hypothesis that working memories are stored in a discrete set of stable neuronal activity configurations called attractors. We show that while discrete attractor dynamics can approximate working memory on a single time scale, they fail to generalize across multiple timescales. This failure occurs because at longer delay intervals the responses contain more information about the stimuli than can be stored in a discrete attractor model. We present a modeling approach that combines discrete attractor dynamics with activity-dependent plasticity. This model successfully generalizes across all timescales and correctly predicts intertrial interactions. Thus, our findings suggest that discrete attractor dynamics are insufficient to model working memory and that activity-dependent plasticity improves durability of information storage in attractor systems.
Affiliation(s)
- Connor Brennan
- University of Pennsylvania, 3160 Chestnut St., Philadelphia, PA, USA
- Alex Proekt
- University of Pennsylvania, 3160 Chestnut St., Philadelphia, PA, USA
15
Masuda FK, Aery Jones EA, Sun Y, Giocomo LM. Ketamine evoked disruption of entorhinal and hippocampal spatial maps. Nat Commun 2023; 14:6285. PMID: 37805575; PMCID: PMC10560293; DOI: 10.1038/s41467-023-41750-4.
Abstract
Ketamine, a rapid-acting anesthetic and acute antidepressant, carries undesirable spatial cognition side effects including out-of-body experiences and spatial memory impairments. The neural substrates that underlie these alterations in spatial cognition, however, remain incompletely understood. Here, we used electrophysiology and calcium imaging to examine ketamine's impacts on the medial entorhinal cortex and hippocampus, which contain neurons that encode an animal's spatial position, as mice navigated virtual reality and real world environments. Ketamine acutely increased firing rates, degraded cell-pair temporal firing-rate relationships, and altered oscillations, leading to longer-term remapping of spatial representations. In the reciprocally connected hippocampus, the activity of neurons that encode the position of the animal was suppressed after ketamine administration. Together, these findings demonstrate ketamine-induced dysfunction of the MEC-hippocampal circuit at the single-cell, local-circuit population, and network levels, connecting previously demonstrated physiological effects of ketamine on spatial cognition to alterations in the spatial navigation circuit.
Affiliation(s)
- Francis Kei Masuda
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Emily A Aery Jones
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Yanjun Sun
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305, USA.
16
Champion KP, Gozel O, Lankow BS, Ermentrout GB, Goldman MS. An oscillatory mechanism for multi-level storage in short-term memory. Commun Biol 2023; 6:829. PMID: 37563448; PMCID: PMC10415352; DOI: 10.1038/s42003-023-05200-7.
Abstract
Oscillatory activity is commonly observed during the maintenance of information in short-term memory, but its role remains unclear. Non-oscillatory models of short-term memory storage are able to encode stimulus identity through their spatial patterns of activity, but are typically limited to either an all-or-none representation of stimulus amplitude or exhibit a biologically implausible exact-tuning condition. Here we demonstrate a simple mechanism by which oscillatory input enables a circuit to generate persistent or sequential activity that encodes information not only in the spatial pattern of activity, but also in the amplitude of activity. This is accomplished through a phase-locking phenomenon that permits many different amplitudes of persistent activity to be stored without requiring exact tuning of model parameters. Altogether, this work proposes a class of models for the storage of information in working memory, a potential role for brain oscillations, and a dynamical mechanism for maintaining multi-stable neural representations.
Affiliation(s)
- Kathleen P Champion
- Department of Applied Mathematics, University of Washington, Seattle, WA, 98195, USA
- Olivia Gozel
- Departments of Neurobiology and Statistics, University of Chicago, Chicago, IL, 60637, USA
- Grossman Center for Quantitative Biology and Human Behavior, University of Chicago, Chicago, IL, 60637, USA
- Benjamin S Lankow
- Center for Neuroscience, University of California, Davis, Davis, CA, 95618, USA
- G Bard Ermentrout
- Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Mark S Goldman
- Center for Neuroscience, University of California, Davis, Davis, CA, 95618, USA.
- Department of Neurobiology, Physiology, and Behavior, and Department of Ophthalmology and Vision Science, University of California, Davis, Davis, CA, 95618, USA.
17
Gebicke-Haerter PJ. The computational power of the human brain. Front Cell Neurosci 2023; 17:1220030. PMID: 37608987; PMCID: PMC10441807; DOI: 10.3389/fncel.2023.1220030.
Abstract
At the end of the 20th century, analog systems in computer science were widely replaced by digital systems due to their higher computing power. Nevertheless, the question remains intriguing: is the brain analog or digital? Initially, the latter was favored, considering the brain as a Turing machine that works like a digital computer. More recently, however, digital and analog processes have been combined to implant human behavior in robots, endowing them with artificial intelligence (AI). Therefore, we think it is timely to compare mathematical models with the biology of computation in the brain. To this end, digital and analog processes clearly identified in cellular and molecular interactions in the central nervous system are highlighted. Beyond that, we try to pinpoint reasons that distinguish in silico computation from salient features of biological computation. First, genuinely analog information processing has been observed in electrical synapses and through gap junctions, the latter both in neurons and astrocytes. Apparently opposed to this, neuronal action potentials (APs) or spikes represent clearly digital events, like the yes/no or 1/0 of a Turing machine. However, spikes are rarely uniform; they can vary in amplitude and width, which has significant, differential effects on transmitter release at the presynaptic terminal, even though the quantal (vesicular) release itself is digital. Conversely, at the dendritic site of the postsynaptic neuron, there are numerous analog events of computation. Moreover, synaptic transmission of information is not only neuronal but is heavily influenced by astrocytes, which tightly ensheath the majority of synapses in the brain (the tripartite synapse). At this point, LTP and LTD, which modify synaptic plasticity and are believed to induce short- and long-term memory processes including consolidation (roughly equivalent to RAM and ROM in electronic devices), have to be discussed. Present knowledge of how the brain stores and retrieves memories includes a variety of options (e.g., neuronal network oscillations, engram cells, the astrocytic syncytium). Epigenetic features also play crucial roles in memory formation and consolidation, which necessarily points to molecular events such as gene transcription and translation. In conclusion, brain computation is not only digital or analog, or a combination of both, but encompasses features in parallel and of higher orders of complexity.
Affiliation(s)
- Peter J. Gebicke-Haerter
- Institute of Psychopharmacology, Central Institute of Mental Health, Faculty of Medicine, University of Heidelberg, Mannheim, Germany
18
Li KT, Ji D, Zhou C. Memory rescue and learning in synaptic impaired neuronal circuits. iScience 2023; 26:106931. PMID: 37534172; PMCID: PMC10391582; DOI: 10.1016/j.isci.2023.106931.
Abstract
Neuronal impairment is a characteristic of Alzheimer's disease (AD), but its effect on the neural activity dynamics underlying memory deficits is unclear. Here, we studied the effects of synaptic impairment on neural activities associated with memory recall, memory rescue, and learning a new memory, in an integrate-and-fire neuronal network. Our results showed that reducing connectivity decreases the neuronal synchronization of memory neurons and impairs memory recall performance. Although slow-gamma stimulation rescued memory recall and slow-gamma oscillations, the rescue caused a side effect of activating mixed memories. During the learning of a new memory, reducing connectivity caused impairment in storing the new memory, but did not affect previously stored memories. We also explored the effects of other types of impairments, including neuronal loss and excitation-inhibition imbalance, and rescue by a general increase of excitability. Our results reveal potential computational mechanisms underlying the memory deficits caused by impairment in AD.
Affiliation(s)
- Kwan Tung Li
- Department of Physics, Centre for Nonlinear Studies, Beijing–Hong Kong–Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
- Research Center for Augmented Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, Hangzhou 311100, China
- Daoyun Ji
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Department of Molecular and Cellular Biology, Baylor College of Medicine, Houston, TX 77030, USA
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies, Beijing–Hong Kong–Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China
19
Bhattacharyya S, Bhattarai N, Pfannenstiel DM, Wilkins B, Singh A, Harshey RM. Iron Memory in E. coli. bioRxiv [Preprint] 2023:2023.05.19.541523. PMID: 37609133; PMCID: PMC10441380; DOI: 10.1101/2023.05.19.541523.
Abstract
The importance of memory in bacterial decision-making is relatively unexplored. We show here that a prior experience of swarming is remembered when E. coli encounters a new surface, improving its future swarming efficiency. We conducted >10,000 single-cell swarm assays to discover that cells store memory in the form of cellular iron levels. This memory pre-exists in planktonic cells, but the act of swarming reinforces it. A cell with low iron initiates swarming early and is a better swarmer, while the opposite is true for a cell with high iron. The swarming potential of a mother cell, whether low or high, is passed down to its fourth-generation daughter cells. This memory is naturally lost by the seventh generation, but artificially manipulating iron levels allows it to persist much longer. A mathematical model with a time-delay component faithfully recreates the observed dynamic interconversions between different swarming potentials. We also demonstrate that iron memory can integrate multiple stimuli, impacting other bacterial behaviors such as biofilm formation and antibiotic tolerance.
Affiliation(s)
- Souvik Bhattacharyya
- Department of Molecular Biosciences and LaMontagne Center for Infectious Diseases, University of Texas at Austin; Austin, TX 78712
- Nabin Bhattarai
- Department of Molecular Biosciences and LaMontagne Center for Infectious Diseases, University of Texas at Austin; Austin, TX 78712
- Dylan M. Pfannenstiel
- Department of Molecular Biosciences and LaMontagne Center for Infectious Diseases, University of Texas at Austin; Austin, TX 78712
- Brady Wilkins
- Department of Molecular Biosciences and LaMontagne Center for Infectious Diseases, University of Texas at Austin; Austin, TX 78712
- Abhyudai Singh
- Electrical & Computer Engineering, University of Delaware, Newark, DE 19716
- Rasika M. Harshey
- Department of Molecular Biosciences and LaMontagne Center for Infectious Diseases, University of Texas at Austin; Austin, TX 78712
20
Naik S, Dehaene-Lambertz G, Battaglia D. Repairing Artifacts in Neural Activity Recordings Using Low-Rank Matrix Estimation. Sensors (Basel) 2023; 23:4847. PMID: 37430760; DOI: 10.3390/s23104847.
Abstract
Electrophysiology recordings are frequently affected by artifacts (e.g., subject motion or eye movements), which reduces the number of available trials and affects the statistical power. When artifacts are unavoidable and data are scarce, signal reconstruction algorithms that allow for the retention of sufficient trials become crucial. Here, we present one such algorithm that makes use of large spatiotemporal correlations in neural signals and solves the low-rank matrix completion problem, to fix artifactual entries. The method uses a gradient descent algorithm in lower dimensions to learn the missing entries and provide faithful reconstruction of signals. We carried out numerical simulations to benchmark the method and estimate optimal hyperparameters for actual EEG data. The fidelity of reconstruction was assessed by detecting event-related potentials (ERP) from a highly artifacted EEG time series from human infants. The proposed method significantly improved the standardized error of the mean in ERP group analysis and a between-trial variability analysis compared to a state-of-the-art interpolation technique. This improvement increased the statistical power and revealed significant effects that would have been deemed insignificant without reconstruction. The method can be applied to any time-continuous neural signal where artifacts are sparse and spread out across epochs and channels, increasing data retention and statistical power.
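The core idea, fitting a low-rank factorization to the clean entries by gradient descent and reading the artifacted entries off the reconstruction, can be sketched on synthetic data as below. This is a generic matrix-completion sketch, not the authors' algorithm, hyperparameters, or EEG pipeline; the rank, learning rate, and artifact fraction are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "recording": channels x time, truly low-rank, with 20% artifacted samples
n_ch, n_t, rank = 32, 500, 3
clean = rng.normal(size=(n_ch, rank)) @ rng.normal(size=(rank, n_t))
mask = rng.random((n_ch, n_t)) > 0.2            # True where the sample is clean
observed = np.where(mask, clean, 50.0)          # artifacts appear as large corrupted values

# Gradient descent on factors U, V so that U @ V matches the data on clean entries only
U = 0.1 * rng.normal(size=(n_ch, rank))
V = 0.1 * rng.normal(size=(rank, n_t))
lr = 1e-3
for _ in range(2000):
    residual = (U @ V - observed) * mask        # errors are counted only where data are clean
    U -= lr * residual @ V.T
    V -= lr * U.T @ residual

repaired = U @ V                                # artifacted entries are read off the low-rank fit
rel_err = np.linalg.norm((repaired - clean)[~mask]) / np.linalg.norm(clean[~mask])
print(f"relative reconstruction error on artifacted entries: {rel_err:.3f}")
```

The same logic carries over to epoched EEG once trials and channels are unfolded into a matrix, provided the artifacts are sparse enough that the clean entries still constrain the low-rank structure, which is the regime the abstract describes.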
Affiliation(s)
- Shruti Naik
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, Centre National de la Recherche Scientifique (CNRS), Institut National de la Santé et de la Recherche Médicale (INSERM), CEA, Université Paris-Saclay, NeuroSpin Center, F-91190 Gif-sur-Yvette, France
- Demian Battaglia
- Institut de Neurosciences des Systèmes, U1106, Centre National de la Recherche Scientifique (CNRS) Aix-Marseille Université, F-13005 Marseille, France
- Institute for Advanced Studies, University of Strasbourg (USIAS), F-67000 Strasbourg, France
21
Masuda FK, Sun Y, Aery Jones EA, Giocomo LM. Ketamine evoked disruption of entorhinal and hippocampal spatial maps. bioRxiv [Preprint] 2023:2023.02.05.527227. PMID: 36798242; PMCID: PMC9934572; DOI: 10.1101/2023.02.05.527227.
Abstract
Ketamine, a rapid-acting anesthetic and acute antidepressant, carries undesirable spatial cognition side effects including out-of-body experiences and spatial memory impairments. The neural substrates that underlie these alterations in spatial cognition, however, remain incompletely understood. Here, we used electrophysiology and calcium imaging to examine ketamine's impacts on the medial entorhinal cortex and hippocampus, which contain neurons that encode an animal's spatial position, as mice navigated virtual reality and real world environments. Ketamine induced an acute disruption and long-term re-organization of entorhinal spatial representations. This acute ketamine-induced disruption reflected increased excitatory neuron firing rates and degradation of cell-pair temporal firing rate relationships. In the reciprocally connected hippocampus, the activity of neurons that encode the position of the animal was suppressed after ketamine administration. Together, these findings point to disruption in the spatial coding properties of the entorhinal-hippocampal circuit as a potential neural substrate for ketamine-induced changes in spatial cognition.
Affiliation(s)
- Francis Kei Masuda
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Yanjun Sun
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Emily A Aery Jones
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
- Lisa M Giocomo
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA 94305, USA
22
Lu L, Gao Z, Wei Z, Yi M. Working memory depends on the excitatory-inhibitory balance in neuron-astrocyte network. Chaos 2023; 33:013127. PMID: 36725632; DOI: 10.1063/5.0126890.
Abstract
Previous studies have shown that astrocytes are involved in information processing and working memory (WM) in the central nervous system. Here, a neuron-astrocyte network model with biological properties is built to study the effects of excitatory-inhibitory balance and neural network structure on WM tasks. It is found that the performance metrics of WM tasks are higher under the scale-free network than under other network structures, and that the WM task can be successfully completed when the proportion of excitatory neurons in the network exceeds 30%. There exists an optimal region of the proportion of excitatory neurons and the synaptic weight in which the memory performance metrics of the WM tasks are higher. The multi-item WM task shows that the spatial calcium patterns for different items overlap significantly in the astrocyte network, which is consistent with the formation of cognitive memory in the brain. Moreover, complex image tasks show that cued recall can significantly reduce systematic noise and maintain the stability of the WM tasks. These results may contribute to understanding the mechanisms of WM formation and provide some insight into the dynamic storage and recall of memory.
Affiliation(s)
- Lulu Lu
- School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, China
- Zhuoheng Gao
- School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, China
- Zhouchao Wei
- School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, China
- Ming Yi
- School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, China
23
Brennan C, Aggarwal A, Pei R, Sussillo D, Proekt A. One dimensional approximations of neuronal dynamics reveal computational strategy. PLoS Comput Biol 2023; 19:e1010784. PMID: 36607933; PMCID: PMC9821456; DOI: 10.1371/journal.pcbi.1010784.
Abstract
The relationship between neuronal activity and computations embodied by it remains an open question. We develop a novel methodology that condenses observed neuronal activity into a quantitatively accurate, simple, and interpretable model and validate it on diverse systems and scales from single neurons in C. elegans to fMRI in humans. The model treats neuronal activity as collections of interlocking 1-dimensional trajectories. Despite their simplicity, these models accurately predict future neuronal activity and future decisions made by human participants. Moreover, the structure formed by interconnected trajectories, a scaffold, is closely related to the computational strategy of the system. We use these scaffolds to compare the computational strategy of primates and artificial systems trained on the same task to identify specific conditions under which the artificial agent learns the same strategy as the primate. The computational strategy extracted using our methodology predicts specific errors on novel stimuli. These results show that our methodology is a powerful tool for studying the relationship between computation and neuronal activity across diverse systems.
Affiliation(s)
- Connor Brennan
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Adeeti Aggarwal
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Rui Pei
- Department of Psychology, Stanford University, Palo Alto, California, United States of America
- David Sussillo
- Stanford Neurosciences Institute, Stanford University, Palo Alto, California, United States of America
- Department of Electrical Engineering, Stanford University, Palo Alto, California, United States of America
| | - Alex Proekt
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| |
Collapse
|
24
|
Multiple forms of working memory emerge from synapse-astrocyte interactions in a neuron-glia network model. Proc Natl Acad Sci U S A 2022; 119:e2207912119. [PMID: 36256810 PMCID: PMC9618090 DOI: 10.1073/pnas.2207912119] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Persistent activity in populations of neurons, time-varying activity across a neural population, or activity-silent mechanisms carried out by hidden internal states of the neural population have been proposed as different mechanisms of working memory (WM). Whether these mechanisms could be mutually exclusive or occur in the same neuronal circuit remains, however, elusive, and so do their biophysical underpinnings. While WM is traditionally regarded to depend purely on neuronal mechanisms, cortical networks also include astrocytes that can modulate neural activity. We propose and investigate a network model that includes both neurons and glia and show that glia-synapse interactions can lead to multiple stable states of synaptic transmission. Depending on parameters, these interactions can lead in turn to distinct patterns of network activity that can serve as substrates for WM.
Collapse
|
25
|
Laing CR, Krauskopf B. Theta neuron subject to delayed feedback: a prototypical model for self-sustained pulsing. Proc Math Phys Eng Sci 2022. [DOI: 10.1098/rspa.2022.0292] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
We consider a single theta neuron with delayed self-feedback in the form of a Dirac delta function in time. Because the dynamics of a theta neuron on its own can be solved explicitly—it is either excitable or shows self-pulsations—we are able to derive algebraic expressions for the existence and stability of the periodic solutions that arise in the presence of feedback. These periodic solutions are characterized by one or more equally spaced pulses per delay interval, and there is an increasing amount of multistability with increasing delay time. We present a complete description of where these self-sustained oscillations can be found in parameter space; in particular, we derive explicit expressions for the loci of their saddle-node bifurcations. We conclude that the theta neuron with delayed self-feedback emerges as a prototypical model: it provides an analytical basis for understanding pulsating dynamics observed in other excitable systems subject to delayed self-coupling.
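A rough numerical sketch of such a model is given below, assuming Euler integration and approximating the Dirac-delta self-feedback by a one-time-step pulse; the parameter values are arbitrary and the authors' analytical treatment is not reproduced here:

```python
# Rough sketch: theta neuron with delayed self-feedback,
# dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * (I + kappa * delta(t - t_spike - tau)),
# with the delta approximated by a single-step rectangular pulse of area kappa.
import numpy as np
from collections import deque

I, kappa, tau = 0.05, 0.3, 5.0        # baseline drive, feedback strength, feedback delay
dt, t_end = 1e-3, 50.0
theta, spikes, pending = np.pi * 0.9, [], deque()

t = 0.0
while t < t_end:
    pulse = kappa / dt if pending and abs(pending[0] - t) < dt / 2 else 0.0
    if pulse:
        pending.popleft()
    theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * (I + pulse))
    if theta >= np.pi:                # spike: phase crosses pi, wrap to -pi
        theta -= 2 * np.pi
        spikes.append(t)
        pending.append(t + tau)       # schedule the delayed self-feedback pulse
    t += dt

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; mean ISI {isis.mean():.2f}" if len(spikes) > 1 else "no sustained pulsing")
```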
Collapse
Affiliation(s)
- Carlo R. Laing
- School of Natural and Computational Sciences, Massey University, Private Bag 102-904, North Shore Mail Centre, Auckland 0745, New Zealand
| | - Bernd Krauskopf
- Department of Mathematics, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
| |
Collapse
|
26
|
Capouskova K, Kringelbach ML, Deco G. Modes of cognition: Evidence from metastable brain dynamics. Neuroimage 2022; 260:119489. [PMID: 35882268 DOI: 10.1016/j.neuroimage.2022.119489] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 07/12/2022] [Accepted: 07/15/2022] [Indexed: 01/31/2023] Open
Abstract
Managing cognitive load depends on adequate resource allocation by the human brain through the engagement of metastable substates, which are large-scale functional networks that change over time. We employed a novel analysis method, deep autoencoder dynamical analysis (DADA), with 100 healthy adults selected from the Human Connectome Project (HCP) data set in rest and six cognitive tasks. The deep autoencoder of DADA described seven recurrent stochastic metastable substates from the functional connectome of BOLD phase coherence matrices. These substates were significantly differentiated in terms of their probability of appearance, time duration, and spatial attributes. We found that during different cognitive tasks, there was a higher probability of having more connected substates dominated by a high degree of connectivity in the thalamus. In addition, compared with those during tasks, resting brain dynamics have a lower level of predictability, indicating a more uniform distribution of metastability between substates, quantified by higher entropy. These novel findings provide empirical evidence for the philosophically motivated cognitive theory, suggesting on-line and off-line as two fundamentally distinct modes of cognition. On-line cognition refers to task-dependent engagement with the sensory input, while off-line cognition is a slower, environmentally detached mode engaged with decision and planning. Overall, the DADA framework provides a bridge between neuroscience and cognitive theory that can be further explored in the future.
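As a schematic of the pipeline (with k-means on vectorized phase-coherence matrices standing in for the deep autoencoder, and surrogate signals standing in for HCP BOLD data), the following Python sketch clusters instantaneous phase-coherence patterns into substates and computes their occupancy entropy:

```python
# Sketch under stated assumptions: k-means replaces the paper's deep autoencoder,
# and the signals are surrogates, not HCP BOLD time series.
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
T, R, K = 400, 10, 7                               # time points, regions, substates
bold = rng.normal(size=(T, R)).cumsum(axis=0)      # surrogate slow signals
phase = np.angle(hilbert(bold - bold.mean(0), axis=0))

feats = []
for t in range(T):
    coh = np.cos(phase[t][:, None] - phase[t][None, :])   # phase-coherence matrix
    feats.append(coh[np.triu_indices(R, k=1)])            # vectorize the upper triangle
labels = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(np.array(feats))

p = np.bincount(labels, minlength=K) / T                   # substate occupancy
entropy = -(p[p > 0] * np.log(p[p > 0])).sum()              # higher = less predictable
print("occupancy:", np.round(p, 2), "entropy:", round(float(entropy), 3))
```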
Collapse
Affiliation(s)
- Katerina Capouskova
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, Barcelona 08005, Spain.
| | - Morten L Kringelbach
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Ramon Trias Fargas 25-27, Barcelona 08005, Spain; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
| |
Collapse
|
27
|
Inagaki HK, Chen S, Daie K, Finkelstein A, Fontolan L, Romani S, Svoboda K. Neural Algorithms and Circuits for Motor Planning. Annu Rev Neurosci 2022; 45:249-271. [PMID: 35316610 DOI: 10.1146/annurev-neuro-092021-121730] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
Collapse
Affiliation(s)
| | - Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Kayvon Daie
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
| | - Arseny Finkelstein
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv-Yafo, Israel
| | - Lorenzo Fontolan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Sandro Romani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA
| | - Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, USA; Allen Institute for Neural Dynamics, Seattle, Washington, USA
| |
Collapse
|
28
|
Izadifar M. The Neurobiological Basis of the Conundrum of Self-continuity: A Hypothesis. Front Psychol 2022; 13:740542. [PMID: 35664197 PMCID: PMC9159515 DOI: 10.3389/fpsyg.2022.740542] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 03/11/2022] [Indexed: 01/19/2023] Open
Abstract
Life, whatever it is, is a temporal flux. Everything is doomed to change, often apparently beyond our awareness. My body appears totally different now, and so does my mind. I have gained new attitudes and new ambitions, and a substantial number of old ones have been discarded. Yet I am still the same person in an ongoing manner. Moreover, recent neuroscientific and psychological evidence has shown that our conscious perception happens as a series of discrete or bounded instants: it emerges in temporally scattered, gappy, and discrete forms. But if so, how does the brain preserve our self-continuity (or continuity of identity) in this gappy setting? How is it possible that, despite moment-to-moment changes in my appearance and mind, I still feel that I am that person? How can we deal with this second-by-second gap and resurrection in our existence, which leads to a foundation of wholeness and continuity of the self? How is continuity of self (the collective set of our connected experiences in the vessel of time), which results in a feeling that one's life has purpose and meaning, preserved? To answer these questions, the problem is approached from philosophical, psychological, and neuroscientific perspectives. I argue that the first and foremost fact lies in the temporal nature of identity. Equipped with these thoughts, this article hypothesizes that self-continuity is maintained according to two principles: the principle of reafference (corollary discharge) and the principle of a time theory. It is supposed that a precise temporal integration mechanism in the CNS, coupled to the outside world, provides this smooth, gap-free flow of the self. We often take the importance of self-continuity for granted, but it can be challenged by life transitions such as entering adulthood, retirement, senility, and emigration, by societal changes such as immigration and globalization, and, in much more unfortunate and extreme cases, by mental illnesses such as schizophrenia.
Collapse
Affiliation(s)
- Morteza Izadifar
- Institute of Medical Psychology and Human Science Center, Ludwig-Maximilian University Munich, Munich, Germany
| |
Collapse
|
29
|
Yang J, Cheng Z, Xiao G, Xu X, Wang Y, Ding H, Zhou D. Engineering design optimisation using reinforcement learning with episodic controllers. COGNITIVE COMPUTATION AND SYSTEMS 2022. [DOI: 10.1049/ccs2.12063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Affiliation(s)
- Jun Yang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Zhenbo Cheng
- College of Computer Science and Technology and College of Software Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Gang Xiao
- College of Computer Science and Technology and College of Software Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Xuesong Xu
- College of Computer Science and Technology and College of Software Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Yaming Wang
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Haonan Ding
- College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Diting Zhou
- Library Computing Center, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| |
Collapse
|
30
|
Danchin A, Fenton AA. From Analog to Digital Computing: Is Homo sapiens’ Brain on Its Way to Become a Turing Machine? Front Ecol Evol 2022. [DOI: 10.3389/fevo.2022.796413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
The abstract basis of modern computation is the formal description of a finite state machine, the Universal Turing Machine, based on manipulation of integers and logic symbols. In this contribution to the discourse on the computer-brain analogy, we discuss the extent to which analog computing, as performed by the mammalian brain, is like and unlike the digital computing of Universal Turing Machines. We begin with ordinary reality being a permanent dialog between continuous and discontinuous worlds. So it is with computing, which can be analog or digital, and is often mixed. The theory behind computers is essentially digital, but efficient simulations of phenomena can be performed by analog devices; indeed, any physical calculation requires implementation in the physical world and is therefore analog to some extent, despite being based on abstract logic and arithmetic. The mammalian brain, comprised of neuronal networks, functions as an analog device and has given rise to artificial neural networks that are implemented as digital algorithms but function as analog models would. Analog constructs compute with the implementation of a variety of feedback and feedforward loops. In contrast, digital algorithms allow the implementation of recursive processes that enable them to generate unparalleled emergent properties. We briefly illustrate how the cortical organization of neurons can integrate signals and make predictions analogically. While we conclude that brains are not digital computers, we speculate on the recent implementation of human writing in the brain as a possible digital path that slowly evolves the brain into a genuine (slow) Turing machine.
Collapse
|
31
|
Wei H, Jin X, Su Z. A Circuit Model for Working Memory Based on Hybrid Positive and Negative-Derivative Feedback Mechanism. Brain Sci 2022; 12:547. [PMID: 35624934 PMCID: PMC9139460 DOI: 10.3390/brainsci12050547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 03/28/2022] [Accepted: 04/22/2022] [Indexed: 12/10/2022] Open
Abstract
Working memory (WM) plays an important role in cognitive activity. The WM system is used to temporarily store information during learning and decision-making. WM is at work in many aspects of daily life, such as short-term memory for words, phone verification codes, and phone numbers. In young adults, studies have shown that a central memory store is limited to three to five meaningful items. Little is known about how WM functions at the microscopic neural level, but appropriate neural network computational models can help us gain a better understanding of it. In this study, we attempt to design a microscopic neural network model to explain the internal mechanism of WM. The performance of existing positive-feedback models depends on the parameters of a synapse. We use a negative-derivative feedback mechanism to counteract the drift in persistent activity, making the hybrid positive and negative-derivative feedback (HPNF) model more robust to common disturbances. To realize this WM mechanism at the neural circuit level, we construct two main neural networks based on the HPNF model: a memory-storage sub-network with positive feedback and negative-derivative feedback (composed of several sets of neurons, so we call it the "SET network", or "SET" for short) and a storage distribution network (SDN), designed by combining SETs, for memory-item storage and memory updating. The SET network is a self-sustaining mechanism for neural information that is robust to common disturbances; the SDN organizes storage distribution at the neural circuit level. The results show that our network can carry out the storage, association, updating, and forgetting of information at the level of neural circuits, and that it can work in different individuals with little change in parameters.
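A minimal single-population caricature of the hybrid feedback idea, not the paper's SET/SDN circuits, is sketched below: positive feedback sustains activity while a negative-derivative term slows the drift caused by a mistuned feedback gain (all parameter values are assumptions):

```python
# Minimal sketch, assuming a single rate unit:
# tau * dr/dt = -r + w_pos * r - w_der * dr/dt + input
# which rearranges to (tau + w_der) * dr/dt = (w_pos - 1) * r + input.
import numpy as np

def retained_fraction(w_pos, w_der, tau=0.1, dt=1e-3, t_end=10.0, stim_amp=1.0, stim_dur=0.5):
    steps = int(t_end / dt)
    r = np.zeros(steps)
    for k in range(1, steps):
        inp = stim_amp if k * dt < stim_dur else 0.0
        r[k] = r[k - 1] + dt * ((w_pos - 1.0) * r[k - 1] + inp) / (tau + w_der)
    off = int(stim_dur / dt)
    return r[-1] / r[off]            # fraction of loaded activity still held at 10 s

mistuned = 0.98                      # 2% error in the positive-feedback gain
print("positive feedback only :", round(retained_fraction(mistuned, w_der=0.0), 3))
print("hybrid (+ derivative)  :", round(retained_fraction(mistuned, w_der=5.0), 3))
```

With these illustrative numbers the derivative term lengthens the effective time constant from roughly 5 s to roughly 250 s, so the stored activity barely drifts over the 10 s delay.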
Collapse
Affiliation(s)
- Hui Wei
- Laboratory of Cognitive Model and Algorithm, Department of Computer Science, Fudan University, No. 825 Zhangheng Road, Shanghai 201203, China; (X.J.); (Z.S.)
- Shanghai Key Laboratory of Data Science, No. 220 Handan Road, Shanghai 200433, China
| | - Xiao Jin
- Laboratory of Cognitive Model and Algorithm, Department of Computer Science, Fudan University, No. 825 Zhangheng Road, Shanghai 201203, China; (X.J.); (Z.S.)
- Shanghai Key Laboratory of Data Science, No. 220 Handan Road, Shanghai 200433, China
| | - Zihao Su
- Laboratory of Cognitive Model and Algorithm, Department of Computer Science, Fudan University, No. 825 Zhangheng Road, Shanghai 201203, China; (X.J.); (Z.S.)
- Shanghai Key Laboratory of Data Science, No. 220 Handan Road, Shanghai 200433, China
| |
Collapse
|
32
|
Darshan R, Rivkind A. Learning to represent continuous variables in heterogeneous neural networks. Cell Rep 2022; 39:110612. [PMID: 35385721 DOI: 10.1016/j.celrep.2022.110612] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Revised: 02/08/2022] [Accepted: 03/11/2022] [Indexed: 12/13/2022] Open
Abstract
Animals must monitor continuous variables such as position or head direction. Manifold attractor networks-which enable a continuum of persistent neuronal states-provide a key framework to explain this monitoring ability. Neural networks with symmetric synaptic connectivity dominate this framework but are inconsistent with the diverse synaptic connectivity and neuronal representations observed in experiments. Here, we developed a theory for manifold attractors in trained neural networks, which approximates a continuum of persistent states, without assuming unrealistic symmetry. We exploit the theory to predict how asymmetries in the representation and heterogeneity in the connectivity affect the formation of the manifold via training, shape network response to stimulus, and govern mechanisms that possibly lead to destabilization of the manifold. Our work suggests that the functional properties of manifold attractors in the brain can be inferred from the overlooked asymmetries in connectivity and in the low-dimensional representation of the encoded variable.
Collapse
Affiliation(s)
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
| | | |
Collapse
|
33
|
Robson DN, Li JM. A dynamical systems view of neuroethology: Uncovering stateful computation in natural behaviors. Curr Opin Neurobiol 2022; 73:102517. [PMID: 35217311 DOI: 10.1016/j.conb.2022.01.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 01/06/2022] [Accepted: 01/11/2022] [Indexed: 11/03/2022]
Abstract
State-dependent computation is key to cognition in both biological and artificial systems. Alan Turing recognized the power of stateful computation when he created the Turing machine with theoretically infinite computational capacity in 1936. Independently, by 1950, ethologists such as Tinbergen and Lorenz also began to implicitly embed rudimentary forms of state-dependent computation to create qualitative models of internal drives and naturally occurring animal behaviors. Here, we reformulate core ethological concepts in explicitly dynamical systems terms for stateful computation. We examine, based on a wealth of recent neural data collected during complex innate behaviors across species, the neural dynamics that determine the temporal structure of internal states. We will also discuss the degree to which the brain can be hierarchically partitioned into nested dynamical systems and the need for a multi-dimensional state-space model of the neuromodulatory system that underlies motivational and affective states.
Collapse
Affiliation(s)
- Drew N Robson
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
| | - Jennifer M Li
- Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
| |
Collapse
|
34
|
Widloski J, Foster DJ. Flexible rerouting of hippocampal replay sequences around changing barriers in the absence of global place field remapping. Neuron 2022; 110:1547-1558.e8. [PMID: 35180390 PMCID: PMC9473153 DOI: 10.1016/j.neuron.2022.02.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 11/30/2021] [Accepted: 02/01/2022] [Indexed: 01/12/2023]
Abstract
Flexibility is a hallmark of memories that depend on the hippocampus. For navigating animals, flexibility is necessitated by environmental changes such as blocked paths and extinguished food sources. To better understand the neural basis of this flexibility, we recorded hippocampal replays in a spatial memory task where barriers as well as goals were moved between sessions to see whether replays could adapt to new spatial and reward contingencies. Strikingly, replays consistently depicted new goal-directed trajectories around each new barrier configuration and largely avoided barrier violations. Barrier-respecting replays were learned rapidly and did not rely on place cell remapping. These data distinguish sharply between place field responses, which were largely stable and remained tied to sensory cues, and replays, which changed flexibly to reflect the learned contingencies in the environment and suggest sequenced activations such as replay to be an important link between the hippocampus and flexible memory.
Collapse
Affiliation(s)
- John Widloski
- Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, CA 94720, USA
| | - David J Foster
- Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, CA 94720, USA.
| |
Collapse
|
35
|
Astrocytes mediate analogous memory in a multi-layer neuron–astrocyte network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06936-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Modeling the neuronal processes underlying short-term working memory remains the focus of many theoretical studies in neuroscience. In this paper, we propose a mathematical model of a spiking neural network (SNN) which simulates the way a fragment of information is maintained as a robust activity pattern for several seconds and the way it completely disappears if no other stimuli are fed to the system. Such short-term memory traces are preserved due to the activation of astrocytes accompanying the SNN. The astrocytes exhibit calcium transients at a time scale of seconds. These transients further modulate the efficiency of synaptic transmission and, hence, the firing rate of neighboring neurons at diverse timescales through gliotransmitter release. We demonstrate how such transients continuously encode frequencies of neuronal discharges and provide robust short-term storage of analogous information. This kind of short-term memory can store relevant information for seconds and then completely forget it to avoid overlapping with forthcoming patterns. The SNN is inter-connected with the astrocytic layer by local inter-cellular diffusive connections. The astrocytes are activated only when the neighboring neurons fire synchronously, e.g., when an information pattern is loaded. For illustration, we took grayscale photographs of people's faces where the shades of gray correspond to the level of applied current which stimulates the neurons. The astrocyte feedback modulates (facilitates) synaptic transmission by varying the frequency of neuronal firing. We show how arbitrary patterns can be loaded, then stored for a certain interval of time, and retrieved if the appropriate clue pattern is applied to the input.
Collapse
|
36
|
Aguilera M, Douchamps V, Battaglia D, Goutagny R. How Many Gammas? Redefining Hippocampal Theta-Gamma Dynamic During Spatial Learning. Front Behav Neurosci 2022; 16:811278. [PMID: 35177972 PMCID: PMC8843838 DOI: 10.3389/fnbeh.2022.811278] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 01/03/2022] [Indexed: 01/09/2023] Open
Abstract
The hippocampal formation is one of the brain systems in which the functional roles of coordinated oscillations in information representation and communication are better studied. Within this circuit, neuronal oscillations are conceived as a mechanism to precisely coordinate upstream and downstream neuronal ensembles, underlying dynamic exchange of information. Within a global reference framework provided by theta (θ) oscillations, different gamma-frequency (γ) carriers would temporally segregate information originating from different sources, thereby allowing networks to disambiguate convergent inputs. Two γ sub-bands were thus defined according to their frequency (slow γ, 30–80 Hz; medium γ, 60–120 Hz) and differential power distribution across CA1 dendritic layers. According to this prevalent model, layer-specific γ oscillations in CA1 would reliably identify the temporal dynamics of afferent inputs and may therefore aid in identifying specific memory processes (encoding for medium γ vs. retrieval for slow γ). However, this influential view, derived from time-averages of either specific γ sub-bands or different projection methods, might not capture the complexity of CA1 θ-γ interactions. Recent studies investigating γ oscillations at the θ cycle timescale have revealed a more dynamic and diverse landscape of θ-γ motifs, with many θ cycles containing multiple γ bouts of various frequencies. To properly capture the hippocampal oscillatory complexity, we have argued in this review that we should consider the entirety of the data and its multidimensional complexity. This will call for a revision of the actual model and will require the use of new tools allowing the description of individual γ bouts in their full complexity.
Collapse
Affiliation(s)
- Matthieu Aguilera
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
| | - Vincent Douchamps
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
| | - Demian Battaglia
- Institut de Neurosciences des Systèmes, CNRS, Aix-Marseille Université, Marseille, France
- University of Strasbourg Institute for Advanced Study (USIAS), Strasbourg, France
| | - Romain Goutagny
- Laboratoire de Neurosciences Cognitives et Adaptatives (LNCA), Faculté de Psychologie, Université de Strasbourg, Strasbourg, France
- *Correspondence: Romain Goutagny,
| |
Collapse
|
37
|
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. PHYSICAL REVIEW. X 2022; 12:011011. [PMID: 36545030 PMCID: PMC9762509 DOI: 10.1103/physrevx.12.011011] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interactions, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices to ML practitioners.
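The timescale-control point can be illustrated with a toy numpy simulation (not the paper's mean-field theory); here a single leak/update gate g scales the effective integration timescale of a random rate network, so a small g preserves an input pulse far longer:

```python
# Toy sketch: a scalar leak/update gate sets the effective timescale of a gated rate RNN.
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 300
J = rng.normal(0, 0.9 / np.sqrt(N), (N, N))        # recurrent weights below chaos onset

def simulate(gate, pulse_at=20):
    h = np.zeros(N)
    norms = []
    for t in range(T):
        u = rng.normal(size=N) if t == pulse_at else 0.0
        h = (1 - gate) * h + gate * np.tanh(J @ h + u)   # small gate ~ long timescale
        norms.append(np.linalg.norm(h))
    return np.array(norms)

fast, slow = simulate(gate=1.0), simulate(gate=0.05)
print(f"pulse memory after {T} steps: gate=1.0 -> {fast[-1]:.3f}, gate=0.05 -> {slow[-1]:.3f}")
```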
Collapse
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
| | - Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
| | - David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
| |
Collapse
|
38
|
Boboeva V, Pezzotta A, Clopath C. Free recall scaling laws and short-term memory effects in a latching attractor network. Proc Natl Acad Sci U S A 2021; 118:e2026092118. [PMID: 34873052 PMCID: PMC8670499 DOI: 10.1073/pnas.2026092118] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/14/2021] [Indexed: 11/18/2022] Open
Abstract
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories has a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
Collapse
Affiliation(s)
- Vezha Boboeva
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom
| | - Alberto Pezzotta
- Developmental Dynamics Laboratory, The Francis Crick Institute, London NW1 1AT, United Kingdom
| | - Claudia Clopath
- Department of Bioengineering, Imperial College London, London SW7 2BX, United Kingdom;
| |
Collapse
|
39
|
Zhang CL, Koukouli F, Allegra M, Ortiz C, Kao HL, Maskos U, Changeux JP, Schmidt-Hieber C. Inhibitory control of synaptic signals preceding locomotion in mouse frontal cortex. Cell Rep 2021; 37:110035. [PMID: 34818555 PMCID: PMC8640223 DOI: 10.1016/j.celrep.2021.110035] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 09/29/2021] [Accepted: 10/31/2021] [Indexed: 11/03/2022] Open
Abstract
The frontal cortex is essential for organizing voluntary movement. The secondary motor cortex (MOs) is a frontal subregion thought to integrate internal and external inputs before motor action. However, how excitatory and inhibitory synaptic inputs to MOs neurons are integrated preceding movement remains unclear. Here, we address this question by performing in vivo whole-cell recordings from MOs neurons of head-fixed mice moving on a treadmill. We find that principal neurons produce slowly increasing membrane potential and spike ramps preceding spontaneous running. After goal-directed training, ramps show larger amplitudes and accelerated kinetics. Chemogenetic suppression of interneurons combined with modeling suggests that the interplay between parvalbumin-positive (PV+) and somatostatin-positive (SOM+) interneurons, along with principal neuron recurrent connectivity, shape ramping signals. Plasticity of excitatory synapses on SOM+ interneurons can explain the ramp acceleration after training. Altogether, our data reveal that local interneurons differentially control task-dependent ramping signals when MOs neurons integrate inputs preceding movement.
Collapse
Affiliation(s)
- Chun-Lei Zhang
- Institut Pasteur, Université de Paris, Neural Circuits for Spatial Navigation and Memory, 75015 Paris, France.
| | - Fani Koukouli
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Integrative Neurobiology of Cholinergic Systems, 75015 Paris, France; Institut Du Cerveau-Paris Brain Institute-ICM, Sorbonne Université, Inserm U1127, CNRS UMR 7225, 75013 Paris, France
| | - Manuela Allegra
- Institut Pasteur, Université de Paris, Neural Circuits for Spatial Navigation and Memory, 75015 Paris, France
| | - Cantin Ortiz
- Institut Pasteur, Université de Paris, Neural Circuits for Spatial Navigation and Memory, 75015 Paris, France; Sorbonne Université, Collège Doctoral, 75005 Paris, France
| | - Hsin-Lun Kao
- Institut Pasteur, Université de Paris, Neural Circuits for Spatial Navigation and Memory, 75015 Paris, France
| | - Uwe Maskos
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Integrative Neurobiology of Cholinergic Systems, 75015 Paris, France
| | - Jean-Pierre Changeux
- Institut Pasteur, Université de Paris, Department of Neuroscience, 75015 Paris, France; Collège de France, 75005 Paris, France
| | - Christoph Schmidt-Hieber
- Institut Pasteur, Université de Paris, Neural Circuits for Spatial Navigation and Memory, 75015 Paris, France.
| |
Collapse
|
40
|
Do Q, Hasselmo ME. Neural circuits and symbolic processing. Neurobiol Learn Mem 2021; 186:107552. [PMID: 34763073 PMCID: PMC10121157 DOI: 10.1016/j.nlm.2021.107552] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 10/14/2021] [Accepted: 11/02/2021] [Indexed: 11/29/2022]
Abstract
The ability to use symbols is a defining feature of human intelligence. However, neuroscience has yet to explain the fundamental neural circuit mechanisms for flexibly representing and manipulating abstract concepts. This article will review the research on neural models for symbolic processing. The review first focuses on the question of how symbols could possibly be represented in neural circuits. The review then addresses how neural symbolic representations could be flexibly combined to meet a wide range of reasoning demands. Finally, the review assesses the research on program synthesis and proposes that the most flexible neural representation of symbolic processing would involve the capacity to rapidly synthesize neural operations analogous to lambda calculus to solve complex cognitive tasks.
Collapse
Affiliation(s)
- Quan Do
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave, Boston, MA 02215, United States.
| | - Michael E Hasselmo
- Center for Systems Neuroscience, Boston University, 610 Commonwealth Ave, Boston, MA 02215, United States.
| |
Collapse
|
41
|
Trial-to-Trial Variability of Spiking Delay Activity in Prefrontal Cortex Constrains Burst-Coding Models of Working Memory. J Neurosci 2021; 41:8928-8945. [PMID: 34551937 DOI: 10.1523/jneurosci.0167-21.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 08/17/2021] [Accepted: 08/29/2021] [Indexed: 11/21/2022] Open
Abstract
A hallmark neuronal correlate of working memory (WM) is stimulus-selective spiking activity of neurons in PFC during mnemonic delays. These observations have motivated an influential computational modeling framework in which WM is supported by persistent activity. Recently, this framework has been challenged by arguments that observed persistent activity may be an artifact of trial-averaging, which potentially masks high variability of delay activity at the single-trial level. In an alternative scenario, WM delay activity could be encoded in bursts of selective neuronal firing which occur intermittently across trials. However, this alternative proposal has not been tested on single-neuron spike-train data. Here, we developed a framework for addressing this issue by characterizing the trial-to-trial variability of neuronal spiking quantified by Fano factor (FF). By building a doubly stochastic Poisson spiking model, we first demonstrated that the burst-coding proposal implies a significant increase in FF positively correlated with firing rate, and thus loss of stability across trials during the delay. Simulation of spiking cortical circuit WM models further confirmed that FF is a sensitive measure that can well dissociate distinct WM mechanisms. We then tested these predictions on datasets of single-neuron recordings from macaque PFC during three WM tasks. In sharp contrast to the burst-coding model predictions, we only found a small fraction of neurons showing increased WM-dependent burstiness, and stability across trials during delay was strengthened in empirical data. Therefore, reduced trial-to-trial variability during delay provides strong constraints on the contribution of single-neuron intermittent bursting to WM maintenance.SIGNIFICANCE STATEMENT There are diverging classes of theoretical models explaining how information is maintained in working memory by cortical circuits. In an influential model class, neurons exhibit persistent elevated memorandum-selective firing, whereas a recently developed class of burst-coding models suggests that persistent activity is an artifact of trial-averaging, and spiking is sparse in each single trial, subserved by brief intermittent bursts. However, this alternative picture has not been characterized or tested on empirical spike-train data. Here we combine mathematical analysis, computational model simulation, and experimental data analysis to test empirically these two classes of models and show that the trial-to-trial variability of empirical spike trains is not consistent with burst coding. These findings provide constraints for theoretical models of working memory.
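The core statistical argument can be reproduced on surrogate data (not the macaque recordings) with a few lines of Python: spike counts generated by a doubly stochastic Poisson process, in which the rate varies across trials as in burst coding, have a Fano factor well above 1, whereas a stable persistent rate keeps it near 1:

```python
# Surrogate-data sketch: across-trial Fano factor for persistent vs. bursty delay activity.
import numpy as np

rng = np.random.default_rng(4)
trials, delay = 1000, 1.0                       # number of trials, delay length in seconds

def fano(counts):
    return counts.var() / counts.mean()

# persistent coding: a constant 20 Hz rate throughout the delay on every trial
persistent = rng.poisson(20.0 * delay, size=trials)

# burst coding: each trial spends a random fraction of the delay bursting at 60 Hz, else 2 Hz
burst_frac = rng.beta(2, 5, size=trials)
burst_rate = 60.0 * burst_frac + 2.0 * (1 - burst_frac)
bursty = rng.poisson(burst_rate * delay)        # doubly stochastic Poisson counts

print(f"FF persistent = {fano(persistent):.2f}, FF bursty = {fano(bursty):.2f}")
```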
Collapse
|
42
|
Joo HR, Liang H, Chung JE, Geaghan-Breiner C, Fan JL, Nachman BP, Kepecs A, Frank LM. Rats use memory confidence to guide decisions. Curr Biol 2021; 31:4571-4583.e4. [PMID: 34473948 DOI: 10.1016/j.cub.2021.08.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Revised: 05/29/2021] [Accepted: 08/03/2021] [Indexed: 12/20/2022]
Abstract
Memory enables access to past experiences to guide future behavior. Humans can determine which memories to trust (high confidence) and which to doubt (low confidence). How memory retrieval, memory confidence, and memory-guided decisions are related, however, is not understood. In particular, how confidence in memories is used in decision making is unknown. We developed a spatial memory task in which rats were incentivized to gamble their time: betting more following a correct choice yielded greater reward. Rat behavior reflected memory confidence, with higher temporal bets following correct choices. We applied machine learning to identify a memory decision variable and built a generative model of memories evolving over time that accurately predicted both choices and confidence reports. Our results reveal in rats an ability thought to exist exclusively in primates and introduce a unified model of memory dynamics, retrieval, choice, and confidence.
Collapse
Affiliation(s)
- Hannah R Joo
- Medical Scientist Training Program, University of California, San Francisco, 513 Parnassus Avenue, San Francisco, CA 94143, USA; Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA; Department of Physiology, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Department of Psychiatry, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA.
| | - Hexin Liang
- Neuroscience Graduate Program, The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, 725 N. Wolfe Street, Baltimore, MD 21205, USA
| | - Jason E Chung
- Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA; Department of Physiology, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Department of Psychiatry, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Department of Neurological Surgery, University of California, San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143, USA
| | - Charlotte Geaghan-Breiner
- Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA; Department of Physiology, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Department of Psychiatry, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA
| | - Jiang Lan Fan
- Bioengineering Graduate Program, University of California, Berkeley/University of California, San Francisco, 1675 Owens Street, San Francisco, CA 94158, USA
| | - Benjamin P Nachman
- Physics Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA; Berkeley Institute of Data Science, University of California, Berkeley, 190 Doe Library, Berkeley, CA 94720, USA
| | - Adam Kepecs
- Department of Psychiatry, Washington University School of Medicine, 660 S. Euclid Avenue, St. Louis, MO 63110, USA
| | - Loren M Frank
- Kavli Institute for Fundamental Neuroscience, Center for Integrative Neuroscience, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA; Department of Physiology, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Department of Psychiatry, University of California, San Francisco, 401 Parnassus Avenue, San Francisco, CA 94158, USA; Howard Hughes Medical Institute, 4000 Jones Bridge Road, Chevy Chase, MD 20815, USA.
| |
Collapse
|
43
|
Morningstar MD, Barnett WH, Goodlett CR, Kuznetsov A, Lapish CC. Understanding ethanol's acute effects on medial prefrontal cortex neural activity using state-space approaches. Neuropharmacology 2021; 198:108780. [PMID: 34480911 PMCID: PMC8488975 DOI: 10.1016/j.neuropharm.2021.108780] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 08/10/2021] [Accepted: 08/30/2021] [Indexed: 12/22/2022]
Abstract
Acute ethanol (EtOH) intoxication results in several maladaptive behaviors that may be attributable, in part, to the effects of EtOH on neural activity in medial prefrontal cortex (mPFC). The acute effects of EtOH on mPFC function have been largely described as inhibitory. However, translating these observations on function into a mechanism capable of delineating acute EtOH's effects on behavior has proven difficult. This review highlights the role of acute EtOH on electrophysiological measurements of mPFC function and proposes that interpreting these changes through the lens of dynamical systems theory is critical to understand the mechanisms that mediate the effects of EtOH intoxication on behavior. Specifically, the present review posits that the effects of EtOH on mPFC N-methyl-d-aspartate (NMDA) receptors are critical for the expression of impaired behavior following EtOH consumption. This hypothesis is based on the observation that recurrent activity in cortical networks is supported by NMDA receptors, and, when disrupted, may lead to impairments in cognitive function. To evaluate this hypothesis, we discuss the representation of mPFC neural activity in low-dimensional, dynamic state spaces. This approach has proven useful for identifying the underlying computations necessary for the production of behavior. Ultimately, we hypothesize that EtOH-related alterations to NMDA receptor function produces alterations that can be effectively conceptualized as impairments in attractor dynamics and provides insight into how acute EtOH disrupts forms of cognition that rely on mPFC function. This article is part of the special Issue on 'Neurocircuitry Modulating Drug and Alcohol Abuse'.
Collapse
Affiliation(s)
| | - William H Barnett
- Indiana University-Purdue University Indianapolis, Department of Psychology, USA
| | - Charles R Goodlett
- Indiana University-Purdue University Indianapolis, Department of Psychology, USA; Indiana University School of Medicine, Stark Neurosciences, USA
| | - Alexey Kuznetsov
- Indiana University-Purdue University Indianapolis, Department of Mathematics, USA; Indiana University School of Medicine, Stark Neurosciences, USA
| | - Christopher C Lapish
- Indiana University-Purdue University Indianapolis, Department of Psychology, USA; Indiana University School of Medicine, Stark Neurosciences, USA
| |
Collapse
|
44
|
Ebitz RB, Hayden BY. The population doctrine in cognitive neuroscience. Neuron 2021; 109:3055-3068. [PMID: 34416170 PMCID: PMC8725976 DOI: 10.1016/j.neuron.2021.07.011] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Revised: 07/02/2021] [Accepted: 07/13/2021] [Indexed: 01/08/2023]
Abstract
A major shift is happening within neurophysiology: a population doctrine is drawing level with the single-neuron doctrine that has long dominated the field. Population-level ideas have so far had their greatest impact in motor neuroscience, but they hold great promise for resolving open questions in cognition as well. Here, we codify the population doctrine and survey recent work that leverages this view to specifically probe cognition. Our discussion is organized around five core concepts that provide a foundation for population-level thinking: (1) state spaces, (2) manifolds, (3) coding dimensions, (4) subspaces, and (5) dynamics. The work we review illustrates the progress and promise that population-level thinking holds for cognitive neuroscience-for delivering new insight into attention, working memory, decision-making, executive function, learning, and reward processing.
Collapse
Affiliation(s)
- R Becket Ebitz
- Department of Neurosciences, Faculté de médecine, Université de Montréal, Montréal, QC, Canada.
| | - Benjamin Y Hayden
- Department of Neuroscience, Center for Magnetic Resonance Research, and Center for Neuroengineering, University of Minnesota, Minneapolis, MN, USA
| |
Collapse
|
45
|
Slow manifolds within network dynamics encode working memory efficiently and robustly. PLoS Comput Biol 2021; 17:e1009366. [PMID: 34525089 PMCID: PMC8475983 DOI: 10.1371/journal.pcbi.1009366] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 09/27/2021] [Accepted: 08/19/2021] [Indexed: 11/19/2022] Open
Abstract
Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience. We optimize thousands of recurrent rate-based neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and paradoxically, are considerably more robust to noise. Our results provide new hypotheses regarding how working memory function may be encoded within the dynamics of neural circuits.
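A linear caricature of the trade-off described here (not the trained RNNs from the study) is sketched below: a memory mode with eigenvalue slightly below 1 forms a slow manifold that gradually forgets the stored value but keeps noise accumulation bounded, whereas a perfect integrator holds the value while noise grows without bound; all numbers are illustrative:

```python
# One-dimensional sketch: stable attractor (eigenvalue 1) vs. slow manifold (eigenvalue < 1).
import numpy as np

def delay_stats(lmbda, stimulus=1.0, delay_steps=100, noise=0.01, reps=500, seed=5):
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(reps):
        x = stimulus                              # analog value loaded at delay onset
        for _ in range(delay_steps):
            x = lmbda * x + noise * rng.normal()  # dynamics along the memory mode
        finals.append(x)
    finals = np.asarray(finals)
    return finals.mean(), finals.std()

for lmbda in (1.0, 0.99):
    m, s = delay_stats(lmbda)
    print(f"eigenvalue {lmbda}: mean recalled value {m:.3f}, trial-to-trial sd {s:.3f}")
```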
Collapse
|
46
|
Mathematical framework for place coding in the auditory system. PLoS Comput Biol 2021; 17:e1009251. [PMID: 34339409 PMCID: PMC8360601 DOI: 10.1371/journal.pcbi.1009251] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 08/12/2021] [Accepted: 07/06/2021] [Indexed: 11/18/2022] Open
Abstract
In the auditory system, tonotopy is postulated to be the substrate for a place code, where sound frequency is encoded by the location of the neurons that fire during the stimulus. Though conceptually simple, the computations that allow for the representation of intensity and complex sounds are poorly understood. Here, a mathematical framework is developed in order to define clearly the conditions that support a place code. To accommodate both frequency and intensity information, the neural network is described as a space with elements that represent individual neurons and clusters of neurons. A mapping is then constructed from acoustic space to neural space so that frequency and intensity are encoded, respectively, by the location and size of the clusters. Algebraic operations (addition and multiplication) are derived to elucidate the rules for representing, assembling, and modulating multi-frequency sound in networks. The resulting outcomes of these operations are consistent with network simulations as well as with electrophysiological and psychophysical data. The analyses show how both frequency and intensity can be encoded with a purely place code, without the need for rate or temporal coding schemes. The algebraic operations are used to describe loudness summation and suggest a mechanism for the critical band. The mathematical approach complements experimental and computational approaches and provides a foundation for interpreting data and constructing models.
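A toy instantiation of the place-code idea, with frequency mapped to the location of an active cluster on a tonotopic axis and intensity to the cluster's size, is sketched below; the unit count, tuning rule, and decoding formula are illustrative assumptions, not the paper's formal construction:

```python
# Toy place code: frequency -> cluster location on a tonotopic axis, intensity -> cluster size.
import numpy as np

n_units = 200
cf = np.logspace(np.log10(200), np.log10(20000), n_units)   # characteristic frequencies (Hz)

def encode(freq_hz, level_db):
    width = 1 + int(level_db / 5)                 # louder -> wider cluster (in units)
    centre = int(np.argmin(np.abs(cf - freq_hz)))
    active = np.zeros(n_units, dtype=bool)
    active[max(0, centre - width):centre + width + 1] = True
    return active

def decode(active):
    idx = np.flatnonzero(active)
    groups = np.split(idx, np.where(np.diff(idx) > 1)[0] + 1)   # contiguous clusters
    return [(cf[g].mean(), (len(g) - 3) * 2.5) for g in groups] # (freq estimate, ~level dB)

pattern = encode(440, 40) | encode(4000, 70)      # superposition ("addition") of two tones
print([(round(f), round(l)) for f, l in decode(pattern)])
```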
Collapse
|
47
|
Vyas S, Golub MD, Sussillo D, Shenoy KV. Computation Through Neural Population Dynamics. Annu Rev Neurosci 2020.
Abstract
Significant experimental, computational, and theoretical work has identified rich structure within the coordinated activity of interconnected neural populations. An emerging challenge now is to uncover the nature of the associated computations, how they are implemented, and what role they play in driving behavior. We term this computation through neural population dynamics. If successful, this framework will reveal general motifs of neural population activity and quantitatively describe how neural population dynamics implement computations necessary for driving goal-directed behavior. Here, we start with a mathematical primer on dynamical systems theory and analytical tools necessary to apply this perspective to experimental data. Next, we highlight some recent discoveries resulting from successful application of dynamical systems. We focus on studies spanning motor control, timing, decision-making, and working memory. Finally, we briefly discuss promising recent lines of investigation and future directions for the computation through neural population dynamics framework.
Collapse
Affiliation(s)
- Saurabh Vyas
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
| | - Matthew D Golub
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA
| | - David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Google AI, Google Inc., Mountain View, California 94305, USA
| | - Krishna V Shenoy
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, California 94305, USA; Department of Neurobiology, Bio-X Institute, Neurosciences Program, and Howard Hughes Medical Institute, Stanford University, Stanford, California 94305, USA
| |
Collapse
|
48
|
Rué-Queralt J, Stevner A, Tagliazucchi E, Laufs H, Kringelbach ML, Deco G, Atasoy S. Decoding brain states on the intrinsic manifold of human brain dynamics across wakefulness and sleep. Commun Biol 2021; 4:854. [PMID: 34244598 PMCID: PMC8270946 DOI: 10.1038/s42003-021-02369-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2020] [Accepted: 06/18/2021] [Indexed: 02/06/2023] Open
Abstract
Current state-of-the-art functional magnetic resonance imaging (fMRI) offers remarkable imaging quality and resolution, yet the intrinsic dimensionality of brain dynamics in different states (wakefulness, light and deep sleep) remains unknown. Here we present a method to reveal the low dimensional intrinsic manifold underlying human brain dynamics, which is invariant of the high dimensional spatio-temporal representation of the neuroimaging technology. By applying this intrinsic manifold framework to fMRI data acquired in wakefulness and sleep, we reveal the nonlinear differences between wakefulness and three different sleep stages, and successfully decode these different brain states with a mean accuracy across participants of 96%. Remarkably, a further group analysis shows that the intrinsic manifolds of all participants share a common topology. Overall, our results reveal the intrinsic manifold underlying the spatiotemporal dynamics of brain activity and demonstrate how this manifold enables the decoding of different brain states such as wakefulness and various sleep stages.
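As a schematic of such a pipeline on surrogate data (with Isomap and a k-nearest-neighbour classifier standing in for the paper's specific intrinsic-manifold construction and decoder), the following sketch embeds high-dimensional activity in a low-dimensional manifold and decodes the state from manifold coordinates:

```python
# Schematic pipeline on surrogate data: manifold embedding, then state decoding.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
states, per_state, dim = 4, 150, 90                    # e.g. wake plus three sleep stages
latent = np.concatenate([rng.normal(3 * s, 1.0, size=(per_state, 2)) for s in range(states)])
labels = np.repeat(np.arange(states), per_state)
mixing = rng.normal(size=(2, dim))
data = np.tanh(latent @ mixing) + 0.1 * rng.normal(size=(states * per_state, dim))

coords = Isomap(n_components=2, n_neighbors=10).fit_transform(data)   # intrinsic manifold
acc = cross_val_score(KNeighborsClassifier(5), coords, labels, cv=5).mean()
print(f"decoding accuracy on the 2-D manifold: {acc:.0%}")
```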
Collapse
Affiliation(s)
- Joan Rué-Queralt
- Center of Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
| | - Angus Stevner
- Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark
| | - Enzo Tagliazucchi
- Instituto de Física de Buenos Aires and Physics Department (University of Buenos Aires), Buenos Aires, Argentina
| | - Helmut Laufs
- Department of Neurology and Brain Imaging Center, Goethe University, Frankfurt am Main, Germany
- Department of Neurology, University Hospital Schleswig-Holstein, Christian-Albrechts-University, Kiel, Germany
| | - Morten L. Kringelbach
- Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark
| | - Gustavo Deco
- Center of Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- School of Psychological Sciences, Monash University, Melbourne, Australia
| | - Selen Atasoy
- Centre for Eudaimonia and Human Flourishing, University of Oxford, Oxford, UK
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark
| |
Collapse
|
49
|
Amengual JL, Ben Hamed S. Revisiting Persistent Neuronal Activity During Covert Spatial Attention. Front Neural Circuits 2021; 15:679796. [PMID: 34276314 PMCID: PMC8278237 DOI: 10.3389/fncir.2021.679796] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 06/03/2021] [Indexed: 11/13/2022] Open
Abstract
Persistent activity has been observed in the prefrontal cortex (PFC), in particular during the delay periods of visual attention tasks. Classical approaches based on activity averaged over multiple trials have revealed that this activity encodes the attentional instruction provided in such tasks. However, single-trial approaches have shown that activity in this area is sparse rather than persistent, and highly heterogeneous both within and between trials. This observation raises the question of how persistent the attention-related prefrontal activity actually is and how it contributes to spatial attention. In this paper, we review recent evidence that precisely deconstructs the persistence of neural activity in the PFC in the context of attention orienting. The inclusion of machine-learning methods for decoding this information reveals that attention orienting is a highly dynamic process with intrinsic oscillatory dynamics operating at multiple timescales, from milliseconds to minutes. Dimensionality-reduction methods further show that this persistent activity dynamically incorporates multiple sources of information. This framework reveals a high degree of complexity in the neural representation of attention-related information in the PFC and shows how its computational organization predicts behavior.
Collapse
Affiliation(s)
- Julian L Amengual
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon I, 67 Boulevard Pinel, Bron, France
| | - Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, CNRS UMR 5229, Université Claude Bernard Lyon I, 67 Boulevard Pinel, Bron, France
| |
Collapse
|
50
|
Libby A, Buschman TJ. Rotational dynamics reduce interference between sensory and memory representations. Nat Neurosci 2021; 24:715-726. [PMID: 33821001 PMCID: PMC8102338 DOI: 10.1038/s41593-021-00821-9] [Citation(s) in RCA: 66] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Accepted: 02/19/2021] [Indexed: 01/16/2023]
Abstract
Cognition depends on integrating sensory percepts with the memory of recent stimuli. However, the distributed nature of neural coding can lead to interference between sensory and memory representations. Here, we show that the brain mitigates such interference by rotating sensory representations into orthogonal memory representations over time. To study how sensory inputs and memories are represented, we recorded neurons from the auditory cortex of mice as they implicitly learned sequences of sounds. We found that the neural population represented sensory inputs and the memory of recent stimuli in two orthogonal dimensions. The transformation of sensory information into a memory was facilitated by a combination of 'stable' neurons, which maintained their selectivity over time, and 'switching' neurons, which inverted their selectivity over time. Together, these neural responses rotated the population representation, transforming sensory inputs into memory. Theoretical modeling showed that this rotational dynamic is an efficient mechanism for generating orthogonal representations, thereby protecting memories from sensory interference.
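A toy sketch of the reported mechanism (our illustration under simplified assumptions, not the study's model or data) shows how balanced populations of 'stable' and 'switching' neurons can rotate a sensory coding axis into a roughly orthogonal memory axis, reducing interference between new inputs and stored memories.

```python
# Toy sketch: stable vs. switching neurons rotate a sensory axis toward an
# orthogonal memory axis. Assumes numpy; all values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 200

# Sensory-epoch selectivity of each neuron (response to stimulus A minus stimulus B).
sensory = rng.normal(size=n_neurons)

# Half the neurons are 'stable' (keep their selectivity); half are 'switching'
# (invert their selectivity) by the memory epoch.
gain = np.ones(n_neurons)
gain[n_neurons // 2:] = -1.0
memory = gain * sensory

# With balanced stable and switching subpopulations the angle between the sensory
# and memory coding directions approaches 90 degrees, so a new stimulus projects
# only weakly onto the stored memory.
cosine = memory @ sensory / (np.linalg.norm(memory) * np.linalg.norm(sensory))
print(f"angle between sensory and memory axes: {np.degrees(np.arccos(cosine)):.1f} deg")
```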
Collapse
Affiliation(s)
- Alexandra Libby
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
| | - Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA.
- Department of Psychology, Princeton University, Princeton, NJ, USA.
| |
Collapse
|