1. Event Integration and Temporal Differentiation: How Hierarchical Knowledge Emerges in Hippocampal Subfields through Learning. J Neurosci 2024; 44:e0627232023. PMID: 38129134; PMCID: PMC10919070; DOI: 10.1523/jneurosci.0627-23.2023.
Abstract
Everyday life is composed of events organized by changes in contexts, with each event containing an unfolding sequence of occurrences. A major challenge facing our memory systems is how to integrate sequential occurrences within events while also maintaining their details and avoiding over-integration across different contexts. We asked whether, and how, distinct hippocampal subfields come to represent both event context and subevent occurrences hierarchically and in parallel as learning proceeds. Female and male human participants viewed sequential events defined as sequences of objects superimposed on shared color frames while undergoing high-resolution fMRI. Importantly, these events were repeated to induce learning. Event segmentation, as indexed by increased reaction times at event boundaries, was observed in all repetitions. Temporal memory decisions were quicker for items from the same event compared to across different events, indicating that events shaped memory. With learning, hippocampal CA3 multivoxel activation patterns clustered to reflect the event context, with greater clustering correlated with behavioral facilitation during event transitions. In contrast, in the dentate gyrus (DG), temporally proximal items that belonged to the same event became associated with more differentiated neural patterns. A computational model explained these results by dynamic inhibition in the DG. Additional similarity measures support the notion that CA3's clustered representations reflect shared voxel populations, while the DG's distinct item representations reflect different voxel populations. These findings suggest an interplay between temporal differentiation in the DG and attractor dynamics in CA3. They advance our understanding of how knowledge is structured through integration and separation across time and context.
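The within- versus across-event pattern comparison summarized above can be sketched in a few lines; the event structure, voxel counts, and noise level below are synthetic placeholders, not the study's actual design or data.

```python
import numpy as np

def event_similarity(patterns, event_ids):
    """Mean pairwise correlation of multivoxel patterns, split by whether
    two items belong to the same event or to different events."""
    n = len(patterns)
    corr = np.corrcoef(patterns)          # item-by-item pattern correlations
    within, across = [], []
    for i in range(n):
        for j in range(i + 1, n):
            (within if event_ids[i] == event_ids[j] else across).append(corr[i, j])
    return float(np.mean(within)), float(np.mean(across))

# Synthetic example: 12 items (50 voxels each), 3 events of 4 items,
# each item = its event's template plus item-specific noise.
rng = np.random.default_rng(0)
event_ids = np.repeat([0, 1, 2], 4)
event_templates = rng.normal(size=(3, 50))
patterns = event_templates[event_ids] + 0.8 * rng.normal(size=(12, 50))

within, across = event_similarity(patterns, event_ids)
print(within, across)   # within-event similarity exceeds across-event similarity
```

Because same-event items share a common template component, the within-event mean correlation exceeds the across-event mean, mirroring the CA3-like clustering described in the abstract.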
2. Strength training as a dynamical model of motor learning. J Sports Sci 2023:1-16. PMID: 37270792; DOI: 10.1080/02640414.2023.2220177.
Abstract
This paper outlines a framework for strength training as a dynamical model of perceptual-motor learning. We show, with emphasis on fixed-point attractor dynamics, that strength training can be mapped to the general dynamical principles of motor learning that arise from the constraints on action, including the distribution of practice/training. The timescales of performance change (increment and decrement) in discrete strength training and motor learning tasks reveal a superposition of exponential functions in fixed-point dynamics, whereas oscillatory limit-cycle and more continuous tasks show distinctive attractor and parameter dynamics, each with unique timescales for the contributing processes (practice, learning, strength, fitness, fatigue, and warm-up decrement). Increments and decrements of strength can thus be viewed within a dynamical model of change in motor performance that reflects the integration of practice and training processes at multiple levels of learning and skill development.
3. Organization of hippocampal CA3 into correlated cell assemblies supports a stable spatial code. Cell Rep 2023; 42:112119. PMID: 36807137; PMCID: PMC9989830; DOI: 10.1016/j.celrep.2023.112119.
Abstract
Hippocampal subfield CA3 is thought to stably store memories in assemblies of recurrently connected cells functioning as a collective. However, the collective hippocampal coding properties that are unique to CA3 and how such properties facilitate the stability or precision of the neural code remain unclear. Here, we performed large-scale Ca2+ imaging in hippocampal CA1 and CA3 of freely behaving mice that repeatedly explored the same, initially novel environments over weeks. CA3 place cells have more precise and more stable tuning and show a higher statistical dependence with their peers compared with CA1 place cells, uncovering a cell assembly organization in CA3. Surprisingly, although tuning precision and long-term stability are correlated, cells with stronger peer dependence exhibit higher stability but not higher precision. Overall, our results expose the three-way relationship between tuning precision, long-term stability, and peer dependence, suggesting that a cell assembly organization underlies long-term storage of information in the hippocampus.
4. Microcircuit Synchronization and Heavy-Tailed Synaptic Weight Distribution Augment preBötzinger Complex Bursting Dynamics. J Neurosci 2023; 43:240-260. PMID: 36400528; PMCID: PMC9838711; DOI: 10.1523/jneurosci.1195-22.2022.
Abstract
The preBötzinger Complex (preBötC) encodes inspiratory time as rhythmic bursts of activity underlying each breath. Spike synchronization throughout a sparsely connected preBötC microcircuit initiates bursts that ultimately drive the inspiratory motor patterns. Using minimal microcircuit models to explore burst initiation dynamics, we examined the variability in probability and latency to burst following exogenous stimulation of a small subset of neurons, mimicking experiments. Among various physiologically plausible graphs of 1000 excitatory neurons constructed using experimentally determined synaptic and connectivity parameters, directed Erdős-Rényi graphs with a broad (lognormal) distribution of synaptic weights best captured the experimentally observed dynamics. preBötC synchronization leading to bursts was regulated by the efferent connectivity of spiking neurons that are optimally tuned to amplify modest preinspiratory activity through input convergence. Using graph-theoretic and machine learning-based analyses, we found that input convergence of efferent connectivity at the next-nearest neighbor order was a strong predictor of incipient synchronization. Our analyses revealed a crucial role of synaptic heterogeneity in imparting exceptionally robust yet flexible preBötC attractor dynamics. Given the pervasiveness of lognormally distributed synaptic strengths throughout the nervous system, we postulate that these mechanisms represent a ubiquitous template for temporal processing and decision-making computational motifs.

SIGNIFICANCE STATEMENT: Mammalian breathing is robust, virtually continuous throughout life, yet is inherently labile: to adapt to rapid metabolic shifts (e.g., fleeing a predator or chasing prey); for airway reflexes; and to enable nonventilatory behaviors (e.g., vocalization, breath holding, laughing). Canonical theoretical frameworks, based on pacemakers and intrinsic bursting, cannot account for the observed robustness and flexibility of the preBötzinger Complex rhythm. Experiments reveal that network synchronization is the key to initiating inspiratory bursts in each breathing cycle. We investigated preBötC synchronization dynamics using network models constructed with experimentally determined neuronal and synaptic parameters. We discovered that a fat-tailed (non-Gaussian) synaptic weight distribution, a manifestation of synaptic heterogeneity, augments neuronal synchronization and attractor dynamics in this vital rhythmogenic network, contributing to its extraordinary reliability and responsiveness.
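The network construction named above (directed Erdős-Rényi connectivity with lognormally distributed synaptic weights) can be sketched as follows; the connection probability and lognormal parameters are illustrative stand-ins, not the experimentally determined values used in the paper.

```python
import numpy as np

def er_lognormal_network(n, p, sigma, seed=0):
    """Directed Erdos-Renyi adjacency matrix with lognormally distributed
    excitatory synaptic weights (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    adj = rng.random((n, n)) < p            # each directed edge present with prob p
    np.fill_diagonal(adj, False)            # no self-connections
    weights = rng.lognormal(mean=0.0, sigma=sigma, size=(n, n))
    return adj * weights                    # weight is zero where no edge exists

W = er_lognormal_network(n=1000, p=0.05, sigma=1.0)
k_out = (W > 0).sum(axis=1)                 # out-degree per neuron
print(k_out.mean())                         # close to p * (n - 1)
```

The heavy right tail of the lognormal means a few synapses are far stronger than the typical one, which is the heterogeneity the authors link to robust yet flexible synchronization.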
5. Understanding cingulotomy's therapeutic effect in OCD through computer models. Front Integr Neurosci 2023; 16:889831. PMID: 36704759; PMCID: PMC9871832; DOI: 10.3389/fnint.2022.889831.
Abstract
Cingulotomy is therapeutic in OCD, but what are the possible mechanisms? Computer models that formalize cortical OCD abnormalities and anterior cingulate cortex (ACC) function can help answer this. At the neural dynamics level, cortical dynamics in OCD have been modeled using attractor networks, where activity patterns resistant to change denote the inability to switch to new patterns, which can reflect inflexible thinking patterns or behaviors. From that perspective, cingulotomy might reduce the influence of difficult-to-escape ACC attractor dynamics on other cortical areas. At the functional level, computer formulations based on model-free reinforcement learning (RL) have been used to describe the multitude of phenomena ACC is involved in, such as tracking the timing of expected outcomes and estimating the cost of exerting cognitive control and effort. Different elements of model-free RL models of ACC could be affected by the inflexible cortical dynamics, making it challenging to update their values. An agent can also use a world model, a representation of how the states of the world change, to plan its actions, through model-based RL. OCD has been hypothesized to be driven by reduced certainty of how the brain's world model describes changes. Cingulotomy might improve such uncertainties about the world and one's actions, making it possible to trust the outcomes of these actions more and thus reduce the urge to collect more sensory information in the form of compulsions. Connecting the neural dynamics models with the functional formulations can provide new ways of understanding the role of ACC in OCD, with potential therapeutic insights.
6. Attractor-like Dynamics in the Subicular Complex. J Neurosci 2022; 42:7594-7614. PMID: 36028315; PMCID: PMC9546466; DOI: 10.1523/jneurosci.2048-20.2022.
Abstract
Distinct computations are performed across multiple brain regions during the encoding of spatial environments. Neural representations in the hippocampal, entorhinal, and head direction (HD) networks during spatial navigation have been clearly documented, while the representational properties of the subicular complex (SC) remain relatively underexplored, although it has extensive anatomic connections with various brain regions involved in spatial information processing. We simultaneously recorded single units from different subregions of the SC in male rats while they ran clockwise on a centrally placed textured circular track (four different textures, each covering a quadrant), surrounded by six distal cues. Neural activity was monitored in standard sessions by maintaining the same configuration between the cues, while in cue manipulation sessions, the distal and local cues were either rotated in opposite directions to create a mismatch between them or the distal cues were removed. We report a highly coherent neural representation of the environment and a robust coupling between the HD cells and the spatial cells in the SC, strikingly different from previous reports of coupling between cells from co-recorded sites. Neural representations were (1) originally governed by the distal cues under local-distal cue-conflict conditions, (2) controlled by the local cues in the absence of distal cues, and (3) governed by the cues that were perceived to be stable. We propose that such attractor-like dynamics in the SC might play a critical role in the orientation of spatial representations, thus providing a "reference map" of the environment for further processing by other networks.

SIGNIFICANCE STATEMENT: The subicular complex (SC) receives major inputs from the entorhinal cortex and the hippocampus, and head direction (HD) information directly from the HD system. Using cue-conflict experiments, we studied the hierarchical representation of local and distal cues in the SC to understand its role in the cognitive map. We report a highly coherent neural representation, with robust coupling between HD cells and spatial cells across SC subregions, that exhibits attractor-like dynamics unaffected by the cue manipulations; this is strikingly different from previous reports of coupling between cells from co-recorded sites. This unique feature may allow the SC to function as a single computational unit during the representation of space, serving as a reference map of the environment.
7. Cell-type-specific population dynamics of diverse reward computations. Cell 2022; 185:3568-3587.e27. PMID: 36113428; PMCID: PMC10387374; DOI: 10.1016/j.cell.2022.08.019.
Abstract
Computational analysis of cellular activity has developed largely independently of modern transcriptomic cell typology, but integrating these approaches may be essential for full insight into cellular-level mechanisms underlying brain function and dysfunction. Applying this approach to the habenula (a structure with diverse, intermingled molecular, anatomical, and computational features), we identified encoding of reward-predictive cues and reward outcomes in distinct genetically defined neural populations, including TH+ cells and Tac1+ cells. Data from genetically targeted recordings were used to train an optimized nonlinear dynamical systems model and revealed activity dynamics consistent with a line attractor. High-density, cell-type-specific electrophysiological recordings and optogenetic perturbation provided supporting evidence for this model. Reverse-engineering predicted how Tac1+ cells might integrate reward history, which was complemented by in vivo experimentation. This integrated approach describes a process by which data-driven computational models of population activity can generate and frame actionable hypotheses for cell-type-specific investigation in biological systems.
8. Flexible rerouting of hippocampal replay sequences around changing barriers in the absence of global place field remapping. Neuron 2022; 110:1547-1558.e8. PMID: 35180390; PMCID: PMC9473153; DOI: 10.1016/j.neuron.2022.02.002.
Abstract
Flexibility is a hallmark of memories that depend on the hippocampus. For navigating animals, flexibility is necessitated by environmental changes such as blocked paths and extinguished food sources. To better understand the neural basis of this flexibility, we recorded hippocampal replays in a spatial memory task in which barriers as well as goals were moved between sessions, to see whether replays could adapt to new spatial and reward contingencies. Strikingly, replays consistently depicted new goal-directed trajectories around each new barrier configuration and largely avoided barrier violations. Barrier-respecting replays were learned rapidly and did not rely on place cell remapping. These data distinguish sharply between place field responses, which were largely stable and remained tied to sensory cues, and replays, which changed flexibly to reflect the learned contingencies in the environment. They suggest that sequenced activations such as replay are an important link between the hippocampus and flexible memory.
9. Sequential Attractors in Combinatorial Threshold-Linear Networks. SIAM J Appl Dyn Syst 2022; 21:1597-1630. PMID: 37485069; PMCID: PMC10362966; DOI: 10.1137/21m1445120.
Abstract
Sequences of neural activity arise in many brain areas, including cortex, hippocampus, and central pattern generator circuits that underlie rhythmic behaviors like locomotion. While network architectures supporting sequence generation vary considerably, a common feature is an abundance of inhibition. In this work, we focus on architectures that support sequential activity in recurrently connected networks with inhibition-dominated dynamics. Specifically, we study emergent sequences in a special family of threshold-linear networks, called combinatorial threshold-linear networks (CTLNs), whose connectivity matrices are defined from directed graphs. Such networks naturally give rise to an abundance of sequences whose dynamics are tightly connected to the underlying graph. We find that architectures based on generalizations of cycle graphs produce limit cycle attractors that can be activated to generate transient or persistent (repeating) sequences. Each architecture type gives rise to an infinite family of graphs that can be built from arbitrary component subgraphs. Moreover, we prove a number of graph rules for the corresponding CTLNs in each family. The graph rules allow us to strongly constrain, and in some cases fully determine, the fixed points of the network in terms of the fixed points of the component subnetworks. Finally, we also show how the structure of certain architectures gives insight into the sequential dynamics of the corresponding attractor.
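A minimal simulation illustrates the emergent sequences described above, using the standard CTLN construction from the literature (W[i,j] = -1+ε for an edge j→i, -1-δ otherwise, zero diagonal, with threshold-linear rate dynamics); the 3-cycle graph and parameter values here are the textbook example, not this paper's specific architectures.

```python
import numpy as np

def ctln_weights(edges, n, eps=0.25, delta=0.5):
    """CTLN weight matrix from a directed graph: W[i,j] = -1+eps if j->i,
    -1-delta otherwise, and 0 on the diagonal (standard CTLN parameters)."""
    W = np.full((n, n), -1.0 - delta)
    for (j, i) in edges:                    # edge j -> i
        W[i, j] = -1.0 + eps
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, x0, theta=1.0, dt=0.01, steps=5000):
    """Euler integration of the CTLN dynamics dx/dt = -x + [Wx + theta]_+ ."""
    x, traj = np.array(x0, float), []
    for _ in range(steps):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))
        traj.append(x.copy())
    return np.array(traj)

# A 3-cycle graph (0 -> 1 -> 2 -> 0) yields a limit cycle attractor in
# which the three units peak in the order of the cycle.
W = ctln_weights([(0, 1), (1, 2), (2, 0)], n=3)
traj = simulate(W, x0=[0.2, 0.0, 0.0])
print(traj[-2000:].max(axis=0))   # each unit is periodically active on the cycle
```

Because the unique fixed point of the 3-cycle CTLN is unstable for these parameters, activity falls onto a limit cycle, the simplest instance of the sequence-generating attractors the paper classifies.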
10. Binocular rivalry reveals an out-of-equilibrium neural dynamics suited for decision-making. eLife 2021; 10:61581. PMID: 34369875; PMCID: PMC8352598; DOI: 10.7554/elife.61581.
Abstract
In ambiguous or conflicting sensory situations, perception is often ‘multistable’ in that it perpetually changes at irregular intervals, shifting abruptly between distinct alternatives. The interval statistics of these alternations exhibits quasi-universal characteristics, suggesting a general mechanism. Using binocular rivalry, we show that many aspects of this perceptual dynamics are reproduced by a hierarchical model operating out of equilibrium. The constitutive elements of this model idealize the metastability of cortical networks. Independent elements accumulate visual evidence at one level, while groups of coupled elements compete for dominance at another level. As soon as one group dominates perception, feedback inhibition suppresses supporting evidence. Previously unreported features in the serial dependencies of perceptual alternations compellingly corroborate this mechanism. Moreover, the proposed out-of-equilibrium dynamics satisfies normative constraints of continuous decision-making. Thus, multistable perception may reflect decision-making in a volatile world: integrating evidence over space and time, choosing categorically between hypotheses, while concurrently evaluating alternatives.
11. Hippocampal sub-networks exhibit distinct spatial representation deficits in Alzheimer's disease model mice. Curr Biol 2021; 31:3292-3302.e6. PMID: 34146487; DOI: 10.1016/j.cub.2021.05.039.
Abstract
Not much is known about how the dentate gyrus (DG) and hippocampal CA3 networks, critical for memory and spatial processing, malfunction in Alzheimer's disease (AD). While studies of associative memory deficits in AD have focused mainly on behavior, here, we directly measured neurophysiological network dysfunction. We asked how different networks deteriorate as the disease progresses. We investigated how the associative memory-processing capabilities of different hippocampal subfields are affected by familial AD (fAD) mutations leading to amyloid-β dyshomeostasis. Specifically, we focused on the DG and CA3, which are known to be involved in pattern completion and separation and are susceptible to pathological alterations in AD. To identify AD-related deficits in neural-ensemble dynamics, we recorded single-unit activity in wild-type (WT) and fAD model mice (APPSwe+PSEN1/ΔE9) in a novel tactile morph task, which utilizes the extremely developed somatosensory modality of mice. As expected from sub-network regional specialization, we found that tactile changes induced lower rate map correlations in the DG than in CA3 of WT mice, reflecting DG pattern separation and CA3 pattern completion. In contrast, in fAD model mice, we observed pattern separation deficits in the DG and pattern completion deficits in CA3. This demonstration of region-dependent impairments in fAD model mice contributes to our understanding of brain network deterioration during fAD progression. Furthermore, it implies that this deterioration cannot be studied generally throughout the hippocampus but must be investigated at the finer resolution of microcircuits. This opens novel systems-level approaches for analyzing AD-related neural network deficits.
12. Attractor competition enriches cortical dynamics during awakening from anesthesia. Cell Rep 2021; 35:109270. PMID: 34161772; DOI: 10.1016/j.celrep.2021.109270.
Abstract
Slow oscillations (≲1 Hz), a hallmark of slow-wave sleep and deep anesthesia across species, arise from spatiotemporal patterns of activity whose complexity increases as wakefulness is approached and cognitive functions emerge. The arousal process thus constitutes an open window onto the unknown mechanisms underlying the emergence of such dynamical richness in awake cortical networks. Here, we investigate the changes in network dynamics as anesthesia fades out in the rat visual cortex. Starting from deep anesthesia, slow oscillations gradually increase their frequency, eventually expressing maximum regularity. This stage is followed by the abrupt onset of an infra-slow (~0.2 Hz) alternation between sleep-like oscillations and activated states. A population rate model reproduces this transition, driven by an increase in excitability that makes the network periodically cross a critical point. Based on our model, dynamical richness emerges as a competition between two metastable attractor states, a conclusion strongly supported by the data.
13. Bad Timing for Epileptic Networks: Role of Temporal Dynamics in Seizures and Cognitive Deficits. Epilepsy Curr 2021; 21:15357597211001877. PMID: 33724060; PMCID: PMC8609592; DOI: 10.1177/15357597211001877.
Abstract
The precise coordination of neuronal activity is critical for optimal brain function. When such coordination fails, this can lead to dire consequences. In this review, I will present evidence that in epilepsy, failed coordination leads not only to seizures but also to alterations of the rhythmical patterns observed in the electroencephalogram and cognitive deficits. Restoring the dynamic coordination of epileptic networks could therefore both improve seizures and cognitive outcomes.
14. To stay or not to stay: The stability of choice perseveration in value-based decision making. Q J Exp Psychol (Hove) 2020; 74:199-217. PMID: 32976065; DOI: 10.1177/1747021820964330.
Abstract
In real life, decisions are often naturally embedded in decision sequences. In contrast, in the laboratory, decisions are oftentimes analysed in isolation. Here, we investigated the influence of decision sequences in value-based decision making and whether the stability of such effects can be modulated. In our decision task, participants needed to collect rewards in a virtual two-dimensional world. We presented a series of two reward options that were either quick to collect but were smaller in value or took longer to collect but were larger in value. The subjective value of each option was driven by the options' value and how quickly they could be reached. We manipulated the subjective values of the options so that one option became gradually less valuable over the course of a sequence, which allowed us to measure choice perseveration (i.e., how long participants stick to this option). In two experiments, we further manipulated the time interval between two trials (inter-trial interval), and the time delay between the onsets of both reward options (stimulus onset asynchrony). We predicted how these manipulations would affect choice perseveration using a computational attractor model. Our results indicate that both the inter-trial interval and the stimulus onset asynchrony modulate choice perseveration as predicted by the model. We discuss how our findings extend to research on cognitive stability and flexibility.
15. Engagement of Pulvino-cortical Feedforward and Feedback Pathways in Cognitive Computations. Neuron 2018; 101:321-336.e9. PMID: 30553546; DOI: 10.1016/j.neuron.2018.11.023.
Abstract
Computational modeling of brain mechanisms of cognition has largely focused on the cortex, but recent experiments have shown that higher-order nuclei of the thalamus participate in major cognitive functions and are implicated in psychiatric disorders. Here, we show that a pulvino-cortical circuit model, composed of the pulvinar and two cortical areas, captures several physiological and behavioral observations related to the macaque pulvinar. Effective connections between the two cortical areas are gated by the pulvinar, allowing the pulvinar to shift the operation regime of these areas during attentional processing and working memory and resolve conflict in decision making. Furthermore, cortico-pulvinar projections that engage the thalamic reticular nucleus enable the pulvinar to estimate decision confidence. Finally, feedforward and feedback pulvino-cortical pathways participate in frequency-dependent inter-areal interactions that modify the relative hierarchical positions of cortical areas. Overall, our model suggests that the pulvinar provides crucial contextual modulation to cortical computations associated with cognition.
16.
Abstract
Fast object tracking on embedded devices is of great importance for applications such as autonomous driving, unmanned aerial vehicles, and intelligent monitoring. However, most previous general solutions fail to reach this goal because of (i) the high computational complexity and heterogeneous operation steps of the tracking models and (ii) the limited parallelism of bloated hardware platforms (e.g., CPU/GPU). Although previously proposed devices leverage neural dynamics and near-data processing for efficient tracking, their flexibility is limited by tight integration with the vision sensor, and their effectiveness on diverse video datasets has yet to be fully demonstrated. Meanwhile, many-core architectures with massive parallelism and optimized memory locality are increasingly applied to execute neural networks flexibly and efficiently. This motivates us to adapt and map an object tracking model based on attractor neural networks with continuous and smooth attractor dynamics onto neural network chips for fast tracking. To make the model hardware friendly, we add a local-connection restriction. We analyze the tracking accuracy and observe that the model achieves comparable results on typical video datasets. We then design a many-core neural network architecture with several computation and transformation operations to support the model. Moreover, by discretizing the continuous dynamics to the corresponding discrete counterpart, designing a slicing scheme for efficient topology mapping, and introducing a constant-restricted scaling chain rule for data quantization, we build a complete mapping framework to implement the tracking model on the many-core architecture. We fabricate a many-core neural network chip to evaluate the real execution performance. Results show that a single chip can accommodate the whole tracking model and achieve a tracking speed of nearly 800 FPS (frames per second). This work enables high-speed object tracking on embedded devices, which normally have limited resources and energy.
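As a toy illustration of the kind of continuous attractor dynamics such a tracker builds on, and of the Euler-style discretization needed to run it on digital hardware, the sketch below moves a smooth activity bump on a ring so that it follows a drifting input peak; every parameter here is invented for illustration and is unrelated to the chip or the paper's model.

```python
import numpy as np

def cann_track(n=128, steps=400, dt=0.1):
    """Discretized 1D continuous attractor network (CANN): a smooth
    recurrent kernel sustains an activity bump that follows a moving
    input peak. All parameters are illustrative."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    d = np.abs(x[:, None] - x[None, :])
    J = 0.5 * np.exp(-np.minimum(d, 2 * np.pi - d) ** 2 / (2 * 0.5 ** 2))
    u = np.zeros(n)
    estimates, targets = [], []
    for t in range(steps):
        target = -2.0 + 4.0 * t / (steps - 1)        # stimulus drifts along the ring
        dist = np.abs(x - target)
        inp = np.exp(-np.minimum(dist, 2 * np.pi - dist) ** 2 / (2 * 0.3 ** 2))
        r = np.maximum(u, 0.0) ** 2
        r = r / (1.0 + 0.05 * r.sum())               # divisive normalization
        u = u + dt * (-u + (J @ r) * (2 * np.pi / n) + inp)  # Euler step
        estimates.append(x[np.argmax(u)])            # decoded bump position
        targets.append(target)
    return np.array(estimates), np.array(targets)

est, tgt = cann_track()
print(est[-1], tgt[-1])   # bump position stays close to the moving target
```

The Euler update above is the kind of discrete counterpart of the continuous dynamics that a digital many-core implementation has to execute step by step.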
17.
Abstract
Upon encountering a novel environment, an animal must construct a consistent environmental map, as well as an internal estimate of its position within that map, by combining information from two distinct sources: self-motion cues and sensory landmark cues. How do known aspects of neural circuit dynamics and synaptic plasticity conspire to accomplish this feat? Here we show analytically how a neural attractor model that combines path integration of self-motion cues with Hebbian plasticity in synaptic weights from landmark cells can self-organize a consistent map of space as the animal explores an environment. Intriguingly, the emergence of this map can be understood as an elastic relaxation process between landmark cells mediated by the attractor network. Moreover, our model makes several experimentally testable predictions, including (i) systematic path-dependent shifts in the firing fields of grid cells toward the most recently encountered landmark, even in a fully learned environment; (ii) systematic deformations in the firing fields of grid cells in irregular environments, akin to elastic deformations of solids forced into irregular containers; and (iii) the creation of topological defects in grid cell firing patterns through specific environmental manipulations. Taken together, our results conceptually link known aspects of neurons and synapses to an emergent solution of a fundamental computational problem in navigation, while providing a unified account of disparate experimental observations.
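The core loop described above (path integration corrupted by self-motion noise, partially pinned at landmark encounters by landmark cells whose stored positions are Hebb-updated toward the current estimate) can be caricatured in one dimension; the track length, correction gain, and learning rate below are invented for illustration and are not the paper's model.

```python
import numpy as np

def landmark_map(n_sweeps=40, noise=0.02, c=0.5, eta=0.1, seed=1):
    """Toy 1D sketch: an animal sweeps back and forth between two
    landmarks 5.0 apart. Its path-integrated estimate accumulates noise;
    at each landmark the stored landmark position partially corrects the
    estimate (gain c) and is itself Hebb-updated toward the estimate (eta)."""
    rng = np.random.default_rng(seed)
    stored = {0.0: 0.0, 5.0: None}     # true landmark position -> learned position
    est, direction = 0.0, +1
    for _ in range(n_sweeps):
        for _ in range(50):            # one sweep: 50 noisy steps of 0.1
            est += 0.1 * direction + rng.normal(0.0, noise)
        lm = 5.0 if direction > 0 else 0.0
        if stored[lm] is None:
            stored[lm] = est                              # first visit: imprint
        else:
            est += c * (stored[lm] - est)                 # landmark corrects estimate
            stored[lm] += eta * (est - stored[lm])        # Hebbian map update
        direction = -direction
    return stored

m = landmark_map()
print(m[5.0] - m[0.0])   # learned separation relaxes toward the true 5.0
```

The mutual pull between the running estimate and the stored landmark positions is a crude analogue of the elastic relaxation between landmark cells that the abstract describes.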
|
18
|
Characteristic Fluctuations Around Stable Attractor Dynamics Extracted from Highly Nonstationary Electroencephalographic Recordings. Brain Connect 2018; 8:457-474. [PMID: 30198323] [DOI: 10.1089/brain.2018.0609]
Abstract
Since the discovery of electrical activity of the brain, electroencephalographic (EEG) recordings constitute one of the most popular techniques of brain research. However, EEG signals are highly nonstationary and one should expect that averages of the cross-correlation coefficient, which may take positive and negative values with equal probability, (almost) vanish when estimated over long data segments. Instead, we found that the average zero-lag cross-correlation matrix estimated with a running window over the whole night of sleep EEGs, or of resting state during eyes-open and eyes-closed conditions of healthy subjects shows a characteristic correlation pattern containing pronounced nonzero values. A similar correlation structure has already been encountered in scalp EEG signals containing focal onset seizures. Therefore, we conclude that this structure is independent of the physiological state. Because of its pronounced similarity across subjects, we believe that it depicts a generic feature of the brain dynamics. Namely, we interpret this pattern as a manifestation of a dynamical ground state of the brain activity, necessary to preserve an efficient operational mode, or, expressed in terms of dynamical system theory, we interpret it as a "shadow" of the evolution on (or close to) an attractor in phase space. Nonstationary dynamical aspects of higher cerebral processes should manifest in deviations from this stable pattern. We confirm this hypothesis through a correlation analysis of EEG recordings of 10 healthy subjects during night sleep, 20 recordings of 9 epilepsy patients, and 42 recordings of 21 healthy subjects in resting state during eyes-open and eyes-closed conditions. In particular, we show that the estimation of deviations from the stationary correlation structures provides a more significant differentiation of physiological states and more homogeneous results across subjects.
|
19
|
Landmark-Based Updating of the Head Direction System by Retrosplenial Cortex: A Computational Model. Front Cell Neurosci 2018; 12:191. [PMID: 30061814] [PMCID: PMC6055052] [DOI: 10.3389/fncel.2018.00191]
Abstract
Maintaining a sense of direction is fundamental to navigation, and is achieved in the brain by a network of head direction (HD) cells, which update their signal using stable environmental landmarks. How landmarks are detected and their stability determined is still unknown. Recently we reported a new class of cells (Jacob et al., 2017), the bidirectional cells, in a brain region called retrosplenial cortex (RSC) which relays environmental sensory information to the HD system. A subset of these cells, between-compartment (BC) cells, are directionally tuned (like ordinary HD cells) but follow environmental cues in preference to the global HD signal, resulting in opposing (i.e., bidirectional) tuning curves in opposed environments. Another subset, within-compartment (WC) cells, unexpectedly expressed bidirectional tuning curves in each one of the opposed compartments. Both BC and WC cells lost directional tuning in an open field, unlike HD cells. Two questions arise from this discovery: (i) how do these cells acquire their unusual response properties, and (ii) what are they for? We propose that bidirectional cells reflect a two-way interaction between local direction, as indicated by the visual environment, and global direction as signaled by the HD system. We suggest that BC cells receive strong inputs from visual cues, while WC cells additionally receive modifiable inputs from HD cells which, due to Hebbian coactivation of visual inputs plus two opposing sets of HD inputs, acquire the ability to fire in both directions. A neural network model instantiating this hypothesis is presented, which indeed forms both BC and WC bidirectional cells with properties similar to those seen experimentally. We then demonstrate how tuning specificity degrades when WC/BC cells are exposed to multiple directionalities, replicating the observed loss of WC and BC directional tuning in the open field. 
We suggest that the function of these neurons is to assess the stability of environmental landmarks, thereby determining their utility as reference points by which to set the HD sense of direction. This role could extend to the ability of the HD system to prefer distal over proximal landmarks, and to correct for parallax errors.
|
20
|
Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data. Neuron 2018; 99:227-238.e4. [PMID: 29909997] [PMCID: PMC6091895] [DOI: 10.1016/j.neuron.2018.05.038]
Abstract
The attractor neural network scenario is a popular framework for memory storage in the association cortex, but there is still a large gap between models based on this scenario and experimental data. We study a recurrent network model in which both learning rules and the distribution of stored patterns are inferred from distributions of visual responses to novel and familiar images in the inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. The inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time and another in which firing rates fluctuate chaotically.
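The classical attractor scenario that this paper takes as its baseline can be sketched as a standard Hopfield-style network with an outer-product Hebbian rule; the inferred rules and graded, lognormal retrieval states in the paper go well beyond this binary sketch, which is given here only to fix ideas.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_patterns = 200, 10             # load well below the ~0.14 N capacity
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# outer-product (Hebbian) learning rule, no self-connections
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0.0)

def retrieve(cue, steps=20):
    """Deterministic parallel dynamics: s <- sign(W s)."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# corrupt 15% of a stored pattern and let the attractor dynamics clean it up
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=30, replace=False)
cue[flipped] *= -1
overlap = retrieve(cue) @ patterns[0] / n_units   # ~1.0 when retrieval succeeds
```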
|
21
|
Inferring circuit mechanisms from sparse neural recording and global perturbation in grid cells. eLife 2018; 7:e33503. [PMID: 29985132] [PMCID: PMC6078497] [DOI: 10.7554/elife.33503]
Abstract
A goal of systems neuroscience is to discover the circuit mechanisms underlying brain function. Despite experimental advances that enable circuit-wide neural recording, the problem remains open in part because solving the 'inverse problem' of inferring circuitry and mechanism by merely observing activity is hard. In the grid cell system, we show through modeling that a technique based on global circuit perturbation and examination of a novel theoretical object called the distribution of relative phase shifts (DRPS) could reveal the mechanisms of a cortical circuit at unprecedented detail using extremely sparse neural recordings. We establish feasibility, showing that the method can discriminate between recurrent versus feedforward mechanisms and amongst various recurrent mechanisms using recordings from a handful of cells. The proposed strategy demonstrates that sparse recording coupled with simple perturbation can reveal more about circuit mechanism than can full knowledge of network activity or the synaptic connectivity matrix.
|
22
|
Editorial: Metastable Dynamics of Neural Ensembles. Front Syst Neurosci 2018; 11:99. [PMID: 29472845] [PMCID: PMC5810260] [DOI: 10.3389/fnsys.2017.00099]
|
23
|
A speed-accurate self-sustaining head direction cell path integration model without recurrent excitation. Network (Bristol, England) 2018; 29:37-69. [PMID: 30905280] [DOI: 10.1080/0954898x.2018.1559960]
Abstract
The head direction (HD) system signals HD in an allocentric frame of reference. The system is able to update firing based on internally derived information about self-motion, a process known as path integration. Of particular interest is how path integration might maintain concordance between true HD and internally represented HD. Here we present a self-sustaining two-layer model, capable of self-organizing, which produces extremely accurate path integration. The implications of this work for future investigations of HD system path integration are discussed.
|
24
|
Learning to Generate Sequences with Combination of Hebbian and Non-Hebbian Plasticity in Recurrent Spiking Neural Networks. Front Neurosci 2017; 11:693. [PMID: 29311774] [PMCID: PMC5733011] [DOI: 10.3389/fnins.2017.00693]
Abstract
Synaptic plasticity, the foundation of learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism to train a recurrent spiking neural model to generate sequences. We show that including an adaptive decay of synaptic weights alongside standard STDP helps the network learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme substantially suppresses chaotic activity in the recurrent model, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations.
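A rate-level caricature of the scheme (all constants assumed for illustration; the paper's mechanism operates on spike timing in a recurrent spiking network): a bounded Hebbian term associates successive items, while a uniform non-Hebbian decay keeps the weights from saturating into overly strong attractors.

```python
import numpy as np

def train_sequence(seq, n, eta=0.5, decay=0.05, epochs=20):
    """Hebbian association of successive items plus non-Hebbian weight decay."""
    W = np.zeros((n, n))                       # W[post, pre]
    for _ in range(epochs):
        for pre, post in zip(seq[:-1], seq[1:]):
            W[post, pre] += eta * (1.0 - W[post, pre])   # bounded Hebbian update
        W *= 1.0 - decay                       # uniform, activity-independent decay
    return W

def replay(W, start, length):
    """Regenerate a sequence by following the strongest learned transition."""
    out = [start]
    for _ in range(length - 1):
        out.append(int(np.argmax(W[:, out[-1]])))
    return out

seq = [0, 3, 1, 4, 2]
W = train_sequence(seq, n=5)
```

With unique transitions, `replay(W, 0, len(seq))` reproduces the trained sequence; the decay term bounds every weight, which is the sketch-level analogue of reducing runaway attractor states.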
|
25
|
Cognitive Mapping Based on Conjunctive Representations of Space and Movement. Front Neurorobot 2017; 11:61. [PMID: 29213234] [PMCID: PMC5703018] [DOI: 10.3389/fnbot.2017.00061]
Abstract
It is a challenge to build a robust simultaneous localization and mapping (SLAM) system for dynamic, large-scale environments. Inspired by recent findings on the entorhinal–hippocampal neuronal circuits, we propose a cognitive mapping model that includes continuous attractor networks of head-direction cells and conjunctive grid cells to integrate velocity information through conjunctive encodings of space and movement. Visual inputs from the local view cells in the model provide feedback cues to correct drift in the attractors caused by the noisy velocity inputs. We demonstrate the mapping performance of the proposed cognitive mapping model on an open-source dataset of a 66 km car journey in a 3 km × 1.6 km urban area. Experimental results show that the proposed model robustly builds a coherent semi-metric topological map of the entire urban area using a monocular camera, even though the image inputs contain various changes caused by different lighting conditions and terrains. These results could inspire both neuroscience and robotics research to better understand the neural computational mechanisms of spatial cognition and to build robust robotic navigation systems for large-scale environments.
|
26
|
Metastability of Neuronal Dynamics during General Anesthesia: Time for a Change in Our Assumptions? Front Neural Circuits 2017; 11:58. [PMID: 28890688] [PMCID: PMC5574877] [DOI: 10.3389/fncir.2017.00058]
Abstract
There is strong evidence that anesthetics have stereotypical effects on brain state, so that a given anesthetic appears to have a signature in the electroencephalogram (EEG), which may vary with dose. This can be usefully interpreted as the anesthetic determining an attractor in the phase space of the brain. How brain activity shifts between these attractors in time remains understudied, as most studies implicitly assume a one-to-one relationship between drug dose and attractor features by assuming stationarity over the analysis interval and analyzing data segments of several minutes in length. Yet data in rats anesthetized with isoflurane suggests that, at anesthetic levels consistent with surgical anesthesia, brain activity alternates between multiple attractors, often spending on the order of 10 min in one activity pattern before shifting to another. Moreover, the probability of these jumps between attractors changes with anesthetic concentration. This suggests the hypothesis that brain state is metastable during anesthesia: though it appears at equilibrium on short timescales (on the order of seconds to a few minutes), longer intervals show shifting behavior. Compelling evidence for metastability in rats anesthetized with isoflurane is reviewed, but so far only suggestive hints of metastability in brain states exist with other anesthetics or in other species. Explicit testing of metastability during anesthesia will require experiments with longer acquisition intervals and carefully designed analytic approaches; some of the implications of these constraints are reviewed for typical spectral analysis approaches. If metastability exists during anesthesia, it implies degeneracy in the relationship between brain state and effect site concentration, as there is not a one-to-one mapping between the two. 
This degeneracy could explain some of the reported difficulty in using brain activity monitors to titrate drug dose to prevent awareness during anesthesia and should force a rethinking of the notion of depth of anesthesia as a single dimension. Finally, explicit incorporation of knowledge of the dynamics of the brain during anesthesia could offer better depth of anesthesia monitoring.
|
27
|
Neuromorphic Implementation of Attractor Dynamics in a Two-Variable Winner-Take-All Circuit with NMDARs: A Simulation Study. Front Neurosci 2017; 11:40. [PMID: 28223913] [PMCID: PMC5293789] [DOI: 10.3389/fnins.2017.00040]
Abstract
Neural networks configured with winner-take-all (WTA) competition and N-methyl-D-aspartate receptor (NMDAR)-mediated synaptic dynamics are endowed with various dynamic characteristics of attractors underlying many cognitive functions. This paper presents a novel method for neuromorphic implementation of a two-variable WTA circuit with NMDARs aimed at implementing decision-making, working memory, and hysteresis in visual perception. The proposed method is a dynamical-systems approach to circuit synthesis based on a biophysically plausible WTA model. Notably, the slow, nonlinear temporal dynamics of NMDAR-mediated synapses were reproduced. Circuit simulations in Cadence reproduced the ramping neural activities observed in electrophysiological recordings during decision-making, the sustained activities observed in the prefrontal cortex during working memory, and classical hysteresis behavior during visual discrimination tasks. Furthermore, theoretical analysis of the dynamical-systems approach illuminated the underlying mechanisms of decision-making, memory capacity, and hysteresis loops. The consistency between the circuit simulations and theoretical analysis demonstrated that the WTA circuit with NMDARs captures the attractor dynamics underlying these cognitive functions. Physical implementations of such circuits as elementary modules are promising for assembly into integrated neuromorphic cognitive systems.
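At the level of the underlying model (not the VLSI circuit itself), a two-variable WTA with slow synaptic variables can be sketched as below; the time constants and coupling strengths are illustrative assumptions, not values from the paper's NMDAR circuit.

```python
import numpy as np

def wta(i1, i2, w_self=1.5, w_cross=1.0, tau=0.1, dt=0.001, t_max=5.0):
    """Two units with self-excitation and mutual inhibition; the slow gating
    variables s play the role of NMDAR-like synaptic dynamics."""
    s = np.zeros(2)
    inputs = np.array([i1, i2])
    for _ in range(int(t_max / dt)):
        drive = inputs + w_self * s - w_cross * s[::-1]
        rate = np.maximum(np.tanh(drive), 0.0)    # saturating rate function
        s += dt / tau * (-s + rate)               # slow synaptic dynamics
    return s

s = wta(0.52, 0.48)   # a small input bias is amplified into an all-or-none decision
```

The winner's gating variable saturates near its maximum while the loser's collapses to zero, which is the attractor-amplification behavior that makes such circuits useful for decision-making and working memory.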
|
28
|
Alliance: a common factor of psychotherapy modeled by structural theory. Front Psychol 2015; 6:421. [PMID: 25954215] [PMCID: PMC4404724] [DOI: 10.3389/fpsyg.2015.00421]
Abstract
There is broad consensus that the therapeutic alliance constitutes a core common factor for all modalities of psychotherapy. Meta-analyses corroborated that alliance, as it emerges from therapeutic process, is a significant predictor of therapy outcome. Psychotherapy process is traditionally described and explored using two categorically different approaches, the experiential (first-person) perspective and the behavioral (third-person) perspective. We propose to add to this duality a third, structural approach. Dynamical systems theory and synergetics on the one hand and enactivist theory on the other together can provide this structural approach, which contributes in specific ways to a clarification of the alliance factor. Systems theory offers concepts and tools for the modeling of the individual self and, building on this, of alliance processes. In the enactive perspective, the self is conceived as a socially enacted autonomous system that strives to maintain identity by observing a two-fold goal: to exist as an individual self in its own right (distinction) while also being open to others (participation). Using this conceptualization, we formalized the therapeutic alliance as a phase space whose potential minima (attractors) can be shifted by the therapist to approximate therapy goals. This mathematical formalization is derived from probability theory and synergetics. We draw the conclusion that structural theory provides powerful tools for the modeling of how therapeutic change is staged by the formation, utilization, and dissolution of the therapeutic alliance. In addition, we point out novel testable hypotheses and future applications.
|
29
|
Architectural constraints are a major factor reducing path integration accuracy in the rat head direction cell system. Front Comput Neurosci 2015; 9:10. [PMID: 25705190] [PMCID: PMC4319401] [DOI: 10.3389/fncom.2015.00010]
Abstract
Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally derived information (path integration). In this simulation study we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time after stimulation it takes for a neuron to start firing, and the presence of symmetric, non-offset, within-layer recurrent collateral connectivity. On the basis of the latter, we make the important prediction that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect of a given rise time is more severe in the context of short conduction delays. Further work is suggested on the basis of these findings, which represent a valuable contribution to understanding of the head direction cell system.
|
30
|
A tweaking principle for executive control: neuronal circuit mechanism for rule-based task switching and conflict resolution. J Neurosci 2013; 33:19504-17. [PMID: 24336717] [DOI: 10.1523/jneurosci.1356-13.2013]
Abstract
A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. The model quantitatively accounts for complex task-switching behavior (switch cost, congruency effect, and task-response interaction) as well as monkeys' single-neuron activity associated with task switching. The model yields several testable predictions; in particular, that category-selective neurons play a key role in resolving sensory-motor conflict. This work presents a neural circuit model for task switching and sheds light on the brain mechanisms of a fundamental cognitive capability.
|
31
|
Abstract
During rest, the human brain performs essential functions such as memory maintenance, which are associated with resting-state brain networks (RSNs) including the default-mode network (DMN) and frontoparietal network (FPN). Previous studies based on spiking-neuron network models and their reduced models, as well as those based on imaging data, suggest that resting-state network activity can be captured as attractor dynamics, i.e., dynamics of the brain state toward an attractive state and transitions between different attractors. Here, we analyze the energy landscapes of the RSNs by applying the maximum entropy model, or equivalently the Ising spin model, to human RSN data. We use the previously estimated parameter values to define the energy landscape, and the disconnectivity graph method to estimate the number of local energy minima (equivalent to attractors in attractor dynamics), the basin size, and hierarchical relationships among the different local minima. In both of the DMN and FPN, low-energy local minima tended to have large basins. A majority of the network states belonged to a basin of one of a few local minima. Therefore, a small number of local minima constituted the backbone of each RSN. In the DMN, the energy landscape consisted of two groups of low-energy local minima that are separated by a relatively high energy barrier. Within each group, the activity patterns of the local minima were similar, and different minima were connected by relatively low energy barriers. In the FPN, all dominant local minima were separated by relatively low energy barriers such that they formed a single coarse-grained global minimum. Our results indicate that multistable attractor dynamics may underlie the DMN, but not the FPN, and assist memory maintenance with different memory states.
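The energy-landscape construction can be sketched in a few lines, with random couplings standing in for the parameters the authors estimated from fMRI data: enumerate the binary activity states, compute Ising energies, and mark a state as a local minimum when no single-region flip lowers the energy.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 6                                         # number of regions (toy size)
h = rng.normal(0.0, 0.5, n)                   # biases (fit to data in the paper)
J = rng.normal(0.0, 0.5, (n, n))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)                      # symmetric couplings, no self-terms

def energy(s):
    """Ising / maximum-entropy energy of a ±1 state vector."""
    return -h @ s - 0.5 * s @ J @ s

def is_local_min(s):
    """True if no single-spin flip lowers the energy."""
    e = energy(s)
    for i in range(n):
        t = s.copy()
        t[i] = -t[i]
        if energy(t) < e:
            return False
    return True

states = [np.array(bits) for bits in product([-1, 1], repeat=n)]
minima = [s for s in states if is_local_min(s)]
global_min = min(states, key=energy)          # the deepest basin's bottom
```

The disconnectivity graph the authors use then follows from the lowest energy barrier separating each pair of minima along single-flip paths.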
|
32
|
Abstract
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second longer term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.
|
33
|
Weighted cue integration in the rodent head direction system. Philos Trans R Soc Lond B Biol Sci 2013; 369:20120512. [PMID: 24366127] [DOI: 10.1098/rstb.2012.0512]
Abstract
How the brain combines information from different sensory modalities and of differing reliability is an important and still-unanswered question. Using the head direction (HD) system as a model, we explored the resolution of conflicts between landmarks and background cues. Sensory cue integration models predict averaging of the two cues, whereas attractor models predict capture of the signal by the dominant cue. We found that a visual landmark mostly captured the HD signal at low conflicts: however, there was an increasing propensity for the cells to integrate the cues thereafter. A large conflict presented to naive rats resulted in greater visual cue capture (less integration) than in experienced rats, revealing an effect of experience. We propose that weighted cue integration in HD cells arises from dynamic plasticity of the feed-forward inputs to the network, causing within-trial spatial redistribution of the visual inputs onto the ring. This suggests that an attractor network can implement decision processes about cue reliability using simple architecture and learning rules, thus providing a potential neural substrate for weighted cue integration.
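The two competing predictions contrasted in this abstract can be written down directly (a schematic, not the authors' analysis): sensory-integration models predict a reliability-weighted circular average, while attractor models predict an all-or-none jump to the dominant cue.

```python
import numpy as np

def integrate(theta_a, theta_b, w_a, w_b):
    """Reliability-weighted circular average (cue-integration prediction)."""
    return np.angle(w_a * np.exp(1j * theta_a) + w_b * np.exp(1j * theta_b))

def capture(theta_a, theta_b, w_a, w_b):
    """All-or-none capture by the more reliable cue (attractor prediction)."""
    return theta_a if w_a >= w_b else theta_b

conflict = np.pi / 6                            # 30 degree cue conflict
averaged = integrate(0.0, conflict, 1.0, 1.0)   # equal weights: halfway, pi/12
captured = capture(0.0, conflict, 2.0, 1.0)     # snaps to the dominant cue: 0.0
```

The paper's finding, capture at small conflicts with increasing integration at larger ones, suggests that the real HD network interpolates between these two extremes via plasticity of its feed-forward inputs.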
|
34
|
Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI. Front Neurosci 2012; 5:149. [PMID: 22347151] [PMCID: PMC3270576] [DOI: 10.3389/fnins.2011.00149]
Abstract
We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of “high” and “low”-firing activity. Depending on the overall excitability, transitions to the “high” state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the “high” state retains a “working memory” of a stimulus until well after its release. In the latter case, “high” states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated “corrupted” “high” states comprising neurons of both excitatory populations. Within a “basin of attraction,” the network dynamics “corrects” such states and re-establishes the prototypical “high” state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
|
35
|
Anterior-posterior and medial-lateral control of sway in infants during sitting acquisition does not become adult-like. Gait Posture 2011; 33:88-92. [PMID: 21050764] [PMCID: PMC3053025] [DOI: 10.1016/j.gaitpost.2010.10.002]
Abstract
We examined (1) how sitting postural control in infants develops in the anterior-posterior (A/P) and medial-lateral (M/L) directions of sway, and (2) whether this control is already adult-like during the late phase of infants' sitting acquisition. COP data were acquired from 14 healthy infants (from the onset of sitting until independent sitting) and 21 healthy adults while sitting on a force platform. Attractor dimensionality (CoD: correlation dimension), attractor predictability (LyE: largest Lyapunov exponent), and sway variability (RMS: root mean square) were calculated from the COP data to evaluate postural control. In the A/P direction, sitting was mastered by the infants by decreasing the active degrees of freedom of the postural system (decreased CoD), using a more predictable and locally stable sway (decreased LyE), and increasing sway variability (increased RMS). Control of sitting became simpler, more stable, and more exploratory with development. This may support the hypothesis that the sitting posture serves as the foundation for the development of other motor skills, such as reaching. In the M/L direction, only sway variability decreased with development, possibly due to changes in the infant's body dimensions. Taken together, these findings indicate that early in development the focus is more on the A/P than the M/L direction. Adults' postural control was found to be more adaptable than the infants', in both directions, involving more active degrees of freedom and less predictable sway patterns. Identifying the factors that make the dynamics of the postural system adult-like requires further research.
|