1. Sihn D, Kim SP. A neural basis for learning sequential memory in brain loop structures. Front Comput Neurosci 2024; 18:1421458. PMID: 39161702; PMCID: PMC11330804; DOI: 10.3389/fncom.2024.1421458.
Abstract
Introduction: Behaviors often involve a sequence of events, and learning and reproducing such sequences is essential for sequential memory. Brain loop structures refer to loop-shaped inter-regional connection structures in the brain, such as the cortico-basal ganglia-thalamic and cortico-cerebellar loops. They are thought to play a crucial role in supporting sequential memory, but it is unclear which properties of the loop structure are important, and why.
Methods: In this study, we investigated the conditions necessary for the learning of sequential memory in brain loop structures via computational modeling. We assumed that sequential memory emerges from delayed information transmission in loop structures, presented a basic neural activity model, and validated our theoretical considerations with spiking neural network simulations.
Results: Based on this model, we identified two factors governing the learning of sequential memory: first, the information transmission delay should decrease as the size of the loop structure increases; and second, the likelihood of learning sequential memory increases with the size of the loop structure but soon saturates. Combining these factors, we showed that moderate-sized brain loop structures are advantageous for the learning of sequential memory, owing to physiological restrictions on information transmission delay.
Discussion: Our results will help us better understand the relationship between sequential memory and brain loop structures.
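The delayed-transmission intuition behind this abstract can be illustrated with a toy simulation (a minimal sketch with illustrative unit counts and delays, not the authors' model): a single packet of activity circulating around a loop of units revisits them in a fixed order, which is the sense in which a loop with transmission delays can hold a sequence.

```python
# Toy loop model (illustrative, not the published model): an activity
# packet circulates around a ring of units, each connection imposing a
# fixed transmission delay, so the units fire in a repeating sequence.
from collections import deque

def run_loop(n_units=4, delay=3, steps=24, start_unit=0):
    """Propagate a single activity packet around a loop of n_units,
    with `delay` time steps of in-flight travel per connection.
    Returns the sequence of units that fire, in order."""
    # Each unit's outgoing connection is a FIFO delay line.
    lines = [deque([0] * delay, maxlen=delay) for _ in range(n_units)]
    active = [0] * n_units
    active[start_unit] = 1
    fired = []
    for _ in range(steps):
        for i, a in enumerate(active):
            if a:
                fired.append(i)
        # Read packets arriving from the upstream delay lines BEFORE
        # pushing current activity onto the outgoing lines.
        arriving = [lines[(i - 1) % n_units][0] for i in range(n_units)]
        for i in range(n_units):
            lines[i].append(active[i])  # maxlen evicts the oldest item
        active = arriving
    return fired

seq = run_loop()
```

With the defaults, the packet hops to the next unit every `delay + 1` steps, so the loop replays the unit order `0, 1, 2, 3` cyclically for as long as the simulation runs.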
Affiliation(s)
- Sung-Phil Kim
- Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
2. Zajzon B, Duarte R, Morrison A. Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning. Front Integr Neurosci 2023; 17:935177. PMID: 37396571; PMCID: PMC10310927; DOI: 10.3389/fnint.2023.935177.
Abstract
To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on these, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We identify a limitation of the model, namely that the sequence order is hard-wired into the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically plausible constraints.
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3—Software Engineering, RWTH Aachen University, Aachen, Germany
3. Rozells J, Gavornik JP. Optogenetic manipulation of inhibitory interneurons can be used to validate a model of spatiotemporal sequence learning. Front Comput Neurosci 2023; 17:1198128. PMID: 37362060; PMCID: PMC10288026; DOI: 10.3389/fncom.2023.1198128.
Abstract
The brain uses temporal information to link discrete events into memory structures supporting recognition, prediction, and a wide variety of complex behaviors. It is still an open question how experience-dependent synaptic plasticity creates memories that include temporal and ordinal information. Various models have been proposed to explain how this could work, but these are often difficult to validate in a living brain. A recent model developed to explain sequence learning in the visual cortex encodes intervals in recurrent excitatory synapses and uses a learned offset between excitation and inhibition to generate precisely timed "messenger" cells that signal the end of an instance of time. This mechanism suggests that the recall of stored temporal intervals should be particularly sensitive to the activity of inhibitory interneurons, which can be easily targeted in vivo with standard optogenetic tools. In this work, we examined how simulated optogenetic manipulations of inhibitory cells modify temporal learning and recall based on these mechanisms. We show that disinhibition and excess inhibition during learning or testing cause characteristic errors in recalled timing that could be used to validate the model in vivo using either physiological or behavioral measurements.
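A cartoon of why recalled timing should track inhibition (a hypothetical toy with invented decay constant, gains, and readout rule, not the published model): suppose a timing signal is emitted the moment decaying inhibition falls below a fixed excitatory drive. Scaling the inhibitory gain up or down then shifts the recalled interval in opposite, characteristic directions.

```python
# Toy readout (illustrative parameters): recall time is the moment the
# inhibitory envelope inhib_gain * i0 * exp(-t/tau) drops below a fixed
# excitatory drive, solved in closed form.
import math

def recalled_interval(inhib_gain, excitation=1.0, i0=5.0, tau=100.0):
    """Time (ms) at which decaying inhibition falls below the excitation."""
    return tau * math.log(inhib_gain * i0 / excitation)

t_normal = recalled_interval(1.0)         # baseline recall time
t_disinhibited = recalled_interval(0.5)   # suppressing interneurons -> early recall
t_overinhibited = recalled_interval(2.0)  # driving interneurons -> late recall
```

The direction of the error (early under disinhibition, late under excess inhibition) is the kind of signature the abstract proposes as an in vivo test.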
Affiliation(s)
- Jeffrey P. Gavornik
- Center for Systems Neuroscience, Department of Biology, Boston University, Boston, MA, United States
4. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Coherent noise enables probabilistic sequence replay in spiking neuronal networks. PLoS Comput Biol 2023; 19:e1010989. PMID: 37130121; PMCID: PMC10153753; DOI: 10.1371/journal.pcbi.1010989.
Abstract
Animals rely on different decision strategies when faced with ambiguous or uncertain cues. Depending on the context, decisions may be biased towards events that were most frequently experienced in the past, or be more explorative. A particular type of decision making central to cognition is sequential memory recall in response to ambiguous cues. A previously developed spiking neuronal network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. In response to an ambiguous cue, the model deterministically recalls the sequence shown most frequently during training. Here, we present an extension of the model enabling a range of different decision strategies. In this model, explorative behavior is generated by supplying neurons with noise. As the model relies on population encoding, uncorrelated noise averages out, and the recall dynamics remain effectively deterministic. In the presence of locally correlated noise, the averaging effect is avoided without impairing the model performance, and without the need for large noise amplitudes. We investigate two forms of correlated noise occurring in nature: shared synaptic background inputs, and random locking of the stimulus to spatiotemporal oscillations in the network activity. Depending on the noise characteristics, the network adopts various recall strategies. This study thereby provides potential mechanisms explaining how the statistics of learned sequences affect decision making, and how decision strategies can be adjusted after learning.
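The averaging argument at the core of this abstract can be checked numerically (a self-contained sketch with illustrative parameters, independent of the paper's network): the variance of population-averaged noise shrinks as 1/N when the noise is private to each neuron, but is floored at the shared-noise variance when a "coherent" component is common to all neurons.

```python
# Numerical check (illustrative values, not from the paper): uncorrelated
# noise averages out across a population; a shared component survives.
import random
import statistics

def population_average_noise(n_neurons, shared_frac, n_trials=2000, seed=7):
    """Variance of the population-averaged noise when a fraction
    `shared_frac` of each neuron's unit noise variance is shared."""
    rng = random.Random(seed)
    shared_sd = shared_frac ** 0.5
    private_sd = (1.0 - shared_frac) ** 0.5
    means = []
    for _ in range(n_trials):
        shared = rng.gauss(0.0, shared_sd)  # common to all neurons
        noise = [shared + rng.gauss(0.0, private_sd) for _ in range(n_neurons)]
        means.append(sum(noise) / n_neurons)
    return statistics.pvariance(means)

v_uncorr = population_average_noise(100, shared_frac=0.0)  # approx 1/N = 0.01
v_corr = population_average_noise(100, shared_frac=0.2)    # approx 0.2 + 0.8/N
```

With 100 neurons, the purely private noise is attenuated a hundredfold by averaging, while even a modest shared component keeps the averaged noise large, which is why only locally correlated noise can drive explorative recall in a population code.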
Affiliation(s)
- Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Dirk J Wouters
- Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, & Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
5. Zajzon B, Dahmen D, Morrison A, Duarte R. Signal denoising through topographic modularity of neural circuits. eLife 2023; 12:e77009. PMID: 36700545; PMCID: PMC9981157; DOI: 10.7554/eLife.77009.
Abstract
Information from the sensory periphery is conveyed to the cortex via structured projection pathways that spatially segregate stimulus features, providing a robust and efficient encoding strategy. Beyond sensory encoding, this prominent anatomical feature extends throughout the neocortex. However, the extent to which it influences cortical processing is unclear. In this study, we combine cortical circuit modeling with network theory to demonstrate that the sharpness of topographic projections acts as a bifurcation parameter, controlling the macroscopic dynamics and representational precision across a modular network. By shifting the balance of excitation and inhibition, topographic modularity gradually increases task performance and improves the signal-to-noise ratio across the system. We demonstrate that in biologically constrained networks, such a denoising behavior is contingent on recurrent inhibition. We show that this is a robust and generic structural feature that enables a broad range of behaviorally relevant operating regimes, and provide an in-depth theoretical analysis unraveling the dynamical principles underlying the mechanism.
Affiliation(s)
- Barna Zajzon
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Department of Computer Science 3 - Software Engineering, RWTH Aachen University, Aachen, Germany
- Renato Duarte
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany
- Donders Institute for Brain, Cognition and Behavior, Radboud University Nijmegen, Nijmegen, Netherlands
6. Bouhadjar Y, Wouters DJ, Diesmann M, Tetzlaff T. Sequence learning, prediction, and replay in networks of spiking neurons. PLoS Comput Biol 2022; 18:e1010233. PMID: 35727857; PMCID: PMC9273101; DOI: 10.1371/journal.pcbi.1010233.
Abstract
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. 
We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
Author summary: Essentially all data processed by mammals and many other living organisms is sequential. This holds true for all types of sensory input data as well as motor output activity. Being able to form memories of such sequential data, to predict future sequence elements, and to replay learned sequences is a necessary prerequisite for survival. It has been hypothesized that sequence learning, prediction, and replay constitute the fundamental computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) constitutes a powerful abstract algorithm implementing this form of computation and has been proposed to serve as a model of neocortical processing. In this study, we reformulate this algorithm in terms of known biological ingredients and mechanisms to foster the verifiability of the HTM hypothesis based on electrophysiological and behavioral data. The proposed model learns continuously in an unsupervised manner through biologically plausible, local plasticity mechanisms, and successfully predicts and replays complex sequences. Apart from establishing contact with biology, the study sheds light on the mechanisms determining the speed at which we can process sequences and provides an explanation of the fast sequence replay observed in the hippocampus and in the neocortex.
Affiliation(s)
- Younes Bouhadjar
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Peter Grünberg Institute (PGI-7,10), Jülich Research Centre and JARA, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Dirk J. Wouters
- Institute of Electronic Materials (IWE 2) & JARA-FIT, RWTH Aachen University, Aachen, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, & Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6), & Institute for Advanced Simulation (IAS-6), & JARA BRAIN Institute Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
7. Braun W, Memmesheimer RM. High-frequency oscillations and sequence generation in two-population models of hippocampal region CA1. PLoS Comput Biol 2022; 18:e1009891. PMID: 35176028; PMCID: PMC8890743; DOI: 10.1371/journal.pcbi.1009891.
Abstract
Hippocampal sharp wave/ripple oscillations are a prominent pattern of collective activity, which consists of a strong overall increase of activity with superimposed (140-200 Hz) ripple oscillations. Despite its prominence and its experimentally demonstrated importance for memory consolidation, the mechanisms underlying its generation are to date not understood. Several models assume that recurrent networks of inhibitory cells alone can explain the generation and main characteristics of the ripple oscillations. Recent experiments, however, indicate that in vivo, in addition to inhibitory basket cells, the pattern requires the activity of the local population of excitatory pyramidal cells. Here, we study a model for networks in the hippocampal region CA1 incorporating such a local excitatory population of pyramidal neurons. We start by investigating its ability to generate ripple oscillations using extensive simulations. Using biologically plausible parameters, we find that short pulses of external excitation triggering excitatory cell spiking are required for sharp wave/ripple generation with oscillation patterns similar to in vivo observations. Our model has plausible values for single-neuron, synapse, and connectivity parameters, random connectivity, and no strong feedforward drive to the inhibitory population. Specifically, whereas temporally broad excitation can lead to high-frequency oscillations in the ripple range, sparse pyramidal cell activity is only obtained with pulse-like external CA3 excitation. Further simulations indicate that such short pulses could originate from dendritic spikes in the apical or basal dendrites of CA1 pyramidal cells, which are triggered by coincident spike arrivals from hippocampal region CA3.
Finally we show that replay of sequences by pyramidal neurons and ripple oscillations can arise intrinsically in CA1 due to structured connectivity that gives rise to alternating excitatory pulse and inhibitory gap coding; the latter denotes phases of silence in specific basket cell groups, which induce selective disinhibition of groups of pyramidal neurons. This general mechanism for sequence generation leads to sparse pyramidal cell and dense basket cell spiking, does not rely on synfire chain-like feedforward excitation and may be relevant for other brain regions as well.
Affiliation(s)
- Wilhelm Braun
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Raoul-Martin Memmesheimer
- Neural Network Dynamics and Computation, Institute of Genetics, University of Bonn, Bonn, Germany
8. Hu X, Zeng Z. Bridging the functional and wiring properties of V1 neurons through sparse coding. Neural Comput 2021; 34:104-137. PMID: 34758484; DOI: 10.1162/neco_a_01453.
Abstract
The functional properties of neurons in the primary visual cortex (V1) are thought to be closely related to the structural properties of this network, but the specific relationships remain unclear. Previous theoretical studies have suggested that sparse coding, an energy-efficient coding method, might underlie the orientation selectivity of V1 neurons. We thus aimed to delineate how the neurons are wired to produce this feature. We constructed a model and endowed it with a simple Hebbian learning rule to encode images of natural scenes. The excitatory neurons fired sparsely in response to images and developed strong orientation selectivity. After learning, the connectivity between excitatory neuron pairs, inhibitory neuron pairs, and excitatory-inhibitory neuron pairs depended on firing pattern and receptive field similarity between the neurons. The receptive fields (RFs) of excitatory neurons and inhibitory neurons were well predicted by the RFs of presynaptic excitatory neurons and inhibitory neurons, respectively. The excitatory neurons formed a small-world network, in which certain local connection patterns were significantly overrepresented. Bidirectionally manipulating the firing rates of inhibitory neurons caused linear transformations of the firing rates of excitatory neurons, and vice versa. These wiring properties and modulatory effects were congruent with a wide variety of data measured in V1, suggesting that the sparse coding principle might underlie both the functional and wiring properties of V1 neurons.
Affiliation(s)
- Xiaolin Hu
- Department of Computer Science and Technology, State Key Laboratory of Intelligent Technology and Systems, BNRist, Tsinghua Laboratory of Brain and Intelligence, and IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China
- Zhigang Zeng
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and Key Laboratory of Image Processing and Intelligent Control, Education Ministry of China, Wuhan 430074, China
9. Lutz ND, Admard M, Genzoni E, Born J, Rauss K. Occipital sleep spindles predict sequence learning in a visuo-motor task. Sleep 2021; 44:zsab056. PMID: 33743012; PMCID: PMC8361350; DOI: 10.1093/sleep/zsab056.
Abstract
Study Objectives: The brain appears to use internal models to successfully interact with its environment via active predictions of future events. Both internal models and the predictions derived from them are based on previous experience. However, it remains unclear how previously encoded information is maintained to support this function, especially in the visual domain. In the present study, we hypothesized that sleep consolidates newly encoded spatio-temporal regularities to improve predictions afterwards.
Methods: We tested this hypothesis using a novel sequence-learning paradigm that aimed to dissociate perceptual from motor learning. We recorded behavioral performance and high-density electroencephalography (EEG) in male human participants during initial training and during testing two days later, following an experimental night of sleep (n = 16, including high-density EEG recordings) or wakefulness (n = 17).
Results: Our results show sleep-dependent behavioral improvements correlated with sleep-spindle activity specifically over occipital cortices. Moreover, event-related potential (ERP) responses indicate a shift of attention away from predictable to unpredictable sequences after sleep, consistent with enhanced automaticity in the processing of predictable sequences.
Conclusions: These findings suggest a sleep-dependent improvement in the prediction of visual sequences, likely related to visual cortex reactivation during sleep spindles. Considering that controls in our experiments did not fully exclude oculomotor contributions, future studies will need to address the extent to which these effects depend on purely perceptual versus oculomotor sequence learning.
Affiliation(s)
- Nicolas D Lutz
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience/IMPRS for Cognitive & Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Marie Admard
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Elsa Genzoni
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- School of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Jan Born
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
- Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- German Center for Diabetes Research (DZD), Institute for Diabetes Research & Metabolic Diseases of the Helmholtz Center Munich at the University Tübingen (IDM), Germany
- Karsten Rauss
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
10. Miner D, Wörgötter F, Tetzlaff C, Fauth M. Self-organized structuring of recurrent neuronal networks for reliable information transmission. Biology (Basel) 2021; 10:577. PMID: 34202473; PMCID: PMC8301101; DOI: 10.3390/biology10070577.
Abstract
Simple Summary: Information processing in the brain takes place at multiple stages, each of which is a local network of neurons. The long-range connections between these network stages are sparse and do not change over time. Thus, within each stage, information arrives at a sparse subset of input neurons and must be routed to a sparse subset of output neurons. In this theoretical work, we investigate how networks achieve this routing in a self-organized manner without losing information. We show that biologically inspired self-organization entails that input information is distributed to all neurons in the network by strengthening many synapses in the local networks. Thus, after successful self-organization, input information can be read out and decoded from a small number of outputs. We also show that this form of self-organization can still be more energy-efficient than creating more long-range input and output connections.
Abstract: Our brains process information using a layered hierarchical network architecture, with abundant connections within each layer and sparse long-range connections between layers. As these long-range connections are mostly unchanged after development, each layer has to locally self-organize in response to new inputs to enable information routing between the sparse input and output connections. Here we demonstrate that this can be achieved by a well-established model of cortical self-organization based on a well-orchestrated interplay between several plasticity processes. After this self-organization, stimuli conveyed by sparse inputs can be rapidly read out from a layer using only very few long-range connections. To achieve this information routing, the neurons that are stimulated form feed-forward projections into the unstimulated parts of the same layer and recruit more neurons to represent the stimulus.
In this way, the plasticity processes ensure that each neuron receives projections from, and responds to, only one stimulus, such that the network is partitioned into parts with different preferred stimuli. Along these lines, we show that the relation between network activity and connectivity self-organizes into a biologically plausible regime. Finally, we argue how the emerging connectivity may minimize the metabolic cost of maintaining a network structure that rapidly transmits stimulus information despite sparse input and output connectivity.
11. Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:e63751. PMID: 33734085; PMCID: PMC7972481; DOI: 10.7554/eLife.63751.
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular microcolumn based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
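The "eligibility trace" idea this model builds on can be sketched in a few lines (a schematic with invented parameters, not the authors' exact rule): coincident pre- and postsynaptic activity tags a synapse with a decaying trace, and a later reward converts whatever trace remains into a weight change, so synapses active shortly before the reward are strengthened most.

```python
# Schematic eligibility-trace rule (illustrative form and parameters):
# coincidence sets a trace, the trace decays, and delayed reward turns
# the remaining trace into a weight update.

def update_eligibility(trace, pre, post, tau=20.0, dt=1.0):
    """Decay the trace and add the current pre/post coincidence."""
    return trace * (1.0 - dt / tau) + pre * post

def apply_reward(weight, trace, reward, lr=0.05):
    """Convert the remaining eligibility into a weight change."""
    return weight + lr * reward * trace

# Coincident pre/post activity at t = 0, reward delivered 10 steps later:
trace, weight = 0.0, 0.5
trace = update_eligibility(trace, pre=1.0, post=1.0)  # coincidence tags the synapse
for _ in range(10):                                   # delay before the reward
    trace = update_eligibility(trace, pre=0.0, post=0.0)
weight = apply_reward(weight, trace, reward=1.0)
```

Because the trace decays between coincidence and reward, the rule naturally solves the temporal credit-assignment problem that a purely instantaneous Hebbian update cannot.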
Affiliation(s)
- Ian Cone
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
- Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval
- Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
12. An integrate-and-fire spiking neural network model simulating artificially induced cortical plasticity. eNeuro 2021; 8:ENEURO.0333-20.2021. PMID: 33632810; PMCID: PMC7986529; DOI: 10.1523/eneuro.0333-20.2021.
Abstract
We describe an integrate-and-fire (IF) spiking neural network that incorporates spike-timing-dependent plasticity (STDP) and simulates the experimental outcomes of four different conditioning protocols that produce cortical plasticity. The original conditioning experiments were performed in freely moving non-human primates (NHPs) with an autonomous head-fixed bidirectional brain-computer interface (BCI). Three protocols involved closed-loop stimulation triggered from (1) spike activity of single cortical neurons, (2) electromyographic (EMG) activity from forearm muscles, and (3) cycles of spontaneous cortical beta activity. A fourth protocol involved open-loop delivery of pairs of stimuli at neighboring cortical sites. The IF network that replicates the experimental results consists of 360 units with simulated membrane potentials produced by synaptic inputs and triggering a spike when reaching threshold. The 240 cortical units produce either excitatory or inhibitory postsynaptic potentials (PSPs) in their target units. In addition to the experimentally observed conditioning effects, the model also allows computation of underlying network behavior not originally documented. Furthermore, the model makes predictions about outcomes from protocols not yet investigated, including spike-triggered inhibition, γ-triggered stimulation and disynaptic conditioning. The success of the simulations suggests that a simple voltage-based IF model incorporating STDP can capture the essential mechanisms mediating targeted plasticity with closed-loop stimulation.
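The STDP ingredient such a model incorporates can be written compactly (a generic pair-based window with illustrative amplitudes and time constant, not the study's exact parameters): the sign of the weight change depends on whether the presynaptic spike precedes or follows the postsynaptic one, and its magnitude decays exponentially with the spike-time difference.

```python
# Generic pair-based STDP window (illustrative parameters): pre-before-post
# potentiates, post-before-pre depresses, both decaying with |dt|.
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:        # pre before post: potentiation
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:      # post before pre: depression
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

A closed-loop conditioning protocol of the kind described above effectively biases which spike pairings occur, and this window then translates that bias into directed changes of the cortical connection strengths.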
13. Hey, look over there: distraction effects on rapid sequence recall. PLoS One 2020; 15:e0223743. PMID: 32275703; PMCID: PMC7147745; DOI: 10.1371/journal.pone.0223743.
Abstract
In the course of everyday life, the brain must store and recall a huge variety of representations of stimuli that are presented in an ordered or sequential way. The processes by which the ordering of these various items is stored and recalled are only moderately well understood. Here we use a computational model of a cortex-like recurrent neural network adapted by a multitude of plasticity mechanisms. We first demonstrate the learning of a sequence. Then, we examine the influence of different types of distractors on the network dynamics during recall of the encoded sequential information. We broadly identify two distinct effect categories for distractors, develop a basic understanding of why this is so, and predict which distractors will fall into each category.