1. Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. PLoS Comput Biol 2022; 18:e1010628. PMID: 36399437; PMCID: PMC9674146; DOI: 10.1371/journal.pcbi.1010628.
Abstract
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, new task training moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting. Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing the old and new tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.
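The interleaved-training idea in this abstract can be illustrated with a toy gradient-descent sketch. This is purely illustrative: two quadratic losses stand in for the task manifolds and a "replay" step stands in for sleep reactivation; none of it is the paper's spiking model.

```python
import numpy as np

# Two tasks as quadratic losses whose minima ("task manifolds" collapsed
# to points) differ. Sequential training overwrites task A; interleaving
# task-B steps with replay steps on task A keeps weights near both minima.
w_a = np.array([1.0, 0.0])   # hypothetical optimum for task A
w_b = np.array([0.0, 1.0])   # hypothetical optimum for task B

def loss(w, target):
    return float(np.sum((w - target) ** 2))

def grad_step(w, target, lr=0.1):
    return w - lr * 2.0 * (w - target)

# Sequential: train A to convergence, then B to convergence.
w = np.zeros(2)
for _ in range(200):
    w = grad_step(w, w_a)
for _ in range(200):
    w = grad_step(w, w_b)
seq_loss_a = loss(w, w_a)          # large: task A was overwritten

# Interleaved ("sleep replay"): alternate B-steps with A-replay steps.
w = np.zeros(2)
for _ in range(200):
    w = grad_step(w, w_a)
for _ in range(200):
    w = grad_step(w, w_b)
    w = grad_step(w, w_a)          # replay of the old task
inter_loss_a = loss(w, w_a)

print(seq_loss_a, inter_loss_a)
```

With replay the weights settle between the two optima, so the loss on the old task stays small instead of blowing up to the full distance between task minima.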
2. Delahunt CB, Maia PD, Kutz JN. Built to Last: Functional and Structural Mechanisms in the Moth Olfactory Network Mitigate Effects of Neural Injury. Brain Sci 2021; 11:462. PMID: 33916469; PMCID: PMC8067361; DOI: 10.3390/brainsci11040462.
Abstract
Most organisms suffer neuronal damage throughout their lives, which can impair performance of core behaviors. Their neural circuits need to maintain function despite injury, which in particular requires preserving key system outputs. In this work, we explore whether and how certain structural and functional neuronal network motifs act as injury mitigation mechanisms. Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) parallel inhibitory and excitatory connections contribute to the robustness of the olfactory system in the Manduca sexta moth. We simulate injuries on a detailed computational model of the moth olfactory network calibrated to data. The injuries are modeled on focal axonal swellings, a ubiquitous form of axonal pathology observed in traumatic brain injuries and other brain disorders. Axonal swellings effectively compromise spike train propagation along the axon, reducing the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury’s impact on readout neuron responses or by restoring these responses to pre-injury levels. These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, robustness to injury is a vital design principle to consider when analyzing neural systems.
Affiliation(s)
- Charles B. Delahunt, Department of Applied Mathematics, University of Washington, Seattle, WA 98195-3925, USA
- Pedro D. Maia, Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019, USA
- J. Nathan Kutz, Department of Applied Mathematics, University of Washington, Seattle, WA 98195-3925, USA
- Correspondence: C.B. Delahunt, P.D. Maia
3. Delahunt CB, Kutz JN. Putting a bug in ML: The moth olfactory network learns to read MNIST. Neural Netw 2019; 118:54-64. PMID: 31228724; DOI: 10.1016/j.neunet.2019.05.012.
Abstract
We seek to (i) characterize the learning architectures exploited in biological neural networks for training on very few samples, and (ii) port these algorithmic structures to a machine learning context. The moth olfactory network is among the simplest biological neural systems that can learn, and its architecture includes key structural elements and mechanisms widespread in biological neural nets, such as cascaded networks, competitive inhibition, high intrinsic noise, sparsity, reward mechanisms, and Hebbian plasticity. These structural biological elements, in combination, enable rapid learning. MothNet is a computational model of the moth olfactory network, closely aligned with the moth's known biophysics and with in vivo electrode data collected from moths learning new odors. We assign this model the task of learning to read the MNIST digits. We show that MothNet successfully learns to read given very few training samples (1-10 samples per class). In this few-samples regime, it outperforms standard machine learning methods such as nearest-neighbors, support-vector machines, and neural networks (NNs), and matches specialized one-shot transfer-learning methods but without the need for pre-training. The MothNet architecture illustrates how algorithmic structures derived from biological brains can be used to build alternative NNs that may avoid the high training data demands of many current engineered NNs.
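A minimal sketch of the few-sample recipe the abstract describes: a fixed random expansion, enforced sparsity, and a Hebbian readout trained on a handful of samples per class. All sizes, templates, and constants here are invented for illustration; this is not MothNet itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify(x, k=10):
    # keep only the k largest activations ("enforced sparsity")
    out = np.zeros_like(x)
    idx = np.argsort(x)[-k:]
    out[idx] = x[idx]
    return out

d_in, d_hid, n_classes = 16, 200, 2
proj = rng.normal(size=(d_hid, d_in))     # fixed random cascade stage
readout = np.zeros((n_classes, d_hid))    # plastic Hebbian weights

# two hypothetical class templates plus noisy few-shot samples
templates = rng.normal(size=(n_classes, d_in))
for c in range(n_classes):
    for _ in range(5):                     # 5 training samples per class
        sample = templates[c] + 0.1 * rng.normal(size=d_in)
        h = sparsify(proj @ sample)
        readout[c] += h                    # Hebbian: co-active pre and post

def classify(x):
    return int(np.argmax(readout @ sparsify(proj @ x)))

correct = sum(classify(templates[c]) == c for c in range(n_classes))
print(correct)
```

Because sparsification concentrates Hebbian growth on a small class-specific set of hidden units, a few samples suffice to separate the classes.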
Affiliation(s)
- Charles B Delahunt, Department of Applied Mathematics and Computational Neuroscience Center, University of Washington, Seattle, United States
- J Nathan Kutz, Department of Applied Mathematics, University of Washington, Seattle, United States
4. Delahunt CB, Riffell JA, Kutz JN. Biological Mechanisms for Learning: A Computational Model of Olfactory Learning in the Manduca sexta Moth, With Applications to Neural Nets. Front Comput Neurosci 2018; 12:102. PMID: 30618694; PMCID: PMC6306094; DOI: 10.3389/fncom.2018.00102.
Abstract
The insect olfactory system, which includes the antennal lobe (AL), mushroom body (MB), and ancillary structures, is a relatively simple neural system capable of learning. Its structural features, which are widespread in biological neural systems, process olfactory stimuli through a cascade of networks where large dimension shifts occur from stage to stage and where sparsity and randomness play a critical role in coding. Learning is partly enabled by a neuromodulatory reward mechanism of octopamine stimulation of the AL, whose increased activity induces synaptic weight updates in the MB through Hebbian plasticity. Enforced sparsity in the MB focuses Hebbian growth on neurons that are the most important for the representation of the learned odor. Based upon current biophysical knowledge, we have constructed an end-to-end computational firing-rate model of the Manduca sexta moth olfactory system which includes the interaction of the AL and MB under octopamine stimulation. Our model is able to robustly learn new odors, and neural firing rates in our simulations match the statistical features of in vivo firing rate data. From a biological perspective, the model provides a valuable tool for examining the role of neuromodulators, like octopamine, in learning, and gives insight into critical interactions between sparsity, Hebbian growth, and stimulation during learning. Our simulations also inform predictions about structural details of the olfactory system that are not currently well-characterized. From a machine learning perspective, the model yields bio-inspired mechanisms that are potentially useful in constructing neural nets for rapid learning from very few samples. These mechanisms include high-noise layers, sparse layers as noise filters, and a biologically-plausible optimization method to train the network based on octopamine stimulation, sparse layers, and Hebbian growth.
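The octopamine-gated Hebbian update described above can be sketched as follows. The multiplicative gating is the point; the names and constants are illustrative, not the paper's parameters.

```python
import numpy as np

# Weight growth requires coincident pre/post activity AND the
# neuromodulatory reward signal (octopamine); without the signal,
# the same coincidence leaves the weights untouched.
def hebbian_update(w, pre, post, octopamine, lr=0.05):
    # outer-product Hebbian term, gated multiplicatively by octopamine
    return w + lr * octopamine * np.outer(post, pre)

pre  = np.array([1.0, 0.0, 1.0])   # AL units active for the trained odor
post = np.array([0.0, 1.0])        # sparse MB unit recruited for the odor

w = np.zeros((2, 3))
w_no_reward = hebbian_update(w, pre, post, octopamine=0.0)
w_reward    = hebbian_update(w, pre, post, octopamine=1.0)

print(w_no_reward.sum(), w_reward.sum())
```

Only the synapses between co-active units grow, and only when octopamine is present, which is how stimulation focuses learning on the odor's sparse representation.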
Affiliation(s)
- Charles B. Delahunt, Department of Electrical Engineering and Computational Neuroscience Center, University of Washington, Seattle, WA, United States
- Jeffrey A. Riffell, Department of Biology, University of Washington, Seattle, WA, United States
- J. Nathan Kutz, Department of Applied Mathematics, University of Washington, Seattle, WA, United States
5. Malerba P, Bazhenov M. Circuit mechanisms of hippocampal reactivation during sleep. Neurobiol Learn Mem 2018; 160:98-107. PMID: 29723670; DOI: 10.1016/j.nlm.2018.04.018.
Abstract
The hippocampus is important for memory and learning, being a brain site where initial memories are formed and where sharp wave-ripples (SWRs) are found, which are responsible for mapping recent memories to long-term storage during sleep-related memory replay. While this conceptual schema is well established, the specific intrinsic and network-level mechanisms driving spatio-temporal patterns of hippocampal activity during sleep, and specifically controlling off-line memory reactivation, are unknown. In this study, we discuss a model of the hippocampal CA3-CA1 network that generates spontaneous characteristic SWR activity. Our study predicts the properties of CA3 input that are necessary for successful CA1 ripple generation and the role of synaptic interactions and intrinsic excitability in spike sequence replay during SWRs. Specifically, we found that excitatory synaptic connections promote reactivation in both CA3 and CA1, but the different dynamics of sharp waves in CA3 and ripples in CA1 result in a differential role for synaptic inhibition in modulating replay: promoting spike sequence specificity in CA3 but not in CA1. Finally, we describe how awake learning of spatial trajectories leads to synaptic changes sufficient to drive hippocampal cells' reactivation during sleep, as required for sleep-related memory consolidation.
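The idea that awake learning leaves synaptic changes sufficient to drive ordered reactivation can be caricatured in a few lines: a hypothetical four-cell chain (not the CA3-CA1 model) with strengthened feedforward weights replays its stored order from a single cue.

```python
import numpy as np

# "Learning" a trajectory A->B->C->D leaves asymmetric feedforward
# weights; a brief cue to the first cell then reactivates the chain
# in stored order, as in sleep replay.
n = 4
w = np.zeros((n, n))
for i in range(n - 1):
    w[i + 1, i] = 1.5                 # strengthened i -> i+1 synapse

v = np.zeros(n)
v[0] = 2.0                            # cue: kick the first cell
threshold, fire_times = 1.0, {}
for t in range(10):
    spikes = (v >= threshold).astype(float)
    for i in np.flatnonzero(spikes):
        fire_times.setdefault(i, t)   # record each cell's first spike
    v = w @ spikes                    # spikes drive downstream cells
order = sorted(fire_times, key=fire_times.get)
print(order)
```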
Affiliation(s)
- Paola Malerba, Department of Medicine, University of California San Diego, United States
- Maxim Bazhenov, Department of Medicine, University of California San Diego, United States
6. Sanda P, Skorheim S, Bazhenov M. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task. PLoS Comput Biol 2017; 13:e1005705. PMID: 28961245; PMCID: PMC5636167; DOI: 10.1371/journal.pcbi.1005705.
Abstract
Neural networks with a single plastic layer employing reward-modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward-modulated and non-reward-modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP, and hard limits on synaptic strength. We found that adding a hidden layer of neurons employing non-rewarded STDP created neurons that responded to specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn to discriminate between rewarding patterns and patterns designated as punishing, despite the absence of labeled training data. Synaptic noise allowed for trial-and-error learning that helped identify goal-oriented strategies effective in solving the task. The study predicts a critical set of properties of a spiking neuronal network with STDP that is sufficient to solve a complex foraging task involving pattern classification and decision making. This study explores how intelligent behavior emerges from basic principles known at the cellular level of biological neuronal network dynamics. Compared to the approaches used in the artificial intelligence community, we applied biologically realistic modeling of neuronal dynamics and plasticity. The building blocks of the model are spiking neurons, spike-time dependent plasticity (STDP), and experimentally known homeostatic rules, which are shown to play a fundamental role both in keeping the network stable and in enabling continuous learning.
Our study predicts that a combination of these principles makes foraging behavior in a previously unknown environment possible, including pattern classification to distinguish between environment shapes that are rewarded and those that are punished, and decision making to select the optimal strategy for acquiring the maximal number of rewarded elements. To solve this complex task we used multi-layer neuronal processing that implemented pattern generalization by unsupervised STDP at the earlier processing step, as commonly observed in animal and human sensory processing, followed by reinforcement learning at the later steps. In the model, intelligent behavior emerged spontaneously from the network organization implementing both local unsupervised plasticity and reward feedback resulting from successful behavior in the environment.
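Two of the ingredients named above, the pair-based STDP window and homeostatic regulation with hard limits, can be sketched as follows (all constants are illustrative, not the paper's values):

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # dt = t_post - t_pre (ms); pre-before-post potentiates,
    # the reverse order depresses
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def homeostasis(w, target_sum=1.0, w_max=0.5):
    # hard limits on synaptic strength, then rescale the neuron's
    # total input weight toward a fixed target (normalization)
    w = np.clip(w, 0.0, w_max)
    s = w.sum()
    return w * (target_sum / s) if s > 0 else w

dw_causal = stdp(+5.0)       # pre fires 5 ms before post
dw_acausal = stdp(-5.0)      # post fires 5 ms before pre
w = homeostasis(np.array([0.9, 0.9, 0.2]))
print(dw_causal, dw_acausal, w)
```

The normalization step is what keeps runaway Hebbian growth in check: individual synapses compete for a fixed total budget of input weight.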
Affiliation(s)
- Pavel Sanda, Department of Medicine, University of California, San Diego, La Jolla, California, United States of America
- Steven Skorheim, Information and Systems Sciences Lab, HRL Laboratories, LLC, Malibu, California, United States of America
- Maxim Bazhenov, Department of Medicine, University of California, San Diego, La Jolla, California, United States of America
7. Linking dynamics of the inhibitory network to the input structure. J Comput Neurosci 2016; 41:367-391. PMID: 27650865; DOI: 10.1007/s10827-016-0622-8.
Abstract
Networks of inhibitory interneurons are found in many distinct classes of biological systems. Inhibitory interneurons govern the dynamics of principal cells and are likely to be critically involved in the coding of information. In this theoretical study, we describe the dynamics of a generic inhibitory network in terms of low-dimensional, simplified rate models. We study the relationship between the structure of external input applied to the network and the patterns of activity arising in response to that stimulation. We found that even a minimal inhibitory network can generate a great diversity of spatio-temporal patterning including complex bursting regimes with non-trivial ratios of burst firing. Despite the complexity of these dynamics, the network's response patterns can be predicted from the rankings of the magnitudes of external inputs to the inhibitory neurons. This type of invariant dynamics is robust to noise and stable in densely connected networks with strong inhibitory coupling. Our study predicts that the response dynamics generated by an inhibitory network may provide critical insights about the temporal structure of the sensory input it receives.
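The rank-invariance prediction, that response patterns follow the ranking of external input magnitudes, can be checked in a minimal rate model of mutually inhibiting units (parameters are illustrative):

```python
import numpy as np

# Three rectified-rate units with all-to-all inhibition; steady-state
# activity ordering should match the ordering of the external inputs.
inputs = np.array([1.0, 0.8, 0.6])
g, dt, tau = 0.2, 0.1, 1.0        # inhibitory gain, Euler step, time const
r = np.zeros(3)
for _ in range(1000):
    total_inh = g * (r.sum() - r)            # inhibition from the others
    drive = np.maximum(0.0, inputs - total_inh)
    r += dt / tau * (-r + drive)
print(r)  # rates ordered like the inputs
```

With stronger coupling the same network switches to winner-take-all bursting, but the identity of the dominant units is still set by the input ranking, which is the invariance the abstract describes.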
8. Kee T, Sanda P, Gupta N, Stopfer M, Bazhenov M. Feed-Forward versus Feedback Inhibition in a Basic Olfactory Circuit. PLoS Comput Biol 2015; 11:e1004531. PMID: 26458212; PMCID: PMC4601731; DOI: 10.1371/journal.pcbi.1004531.
Abstract
Inhibitory interneurons play critical roles in shaping the firing patterns of principal neurons in many brain systems. Despite differences in the anatomy and functions of neuronal circuits containing inhibition, two basic motifs repeatedly emerge: feed-forward and feedback. In the locust, it was proposed that a subset of lateral horn interneurons (LHNs) provides feed-forward inhibition onto Kenyon cells (KCs) to maintain their sparse firing, a property critical for olfactory learning and memory. But recently it was established that a single inhibitory cell, the giant GABAergic neuron (GGN), is the main and perhaps sole source of inhibition in the mushroom body, and that inhibition from this cell is mediated by a feedback (FB) loop including KCs and the GGN. To clarify basic differences in the effects of feedback vs. feed-forward inhibition on circuit dynamics, we here use a model of the locust olfactory system. We found that both inhibitory motifs were able to maintain sparse KC responses and provide optimal odor discrimination. However, we further found that only FB inhibition could create a phase response consistent with data recorded in vivo. These findings describe general rules for feed-forward versus feedback inhibition and suggest that the GGN is potentially capable of providing the primary source of inhibition to the KCs. A better understanding of how inhibitory motifs impact post-synaptic neuronal activity could be used to reveal unknown inhibitory structures within biological networks. Understanding how inhibitory neurons interact with excitatory neurons is critical for understanding the behavior of neuronal networks. Here we address this question with simple but biologically relevant models based on the anatomy of the locust olfactory pathway. Two ubiquitous and basic inhibitory motifs were tested: feed-forward and feedback.
Feed-forward inhibition typically occurs between different brain areas, when excitatory neurons excite inhibitory cells that then inhibit a group of postsynaptic excitatory neurons outside of the initiating excitatory neurons' area. The feedback inhibitory motif, in contrast, requires a population of excitatory neurons to drive the inhibitory cells, which in turn inhibit the same population of excitatory cells. We found that the type of inhibitory motif determined the relative timing with which each group of cells fired action potentials. It also affected the range of the inhibitory neurons' activity, with the inhibitory neurons having a wider dynamic range in the feedback circuit than in the feed-forward one. These results may allow the type of connectivity structure within unexplored biological circuits to be predicted from electrophysiological recordings alone.
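The timing difference between the two motifs can be seen in a two-population rate model (a sketch with illustrative parameters, not the locust model): with a step stimulus, feed-forward inhibition tracks the input directly, while feedback inhibition must wait for the excitatory population and therefore rises later.

```python
import numpy as np

def simulate(feedback, steps=3000, dt=0.01, tau=1.0, w=1.0, stim=1.0):
    # one excitatory rate e and one inhibitory rate i;
    # FB: inhibition driven by e; FF: inhibition driven by the stimulus
    e = i = 0.0
    i_trace = []
    for _ in range(steps):
        e += dt / tau * (-e + max(0.0, stim - w * i))
        drive_i = e if feedback else stim
        i += dt / tau * (-i + drive_i)
        i_trace.append(i)
    return np.array(i_trace)

def half_rise_time(trace, dt=0.01):
    # time at which inhibition first reaches half its steady-state value
    half = 0.5 * trace[-1]
    return float(np.argmax(trace >= half) * dt)

t_ff = half_rise_time(simulate(feedback=False))
t_fb = half_rise_time(simulate(feedback=True))
print(t_ff, t_fb)
```

The later rise of the feedback inhibitory unit is the kind of relative-timing signature that could distinguish the two motifs from recordings alone.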
Affiliation(s)
- Tiffany Kee, Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America
- Pavel Sanda, Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America
- Nitin Gupta, Department of Biological Sciences and Bioengineering, Indian Institute of Technology Kanpur, Kanpur, India
- Mark Stopfer, US National Institutes of Health, National Institute of Child Health and Human Development, Bethesda, Maryland, United States of America
- Maxim Bazhenov, Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, California, United States of America
9.
Abstract
Frequency modulated (FM) sweeps are common in species-specific vocalizations, including human speech. Auditory neurons selective for the direction and rate of frequency change in FM sweeps are present across species, but the synaptic mechanisms underlying such selectivity are only beginning to be understood. Even less is known about the mechanisms of experience-dependent changes in FM sweep selectivity. We present three network models of synaptic mechanisms of FM sweep direction and rate selectivity that explain experimental data: (1) The 'facilitation' model contains frequency selective cells operating as coincidence detectors, summing multiple excitatory inputs with different time delays. (2) The 'duration tuned' model depends on interactions between delayed excitation and early inhibition. The strength of delayed excitation determines the preferred duration. Inhibitory rebound can reinforce the delayed excitation. (3) The 'inhibitory sideband' model uses frequency selective inputs to a network of excitatory and inhibitory cells. The strength and asymmetry of these connections result in neurons responsive to sweeps in a single direction at sufficient sweep rates. Variations of these properties can explain the diversity of rate-dependent direction selectivity seen across species. We show that the inhibitory sideband model can be trained using spike timing dependent plasticity (STDP) to develop direction selectivity from a non-selective network. These models provide a means to compare the proposed synaptic and spectrotemporal mechanisms of FM sweep processing and can be utilized to explore cellular mechanisms underlying experience- or training-dependent changes in spectrotemporal processing across animal models. Given the analogy between FM sweeps and visual motion, these models can serve a broader function in studying stimulus movement across sensory epithelia.
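The 'inhibitory sideband' mechanism can be sketched in discrete time (all constants illustrative): each frequency channel inhibits its higher-frequency neighbour with a short delay, so a sweep in one direction has its excitation cancelled by the sideband while the opposite direction escapes.

```python
import numpy as np

def sweep_response(upward, n_ch=8, delay=1, w_inh=1.0, t_max=20):
    # excitation arrives at successive channels as the sweep passes
    exc = np.zeros((t_max, n_ch))
    for ch in range(n_ch):
        t = ch if upward else n_ch - 1 - ch   # arrival time per channel
        exc[t, ch] = 1.0
    # asymmetric sideband: channel ch-1 inhibits channel ch after a delay
    inh = np.zeros_like(exc)
    for ch in range(1, n_ch):
        inh[delay:, ch] = w_inh * exc[:-delay, ch - 1]
    # rectified net response summed over channels and time
    return float(np.maximum(0.0, exc - inh).sum())

up, down = sweep_response(True), sweep_response(False)
print(up, down)
```

For the upward sweep the delayed inhibition lands exactly on each channel's excitation and cancels it; for the downward sweep it arrives at the wrong time, so the response passes, yielding direction selectivity.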
10. Skorheim S, Lonjers P, Bazhenov M. A spiking network model of decision making employing rewarded STDP. PLoS One 2014; 9:e90821. PMID: 24632858; PMCID: PMC3954625; DOI: 10.1371/journal.pone.0090821.
Abstract
Reward-modulated spike timing dependent plasticity (STDP) combines unsupervised STDP with a reinforcement signal that modulates synaptic changes. It was proposed as a learning rule capable of solving the distal reward problem in reinforcement learning. Nonetheless, the performance and limitations of this learning mechanism have yet to be tested on biologically relevant problems. In our work, rewarded STDP was implemented to model foraging behavior in a simulated environment. Over the course of training, the network of spiking neurons developed the capability for highly successful decision making. The network performance remained stable even after significant perturbations of synaptic structure. Rewarded STDP alone, however, was insufficient to learn effective decision making, owing to the difficulty of maintaining homeostatic equilibrium of synaptic weights and to the development of local performance maxima. Our study predicts that successful learning requires stabilizing mechanisms that allow neurons to balance their input and output synapses, as well as synaptic noise.
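A common way to model the distal reward problem that rewarded STDP addresses is an eligibility trace: a pre-before-post pairing tags the synapse, and a reward arriving later converts the decaying tag into a weight change. A sketch with illustrative constants (not the paper's exact rule):

```python
import numpy as np

tau_e, lr = 20.0, 0.5         # trace time constant (ms), learning rate
w = np.array([0.2, 0.2])      # synapse 0: paired; synapse 1: never paired
e = np.zeros(2)               # eligibility traces

for t in range(50):
    e *= np.exp(-1.0 / tau_e)             # trace decays every millisecond
    if t == 10:
        e[0] += 1.0                       # pre-before-post pairing tags syn 0
    if t == 40:
        w += lr * 1.0 * e                 # delayed reward reads out the tags

print(w)
```

Only the synapse whose activity preceded the reward is strengthened, even though the reward arrives 30 ms after the pairing; the unpaired synapse is untouched.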
Affiliation(s)
- Steven Skorheim, Department of Cell Biology and Neuroscience, University of California Riverside, Riverside, California, United States of America
- Peter Lonjers, Department of Cell Biology and Neuroscience, University of California Riverside, Riverside, California, United States of America
- Maxim Bazhenov, Department of Cell Biology and Neuroscience, University of California Riverside, Riverside, California, United States of America
11.
Abstract
Recurrent inhibition, wherein excitatory principal neurons stimulate inhibitory interneurons that feed back on the same principal cells, occurs ubiquitously in the brain. However, the regulation and function of recurrent inhibition are poorly understood in terms of the contributing interneuron subtypes as well as their effect on neural and cognitive outputs. In the Drosophila olfactory system, odorants activate olfactory sensory neurons (OSNs), which stimulate projection neurons (PNs) in the antennal lobe. Both OSNs and PNs activate local inhibitory neurons (LNs) that provide either feedforward or recurrent/feedback inhibition in the lobe. During olfactory habituation, prior exposure to an odorant selectively decreases the animal's subsequent response to the odorant. We show here that habituation occurs in response to feedback from PNs. Output from PNs is necessary for olfactory habituation and, in the absence of odorant, direct PN activation is sufficient to induce the odorant-selective behavioral attenuation characteristic of olfactory habituation. PN-induced habituation occludes further odor-induced habituation and similarly requires GABA(A)Rs and NMDARs in PNs, as well as VGLUT and cAMP signaling in the multiglomerular inhibitory local interneuron (LN1) subtype of LN. Thus, PN output is monitored by an LN subtype whose resultant plasticity underlies behavioral habituation. We propose that recurrent inhibitory motifs common in neural circuits may similarly underlie habituation to other complex stimuli.
12. Chen JY, Chauvette S, Skorheim S, Timofeev I, Bazhenov M. Interneuron-mediated inhibition synchronizes neuronal activity during slow oscillation. J Physiol 2012; 590:3987-4010. PMID: 22641778; DOI: 10.1113/jphysiol.2012.227462.
Abstract
The signature of slow-wave sleep in the electroencephalogram (EEG) is large-amplitude fluctuation of the field potential, which reflects synchronous alternation of activity and silence across cortical neurons. While initiation of the active cortical states during sleep slow oscillation has been intensively studied, the biological mechanisms that drive the network transition from an active state to silence remain poorly understood. In the current study, using a combination of in vivo electrophysiology and thalamocortical network simulation, we explored the impact of intrinsic and synaptic inhibition on state transitions during sleep slow oscillation. We found that in normal physiological conditions, synaptic inhibition controls the duration and the synchrony of active state termination. The decline of interneuron-mediated inhibition led to asynchronous downward transitions across the cortical network and broke the regular slow oscillation pattern. Furthermore, in both in vivo experiments and computational modelling, we revealed that significantly reducing the level of synaptic inhibition led to a recovery of synchronized oscillations in the form of seizure-like bursting activity. In this condition, fast active state termination was mediated by intrinsic hyperpolarizing conductances. Our study highlights the significance of both intrinsic and synaptic inhibition in shaping sleep slow rhythms.
Affiliation(s)
- Jen-Yung Chen, Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, CA 92521, USA
13.

14. Assisi C, Stopfer M, Bazhenov M. Using the structure of inhibitory networks to unravel mechanisms of spatiotemporal patterning. Neuron 2011; 69:373-86. PMID: 21262473; DOI: 10.1016/j.neuron.2010.12.019.
Abstract
Neuronal networks exhibit a rich dynamical repertoire, a consequence of both the intrinsic properties of neurons and the structure of the network. It has been hypothesized that inhibitory interneurons corral principal neurons into transiently synchronous ensembles that encode sensory information and subserve behavior. How does the structure of the inhibitory network facilitate such spatiotemporal patterning? We established a relationship between an important structural property of a network, its colorings, and the dynamics it constrains. Using a model of the insect antennal lobe, we show that our description allows the explicit identification of the groups of inhibitory interneurons that switch, during odor stimulation, between activity and quiescence in a coordinated manner determined by features of the network structure. This description optimally matches the perspective of the downstream neurons looking for synchrony in ensembles of presynaptic cells and allows a low-dimensional description of seemingly complex high-dimensional network activity.
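The 'colorings' the abstract refers to are graph colorings of the inhibitory network: interneurons assigned the same color share no synapse, so each color class can be transiently co-active while the classes alternate. A greedy coloring sketch on a hypothetical five-cell ring of mutually inhibiting interneurons:

```python
# Greedy proper coloring of an undirected adjacency map
# (illustrative wiring, not the paper's antennal lobe model).
def greedy_coloring(adj):
    colors = {}
    for node in sorted(adj):
        # smallest color not used by any already-colored neighbour
        used = {colors[nb] for nb in adj[node] if nb in colors}
        colors[node] = next(c for c in range(len(adj)) if c not in used)
    return colors

# a 5-cycle of mutually inhibiting interneurons
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
coloring = greedy_coloring(adj)
print(coloring)  # no two connected interneurons share a color
```

Each color class is an independent set of the inhibitory graph, which is what lets the group fire together without suppressing its own members.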
Affiliation(s)
- Collins Assisi, Department of Cell Biology and Neuroscience, University of California, Riverside, Riverside, CA 92521, USA