1. Garcia-Garcia MG, Kapoor A, Akinwale O, Takemaru L, Kim TH, Paton C, Litwin-Kumar A, Schnitzer MJ, Luo L, Wagner MJ. A cerebellar granule cell-climbing fiber computation to learn to track long time intervals. Neuron 2024; 112:2749-2764.e7. PMID: 38870929; PMCID: PMC11343686; DOI: 10.1016/j.neuron.2024.05.019.
Abstract
In classical cerebellar learning, Purkinje cells (PkCs) associate climbing fiber (CF) error signals with predictive granule cells (GrCs) that were active just prior (∼150 ms). The cerebellum also contributes to behaviors characterized by longer timescales. To investigate how GrC-CF-PkC circuits might learn seconds-long predictions, we imaged simultaneous GrC-CF activity over days of forelimb operant conditioning for delayed water reward. As mice learned reward timing, numerous GrCs developed anticipatory activity ramping at different rates until reward delivery, followed by widespread time-locked CF spiking. Relearning longer delays further lengthened GrC activations. We computed CF-dependent GrC→PkC plasticity rules, demonstrating that reward-evoked CF spikes sufficed to grade many GrC synapses by anticipatory timing. We predicted and confirmed that PkCs could thereby continuously ramp across seconds-long intervals from movement to reward. Learning thus leads to new GrC temporal bases linking predictors to remote CF reward signals, a strategy well suited for learning to track the long intervals common in cognitive domains.
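The core scheme in this abstract (GrCs ramping at staggered rates, a single reward-time CF spike grading their synapses, and a resulting PkC ramp) can be caricatured in a toy simulation. All names, sizes, and the linear-ramp basis below are illustrative assumptions, not the paper's model:

```python
import math

T = 100       # time steps from forelimb movement to water reward (illustrative)
n_grc = 20    # toy granule cells, each ramping at a different rate

def grc_activity(i, t):
    """GrC i ramps linearly from a staggered onset time until reward."""
    onset = i * (T // n_grc)
    return max(0.0, (t - onset) / (T - onset))

# CF-gated rule (sketch): the reward-evoked CF spike potentiates each GrC->PkC
# synapse in proportion to that GrC's activity at the moment of reward.
eta, w = 0.1, [0.0] * n_grc
for _ in range(50):  # trials
    for i in range(n_grc):
        w[i] += eta * grc_activity(i, T - 1)

# PkC drive inherits a continuous ramp spanning the whole interval.
pkc = [sum(w[i] * grc_activity(i, t) for i in range(n_grc)) for t in range(T)]
```

Because every GrC is active at reward time but to a degree set by its ramp, one temporally localized error signal suffices to grade the whole basis.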
Affiliation(s)
- Martha G Garcia-Garcia
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Akash Kapoor
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Oluwatobi Akinwale
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Lina Takemaru
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Tony Hyun Kim
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Casey Paton
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, USA
- Mark J Schnitzer
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA; Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Liqun Luo
- Department of Biology and Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Mark J Wagner
- National Institute of Neurological Disorders & Stroke, National Institutes of Health, Bethesda, MD 20894, USA
2. Toth J, Sidleck B, Lombardi O, Hou T, Eldo A, Kerlin M, Zeng X, Saeed D, Agarwal P, Leonard D, Andrino L, Inbar T, Malina M, Insanally MN. Dynamic gating of perceptual flexibility by non-classically responsive cortical neurons. Research Square 2024; rs.3.rs-4650869. PMID: 39108496; PMCID: PMC11302693; DOI: 10.21203/rs.3.rs-4650869/v1.
Abstract
The ability to flexibly respond to sensory cues in dynamic environments is essential to adaptive auditory-guided behaviors. Cortical spiking responses during behavior are highly diverse, ranging from reliable trial-averaged responses to seemingly random firing patterns. While the reliable responses of 'classically responsive' cells have been extensively studied for decades, the contribution of irregular-spiking 'non-classically responsive' cells to behavior has remained underexplored despite their prevalence. Here, we show that flexible auditory behavior results from interactions between local auditory cortical circuits comprising heterogeneous responses and inputs from secondary motor cortex. Strikingly, non-classically responsive neurons in auditory cortex were preferentially recruited during learning, specifically during rapid learning phases when the greatest gains in behavioral performance occur. Population-level decoding revealed that during rapid learning, mixed ensembles comprising both classically and non-classically responsive cells encode significantly more task information than homogeneous ensembles of either type and emerge as a functional unit critical for learning. Optogenetically silencing inputs from secondary motor cortex selectively modulated non-classically responsive cells in the auditory cortex and impaired reversal learning by preventing the remapping of a previously learned stimulus-reward association. Top-down inputs orchestrated highly correlated non-classically responsive ensembles in sensory cortex, providing a unique task-relevant manifold for learning. Thus, non-classically responsive cells in sensory cortex are preferentially recruited by top-down inputs to enable neural and behavioral flexibility.
Affiliation(s)
- Jade Toth
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Blake Sidleck
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Olivia Lombardi
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Tiange Hou
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Abraham Eldo
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Madelyn Kerlin
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Xiangjian Zeng
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Danyall Saeed
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Priya Agarwal
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Dylan Leonard
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Luz Andrino
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Tal Inbar
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael Malina
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
- Michele N. Insanally
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA 15213
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213
3. Thornton-Kolbe EM, Ahmed M, Gordon FR, Sieriebriennikov B, Williams DL, Kurmangaliyev YZ, Clowney EJ. Spatial constraints and cell surface molecule depletion structure a randomly connected learning circuit. bioRxiv 2024; 2024.07.17.603956. PMID: 39071296; PMCID: PMC11275898; DOI: 10.1101/2024.07.17.603956.
Abstract
The brain can represent almost limitless objects to "categorize an unlabeled world" (Edelman, 1989). This feat is supported by expansion layer circuit architectures, in which neurons carrying information about discrete sensory channels make combinatorial connections onto much larger postsynaptic populations. Combinatorial connections in expansion layers are modeled as randomized sets. The extent to which randomized wiring exists in vivo is debated, and how combinatorial connectivity patterns are generated during development is not understood. Non-deterministic wiring algorithms could program such connectivity using minimal genomic information. Here, we investigate anatomic and transcriptional patterns and perturb partner availability to ask how Kenyon cells, the expansion layer neurons of the insect mushroom body, obtain combinatorial input from olfactory projection neurons. Olfactory projection neurons form their presynaptic outputs in an orderly, predictable, and biased fashion. We find that Kenyon cells accept spatially co-located but molecularly heterogeneous inputs from this orderly map, and ask how Kenyon cell surface molecule expression impacts partner choice. Cell surface immunoglobulins are broadly depleted in Kenyon cells, and we propose that this allows them to form connections with molecularly heterogeneous partners. This model can explain how developmentally identical neurons acquire diverse wiring identities.
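The randomized "expansion layer" wiring debated here can be sketched as plain random sampling; each Kenyon cell draws a combination of input channels, so enormous combinatorial diversity requires essentially no genomic specification. The numbers below are loose placeholders inspired by fly anatomy (about six claws per Kenyon cell), not measurements from this study:

```python
import math
import random

random.seed(0)
n_channels = 50   # input channels (e.g., olfactory projection neuron types)
n_kc = 2000       # expansion-layer Kenyon cells
claws = 6         # dendritic claws, i.e., inputs sampled per Kenyon cell

# Non-deterministic wiring algorithm: each KC independently draws a random
# combination of channels from the orderly presynaptic map.
wiring = [tuple(sorted(random.sample(range(n_channels), claws)))
          for _ in range(n_kc)]

distinct = len(set(wiring))              # nearly every KC gets a unique input set
possible = math.comb(n_channels, claws)  # size of the combinatorial space (~1.6e7)
```

With ~1.6 × 10⁷ possible combinations, 2,000 random draws collide almost never, which is the sense in which developmentally identical neurons can acquire diverse wiring identities.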
Affiliation(s)
- Emma M. Thornton-Kolbe
- Neurosciences Graduate Program, University of Michigan Medical School, Ann Arbor, MI, USA
- Maria Ahmed
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Finley R. Gordon
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Donnell L. Williams
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- E. Josephine Clowney
- Department of Molecular, Cellular, and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Michigan Neuroscience Institute, Ann Arbor, MI, USA
4. Fernández JG, Keemink S, van Gerven M. Gradient-free training of recurrent neural networks using random perturbations. Front Neurosci 2024; 18:1439155. PMID: 39050673; PMCID: PMC11267880; DOI: 10.3389/fnins.2024.1439155.
Abstract
Recurrent neural networks (RNNs) hold immense potential for computations due to their Turing completeness and sequential processing capabilities, yet existing methods for their training encounter efficiency challenges. Backpropagation through time (BPTT), the prevailing method, extends the backpropagation (BP) algorithm by unrolling the RNN over time. However, this approach suffers from significant drawbacks, including the need to interleave forward and backward phases and store exact gradient information. Furthermore, BPTT has been shown to struggle to propagate gradient information for long sequences, leading to vanishing gradients. An alternative strategy to using gradient-based methods like BPTT involves stochastically approximating gradients through perturbation-based methods. This learning approach is exceptionally simple, necessitating only forward passes in the network and a global reinforcement signal as feedback. Despite its simplicity, the random nature of its updates typically leads to inefficient optimization, limiting its effectiveness in training neural networks. In this study, we present a new approach to perturbation-based learning in RNNs whose performance is competitive with BPTT, while maintaining the inherent advantages over gradient-based learning. To this end, we extend the recently introduced activity-based node perturbation (ANP) method to operate in the time domain, leading to more efficient learning and generalization. We subsequently conduct a range of experiments to validate our approach. Our results show similar performance, convergence time and scalability when compared to BPTT, strongly outperforming standard node perturbation and weight perturbation methods. These findings suggest that perturbation-based learning methods offer a versatile alternative to gradient-based methods for training RNNs which can be ideally suited for neuromorphic computing applications.
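The family of methods described above (forward passes plus a global reinforcement signal, no backpropagated gradients) can be sketched on a deliberately tiny scalar RNN. This is a generic perturbation-learning sketch, not the paper's activity-based node perturbation method, and all parameters are illustrative:

```python
import math
import random

random.seed(1)

def loss(w, xs, target):
    """Tiny scalar RNN: h_t = tanh(w_h*h_{t-1} + w_x*x_t), readout y = w_o*h_T."""
    w_h, w_x, w_o = w
    h = 0.0
    for x in xs:
        h = math.tanh(w_h * h + w_x * x)
    return (w_o * h - target) ** 2

xs, target = [1.0, -1.0, 0.5], 0.7
w = [0.1, 0.1, 0.1]
sigma, eta = 0.01, 0.05

loss_before = loss(w, xs, target)
for _ in range(3000):
    noise = [random.gauss(0.0, sigma) for _ in w]
    clean = loss(w, xs, target)
    perturbed = loss([wi + ni for wi, ni in zip(w, noise)], xs, target)
    # Global reinforcement signal: the scalar loss change credits each
    # perturbation, yielding a stochastic gradient estimate from forward
    # passes alone (no unrolling, no stored exact gradients).
    for i in range(3):
        w[i] -= eta * (perturbed - clean) * noise[i] / sigma ** 2
loss_after = loss(w, xs, target)
```

The update is unbiased toward the true gradient but noisy, which is exactly the inefficiency the paper's temporal extension of activity-based node perturbation aims to reduce.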
Affiliation(s)
- Jesús García Fernández
- Department of Machine Learning and Neural Computing, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
5. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340; DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France.
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
6. Lin TF, Busch SE, Hansel C. Intrinsic and synaptic determinants of receptive field plasticity in Purkinje cells of the mouse cerebellum. Nat Commun 2024; 15:4645. PMID: 38821918; PMCID: PMC11143328; DOI: 10.1038/s41467-024-48373-3.
Abstract
Non-synaptic (intrinsic) plasticity of membrane excitability contributes to aspects of memory formation, but it remains unclear whether it merely facilitates synaptic long-term potentiation or plays a permissive role in determining the impact of synaptic weight increase. We use tactile stimulation and electrical activation of parallel fibers to probe intrinsic and synaptic contributions to receptive field plasticity in awake mice during two-photon calcium imaging of cerebellar Purkinje cells. Repetitive activation of both stimuli induced response potentiation that is impaired in mice with selective deficits in either synaptic or intrinsic plasticity. Spatial analysis of calcium signals demonstrated that intrinsic, but not synaptic, plasticity enhances the spread of dendritic parallel fiber response potentiation. Simultaneous dendrite and axon initial segment recordings confirm that these dendritic events affect axonal output. Our findings support the hypothesis that intrinsic plasticity provides an amplification mechanism that exerts permissive control over the impact of long-term potentiation on neuronal responsiveness.
Affiliation(s)
- Ting-Feng Lin
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Silas E Busch
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA
- Christian Hansel
- Department of Neurobiology and Neuroscience Institute, University of Chicago, Chicago, IL, USA
7. Shu WC, Jackson MB. Intrinsic and Synaptic Contributions to Repetitive Spiking in Dentate Granule Cells. J Neurosci 2024; 44:e0716232024. PMID: 38503495; PMCID: PMC11063872; DOI: 10.1523/jneurosci.0716-23.2024.
Abstract
Repetitive firing of granule cells (GCs) in the dentate gyrus (DG) facilitates synaptic transmission to the CA3 region. This facilitation can gate and amplify the flow of information through the hippocampus. High-frequency bursts in the DG are linked to behavior and plasticity, but GCs do not readily burst. Under normal conditions, a single shock to the perforant path in a hippocampal slice typically drives a GC to fire a single spike; only occasionally is more than one spike seen. Repetitive spiking in GCs is not robust, and the mechanisms are poorly understood. Here, we used a hybrid genetically encoded voltage sensor to image voltage changes evoked by cortical inputs in many mature GCs simultaneously in hippocampal slices from male and female mice. This enabled us to study relatively infrequent double and triple spikes. We found that GCs are relatively homogeneous and their double-spiking behavior is cell autonomous. Blockade of GABA type A receptors increased multiple spikes and prolonged the interspike interval, indicating that inhibitory interneurons limit repetitive spiking and set the time window for successive spikes. Inhibiting synaptic glutamate release showed that recurrent excitation mediated by hilar mossy cells contributes to, but is not necessary for, multiple spiking. Blockade of T-type Ca2+ channels did not reduce multiple spiking but prolonged interspike intervals. Imaging voltage changes in different GC compartments revealed that second spikes can be initiated in either dendrites or somata. Thus, pharmacological and biophysical experiments reveal roles for both synaptic circuitry and intrinsic excitability in GC repetitive spiking.
Affiliation(s)
- Wen-Chi Shu
- Department of Neuroscience and Biophysics Program, University of Wisconsin-Madison, Madison, WI 53705
- Meyer B Jackson
- Department of Neuroscience and Biophysics Program, University of Wisconsin-Madison, Madison, WI 53705
8. Fleming EA, Field GD, Tadross MR, Hull C. Local synaptic inhibition mediates cerebellar granule cell pattern separation and enables learned sensorimotor associations. Nat Neurosci 2024; 27:689-701. PMID: 38321293; PMCID: PMC11288180; DOI: 10.1038/s41593-023-01565-4.
Abstract
The cerebellar cortex has a key role in generating predictive sensorimotor associations. To do so, the granule cell layer is thought to establish unique sensorimotor representations for learning. However, how this is achieved and how granule cell population responses contribute to behavior have remained unclear. To address these questions, we have used in vivo calcium imaging and granule cell-specific pharmacological manipulation of synaptic inhibition in awake, behaving mice. These experiments indicate that inhibition sparsens and thresholds sensory responses, limiting overlap between sensory ensembles and preventing spiking in many granule cells that receive excitatory input. Moreover, inhibition can be recruited in a stimulus-specific manner to powerfully decorrelate multisensory ensembles. Consistent with these results, granule cell inhibition is required for accurate cerebellum-dependent sensorimotor behavior. These data thus reveal key mechanisms for granule cell layer pattern separation beyond those envisioned by classical models.
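The sparsening-and-decorrelation logic above can be illustrated with a toy population: inhibition scaled to the overall excitatory drive raises the effective spike threshold, making ensembles sparser and reducing the overlap between two correlated stimuli. All numbers and the global-inhibition form are illustrative assumptions, not the paper's pharmacological manipulation:

```python
import random

random.seed(2)
n = 1000  # toy granule cells

# Two stimuli drive partially overlapping excitation (a shared component).
shared = [random.random() for _ in range(n)]
drive_a = [0.5 * s + 0.5 * random.random() for s in shared]
drive_b = [0.5 * s + 0.5 * random.random() for s in shared]

def active_set(drive, inhibition):
    """Cells spike when excitation minus recruited inhibition clears threshold."""
    g = inhibition * sum(drive) / len(drive)  # inhibition tracks population drive
    return {i for i, d in enumerate(drive) if d - g > 0.5}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Without inhibition: dense, highly overlapping ensembles.
a0, b0 = active_set(drive_a, 0.0), active_set(drive_b, 0.0)
# With inhibition: sparser ensembles with far less overlap (pattern separation).
a1, b1 = active_set(drive_a, 0.8), active_set(drive_b, 0.8)
```

Thresholding preferentially removes the weakly driven, mostly shared responses, so the surviving sparse codes are far more separable.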
Affiliation(s)
- Greg D Field
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA
- Stein Eye Institute, Department of Ophthalmology, University of California, Los Angeles, CA, USA
- Michael R Tadross
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Court Hull
- Department of Neurobiology, Duke University Medical School, Durham, NC, USA
9. Rajeswaran P, Payeur A, Lajoie G, Orsborn AL. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv 2024; 2024.03.20.585972. PMID: 38562772; PMCID: PMC10983972; DOI: 10.1101/2024.03.20.585972.
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
Affiliation(s)
- Alexandre Payeur
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Guillaume Lajoie
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Amy L. Orsborn
- University of Washington, Bioengineering, Seattle, 98115, USA
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
10. Gilbert M, Rasmussen A. Gap Junctions May Have a Computational Function in the Cerebellum: A Hypothesis. Cerebellum 2024. PMID: 38499814; DOI: 10.1007/s12311-024-01680-3.
Abstract
In the cerebellum, granule cells make parallel fibre contact on (and excite) Golgi cells, and Golgi cells inhibit granule cells, forming an open feedback loop. Parallel fibres excite Golgi cells synaptically, each making a single contact. Golgi cells inhibit granule cells in a structure called a glomerulus almost exclusively by GABA spillover acting through extrasynaptic GABAA receptors. Golgi cells are connected dendritically by gap junctions. It has long been suspected that feedback contributes to homeostatic regulation of parallel fibre activity, maintaining the active fraction of the population at a low level. We present a detailed neurophysiological and computationally rendered model of functionally grouped Golgi cells which can infer the density of parallel fibre activity and convert it into proportional modulation of inhibition of granule cells. The conversion is unlearned and not actively computed; rather, output is simply the computational effect of cell morphology and network architecture. Unexpectedly, the conversion becomes more precise at low density, suggesting that self-regulation is attracted to a sparse code because it is stable. A computational function of gap junctions may not be confined to the cerebellum.
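The hypothesized feedback (a Golgi population sensing parallel fibre activity density and returning proportional inhibition) can be caricatured as a simple control loop that settles on a sparse active fraction. Gains, thresholds, and population sizes below are arbitrary illustrative choices, not the paper's detailed model:

```python
import random

random.seed(3)
n = 1000
drive = [random.random() for _ in range(n)]  # mossy fibre excitation of GrCs

gain, inh = 4.0, 0.0  # Golgi feedback gain; spillover inhibition level
history = []
for _ in range(30):
    # Fraction of granule cells whose net drive clears the spike threshold.
    active = sum(d - inh > 0.9 for d in drive) / n
    history.append(active)
    # Golgi cells convert the sensed parallel fibre density into proportional
    # inhibition; the loop relaxes toward a sparse, stable fixed point.
    inh += 0.2 * (gain * active - inh)
```

With these toy numbers the loop starts near 10% of cells active and settles near 2%, illustrating how proportional feedback alone, with no learning, can hold the code sparse.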
Affiliation(s)
- Mike Gilbert
- School of Psychology, College of Life and Environmental Sciences, University of Birmingham, B15 2TT, Birmingham, UK.
- Anders Rasmussen
- Department of Experimental Medical Science, Lund University, BMC F10, 22184, Lund, Sweden
11. Kang L, Toyoizumi T. Distinguishing examples while building concepts in hippocampal and artificial networks. Nat Commun 2024; 15:647. PMID: 38245502; PMCID: PMC10799871; DOI: 10.1038/s41467-024-44877-0.
Abstract
The hippocampal subfield CA3 is thought to function as an auto-associative network that stores experiences as memories. Information from these experiences arrives directly from the entorhinal cortex as well as indirectly through the dentate gyrus, which performs sparsification and decorrelation. The computational purpose for these dual input pathways has not been firmly established. We model CA3 as a Hopfield-like network that stores both dense, correlated encodings and sparse, decorrelated encodings. As more memories are stored, the former merge along shared features while the latter remain distinct. We verify our model's prediction in rat CA3 place cells, which exhibit more distinct tuning during theta phases with sparser activity. Finally, we find that neural networks trained in multitask learning benefit from a loss term that promotes both correlated and decorrelated representations. Thus, the complementary encodings we have found in CA3 can provide broad computational advantages for solving complex tasks.
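As background for the modeling substrate, a minimal Hopfield-style auto-associative network shows the pattern completion that makes CA3 a candidate memory store. This is the generic Hebbian outer-product version, not the paper's dual dense/sparse-encoding variant, and all sizes are illustrative:

```python
import random

random.seed(4)
n = 100
# Store three random +/-1 patterns with the Hebbian outer-product rule.
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(3)]
W = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns) / n
      for j in range(n)] for i in range(n)]

def recall(state, steps=5):
    """Iterate the sign dynamics; states relax toward a stored attractor."""
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

# Corrupt 15% of a stored memory; attractor dynamics restore the original.
cue = patterns[0][:]
for i in random.sample(range(n), 15):
    cue[i] *= -1
match = sum(o == p for o, p in zip(recall(cue), patterns[0])) / n
```

The paper's question is what happens when such a network must hold both correlated dense codes (which merge along shared features) and decorrelated sparse codes (which stay distinct) at once.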
Affiliation(s)
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan.
- Graduate School of Informatics, Kyoto University, 36-1 Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan.
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama, 351-0198, Japan
- Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
12. Bruel A, Abadía I, Collin T, Sakr I, Lorach H, Luque NR, Ros E, Ijspeert A. The spinal cord facilitates cerebellar upper limb motor learning and control; inputs from neuromusculoskeletal simulation. PLoS Comput Biol 2024; 20:e1011008. PMID: 38166093; PMCID: PMC10786408; DOI: 10.1371/journal.pcbi.1011008.
Abstract
Complex interactions between brain regions and the spinal cord (SC) govern body motion, which is ultimately driven by muscle activation. Motor planning and learning are mainly conducted in higher brain regions, whilst the SC acts as a brain-muscle gateway and as a motor control centre providing fast reflexes and muscle activity regulation. Thus, higher brain areas need to cope with the SC as an inherent and evolutionarily older part of the body dynamics. Here, we address the question of how SC dynamics affects motor learning within the cerebellum; in particular, does the SC facilitate cerebellar motor learning or constitute a biological constraint? We provide an exploratory framework by integrating biologically plausible cerebellar and SC computational models in a musculoskeletal upper limb control loop. The cerebellar model, equipped with the main form of cerebellar plasticity, provides motor adaptation, whilst the SC model implements the stretch reflex and reciprocal inhibition between antagonist muscles. The resulting spino-cerebellar model is tested performing a set of upper limb motor tasks, including external perturbation studies. A cerebellar model lacking the implemented SC model and directly controlling the simulated muscles was also tested on the same tasks. The performances of the spino-cerebellar and cerebellar models were then compared, allowing us to directly address the SC's influence on cerebellar motor adaptation and learning and on handling external motor perturbations. Performance was assessed in both joint and muscle space, and compared with kinematic and EMG recordings from healthy participants. The differences in cerebellar synaptic adaptation between the two models were also studied. We conclude that the SC facilitates cerebellar motor learning: when the SC circuits are in the loop, faster convergence in motor learning is achieved with simpler cerebellar synaptic weight distributions. The SC also improves robustness against external perturbations by better reproducing and modulating muscle co-contraction patterns.
Affiliation(s)
- Alice Bruel
- Biorobotics Laboratory, EPFL, Lausanne, Switzerland
- Ignacio Abadía
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Icare Sakr
- NeuroRestore, EPFL, Lausanne, Switzerland
- Niceto R. Luque
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Eduardo Ros
- Research Centre for Information and Communication Technologies, Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
Collapse
|
13
|
Farrell M, Recanatesi S, Shea-Brown E. From lazy to rich to exclusive task representations in neural networks and neural codes. Curr Opin Neurobiol 2023; 83:102780. [PMID: 37757585 DOI: 10.1016/j.conb.2023.102780] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 08/04/2023] [Accepted: 08/16/2023] [Indexed: 09/29/2023]
Abstract
Neural circuits, both in the brain and in "artificial" neural network models, learn to solve a remarkable variety of tasks, and there is a great current opportunity to use neural networks as models for brain function. Key to this endeavor is the ability to characterize the representations formed by both artificial and biological brains. Here, we investigate this potential through the lens of recently developed theory that characterizes neural networks as "lazy" or "rich" depending on the approach they use to solve tasks: lazy networks solve tasks by making small changes in connectivity, while rich networks solve tasks by significantly modifying weights throughout the network (including "hidden layers"). We further elucidate rich networks through the lens of compression and "neural collapse", ideas that have recently been of significant interest to neuroscience and machine learning. We then show how these ideas apply to a domain of increasing importance to both fields: extracting latent structures through self-supervised learning.
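The lazy/rich distinction can be made concrete by measuring how far a network's weights move during training. Below is a minimal numpy sketch (not from the paper) using the standard output-scaling trick: with a centered model f(x) = α(g(θ) − g(θ₀)) and learning rate η/α², a large α pushes training toward the lazy regime, where weights barely move, while α = 1 yields richer dynamics. All data, architecture, and parameter values are illustrative.

```python
import numpy as np

def train(alpha, seed=0, steps=1000, eta=0.05, hidden=50):
    """Fit y = sin(3x) with a one-hidden-layer tanh net whose output is alpha-scaled."""
    rng = np.random.default_rng(seed)
    X = np.linspace(-1, 1, 8).reshape(-1, 1)
    y = np.sin(3 * X[:, 0])
    W1 = rng.standard_normal((1, hidden))
    w2 = rng.standard_normal(hidden) / np.sqrt(hidden)
    W1_0, w2_0 = W1.copy(), w2.copy()
    g0 = np.tanh(X @ W1_0) @ w2_0                    # centering term g(theta_0)
    for _ in range(steps):
        z = np.tanh(X @ W1)
        f = alpha * (z @ w2 - g0)                    # centered, alpha-scaled output
        dg = 2 * alpha * (f - y) / len(y)            # dLoss / d(raw output)
        dw2 = z.T @ dg
        dW1 = X.T @ (np.outer(dg, w2) * (1 - z**2))
        W1 -= (eta / alpha**2) * dW1                 # lr scaled so function dynamics match
        w2 -= (eta / alpha**2) * dw2
    move = np.linalg.norm(W1 - W1_0) + np.linalg.norm(w2 - w2_0)
    base = np.linalg.norm(W1_0) + np.linalg.norm(w2_0)
    return move / base                               # relative weight change: lazy vs rich

rich_move = train(alpha=1.0)
lazy_move = train(alpha=30.0)
```

The relative weight change is the simple diagnostic the review discusses: it shrinks roughly as 1/α while the learned function stays comparable.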
Affiliation(s)
- Matthew Farrell
- John A. Paulson School of Engineering and Applied Sciences and Center for Brain Science, Harvard University, United States
- Stefano Recanatesi
- Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States
- Eric Shea-Brown
- Applied Mathematics, Physiology and Biophysics, and Computational Neuroscience Center, University of Washington, United States

14
Kang L, Toyoizumi T. Hopfield-like network with complementary encodings of memories. Phys Rev E 2023; 108:054410. [PMID: 38115467 DOI: 10.1103/physreve.108.054410] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 08/28/2023] [Indexed: 12/21/2023]
Abstract
We present a Hopfield-like autoassociative network for memories representing examples of concepts. Each memory is encoded by two activity patterns with complementary properties. The first is dense and correlated across examples within concepts, and the second is sparse and exhibits no correlation among examples. The network stores each memory as a linear combination of its encodings. During retrieval, the network recovers sparse or dense patterns with a high or low activity threshold, respectively. As more memories are stored, the dense representation at low threshold shifts from examples to concepts, which are learned from accumulating common example features. Meanwhile, the sparse representation at high threshold maintains distinctions between examples due to the high capacity of sparse, decorrelated patterns. Thus, a single network can retrieve memories at both example and concept scales and perform heteroassociation between them. We obtain our results by deriving macroscopic mean-field equations that yield capacity formulas for sparse examples, dense examples, and dense concepts. We also perform simulations that verify our theoretical results and explicitly demonstrate the capabilities of the network.
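As background for the mixed dense/sparse scheme above: a classical Hopfield network stores patterns in a Hebbian weight matrix and retrieves them from corrupted cues by iterated thresholding. The sketch below (illustrative sizes, not the paper's complementary-encoding model) shows the basic autoassociative retrieval that the two-encoding network generalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                  # neurons, stored patterns (well below capacity)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                # Hebbian outer-product learning
np.fill_diagonal(W, 0)

def retrieve(cue, steps=10):
    """Iterate the threshold dynamics s <- sign(W s) from a cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)   # corrupt 10% of the cue
cue[flip] *= -1
out = retrieve(cue)
overlap = (out @ patterns[0]) / N              # 1.0 means perfect recall
```

At this low memory load the corrupted cue falls back into the stored pattern's basin of attraction, so the overlap returns close to 1.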
Affiliation(s)
- Louis Kang
- Neural Circuits and Computations Unit, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
- Graduate School of Informatics, Kyoto University, 36-1 Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Taro Toyoizumi
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan
- Graduate School of Information Science and Technology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan

15
Zang Y, De Schutter E. Recent data on the cerebellum require new models and theories. Curr Opin Neurobiol 2023; 82:102765. [PMID: 37591124 DOI: 10.1016/j.conb.2023.102765] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 07/22/2023] [Accepted: 07/23/2023] [Indexed: 08/19/2023]
Abstract
The cerebellum has been a popular topic for theoretical studies because its structure was thought to be simple. Since David Marr and James Albus related its function to motor skill learning and proposed the Marr-Albus cerebellar learning model, this theory has guided and inspired cerebellar research. In this review, we summarize the theoretical progress that has been made within this framework of error-based supervised learning. We discuss the experimental progress that demonstrates more complicated molecular and cellular mechanisms in the cerebellum, as well as new cell types and recurrent connections. We also cover its involvement in diverse non-motor functions and evidence of other forms of learning. Finally, we highlight the need to incorporate these new experimental findings into an integrated cerebellar model that can unify its diverse computational functions.
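In the Marr-Albus framework this review revisits, the Purkinje cell acts essentially as a perceptron: climbing fiber activity supplies a teaching signal that gates plasticity at granule-to-Purkinje synapses. A minimal sketch of that error-gated rule on toy, linearly separable data (all values illustrative, not from the review):

```python
import numpy as np

rng = np.random.default_rng(1)
n_grc, n_trials = 100, 300
grc = rng.standard_normal((n_trials, n_grc))      # parallel fiber activity patterns
teacher = rng.standard_normal(n_grc)
target = (grc @ teacher > 0).astype(float)        # desired Purkinje output per pattern

w = np.zeros(n_grc)
eta = 0.1
for _ in range(50):                               # repeated presentations (epochs)
    for x, t in zip(grc, target):
        pkc = float(x @ w > 0)                    # Purkinje cell as threshold unit
        cf = t - pkc                              # climbing fiber error signal
        w += eta * cf * x                         # CF-gated LTP/LTD at PF-PkC synapses

acc = np.mean((grc @ w > 0).astype(float) == target)
```

Because the teaching signal only fires on errors, weight changes stop once the Purkinje output matches the target on every pattern, which is the supervised-learning core of the classical model.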
Affiliation(s)
- Yunliang Zang
- Academy of Medical Engineering and Translational Medicine, Medical Faculty, Tianjin University, Tianjin 300072, China; Volen Center and Biology Department, Brandeis University, Waltham, MA 02454, USA
- Erik De Schutter
- Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Japan. https://twitter.com/DeschutterOIST

16
Müller-Komorowska D, Kuru B, Beck H, Braganza O. Phase information is conserved in sparse, synchronous population-rate-codes via phase-to-rate recoding. Nat Commun 2023; 14:6106. [PMID: 37777512 PMCID: PMC10543394 DOI: 10.1038/s41467-023-41803-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 09/19/2023] [Indexed: 10/02/2023] Open
Abstract
Neural computation is often traced in terms of either rate- or phase-codes. However, most circuit operations will simultaneously affect information across both coding schemes. It remains unclear how phase- and rate-coded information is transmitted in the face of continuous modification at consecutive processing stages. Here, we study this question in the entorhinal cortex (EC)-dentate gyrus (DG)-CA3 system using three distinct computational models. We demonstrate that DG feedback inhibition leverages EC phase information to improve rate-coding, a computation we term phase-to-rate recoding. Our results suggest that it (i) supports the conservation of phase information within sparse rate-codes and (ii) enhances the efficiency of plasticity in downstream CA3 via increased synchrony. Given the ubiquity of both phase-coding and feedback circuits, our results raise the question of whether phase-to-rate recoding is a recurring computational motif that supports the generation of sparse, synchronous population-rate-codes in areas beyond the DG.
Affiliation(s)
- Daniel Müller-Komorowska
- Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, 904-0495, Japan
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Baris Kuru
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Heinz Beck
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Deutsches Zentrum für Neurodegenerative Erkrankungen e.V., Bonn, Germany
- Oliver Braganza
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Institute for Socio-Economics, University of Duisburg-Essen, Duisburg, Germany

17
Xie M, Muscinelli SP, Decker Harris K, Litwin-Kumar A. Task-dependent optimal representations for cerebellar learning. eLife 2023; 12:e82914. [PMID: 37671785 PMCID: PMC10541175 DOI: 10.7554/elife.82914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 09/05/2023] [Indexed: 09/07/2023] Open
Abstract
The cerebellar granule cell layer has inspired numerous theoretical models of neural representations that support learned behaviors, beginning with the work of Marr and Albus. In these models, granule cells form a sparse, combinatorial encoding of diverse sensorimotor inputs. Such sparse representations are optimal for learning to discriminate random stimuli. However, recent observations of dense, low-dimensional activity across granule cells have called into question the role of sparse coding in these neurons. Here, we generalize theories of cerebellar learning to determine the optimal granule cell representation for tasks beyond random stimulus discrimination, including continuous input-output transformations as required for smooth motor control. We show that for such tasks, the optimal granule cell representation is substantially denser than predicted by classical theories. Our results provide a general theory of learning in cerebellum-like systems and suggest that optimal cerebellar representations are task-dependent.
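The sparse-versus-dense axis in this theory is controlled by the granule cells' activation threshold (set biologically by inhibition). A toy numpy sketch of how a single threshold parameter moves a random-expansion layer between sparse and dense coding levels (dimensions and threshold values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
n_mf, n_grc, n_stim = 50, 1000, 200
J = rng.standard_normal((n_grc, n_mf)) / np.sqrt(n_mf)   # mossy fiber -> granule weights
mf = rng.standard_normal((n_stim, n_mf))                 # input patterns

def coding_level(theta):
    """Fraction of granule units active after rectification at threshold theta."""
    g = np.maximum(mf @ J.T - theta, 0)                  # thresholded expansion
    return np.mean(g > 0)

sparse_f = coding_level(1.5)   # high threshold -> sparse Marr-Albus-style code
dense_f = coding_level(0.0)    # low threshold -> dense code, as favored for smooth tasks
```

With unit-variance inputs the pre-activations are roughly standard normal, so the coding level falls from about one half at zero threshold to a few percent at high threshold; which point on this axis is optimal is exactly what the paper argues depends on the task.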
Affiliation(s)
- Marjorie Xie
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Samuel P Muscinelli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Kameron Decker Harris
- Department of Computer Science, Western Washington University, Bellingham, United States
- Ashok Litwin-Kumar
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States

18
Jeon I, Kim T. Distinctive properties of biological neural networks and recent advances in bottom-up approaches toward a better biologically plausible neural network. Front Comput Neurosci 2023; 17:1092185. [PMID: 37449083 PMCID: PMC10336230 DOI: 10.3389/fncom.2023.1092185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 06/12/2023] [Indexed: 07/18/2023] Open
Abstract
Although it may appear infeasible and impractical, building artificial intelligence (AI) using a bottom-up approach based on the understanding of neuroscience is straightforward. The lack of a generalized governing principle for biological neural networks (BNNs) forces us to address this problem by converting piecemeal information on the diverse features of neurons, synapses, and neural circuits into AI. In this review, we describe recent attempts to build a biologically plausible neural network by following neuroscientifically similar strategies of neural network optimization, or by implanting the outcome of the optimization, such as the properties of single computational units and the characteristics of the network architecture. In addition, we propose a formalism for the relationship between the set of objectives that neural networks attempt to achieve and neural network classes categorized by how closely their architectural features resemble those of BNNs. This formalism is expected to define the potential roles of top-down and bottom-up approaches for building a biologically plausible neural network and to offer a map to help navigate the gap between neuroscience and AI engineering.
Affiliation(s)
- Taegon Kim
- Brain Science Institute, Korea Institute of Science and Technology, Seoul, Republic of Korea

19
Xu Z, Geron E, Pérez-Cuesta LM, Bai Y, Gan WB. Generalized extinction of fear memory depends on co-allocation of synaptic plasticity in dendrites. Nat Commun 2023; 14:503. [PMID: 36720872 PMCID: PMC9889816 DOI: 10.1038/s41467-023-35805-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 01/03/2023] [Indexed: 02/02/2023] Open
Abstract
Memories can be modified by new experience in a specific or generalized manner. Changes in synaptic connections are crucial for memory storage, but it remains unknown how synaptic changes associated with different memories are distributed within neuronal circuits and how such distributions affect specific or generalized modification by novel experience. Here we show that fear conditioning with two different auditory stimuli (CS) and footshocks (US) induces dendritic spine elimination mainly on different dendritic branches of layer 5 pyramidal neurons in the mouse motor cortex. Subsequent fear extinction causes CS-specific spine formation and extinction of freezing behavior. In contrast, spine elimination induced by fear conditioning with >2 different CS-USs often co-exists on the same dendritic branches. Fear extinction induces CS-nonspecific spine formation and generalized fear extinction. Moreover, activation of somatostatin-expressing interneurons increases the occurrence of spine elimination induced by different CS-USs on the same dendritic branches and facilitates the generalization of fear extinction. These findings suggest that specific or generalized modification of existing memories by new experience depends on whether synaptic changes induced by previous experiences are segregated or co-exist at the level of individual dendritic branches.
Affiliation(s)
- Zhiwei Xu
- Institute of Neurological and Psychiatric Disorders, Shenzhen Bay Laboratory, Shenzhen, 518132, China
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China
- Erez Geron
- Skirball Institute, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, 10016, USA
- Luis M Pérez-Cuesta
- Skirball Institute, Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, 10016, USA
- Yang Bai
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China
- Wen-Biao Gan
- Institute of Neurological and Psychiatric Disorders, Shenzhen Bay Laboratory, Shenzhen, 518132, China
- Peking University Shenzhen Graduate School, Shenzhen, 518055, China

20
Structured cerebellar connectivity supports resilient pattern separation. Nature 2023; 613:543-549. [PMID: 36418404 DOI: 10.1038/s41586-022-05471-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 10/20/2022] [Indexed: 11/25/2022]
Abstract
The cerebellum is thought to help detect and correct errors between intended and executed commands [1,2] and is critical for social behaviours, cognition and emotion [3-6]. Computations for motor control must be performed quickly to correct errors in real time and should be sensitive to small differences between patterns for fine error correction while being resilient to noise [7]. Influential theories of cerebellar information processing have largely assumed random network connectivity, which increases the encoding capacity of the network's first layer [8-13]. However, maximizing encoding capacity reduces the resilience to noise [7]. To understand how neuronal circuits address this fundamental trade-off, we mapped the feedforward connectivity in the mouse cerebellar cortex using automated large-scale transmission electron microscopy and convolutional neural network-based image segmentation. We found that both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks.
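The redundancy-noise trade-off the simulations address rests on a simple statistical fact: summing k noisy synaptic copies of the same input reduces the noise variance of the combined estimate by a factor of k. A toy sketch of that averaging effect (hypothetical noise model and numbers, not the paper's reconstruction data):

```python
import numpy as np

rng = np.random.default_rng(3)
trials, sigma, k = 20000, 1.0, 4
signal = rng.standard_normal(trials)            # the quantity a target neuron should read out

# one synapse per input vs. k redundant synapses from the same source, each independently noisy
single = signal + sigma * rng.standard_normal(trials)
redundant = signal + sigma * rng.standard_normal((k, trials)).mean(axis=0)

err_single = np.var(single - signal)            # ~ sigma^2
err_redundant = np.var(redundant - signal)      # ~ sigma^2 / k
```

The cost of the redundant motif is that k synapses now carry one signal instead of k distinct ones, which is the encoding-capacity side of the trade-off the paper quantifies.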
21
Gilmer JI, Farries MA, Kilpatrick Z, Delis I, Cohen JD, Person AL. An emergent temporal basis set robustly supports cerebellar time-series learning. J Neurophysiol 2023; 129:159-176. [PMID: 36416445 PMCID: PMC9990911 DOI: 10.1152/jn.00312.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 11/24/2022] Open
Abstract
The cerebellum is considered a "learning machine" essential for the time interval estimation underlying motor coordination and other behaviors. Theoretical work has proposed that the cerebellum's input recipient structure, the granule cell layer (GCL), performs pattern separation of inputs that facilitates learning in Purkinje cells (P-cells). However, the relationship between input reformatting and learning has remained debated, with roles emphasized for pattern separation features from sparsification to decorrelation. We took a novel approach by training a minimalist model of the cerebellar cortex to learn complex time-series data from time-varying inputs, typical during movements. The model robustly produced temporal basis sets from these inputs, and the resultant GCL output supported better learning of temporally complex target functions than mossy fibers alone. Learning was optimized at intermediate threshold levels, supporting relatively dense granule cell activity, yet the key statistical features in GCL population activity that drove learning differed from those seen previously for classification tasks. These findings advance testable hypotheses for mechanisms of temporal basis set formation and predict that moderately dense population activity optimizes learning.
NEW & NOTEWORTHY During movement, mossy fiber inputs to the cerebellum relay time-varying information with strong intrinsic relationships to ongoing movement. Are such mossy fiber signals sufficient to support Purkinje signals and learning? In a model, we show how the GCL greatly improves Purkinje learning of complex, temporally dynamic signals relative to mossy fibers alone. Learning-optimized GCL population activity was moderately dense, which retained intrinsic input variance while also performing pattern separation.
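The core claim, that a granule-layer expansion of time-varying mossy fiber input supports better learning of temporally complex targets than the mossy fibers alone, can be illustrated with a least-squares readout over a toy temporal basis. Everything below (two mossy fibers, ReLU granule units with random thresholds, a sinusoidal target) is an illustrative stand-in for the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
mf = np.stack([np.ones_like(t), t])              # mossy fibers: one tonic, one ramping
W = rng.standard_normal((50, 2))
theta = rng.uniform(0, 1, size=(50, 1))
grc = np.maximum(W @ mf - theta, 0)              # granule units: thresholded mixtures -> a
                                                 # family of ramps with staggered onset times

target = np.sin(2 * np.pi * 2 * t)               # temporally complex Purkinje target

def fit_err(basis):
    """Mean squared error of the best linear readout of a basis."""
    coef, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
    return np.mean((basis.T @ coef - target) ** 2)

err_mf = fit_err(mf)    # readout of raw mossy fibers only
err_grc = fit_err(grc)  # readout of the granule temporal basis
```

The raw mossy fibers span only linear functions of time, so they cannot follow the oscillating target; the staggered-onset granule basis can, which is the sense in which the GCL output "supports better learning" here.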
Affiliation(s)
- Jesse I Gilmer
- Neuroscience Graduate Program, University of Colorado School of Medicine, Aurora, Colorado
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado
- Michael A Farries
- Knoebel Institute for Healthy Aging, University of Denver, Denver, Colorado
- Zachary Kilpatrick
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, Colorado
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, United Kingdom
- Jeremy D Cohen
- University of North Carolina Neuroscience Center, Chapel Hill, North Carolina
- Abigail L Person
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, Colorado

22
Barri A, Wiechert MT, Jazayeri M, DiGregorio DA. Synaptic basis of a sub-second representation of time in a neural circuit model. Nat Commun 2022; 13:7902. [PMID: 36550115 PMCID: PMC9780315 DOI: 10.1038/s41467-022-35395-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
Temporal sequences of neural activity are essential for driving well-timed behaviors, but the underlying cellular and circuit mechanisms remain elusive. We leveraged the well-defined architecture of the cerebellum, a brain region known to support temporally precise actions, to explore theoretically whether the experimentally observed diversity of short-term synaptic plasticity (STP) at the input layer could generate neural dynamics sufficient for sub-second temporal learning. A cerebellar circuit model equipped with dynamic synapses produced a diverse set of transient granule cell firing patterns that provided a temporal basis set for learning precisely timed pauses in Purkinje cell activity during simulated delay eyelid conditioning and Bayesian interval estimation. The learning performance across time intervals was influenced by the temporal bandwidth of the temporal basis, which was determined by the input layer synaptic properties. The ubiquity of STP throughout the brain positions it as a general, tunable cellular mechanism for sculpting neural dynamics and fine-tuning behavior.
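The diversity of short-term synaptic plasticity the model exploits is commonly captured by the Tsodyks-Markram synapse model, in which a utilization variable u (facilitation) and a resource variable x (depression) evolve with each presynaptic spike. The sketch below uses generic textbook-style parameters (not the paper's fitted values) to show how two parameter sets turn the same spike train into opposite temporal profiles:

```python
import numpy as np

def tm_train(U, tau_d, tau_f, isi=0.02, n_spikes=8):
    """Per-spike efficacies of a Tsodyks-Markram-style synapse for a regular spike train."""
    u, x, amps = 0.0, 1.0, []
    for _ in range(n_spikes):
        u = u + U * (1 - u)                      # facilitation increment at the spike
        amps.append(u * x)                       # released synaptic efficacy
        x = x * (1 - u)                          # resource depletion
        u = u * np.exp(-isi / tau_f)             # utilization decays between spikes
        x = 1 - (1 - x) * np.exp(-isi / tau_d)   # resources recover between spikes
    return np.array(amps)

depressing = tm_train(U=0.7, tau_d=0.5, tau_f=0.02)    # strong release, slow recovery
facilitating = tm_train(U=0.1, tau_d=0.05, tau_f=1.0)  # weak release, slow u decay
```

One synapse's response shrinks across the train while the other's grows; a population of such synapses with heterogeneous parameters yields exactly the kind of diverse transient granule cell responses the paper uses as a temporal basis.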
Affiliation(s)
- A. Barri
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France
- M. T. Wiechert
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France
- M. Jazayeri
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- D. A. DiGregorio
- Institut Pasteur, Université Paris Cité, Synapse and Circuit Dynamics Laboratory, CNRS UMR 3571, Paris, France

23
Bae H, Park SY, Kim SJ, Kim CE. Cerebellum as a kernel machine: A novel perspective on expansion recoding in granule cell layer. Front Comput Neurosci 2022; 16:1062392. [PMID: 36618271 PMCID: PMC9815768 DOI: 10.3389/fncom.2022.1062392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Sensorimotor information provided by mossy fibers (MF) is mapped to a high-dimensional space by a huge number of granule cells (GrC) in the cerebellar cortex's input layer. Many studies have demonstrated the computational advantages of this expansion recoding and its primary contributors. Here, we propose a novel perspective on expansion recoding in which each GrC serves as a kernel basis function, so that the cerebellum can operate like a kernel machine that implicitly uses high-dimensional (even infinite-dimensional) feature spaces. We highlight that the generation of kernel basis functions is indeed a biologically plausible scenario, considering that the key idea of a kernel machine is to memorize important input patterns. We present potential regimes for developing kernels under constrained resources and discuss the advantages and disadvantages of each regime using various simulation settings.
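The kernel-machine view can be sketched directly: treat each stored mossy fiber pattern as the centre of a granule-like radial basis function, and learn a Purkinje-like readout by kernel ridge regression. The example below is generic RBF kernel regression with hypothetical data and parameters, meant only to make the "GrC as kernel basis function" idea concrete:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((30, 10))      # stored mossy fiber patterns = kernel centres
y = np.sin(X[:, 0])                    # toy target output for each pattern

def rbf(A, B, gamma=0.5):
    """Gaussian kernel matrix; each column is one granule-like basis function."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # kernel ridge regression readout

pred = rbf(X, X) @ alpha
train_err = np.mean((pred - y) ** 2)
```

Each basis function responds most strongly near its memorized pattern, so the readout is a weighted sum over "granule cells", which is the implicit high-dimensional feature-space computation the abstract describes.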
Affiliation(s)
- Hyojin Bae
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Sa-Yoon Park
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Sang Jeong Kim
- Department of Physiology, Seoul National University College of Medicine, Seoul, South Korea
- Chang-Eop Kim
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea

24
Khalil AJ, Mansvelder HD, Witter L. Mesodiencephalic junction GABAergic inputs are processed separately from motor cortical inputs in the basilar pons. iScience 2022; 25:104641. [PMID: 35800775 PMCID: PMC9254490 DOI: 10.1016/j.isci.2022.104641] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 04/13/2022] [Accepted: 06/14/2022] [Indexed: 11/21/2022] Open
Abstract
The basilar pontine nuclei (bPN) are known to receive excitatory input from the entire neocortex and constitute the main source of mossy fibers to the cerebellum. Various potential inhibitory afferents have been described, but their origin, synaptic plasticity, and network function have remained elusive. Here we identify the mesodiencephalic junction (MDJ) as a prominent source of monosynaptic GABAergic inputs to the bPN. We found no evidence that these inputs converge with motor cortex (M1) inputs at the single neuron or at the local network level. Tracing the inputs to GABAergic MDJ neurons revealed inputs to these neurons from neocortical areas. Additionally, we observed little short-term synaptic facilitation or depression in afferents from the MDJ, enabling MDJ inputs to carry sign-inversed neocortical inputs. Thus, our results show a prominent source of GABAergic inhibition to the bPN that could enrich input to the cerebellar granule cell layer.
Affiliation(s)
- Ayoub J. Khalil
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
- Huibert D. Mansvelder
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
- Laurens Witter
- Department of Integrative Neurophysiology, Amsterdam Neuroscience, Center for Neurogenomics and Cognitive Research (CNCR), Vrije Universiteit Amsterdam, 1081HV Amsterdam, the Netherlands
- Department for Developmental Origins of Disease, Wilhelmina Children’s Hospital and Brain Center, University Medical Center Utrecht, 3584 EA Utrecht, the Netherlands

25
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. NAT MACH INTELL 2022. [DOI: 10.1038/s42256-022-00498-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
26
Sheng J, Zhang L, Liu C, Liu J, Feng J, Zhou Y, Hu H, Xue G. Higher-dimensional neural representations predict better episodic memory. SCIENCE ADVANCES 2022; 8:eabm3829. [PMID: 35442734 PMCID: PMC9020666 DOI: 10.1126/sciadv.abm3829] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 03/03/2022] [Indexed: 06/14/2023]
Abstract
Episodic memory enables humans to encode and later vividly retrieve information about our rich experiences, yet the neural representations that support this mental capacity are poorly understood. Using a large fMRI dataset (n = 468) of face-name associative memory tasks and principal component analysis to examine neural representational dimensionality (RD), we found that the human brain maintained a high-dimensional representation of faces through hierarchical representation within and beyond the face-selective regions. Critically, greater RD was associated with better subsequent memory performance both within and across participants, and this association was specific to episodic memory but not general cognitive abilities. Furthermore, the frontoparietal activities could suppress the shared low-dimensional fluctuations and reduce the correlations of local neural responses, resulting in greater RD. RD was not associated with the degree of item-specific pattern similarity, and it made complementary contributions to episodic memory. These results provide a mechanistic understanding of the role of RD in supporting accurate episodic memory.
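Representational dimensionality of the kind measured here is commonly quantified by the participation ratio of the PCA eigenvalue spectrum, PR = (Σλ)² / Σλ². A short sketch on synthetic data (not the study's fMRI patterns) contrasting a high-dimensional isotropic code with a low-rank one:

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality from the eigenvalue spectrum of the covariance."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(6)
n, d = 2000, 20
high_d = rng.standard_normal((n, d))                               # isotropic responses
low_d = rng.standard_normal((n, 2)) @ rng.standard_normal((2, d))  # rank-2 responses

pr_high = participation_ratio(high_d)   # approaches d for isotropic data
pr_low = participation_ratio(low_d)     # at most 2 for rank-2 data
```

PR equals d when variance is spread evenly across d dimensions and collapses toward 1 when a single component dominates, which is why it serves as a graded "RD" measure across participants.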
27
Kumar MG, Tan C, Libedinsky C, Yen SC, Tan AYY. A Nonlinear Hidden Layer Enables Actor-Critic Agents to Learn Multiple Paired Association Navigation. Cereb Cortex 2022; 32:3917-3936. [PMID: 35034127 DOI: 10.1093/cercor/bhab456] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/05/2021] [Accepted: 11/06/2021] [Indexed: 11/15/2022] Open
Abstract
Navigation to multiple cued reward locations has been increasingly used to study rodent learning. Though deep reinforcement learning agents have been shown to be able to learn the task, they are not biologically plausible. Biologically plausible classic actor-critic agents have been shown to learn to navigate to single reward locations, but which biologically plausible agents are able to learn multiple cue-reward location tasks has remained unclear. In this computational study, we show versions of classic agents that learn to navigate to a single reward location, and adapt to reward location displacement, but are not able to learn multiple paired association navigation. The limitation is overcome by an agent in which place cell and cue information are first processed by a feedforward nonlinear hidden layer with synapses to the actor and critic subject to temporal difference error-modulated plasticity. Faster learning is obtained when the feedforward layer is replaced by a recurrent reservoir network.
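The agent's key ingredient, a fixed nonlinear hidden layer whose readout synapses are updated by the temporal-difference (TD) error, can be sketched for the critic alone on a toy linear track. Everything here (five states, random tanh features, reward at the end) is illustrative and not the paper's navigation task:

```python
import numpy as np

rng = np.random.default_rng(7)
n_states, n_hidden, gamma, eta = 5, 20, 0.9, 0.05
A = rng.standard_normal((n_hidden, n_states))
phi = np.tanh(A)                       # column s: fixed nonlinear hidden-layer features
                                       # of the one-hot state s (tanh(A @ e_s) = tanh(A[:, s]))

w = np.zeros(n_hidden)                 # critic readout weights
for _ in range(500):                   # episodes: deterministic walk 0 -> 4, reward at the end
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        v_next = 0.0 if s == n_states - 1 else w @ phi[:, s + 1]
        delta = r + gamma * v_next - w @ phi[:, s]   # TD error
        w += eta * delta * phi[:, s]                 # TD-error-modulated plasticity

V = w @ phi                            # learned state values, should rise toward the reward
```

The critic weights only ever change in proportion to the scalar TD error times presynaptic activity, the same locally computable rule the paper applies to both the actor and critic synapses downstream of the hidden layer.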
Collapse
Affiliation(s)
- M Ganesh Kumar
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Innovation and Design Programme, Faculty of Engineering, National University of Singapore, Singapore 117579, Singapore
- Cheston Tan
- Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore 138632, Singapore
- Camilo Libedinsky
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Department of Psychology, National University of Singapore, Singapore 117570, Singapore
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore 138673, Singapore
- Shih-Cheng Yen
- Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077, Singapore
- The N.1 Institute for Health, National University of Singapore, Singapore 117456, Singapore
- Innovation and Design Programme, Faculty of Engineering, National University of Singapore, Singapore 117579, Singapore
- Andrew Y Y Tan
- Department of Physiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117593, Singapore
- Healthy Longevity Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Cardiovascular Disease Translational Research Programme, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Neurobiology Programme, Life Sciences Institute, National University of Singapore, Singapore 119077, Singapore
28
Prisco L, Deimel SH, Yeliseyeva H, Fiala A, Tavosanis G. The anterior paired lateral neuron normalizes odour-evoked activity in the Drosophila mushroom body calyx. eLife 2021; 10:e74172. [PMID: 34964714 PMCID: PMC8741211 DOI: 10.7554/elife.74172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 12/28/2021] [Indexed: 11/25/2022] Open
Abstract
To identify and memorize discrete but similar environmental inputs, the brain needs to distinguish between subtle differences of activity patterns in defined neuronal populations. The Kenyon cells (KCs) of the Drosophila adult mushroom body (MB) respond sparsely to complex olfactory input, a property that is thought to support stimuli discrimination in the MB. To understand how this property emerges, we investigated the role of the inhibitory anterior paired lateral (APL) neuron in the input circuit of the MB, the calyx. Within the calyx, presynaptic boutons of projection neurons (PNs) form large synaptic microglomeruli (MGs) with dendrites of postsynaptic KCs. Combining electron microscopy (EM) data analysis and in vivo calcium imaging, we show that APL, via inhibitory and reciprocal synapses targeting both PN boutons and KC dendrites, normalizes odour-evoked representations in MGs of the calyx. APL response scales with the PN input strength and is regionalized around PN input distribution. Our data indicate that the formation of a sparse code by the KCs requires APL-driven normalization of their MG postsynaptic responses. This work provides experimental insights on how inhibition shapes sensory information representation in a higher brain centre, thereby supporting stimuli discrimination and allowing for efficient associative memory formation.
Affiliation(s)
- Luigi Prisco
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Hanna Yeliseyeva
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- André Fiala
- Department of Molecular Neurobiology of Behavior, University of Göttingen, Göttingen, Germany
- Gaia Tavosanis
- Dynamics of neuronal circuits, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- LIMES, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
29
Gilbert M. The Shape of Data: a Theory of the Representation of Information in the Cerebellar Cortex. THE CEREBELLUM 2021; 21:976-986. [PMID: 34902112 PMCID: PMC9596575 DOI: 10.1007/s12311-021-01352-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/28/2021] [Indexed: 11/30/2022]
Abstract
This paper presents a model of rate coding in the cerebellar cortex. The pathway from input to output of the cerebellum forms an anatomically repeating, functionally modular network whose basic wiring is preserved across vertebrate taxa. Each network is bisected centrally by a functionally defined cell group, a microzone, which forms part of the cerebellar circuit. Input to a network may come from tens of thousands of concurrently active mossy fibres. The model aims to quantify the conversion of input rates into the code received by a microzone. Recoding on entry converts input rates into an internal code that is homogenised in the functional equivalent of an imaginary plane occupied by the centrally positioned microzone. Homogenised means the code is present in any random sample of parallel fibre signals above a minimum number. The nature of the code and the regimented architecture of the cerebellar cortex mean that this threshold can be represented spatially, so that it is met by the physical dimensions of the Purkinje cell dendritic arbour and of planar interneuron networks. As a result, the whole population of a microzone receives the same code. This is part of a mechanism that orchestrates functionally indivisible behaviour of the cerebellar circuit and is necessary for coordinated control of the circuit's output cells. In this model, fine control of Purkinje cells is exerted by input rates to the system and not by learning, placing it in conflict with the long-dominant supervised learning model.
Affiliation(s)
- Mike Gilbert
- School of Psychology, University of Birmingham, Birmingham, UK
30
Guzman SJ, Schlögl A, Espinoza C, Zhang X, Suter BA, Jonas P. How connectivity rules and synaptic properties shape the efficacy of pattern separation in the entorhinal cortex-dentate gyrus-CA3 network. NATURE COMPUTATIONAL SCIENCE 2021; 1:830-842. [PMID: 38217181 DOI: 10.1038/s43588-021-00157-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 10/12/2021] [Indexed: 01/15/2024]
Abstract
Pattern separation is a fundamental brain computation that converts small differences in input patterns into large differences in output patterns. Several synaptic mechanisms of pattern separation have been proposed, including code expansion, inhibition and plasticity; however, which of these mechanisms play a role in the entorhinal cortex (EC)-dentate gyrus (DG)-CA3 circuit, a classical pattern separation circuit, remains unclear. Here we show that a biologically realistic, full-scale EC-DG-CA3 circuit model, including granule cells (GCs) and parvalbumin-positive inhibitory interneurons (PV+-INs) in the DG, is an efficient pattern separator. Both external gamma-modulated inhibition and internal lateral inhibition mediated by PV+-INs substantially contributed to pattern separation. Both local connectivity and fast signaling at GC-PV+-IN synapses were important for maximum effectiveness. Similarly, mossy fiber synapses with conditional detonator properties contributed to pattern separation. By contrast, perforant path synapses with Hebbian synaptic plasticity and direct EC-CA3 connection shifted the network towards pattern completion. Our results demonstrate that the specific properties of cells and synapses optimize higher-order computations in biological networks and might be useful to improve the deep learning capabilities of technical networks.
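The computation defined in the first sentence, converting small input differences into large output differences, can be illustrated with a generic expansion-plus-inhibition sketch. This is not the authors' full-scale EC-DG-CA3 model: the layer sizes, random weight statistics, and the winner-take-most threshold standing in for interneuron inhibition are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 50, 2000                   # divergent, expanding projection

# Two highly correlated input patterns (e.g., similar EC activity vectors)
base = rng.normal(size=n_in)
x1 = base + 0.2 * rng.normal(size=n_in)
x2 = base + 0.2 * rng.normal(size=n_in)

W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)   # random mixing weights

def sparsify(v, frac=0.05):
    """Keep only the top `frac` of units active: a crude stand-in
    for strong lateral/feedback inhibition."""
    thr = np.quantile(v, 1 - frac)
    return (v > thr).astype(float)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_in = corr(x1, x2)                                   # input similarity (high)
r_out = corr(sparsify(W @ x1), sparsify(W @ x2))      # output similarity (lower)
```

The divergent random projection mixes the inputs while the sparsifying threshold, playing the role the study assigns to PV+-interneuron inhibition, turns a high input correlation into a visibly lower output correlation.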
Affiliation(s)
- S Jose Guzman
- IST Austria, Klosterneuburg, Austria
- Institute of Molecular Biotechnology, Vienna, Austria
- Claudia Espinoza
- IST Austria, Klosterneuburg, Austria
- Medical University of Austria, Division of Cognitive Neurobiology, Vienna, Austria
- Xiaomin Zhang
- IST Austria, Klosterneuburg, Austria
- Brain Research Institute, University of Zürich, Zurich, Switzerland
31
Gilbert M. Gating by Memory: a Theory of Learning in the Cerebellum. THE CEREBELLUM 2021; 21:926-943. [PMID: 34757585 DOI: 10.1007/s12311-021-01325-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/24/2021] [Indexed: 11/30/2022]
Abstract
This paper presents a model of learning by the cerebellar circuit. In the traditional and dominant learning model, training teaches finely graded parallel fibre synaptic weights, which modify transmission to Purkinje cells and to the interneurons that inhibit them. Following training, input in a learned pattern drives a training-modified response: the naive response to input rates is displaced by a learned one, trained under external supervision. In the proposed model, there is no weight-controlled graduated balance of excitation and inhibition of Purkinje cells. Instead, the balance has two functional states, a switch, at the synaptic, whole-cell, and microzone levels. The paper is in two parts. The first is a detailed physiological argument for the synaptic learning function. The second uses the function in a computational simulation of pattern memory. Against expectation, this generates a predictable outcome from input chaos (real-world variables). Training always forces synaptic weights away from the middle and towards the limits of the range, causing them to polarise, so that transmission is either robust or blocked. All conditions teach the same outcome, such that all learned patterns receive the same, rather than a bespoke, effect on transmission. In this model, the function of learning is gating: selecting patterns that merely trigger output, not modifying the output itself. The outcome is memory-operated gate activation, which switches a two-state balance of weight-controlled transmission. Group activity of parallel fibres simultaneously carries a second code in collective rates, which varies independently of the pattern code. A two-state response to the pattern code allows faithful, graduated control of Purkinje cell firing by the rate code, at gated times.
Affiliation(s)
- Mike Gilbert
- School of Psychology, University of Birmingham, Birmingham, UK
32
Biane C, Rückerl F, Abrahamsson T, Saint-Cloment C, Mariani J, Shigemoto R, DiGregorio DA, Sherrard RM, Cathala L. Developmental emergence of two-stage nonlinear synaptic integration in cerebellar interneurons. eLife 2021; 10:65954. [PMID: 34730085 PMCID: PMC8565927 DOI: 10.7554/elife.65954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 09/28/2021] [Indexed: 11/13/2022] Open
Abstract
Synaptic transmission, connectivity, and dendritic morphology mature in parallel during brain development and are often disrupted in neurodevelopmental disorders. Yet how these changes influence the neuronal computations necessary for normal brain function is not well understood. To identify cellular mechanisms underlying the maturation of synaptic integration in interneurons, we combined patch-clamp recordings of excitatory inputs in mouse cerebellar stellate cells (SCs), three-dimensional reconstruction of SC morphology with excitatory synapse location, and biophysical modeling. We found that postnatal maturation of postsynaptic strength was homogeneously reduced along the somatodendritic axis, but dendritic integration was always sublinear. However, dendritic branching increased without changes in synapse density, leading to a substantial gain in distal inputs. Thus, changes in synapse distribution, rather than dendrite cable properties, are the dominant mechanism underlying the maturation of neuronal computation. These mechanisms favor the emergence of a spatially compartmentalized two-stage integration model, promoting location-dependent integration within dendritic subunits.
Affiliation(s)
- Celia Biane
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
- Florian Rückerl
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
- Therese Abrahamsson
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
- Cécile Saint-Cloment
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
- Jean Mariani
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
- Ryuichi Shigemoto
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- David A DiGregorio
- Institut Pasteur, Université de Paris, CNRS UMR 3571, Unit of Synapse and Circuit Dynamics, Paris, France
- Rachel M Sherrard
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
- Laurence Cathala
- Sorbonne Université et CNRS UMR 8256, Adaptation Biologique et Vieillissement, Paris, France
- Paris Brain Institute, CNRS UMR 7225 - Inserm U1127 - Sorbonne Université, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
33
Lee JM, Devaraj V, Jeong NN, Lee Y, Kim YJ, Kim T, Yi SH, Kim WG, Choi EJ, Kim HM, Chang CL, Mao C, Oh JW. Neural mechanism mimetic selective electronic nose based on programmed M13 bacteriophage. Biosens Bioelectron 2021; 196:113693. [PMID: 34700263 DOI: 10.1016/j.bios.2021.113693] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/30/2021] [Accepted: 10/02/2021] [Indexed: 01/03/2023]
Abstract
The electronic nose is a reliable practical sensor device that mimics olfactory organs. Although numerous studies have demonstrated excellent detection of various target substances with the help of ideal models, biomimetic approaches still fall short in practical realization because they cannot mimic the signal processing performed by olfactory neural systems. Herein, we propose an electronic nose based on the programmable surface chemistry of the M13 bacteriophage, inspired by the neural mechanism of the mammalian olfactory system. Neural pattern separation (NPS) was devised to apply the pattern separation that operates in the memory and learning processes of the brain to the electronic nose. We demonstrate an electronic nose in portable device form that distinguishes polycyclic aromatic compounds (harmful in the living environment) at atomic-level resolution (97.5% selectivity rate) for the first time. Our results provide practical methodology and inspiration for the development of second-generation electronic noses approaching the performance of detection dogs (K9).
Affiliation(s)
- Jong-Min Lee
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea; School of Nano Convergence Technology, Hallym University, Chuncheon, Gangwon-do, 24252, South Korea
- Vasanthan Devaraj
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
- Na-Na Jeong
- Department of Public Health Science, Graduate School of Korea University, Seoul, 02841, South Korea
- Yujin Lee
- Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea
- Ye-Ji Kim
- Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea
- Taehyeong Kim
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea
- Seung Heon Yi
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea
- Won-Geun Kim
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
- Eun Jung Choi
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea
- Hyun-Min Kim
- Finance·Fishery·Manufacture Industrial Mathematics Center on Big Data and Department of Mathematics, Pusan National University, Busan, 46241, South Korea
- Chulhun L Chang
- Department of Laboratory Medicine, College of Medicine, Pusan National University, Yangsan, 50612, South Korea
- Chuanbin Mao
- Department of Chemistry and Biochemistry, University of Oklahoma, Norman, OK, 73019, United States
- Jin-Woo Oh
- Bio-IT Fusion Technology Research Institute, Pusan National University, Busan, 46241, South Korea; Department of Nano Fusion Technology, Pusan National University, Busan, 46241, South Korea
34
Jazayeri M, Ostojic S. Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity. Curr Opin Neurobiol 2021; 70:113-120. [PMID: 34537579 PMCID: PMC8688220 DOI: 10.1016/j.conb.2021.08.002] [Citation(s) in RCA: 53] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 08/11/2021] [Accepted: 08/12/2021] [Indexed: 11/16/2022]
Abstract
The ongoing exponential rise in recording capacity calls for new approaches for analysing and interpreting neural data. Effective dimensionality has emerged as an important property of neural activity across populations of neurons, yet different studies rely on different definitions and interpretations of this quantity. Here, we focus on intrinsic and embedding dimensionality and discuss how they might reveal computational principles from data. Reviewing recent works, we propose that intrinsic dimensionality reflects information about the latent variables encoded in collective activity, while embedding dimensionality reveals the manner in which this information is processed. We conclude by highlighting the role of network models as an ideal substrate for testing specific hypotheses about the computational principles reflected in intrinsic and embedding dimensionality.
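The distinction between the two quantities can be made concrete with a toy population whose activity is driven by a single latent variable (intrinsic dimensionality 1) but embedded nonlinearly, so that its linear embedding dimensionality is larger. This is a generic illustration, not an analysis from the review; the harmonic tuning curves and the 95% variance criterion are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, size=1000)     # one latent variable

# 20 "neurons" with nonlinear (higher-harmonic) tuning to theta:
# the intrinsic dimensionality is 1, but the manifold is curved in R^20.
phases = rng.uniform(0, 2 * np.pi, size=20)
harmonics = rng.integers(1, 4, size=20)          # harmonics 1..3
X = np.cos(harmonics[None, :] * theta[:, None] + phases[None, :])

X -= X.mean(axis=0)
# Linear (embedding) dimensionality: PCA components needed for 95% variance
evals = np.linalg.eigvalsh(X.T @ X / len(X))[::-1]   # descending eigenvalues
var_ratio = np.cumsum(evals) / evals.sum()
embedding_dim = int(np.searchsorted(var_ratio, 0.95) + 1)
```

Nonlinear dimensionality-reduction methods would recover the single latent variable theta, while the PCA count above reports the larger linear embedding dimensionality (here bounded by the six dimensions spanned by three harmonics).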
Affiliation(s)
- Mehrdad Jazayeri
- McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, 75005, Paris, France
35
Li BX, Dong GH, Li HL, Zhang JS, Bing YH, Chu CP, Cui SB, Qiu DL. Chronic Ethanol Exposure Enhances Facial Stimulation-Evoked Mossy Fiber-Granule Cell Synaptic Transmission via GluN2A Receptors in the Mouse Cerebellar Cortex. Front Syst Neurosci 2021; 15:657884. [PMID: 34408633 PMCID: PMC8365521 DOI: 10.3389/fnsys.2021.657884] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 07/08/2021] [Indexed: 11/16/2022] Open
Abstract
Sensory information is transferred to the cerebellar cortex via the mossy fiber–granule cell (MF–GC) pathway, which participates in motor coordination and motor learning. We previously reported that chronic ethanol exposure from adolescence facilitated sensory-evoked molecular layer interneuron–Purkinje cell synaptic transmission in adult mice in vivo. Herein, we investigated the effect of chronic ethanol exposure from adolescence on facial stimulation-evoked MF–GC synaptic transmission in the adult mouse cerebellar cortex using electrophysiological recording techniques and pharmacological methods. Chronic ethanol exposure from adolescence induced an enhancement of facial stimulation-evoked MF–GC synaptic transmission in the cerebellar cortex of adult mice. Application of an N-methyl-D-aspartate receptor (NMDAR) antagonist, D-APV (250 μM), induced stronger depression of facial stimulation-evoked MF–GC synaptic transmission in chronic ethanol-exposed mice than in control mice. The chronic ethanol exposure-induced facilitation of facial stimulation-evoked MF–GC synaptic transmission was abolished by a selective GluN2A antagonist, PEAQX (10 μM), but was unaffected by application of a selective GluN2B antagonist, TCN-237 (10 μM), or a type 1 metabotropic glutamate receptor blocker, JNJ16259685 (10 μM). These results indicate that chronic ethanol exposure from adolescence enhances facial stimulation-evoked MF–GC synaptic transmission via GluN2A, suggesting that chronic ethanol exposure from adolescence impairs the high-fidelity transmission of sensory information in the cerebellar cortex by enhancing the NMDAR-mediated components of MF–GC synaptic transmission in adult mice in vivo.
Affiliation(s)
- Bing-Xue Li
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
- Guang-Hui Dong
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Neurology, Affiliated Hospital of Yanbian University, Yanji, China
- Hao-Long Li
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
- Jia-Song Zhang
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
- Yan-Hua Bing
- Brain Science Research Center, Yanbian University, Yanji, China
- Chun-Ping Chu
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
- Song-Biao Cui
- Department of Neurology, Affiliated Hospital of Yanbian University, Yanji, China
- De-Lai Qiu
- Brain Science Research Center, Yanbian University, Yanji, China
- Department of Physiology and Pathophysiology, College of Medicine, Yanbian University, Yanji, China
36
Lanore F, Cayco-Gajic NA, Gurnani H, Coyle D, Silver RA. Cerebellar granule cell axons support high-dimensional representations. Nat Neurosci 2021; 24:1142-1150. [PMID: 34168340 PMCID: PMC7611462 DOI: 10.1038/s41593-021-00873-x] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 05/13/2021] [Indexed: 02/05/2023]
Abstract
In classical theories of cerebellar cortex, high-dimensional sensorimotor representations are used to separate neuronal activity patterns, improving associative learning and motor performance. Recent experimental studies suggest that cerebellar granule cell (GrC) population activity is low-dimensional. To examine sensorimotor representations from the point of view of downstream Purkinje cell 'decoders', we used three-dimensional acousto-optic lens two-photon microscopy to record from hundreds of GrC axons. Here we show that GrC axon population activity is high dimensional and distributed with little fine-scale spatial structure during spontaneous behaviors. Moreover, distinct behavioral states are represented along orthogonal dimensions in neuronal activity space. These results suggest that the cerebellar cortex supports high-dimensional representations and segregates behavioral state-dependent computations into orthogonal subspaces, as reported in the neocortex. Our findings match the predictions of cerebellar pattern separation theories and suggest that the cerebellum and neocortex use population codes with common features, despite their vastly different circuit structures.
Affiliation(s)
- Frederic Lanore
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- University of Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, Bordeaux, France
- N Alex Cayco-Gajic
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Group for Neural Theory, Laboratoire de neurosciences cognitives et computationnelles, Département d'études cognitives, École normale supérieure, INSERM U960, Université Paris Sciences et Lettres, Paris, France
- Harsha Gurnani
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- Diccon Coyle
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
- R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London, UK
37
Why Does the Neocortex Need the Cerebellum for Working Memory? J Neurosci 2021; 41:6368-6370. [PMID: 34321336 DOI: 10.1523/jneurosci.0701-21.2021] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 04/28/2021] [Accepted: 05/03/2021] [Indexed: 11/21/2022] Open
38
Kita K, Albergaria C, Machado AS, Carey MR, Müller M, Delvendahl I. GluA4 facilitates cerebellar expansion coding and enables associative memory formation. eLife 2021; 10:65152. [PMID: 34219651 PMCID: PMC8291978 DOI: 10.7554/elife.65152] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Accepted: 07/01/2021] [Indexed: 01/17/2023] Open
Abstract
AMPA receptors (AMPARs) mediate excitatory neurotransmission in the central nervous system (CNS) and their subunit composition determines synaptic efficacy. Whereas AMPAR subunits GluA1–GluA3 have been linked to particular forms of synaptic plasticity and learning, the functional role of GluA4 remains elusive. Here, we demonstrate a crucial function of GluA4 for synaptic excitation and associative memory formation in the cerebellum. Notably, GluA4-knockout mice had ~80% reduced mossy fiber to granule cell synaptic transmission. The fidelity of granule cell spike output was markedly decreased despite attenuated tonic inhibition and increased NMDA receptor-mediated transmission. Computational network modeling incorporating these changes revealed that deletion of GluA4 impairs granule cell expansion coding, which is important for pattern separation and associative learning. On a behavioral level, while locomotor coordination was generally spared, GluA4-knockout mice failed to form associative memories during delay eyeblink conditioning. These results demonstrate an essential role for GluA4-containing AMPARs in cerebellar information processing and associative learning.
Affiliation(s)
- Katarzyna Kita
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
- Catarina Albergaria
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Ana S Machado
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Megan R Carey
- Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Martin Müller
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
- Igor Delvendahl
- Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, Zurich, Switzerland
39
Gurnani H, Silver RA. Multidimensional population activity in an electrically coupled inhibitory circuit in the cerebellar cortex. Neuron 2021; 109:1739-1753.e8. [PMID: 33848473 PMCID: PMC8153252 DOI: 10.1016/j.neuron.2021.03.027] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 01/20/2021] [Accepted: 03/20/2021] [Indexed: 01/05/2023]
Abstract
Inhibitory neurons orchestrate the activity of excitatory neurons and play key roles in circuit function. Although individual interneurons have been studied extensively, little is known about their properties at the population level. Using random-access 3D two-photon microscopy, we imaged local populations of cerebellar Golgi cells (GoCs), which deliver inhibition to granule cells. We show that population activity is organized into multiple modes during spontaneous behaviors. A slow, network-wide common modulation of GoC activity correlates with the level of whisking and locomotion, while faster (<1 s) differential population activity, arising from spatially mixed heterogeneous GoC responses, encodes more precise information. A biologically detailed GoC circuit model reproduced the common population mode and the dimensionality observed experimentally, but these properties disappeared when electrical coupling was removed. Our results establish that local GoC circuits exhibit multidimensional activity patterns that could be used for inhibition-mediated adaptive gain control and spatiotemporal patterning of downstream granule cells.
Affiliation(s)
- Harsha Gurnani
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK
- R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, London WC1E 6BT, UK
40
Farrell M, Recanatesi S, Reid RC, Mihalas S, Shea-Brown E. Autoencoder networks extract latent variables and encode these variables in their connectomes. Neural Netw 2021; 141:330-343. [PMID: 33957382 DOI: 10.1016/j.neunet.2021.03.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 03/02/2021] [Accepted: 03/08/2021] [Indexed: 11/30/2022]
Abstract
Advances in electron microscopy and data processing techniques are leading to increasingly large and complete microscale connectomes. At the same time, advances in artificial neural networks have produced model systems that perform comparably rich computations with perfectly specified connectivity. This raises an exciting scientific opportunity for the study of both biological and artificial neural networks: to infer the underlying circuit function from the structure of its connectivity. A potential roadblock, however, is that, even with well-constrained neural dynamics, there are in principle many different connectomes that could support a given computation. Here, we define a tractable setting in which the problem of inferring circuit function from circuit connectivity can be analyzed in detail: the function of input compression and reconstruction, in an autoencoder network with a single hidden layer. In this setting there is, in general, substantial ambiguity in the weights that can produce the same circuit function, because largely arbitrary changes to the input weights can be undone by applying the inverse modifications to the output weights. However, we use mathematical arguments and simulations to show that adding simple, biologically motivated regularization of connectivity resolves this ambiguity in an interesting way: weights are constrained such that the latent variable structure underlying the inputs can be extracted from the weights using nonlinear dimensionality reduction methods.
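The ambiguity argument is easy to reproduce numerically: applying any invertible map M to the hidden layer and its inverse to the output weights leaves the autoencoder's input-output function unchanged, while an L2 penalty on the weights does change, so regularization selects among function-equivalent connectomes. The sketch below is a linear, untrained illustration of this symmetry with arbitrarily chosen sizes; the paper's actual analysis covers the nonlinear, trained case.

```python
import numpy as np

rng = np.random.default_rng(3)
d, h, n = 8, 4, 200
Z = rng.normal(size=(n, 2))                   # 2 latent variables
X = Z @ rng.normal(size=(2, d))               # inputs with low-rank latent structure

W_in = rng.normal(size=(d, h))                # encoder weights
W_out = rng.normal(size=(h, d))               # decoder weights

# Ambiguity: an invertible map M on the hidden layer, undone at the output,
# leaves the circuit's input-output function unchanged...
M = rng.normal(size=(h, h)) + 4 * np.eye(h)   # well-conditioned invertible map
Y1 = (X @ W_in) @ W_out
Y2 = (X @ (W_in @ M)) @ (np.linalg.inv(M) @ W_out)
same_function = np.allclose(Y1, Y2)

# ...but an L2 penalty on the weights is NOT invariant under M, so
# regularization breaks the symmetry among function-equivalent connectomes.
def l2(Wi, Wo):
    return np.sum(Wi ** 2) + np.sum(Wo ** 2)

penalty_changes = not np.isclose(l2(W_in, W_out),
                                 l2(W_in @ M, np.linalg.inv(M) @ W_out))
```

Because the penalty singles out particular weights among all function-equivalent choices, the trained, regularized weights carry information about the input statistics, which is what makes latent-variable recovery from the connectome possible.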
Affiliation(s)
- Matthew Farrell
- Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America
- Stefano Recanatesi
- Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America
- R Clay Reid
- Allen Institute for Brain Science, Seattle, WA, United States of America
- Stefan Mihalas
- Allen Institute for Brain Science, Seattle, WA, United States of America
- Eric Shea-Brown
- Applied Mathematics Department, University of Washington, Seattle, WA, United States of America; Computational Neuroscience Center, University of Washington, Seattle, WA, United States of America; Allen Institute for Brain Science, Seattle, WA, United States of America
Collapse
|
41
|
Raman DV, O'Leary T. Frozen algorithms: how the brain's wiring facilitates learning. Curr Opin Neurobiol 2021; 67:207-214. [PMID: 33508698 PMCID: PMC8202511 DOI: 10.1016/j.conb.2020.12.017] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 12/21/2020] [Accepted: 12/30/2020] [Indexed: 12/03/2022]
Abstract
Synapses and neural connectivity are plastic and shaped by experience. But to what extent does connectivity itself influence the ability of a neural circuit to learn? Insights from optimization theory and AI shed light on how learning can be implemented in neural circuits. Though abstract in nature, learning algorithms provide a principled set of hypotheses on the necessary ingredients for learning in neural circuits. These include the kinds of signals and circuit motifs that enable learning from experience, as well as an appreciation of the constraints that make learning challenging in a biological setting. Remarkably, some simple connectivity patterns can boost the efficiency of relatively crude learning rules, showing how the brain can use anatomy to compensate for the biological constraints of known synaptic plasticity mechanisms. Modern connectomics provides rich data for exploring this principle, and may reveal how brain connectivity is constrained by the requirement to learn efficiently.
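One concrete example from the machine-learning literature of connectivity compensating for a crude learning rule is feedback alignment, where errors are routed back through fixed random weights rather than the transpose of the forward weights that exact backpropagation would require. The sketch below is an illustration of that general idea on a toy task with hypothetical sizes, not code from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: learn a nonlinear map with a two-layer network.
X = rng.normal(size=(200, 5))
W_true = rng.normal(size=(5, 3))
Y = np.tanh(X @ W_true)

W1 = 0.1 * rng.normal(size=(5, 8))
W2 = 0.1 * rng.normal(size=(8, 3))
B = rng.normal(size=(3, 8))   # fixed random feedback pathway, never learned
lr = 0.05

def forward(X, W1, W2):
    H = np.tanh(X @ W1)
    return H, H @ W2

_, Y0 = forward(X, W1, W2)
loss_start = np.mean((Y0 - Y) ** 2)
for _ in range(3000):
    H, Yhat = forward(X, W1, W2)
    err = Yhat - Y
    # Crude rule: the hidden-layer error is carried back through the fixed
    # random weights B instead of W2.T, yet learning still proceeds.
    delta_h = (err @ B) * (1 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)
_, Y1 = forward(X, W1, W2)
loss_end = np.mean((Y1 - Y) ** 2)
```

The forward weights come to "align" with the fixed feedback pathway, so a biologically cheap, frozen wiring pattern substitutes for a precisely mirrored one.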
Collapse
Affiliation(s)
- Dhruva V Raman
- Department of Engineering, University of Cambridge, United Kingdom
| | - Timothy O'Leary
- Department of Engineering, University of Cambridge, United Kingdom.
| |
Collapse
|
42
|
Mishra P, Narayanan R. Ion-channel regulation of response decorrelation in a heterogeneous multi-scale model of the dentate gyrus. CURRENT RESEARCH IN NEUROBIOLOGY 2021; 2:100007. [PMID: 33997798 PMCID: PMC7610774 DOI: 10.1016/j.crneur.2021.100007] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
Abstract
Heterogeneities in biological neural circuits manifest in afferent connectivity as well as in local-circuit components such as neuronal excitability, neural structure and local synaptic strengths. The expression of adult neurogenesis in the dentate gyrus (DG) amplifies local-circuit heterogeneities and guides heterogeneities in afferent connectivity. How do neurons and their networks endowed with these distinct forms of heterogeneities respond to perturbations to individual ion channels, which are known to change under several physiological and pathophysiological conditions? We sequentially traversed the ion channels-neurons-network scales and assessed the impact of eliminating individual ion channels on conductance-based neuronal and network models endowed with disparate local-circuit and afferent heterogeneities. We found that many ion channels differentially contributed to specific neuronal or network measurements, and the elimination of any given ion channel altered several functional measurements. We then quantified the impact of ion-channel elimination on response decorrelation, a well-established metric to assess the ability of neurons in a network to convey complementary information, in DG networks endowed with different forms of heterogeneities. Notably, we found that networks constructed with structurally immature neurons exhibited functional robustness, manifesting as minimal changes in response decorrelation in the face of ion-channel elimination. Importantly, the average change in output correlation was dependent on the eliminated ion channel but invariant to input correlation. Our analyses suggest that neurogenesis-driven structural heterogeneities could assist the DG network in providing functional resilience to molecular perturbations.
- Perturbations at one scale result in a cascading impact on physiology across scales.
- Heterogeneous multi-scale models were used to assess the impact of ion-channel deletion.
- The mapping of structural components to functional outcomes is many-to-many.
- Ion-channel deletion has a differential and variable impact on response decorrelation.
- Neurogenesis-induced structural heterogeneity confers resilience to perturbations.
Collapse
Affiliation(s)
- Poonam Mishra
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India
| | - Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore 560012, India
| |
Collapse
|
43
|
Recanatesi S, Farrell M, Lajoie G, Deneve S, Rigotti M, Shea-Brown E. Predictive learning as a network mechanism for extracting low-dimensional latent space representations. Nat Commun 2021; 12:1417. [PMID: 33658520 PMCID: PMC7930246 DOI: 10.1038/s41467-021-21696-1] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2019] [Accepted: 01/22/2021] [Indexed: 01/02/2023] Open
Abstract
Artificial neural networks have recently achieved many successes in solving sequential processing and planning tasks. Their success is often ascribed to the emergence of the task’s low-dimensional latent structure in the network activity – i.e., in the learned neural representations. Here, we investigate the hypothesis that a means for generating representations with easily accessed low-dimensional latent structure, possibly reflecting an underlying semantic organization, is through learning to predict observations about the world. Specifically, we ask whether and when network mechanisms for sensory prediction coincide with those for extracting the underlying latent variables. Using a recurrent neural network model trained to predict a sequence of observations we show that network dynamics exhibit low-dimensional but nonlinearly transformed representations of sensory inputs that map the latent structure of the sensory environment. We quantify these results using nonlinear measures of intrinsic dimensionality and linear decodability of latent variables, and provide mathematical arguments for why such useful predictive representations emerge. We focus throughout on how our results can aid the analysis and interpretation of experimental data. Neural networks trained using predictive models generate representations that recover the underlying low-dimensional latent structure in the data. Here, the authors demonstrate that a network trained on a spatial navigation task generates place-related neural activations similar to those observed in the hippocampus and show that these are related to the latent structure.
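A toy version of the predictive-learning mechanism described here: a linear bottleneck network trained to predict the next observation of a signal generated by a low-dimensional circular latent variable. Because only the latent subspace is predictable, learning pushes the hidden layer toward a representation of the latent. Sizes, data, and rates are hypothetical illustrations, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Observations generated from a 1-D circular latent variable (hypothetical setup).
theta = 0.1 * np.arange(1000)
latent = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # 2-D embedding of the latent
M = rng.normal(size=(2, 10))
obs = latent @ M                                           # 10-D sensory observations

X, Y = obs[:-1], obs[1:]                                   # task: predict the next observation
A = 0.1 * rng.normal(size=(10, 2))                         # encoder to a 2-D "hidden" layer
B = 0.1 * rng.normal(size=(2, 10))                         # decoder producing the prediction
lr = 0.005

def pred_loss(X, Y, A, B):
    return np.mean((X @ A @ B - Y) ** 2)

loss_start = pred_loss(X, Y, A, B)
for _ in range(3000):
    H = X @ A
    err = H @ B - Y
    # Plain gradient descent on next-step prediction error.
    B -= lr * H.T @ err / len(X)
    A -= lr * X.T @ (err @ B.T) / len(X)
loss_end = pred_loss(X, Y, A, B)
```

After training, the two hidden units carry (a linear transform of) cos θ and sin θ, i.e., the latent structure of the sensory stream, without ever being told about θ.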
Collapse
Affiliation(s)
- Stefano Recanatesi
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA.
| | - Matthew Farrell
- Department of Applied Mathematics, University of Washington, Seattle, WA, USA
| | - Guillaume Lajoie
- Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada; Mila-Quebec Artificial Intelligence Institute, Montreal, QC, Canada
| | - Sophie Deneve
- Group for Neural Theory, École Normale Supérieure, Paris, France
| | | | - Eric Shea-Brown
- University of Washington Center for Computational Neuroscience and Swartz Center for Theoretical Neuroscience, Seattle, WA, USA; Department of Applied Mathematics, University of Washington, Seattle, WA, USA; Allen Institute for Brain Science, Seattle, WA, USA
| |
Collapse
|
44
|
Gilbert M, Chris Miall R. How and Why the Cerebellum Recodes Input Signals: An Alternative to Machine Learning. Neuroscientist 2021; 28:206-221. [PMID: 33559532 PMCID: PMC9136479 DOI: 10.1177/1073858420986795] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Mossy fiber input to the cerebellum is received by granule cells, where it is thought to be recoded into internal signals received by Purkinje cells, which alone carry the output of the cerebellar cortex. In any neural network, variables are contained in groups of signals as well as signals themselves—which cells are active and how many, for example, and statistical variables coded in rates, such as the mean and range, and which rates are strongly represented, in a defined population. We argue that the primary function of recoding is to confine translation to an effect of some variables and not others—both where input is recoded into internal signals and where internal signals are translated downstream into an effect on Purkinje cells. The cull of variables is harsh. Internal signaling is group coded. This allows coding to exploit statistics for a reliable and precise effect despite needing to work with high-dimensional input that is highly and unpredictably variable. An important effect is to normalize eclectic input signals, so that the basic, repeating cerebellar circuit, preserved across taxa, does not need to specialize (beyond regional variations). With this model, there is no need to slavishly conserve or compute data coded in single signals. If we are correct, a learning algorithm—for years, a mainstay of cerebellar modeling—would be redundant.
Collapse
Affiliation(s)
- Mike Gilbert
- School of Psychology, University of Birmingham, Birmingham, UK
| | - R Chris Miall
- School of Psychology, University of Birmingham, Birmingham, UK
| |
Collapse
|
45
|
Gating by Functionally Indivisible Cerebellar Circuits: a Hypothesis. THE CEREBELLUM 2021; 20:518-532. [PMID: 33464470 PMCID: PMC8360902 DOI: 10.1007/s12311-020-01223-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 12/01/2020] [Indexed: 11/08/2022]
Abstract
The attempt to understand the cerebellum has been dominated for years by supervised learning models. The central idea is that a learning algorithm modifies transmission strength at repeatedly co-active synapses, creating memories stored as finely calibrated synaptic weights. As a result, Purkinje cells, usually the de facto output cells of these models, acquire a modified response to input in a remembered pattern. This paper proposes an alternative model of pattern memory in which the function of a match is permissive, allowing but not driving output, and accordingly controlling the timing of output but not the rate of firing by Purkinje cells. Learning does not result in graded synaptic weights. There is no supervised learning algorithm or memory of individual patterns, which, like graded weights, are unnecessary to explain the evidence. Instead, patterns are classed simply as either known or not, at the level of input to a functional population of hundreds of Purkinje cells (a microzone). The standard is strict. If even a handful of Purkinje cells receive a mismatch, output of the whole circuit is blocked. Only if there is a full and accurate match are projection neurons in deep nuclei, which carry the output of most circuits, released from default inhibitory restraint. Purkinje cell firing at those times is a linear function of input rates. There is no effect of modification of synaptic transmission except to either allow or block output.
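The strict all-or-none gate proposed here reduces to a few lines: nuclear output is released only when every Purkinje cell in the microzone reports a match. The function name and population size below are hypothetical placeholders for the hypothesis, not a published implementation.

```python
import numpy as np

def microzone_gate(match_flags):
    """Permissive gating (hypothetical sketch): deep-nuclear output is
    released from inhibition only if *all* Purkinje cells in the
    microzone signal a pattern match; any mismatch blocks output."""
    return bool(np.all(match_flags))

known = np.ones(300, dtype=bool)      # a fully matched input pattern
almost = known.copy()
almost[:5] = False                    # just a handful of mismatching Purkinje cells
```

Under this rule `almost`, despite matching at 295 of 300 cells, still blocks output entirely, which is the paper's contrast with graded synaptic-weight models.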
Collapse
|
46
|
Puñal VM, Ahmed M, Thornton-Kolbe EM, Clowney EJ. Untangling the wires: development of sparse, distributed connectivity in the mushroom body calyx. Cell Tissue Res 2021; 383:91-112. [PMID: 33404837 PMCID: PMC9835099 DOI: 10.1007/s00441-020-03386-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 12/07/2020] [Indexed: 01/16/2023]
Abstract
Appropriate perception and representation of sensory stimuli pose an everyday challenge to the brain. In order to represent the wide and unpredictable array of environmental stimuli, principal neurons of associative learning regions receive sparse, combinatorial sensory inputs. Despite the broad role of such networks in sensory neural circuits, the developmental mechanisms underlying their emergence are not well understood. As mammalian sensory coding regions are numerically complex and lack the accessibility of simpler invertebrate systems, we chose to focus this review on the numerically simpler, yet functionally similar, Drosophila mushroom body calyx. We bring together current knowledge about the cellular and molecular mechanisms orchestrating calyx development, in addition to drawing insights from literature regarding construction of sparse wiring in the mammalian cerebellum. From this, we formulate hypotheses to guide our future understanding of the development of this critical perceptual center.
Collapse
Affiliation(s)
- Vanessa M. Puñal
- Department of Molecular, Cellular & Developmental Biology, The University of Michigan, Ann Arbor, MI 48109, USA; Department of Molecular & Integrative Physiology, The University of Michigan, Ann Arbor, MI 48109, USA
| | - Maria Ahmed
- Department of Molecular, Cellular & Developmental Biology, The University of Michigan, Ann Arbor, MI 48109, USA
| | - Emma M. Thornton-Kolbe
- Department of Molecular, Cellular & Developmental Biology, The University of Michigan, Ann Arbor, MI 48109, USA; Neuroscience Graduate Program, The University of Michigan, Ann Arbor, MI 48109, USA
| | - E. Josephine Clowney
- Department of Molecular, Cellular & Developmental Biology, The University of Michigan, Ann Arbor, MI 48109, USA
| |
Collapse
|
47
|
Effect of diverse recoding of granule cells on optokinetic response in a cerebellar ring network with synaptic plasticity. Neural Netw 2020; 134:173-204. [PMID: 33316723 DOI: 10.1016/j.neunet.2020.11.014] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 11/12/2020] [Accepted: 11/24/2020] [Indexed: 11/21/2022]
Abstract
We consider a cerebellar ring network for the optokinetic response (OKR), and investigate the effect of diverse recoding of granule (GR) cells on OKR by varying the connection probability pc from Golgi to GR cells. For an optimal value of pc∗(=0.06), individual GR cells exhibit diverse spiking patterns which are in-phase, anti-phase, or complex out-of-phase with respect to their population-averaged firing activity. These diversely-recoded signals, carried via parallel fibers (PFs) from GR cells, are effectively depressed by the error-teaching signals via climbing fibers from the inferior olive, which are themselves in-phase. Synaptic weights at in-phase PF-Purkinje cell (PC) synapses of active GR cells are strongly depressed via strong long-term depression (LTD), while those at anti-phase and complex out-of-phase PF-PC synapses are weakly depressed through weak LTD. This kind of "effective" depression (i.e., strong/weak LTD) at the PF-PC synapses causes a large modulation in the firing of PCs, which then exert effective inhibitory coordination on the vestibular nucleus (VN) neuron (which evokes OKR). For the firing of the VN neuron, the learning gain degree Lg, corresponding to the modulation gain ratio, increases over successive learning cycles and saturates at about the 300th cycle. By varying pc from pc∗, we find that a plot of the saturated learning gain degree Lg∗ versus pc forms a bell-shaped curve with a peak at pc∗ (where the diversity degree in spiking patterns of GR cells is also maximal). Consequently, the more diverse the recoding of GR cells, the more effective the motor learning for OKR adaptation.
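The phase-dependent LTD described above can be caricatured in a few lines: depression proportional to the coincidence of PF and CF rates depresses an in-phase synapse much more than an anti-phase one. The sinusoidal rates and learning rate below are hypothetical, chosen only to make the contrast visible.

```python
import numpy as np

t = np.linspace(0, 1, 1000)
pop_phase = 2 * np.pi * t                 # one cycle of the population rhythm

# Parallel-fiber (GR cell) rates relative to the population-averaged activity.
pf_in = 1 + np.sin(pop_phase)             # in-phase GR cell
pf_anti = 1 - np.sin(pop_phase)           # anti-phase GR cell
cf = 1 + np.sin(pop_phase)                # climbing-fiber error signal, in-phase

lr = 1e-4
w_in, w_anti = 1.0, 1.0
for pf_i, pf_a, c in zip(pf_in, pf_anti, cf):
    # LTD rule (hypothetical): depression proportional to PF-CF coincidence.
    w_in -= lr * pf_i * c
    w_anti -= lr * pf_a * c
```

Because the in-phase PF rate correlates with the CF signal while the anti-phase rate anti-correlates, `w_in` ends up well below `w_anti`: the "strong LTD / weak LTD" split the abstract describes.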
Collapse
|
48
|
Yamaura H, Igarashi J, Yamazaki T. Simulation of a Human-Scale Cerebellar Network Model on the K Computer. Front Neuroinform 2020; 14:16. [PMID: 32317955 PMCID: PMC7146068 DOI: 10.3389/fninf.2020.00016] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 03/18/2020] [Indexed: 12/15/2022] Open
Abstract
Computer simulation of the human brain at an individual neuron resolution is an ultimate goal of computational neuroscience. The Japanese flagship supercomputer, K, provides unprecedented computational capability toward this goal. The cerebellum contains 80% of the neurons in the whole brain. Therefore, computer simulation of the human-scale cerebellum will be a challenge for modern supercomputers. In this study, we built a human-scale spiking network model of the cerebellum, composed of 68 billion spiking neurons, on the K computer. As a benchmark, we performed a computer simulation of a cerebellum-dependent eye movement task known as the optokinetic response. We succeeded in reproducing plausible neuronal activity patterns that are observed experimentally in animals. The model was built on dedicated neural network simulation software called MONET (Millefeuille-like Organization NEural neTwork), which calculates layered sheet types of neural networks with parallelization by tile partitioning. To examine the scalability of the MONET simulator, we repeatedly performed simulations while changing the number of compute nodes from 1,024 to 82,944 and measured the computational time. We observed a good weak-scaling property for our cerebellar network model. Using all 82,944 nodes, we succeeded in simulating a human-scale cerebellum for the first time, although the simulation ran 578 times slower than real time. These results suggest that the K computer is already capable of simulating a human-scale cerebellar model with the aid of the MONET simulator.
Collapse
Affiliation(s)
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
| | - Jun Igarashi
- Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
| | - Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
| |
Collapse
|
49
|
Acetylcholine Modulates Cerebellar Granule Cell Spiking by Regulating the Balance of Synaptic Excitation and Inhibition. J Neurosci 2020; 40:2882-2894. [PMID: 32111698 DOI: 10.1523/jneurosci.2148-19.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Revised: 02/03/2020] [Accepted: 02/20/2020] [Indexed: 12/20/2022] Open
Abstract
Sensorimotor integration in the cerebellum is essential for refining motor output, and the first stage of this processing occurs in the granule cell layer. Recent evidence suggests that granule cell layer synaptic integration can be contextually modified, although the circuit mechanisms that could mediate such modulation remain largely unknown. Here we investigate the role of ACh in regulating granule cell layer synaptic integration in male rats and mice of both sexes. We find that Golgi cells, interneurons that provide the sole source of inhibition to the granule cell layer, express both nicotinic and muscarinic cholinergic receptors. While acute ACh application can modestly depolarize some Golgi cells, the net effect of longer, optogenetically induced ACh release is to strongly hyperpolarize Golgi cells. Golgi cell hyperpolarization by ACh leads to a significant reduction in both tonic and evoked granule cell synaptic inhibition. ACh also reduces glutamate release from mossy fibers by acting on presynaptic muscarinic receptors. Surprisingly, despite these consistent effects on Golgi cells and mossy fibers, ACh can either increase or decrease the spike probability of granule cells as measured by noninvasive cell-attached recordings. By constructing an integrate-and-fire model of granule cell layer population activity, we find that the direction of spike rate modulation can be accounted for predominantly by the initial balance of excitation and inhibition onto individual granule cells. Together, these experiments demonstrate that ACh can modulate population-level granule cell responses by altering the ratios of excitation and inhibition at the first stage of cerebellar processing. SIGNIFICANCE STATEMENT: The cerebellum plays a key role in motor control and motor learning. While it is known that behavioral context can modify motor learning, the circuit basis of such modulation has remained unclear.
Here we find that a key neuromodulator, ACh, can alter the balance of excitation and inhibition at the first stage of cerebellar processing. These results suggest that ACh could play a key role in altering cerebellar learning by modifying how sensorimotor input is represented at the input layer of the cerebellum.
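The integrate-and-fire intuition in this abstract, that the effect of reduced inhibition depends on the cell's excitation-inhibition balance, can be sketched with a single leaky integrate-and-fire neuron driven by constant excitatory and inhibitory conductances. All parameters below are generic illustrative values, not fitted to granule-cell data or taken from the paper's model.

```python
import numpy as np

def lif_spike_count(g_exc, g_inh, t_steps=2000, dt=0.1):
    """Leaky integrate-and-fire neuron with constant excitatory and
    inhibitory conductances (illustrative parameters, mV and ms)."""
    v_rest, v_th, v_reset = -70.0, -50.0, -70.0
    e_exc, e_inh, tau = 0.0, -75.0, 10.0
    v, spikes = v_rest, 0
    for _ in range(t_steps):
        dv = (-(v - v_rest) + g_exc * (e_exc - v) + g_inh * (e_inh - v)) / tau
        v += dt * dv                      # forward-Euler integration
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

balanced = lif_spike_count(g_exc=0.5, g_inh=0.5)
disinhibited = lif_spike_count(g_exc=0.5, g_inh=0.1)  # ACh-like drop in inhibition
```

With balanced conductances the membrane settles below threshold and the cell stays silent; cutting inhibition while holding excitation fixed pushes the steady state above threshold and the cell fires, so the same neuromodulatory change can have opposite effects depending on a cell's starting E/I ratio.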
Collapse
|
50
|
Braganza O, Mueller-Komorowska D, Kelly T, Beck H. Quantitative properties of a feedback circuit predict frequency-dependent pattern separation. eLife 2020; 9:53148. [PMID: 32077850 PMCID: PMC7032930 DOI: 10.7554/elife.53148] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 01/20/2020] [Indexed: 12/16/2022] Open
Abstract
Feedback inhibitory motifs are thought to be important for pattern separation across species. How feedback circuits may implement pattern separation of biologically plausible, temporally structured input in mammals is, however, poorly understood. We have quantitatively determined key properties of net feedback inhibition in the mouse dentate gyrus, a region critically involved in pattern separation. Feedback inhibition is recruited steeply with a low dynamic range (0% to 4% of active GCs), and with a non-uniform spatial profile. Additionally, net feedback inhibition shows frequency-dependent facilitation, driven by strongly facilitating mossy fiber inputs. Computational analyses show a significant contribution of the feedback circuit to pattern separation of theta modulated inputs, even within individual theta cycles. Moreover, pattern separation was selectively boosted at gamma frequencies, in particular for highly similar inputs. This effect was highly robust, suggesting that frequency-dependent pattern separation is a key feature of the feedback inhibitory microcircuit. You can probably recall where you left your car this morning without too much trouble. But assuming you use the same busy parking lot every day, can you remember which space you parked in yesterday? Or the day before that? Most people find this difficult not because they cannot remember what happened two or three days ago, but because it requires distinguishing between very similar memories. The car, the parking lot, and the time of day were the same on each occasion. So how do you remember where you parked this morning? This ability to distinguish between memories of similar events depends on a brain region called the hippocampus. A subregion of the hippocampus called the dentate gyrus generates different patterns of activity in response to events that are similar but distinct.
This process is called pattern separation, and it helps ensure that you do not look for your car in yesterday’s parking space. Pattern separation in the dentate gyrus is thought to involve a form of negative feedback called feedback inhibition, a phenomenon where the output of a process acts to limit or stop the same process. To test this idea, Braganza et al. studied feedback inhibition in the dentate gyrus of mice, before building a computer model simulating the inhibition process and supplying the model with two types of realistic input. The first consisted of low-frequency theta brainwaves, which occur, for instance, in the dentate gyrus when animals explore their environment. The second consisted of higher frequency gamma brainwaves, which occur, for example, when animals experience something new. Testing the model showed that feedback inhibition contributes to pattern separation with both theta and gamma inputs. However, pattern separation is stronger with gamma input. This suggests that high frequency brainwaves in the hippocampus could help animals distinguish new events from old ones by promoting pattern separation. Various brain disorders, including Alzheimer’s disease, schizophrenia and epilepsy, involve changes in the dentate gyrus and altered brain rhythms. The current findings could help reveal how these changes contribute to memory impairments and to a reduced ability to distinguish similar experiences.
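The decorrelating effect of feedback inhibition can be caricatured with a k-winners-take-all stand-in: when inhibition lets only the most strongly driven granule cells stay active, two highly similar input patterns end up recruiting noticeably different active sets. The population size, sparsity level, and noise amplitude below are hypothetical, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def separate(pattern, k=40):
    """Feedback-inhibition-like sparsification: keep only the k most
    strongly driven cells active (a simple k-winners-take-all stand-in)."""
    out = np.zeros_like(pattern)
    winners = np.argsort(pattern)[-k:]
    out[winners] = 1.0
    return out

n = 1000
base = rng.random(n)
noisy = base + 0.1 * rng.random(n)        # a second, highly similar input pattern

corr_in = np.corrcoef(base, noisy)[0, 1]
corr_out = np.corrcoef(separate(base), separate(noisy))[0, 1]
```

Small input perturbations reshuffle which cells make the cut near the competitive threshold, so the output correlation falls well below the input correlation: pattern separation from a purely competitive, inhibition-like mechanism.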
Collapse
Affiliation(s)
- Oliver Braganza
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
| | - Daniel Mueller-Komorowska
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany; International Max Planck Research School for Brain and Behavior, University of Bonn, Bonn, Germany
| | - Tony Kelly
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
| | - Heinz Beck
- Institute for Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany; Deutsches Zentrum für Neurodegenerative Erkrankungen e.V., Bonn, Germany
| |
Collapse
|