1. Weilenmann C, Ziogas AN, Zellweger T, Portner K, Mladenović M, Kaniselvan M, Moraitis T, Luisier M, Emboras A. Single neuromorphic memristor closely emulates multiple synaptic mechanisms for energy efficient neural networks. Nat Commun 2024;15:6898. PMID: 39138160; PMCID: PMC11322324; DOI: 10.1038/s41467-024-51093-3.
Abstract
Biological neural networks include not only the long-term memory and weight multiplication capabilities commonly assumed in artificial neural networks, but also more complex functions such as short-term memory, short-term plasticity, and meta-plasticity, all collocated within each synapse. Here, we demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all of these synaptic functions. These memristors operate in a non-filamentary, low-conductance regime, which enables stable and energy-efficient operation. They can act as multi-functional hardware synapses in a class of bio-inspired deep neural networks (DNNs) that exploit both long- and short-term synaptic dynamics and are capable of meta-learning, or learning-to-learn. The resulting bio-inspired DNN is then trained to play the video game Atari Pong, a complex reinforcement learning task in a dynamic environment. Our analysis shows that the energy consumption of the DNN with multi-functional memristive synapses decreases by about two orders of magnitude compared with a pure GPU implementation. Based on this finding, we infer that memristive devices that better emulate synaptic functionalities not only broaden the applicability of neuromorphic computing, but could also improve the performance and energy costs of certain artificial intelligence applications.
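The collocated synaptic functions listed above can be caricatured in software. Below is a minimal Python sketch, assuming a Tsodyks-Markram-style short-term efficacy on top of a slow weight with a saturating, meta-plasticity-like consolidation step; the class name and all parameter values are illustrative assumptions, not the device model from the paper.

```python
import numpy as np

class MultiFunctionalSynapse:
    """Toy synapse with collocated long- and short-term dynamics (illustrative only)."""

    def __init__(self, w_long=0.5, tau_short=50.0, facilitation=0.1):
        self.w_long = w_long          # long-term weight (slow, persistent)
        self.u = 0.0                  # short-term efficacy (fast, decaying)
        self.tau_short = tau_short    # decay time constant of short-term memory (ms)
        self.facilitation = facilitation

    def transmit(self, pre_spike: bool, dt: float = 1.0) -> float:
        # Short-term memory: efficacy decays between spikes, facilitates on a spike.
        self.u *= np.exp(-dt / self.tau_short)
        if pre_spike:
            self.u += self.facilitation * (1.0 - self.u)
        # Effective transmission combines both timescales (weight multiplication).
        return self.w_long * (1.0 + self.u) * (1.0 if pre_spike else 0.0)

    def consolidate(self, dw: float, meta_rate: float = 0.01):
        # Meta-plasticity caricature: updates shrink as the weight saturates.
        self.w_long += dw * meta_rate * self.w_long * (1.0 - self.w_long)

syn = MultiFunctionalSynapse()
outputs = [syn.transmit(pre_spike=(t % 3 == 0)) for t in range(12)]
```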
Affiliation(s)
- Till Zellweger: Integrated Systems Laboratory, ETH Zurich, Zurich, Switzerland
- Kevin Portner: Integrated Systems Laboratory, ETH Zurich, Zurich, Switzerland
- Mathieu Luisier: Integrated Systems Laboratory, ETH Zurich, Zurich, Switzerland
2. Gu S, Mattar MG, Tang H, Pan G. Emergence and reconfiguration of modular structure for artificial neural networks during continual familiarity detection. Sci Adv 2024;10:eadm8430. PMID: 39058783; PMCID: PMC11277393; DOI: 10.1126/sciadv.adm8430.
Abstract
Advances in artificial intelligence enable neural networks to learn a wide variety of tasks, yet our understanding of the learning dynamics of these networks remains limited. Here, we study the temporal dynamics during learning of Hebbian feedforward neural networks in tasks of continual familiarity detection. Drawing inspiration from network neuroscience, we examine the network's dynamic reconfiguration, focusing on how network modules evolve throughout learning. Through a comprehensive assessment involving metrics like network accuracy, modular flexibility, and distribution entropy across diverse learning modes, our approach reveals various previously unknown patterns of network reconfiguration. We find that the emergence of network modularity is a salient predictor of performance and that modularization strengthens with increasing flexibility throughout learning. These insights not only elucidate the nuanced interplay of network modularity, accuracy, and learning dynamics but also bridge our understanding of learning in artificial and biological agents.
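The training-and-tracking loop described above can be made concrete. A minimal sketch, assuming a simple Hebbian update with decay and measuring modularity on a hidden-unit similarity graph via networkx; the network sizes, update rule, and graph construction are assumptions, not the paper's setup.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
n_in, n_hid = 64, 32
W = 0.01 * rng.standard_normal((n_hid, n_in))

def hebbian_step(W, x, lr=0.05, decay=0.999):
    y = np.tanh(W @ x)
    return decay * W + lr * np.outer(y, x)   # simple Hebbian update with decay

def modularity_of(W):
    # Score the community structure of a hidden-unit similarity graph.
    G = nx.from_numpy_array(np.abs(W @ W.T))
    parts = community.greedy_modularity_communities(G, weight="weight")
    return community.modularity(G, parts, weight="weight")

stimuli = rng.standard_normal((200, n_in))    # stream of inputs to "familiarize"
for t, x in enumerate(stimuli):
    W = hebbian_step(W, x)
    if t % 50 == 0:
        print(t, round(modularity_of(W), 3))  # track modularity through learning
```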
Affiliation(s)
- Shi Gu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen, China
- Marcelo G. Mattar: Department of Psychology, New York University, New York, NY 10003, USA
- Huajin Tang: College of Computer Science and Technology, Zhejiang University, Hangzhou, China; State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
- Gang Pan: College of Computer Science and Technology, Zhejiang University, Hangzhou, China; State Key Laboratory of Brain Machine Intelligence, Zhejiang University, Hangzhou, China
3. Aitken K, Campagnola L, Garrett ME, Olsen SR, Mihalas S. Simple synaptic modulations implement diverse novelty computations. Cell Rep 2024;43:114188. PMID: 38713584; DOI: 10.1016/j.celrep.2024.114188.
Abstract
Detecting novelty is ethologically useful for an organism's survival. Recent experiments characterize how different types of novelty over timescales from seconds to weeks are reflected in the activity of excitatory and inhibitory neuron types. Here, we introduce a learning mechanism, familiarity-modulated synapses (FMSs), consisting of multiplicative modulations dependent on presynaptic or pre/postsynaptic neuron activity. With FMSs, network responses that encode novelty emerge under unsupervised continual learning and minimal connectivity constraints. Implementing FMSs within an experimentally constrained model of a visual cortical circuit, we demonstrate the generalizability of FMSs by simultaneously fitting absolute, contextual, and omission novelty effects. Our model also reproduces functional diversity within cell subpopulations, leading to experimentally testable predictions about connectivity and synaptic dynamics that can produce both population-level novelty responses and heterogeneous individual neuron signals. Altogether, our findings demonstrate how simple plasticity mechanisms within a cortical circuit structure can produce qualitatively distinct and complex novelty responses.
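The FMS rule itself is compact enough to state in code. A minimal sketch, assuming rate-coded (nonnegative) inputs and the presynaptic-only variant; the decay and learning-rate values are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 100, 20
W = rng.uniform(0.0, 1.0, size=(n_post, n_pre)) / n_pre   # nonnegative weights
M = np.zeros_like(W)                                      # familiarity modulation

def respond(x):
    return (W * (1.0 + M)) @ x        # multiplicative modulation of each synapse

def fms_update(x, lam=0.9, eta=0.5):
    """Presynaptic-only FMS: repeated inputs depress their own channels."""
    global M
    M = lam * M - eta * np.tile(x, (n_post, 1))
    np.clip(M, -1.0, 0.0, out=M)      # keep effective weights nonnegative

x = rng.uniform(0.0, 1.0, n_pre)      # a stimulus coded as firing rates
r_novel = respond(x).sum()            # first presentation
fms_update(x)
r_familiar = respond(x).sum()         # second presentation: suppressed
print(r_novel, r_familiar, r_familiar < r_novel)   # the novelty response decays
```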
Affiliation(s)
- Kyle Aitken: Center for Data-Driven Discovery for Biology, Allen Institute, Seattle, WA 98109, USA
- Shawn R Olsen: Allen Institute for Neural Dynamics, Seattle, WA 98109, USA
- Stefan Mihalas: Center for Data-Driven Discovery for Biology, Allen Institute, Seattle, WA 98109, USA; Applied Mathematics, University of Washington, Seattle, WA 98195, USA
4. Mastrovito D, Liu YH, Kusmierz L, Shea-Brown E, Koch C, Mihalas S. Transition to chaos separates learning regimes and relates to measure of consciousness in recurrent neural networks. bioRxiv 2024:2024.05.15.594236. Preprint. PMID: 38798582; PMCID: PMC11118502; DOI: 10.1101/2024.05.15.594236.
Abstract
Recurrent neural networks exhibit chaotic dynamics when the variance of their connection strengths exceeds a critical value. Recent work indicates that connection variance also modulates learning strategies: networks learn "rich" representations when initialized with low coupling and "lazier" solutions with larger variance. Using Watts-Strogatz networks of varying sparsity, structure, and hidden-weight variance, we find that the critical coupling strength dividing chaotic from ordered dynamics also differentiates rich and lazy learning strategies. Training moves both stable and chaotic networks closer to the edge of chaos, with networks learning richer representations before the transition to chaos. In contrast, biologically realistic connectivity structures foster stability over a wide range of variances. The transition to chaos is also reflected in a measure that clinically discriminates levels of consciousness, the perturbational complexity index (PCIst). Networks with high PCIst values exhibit stable dynamics and rich learning, suggesting that a consciousness prior may promote rich learning. The results suggest a clear relationship between critical dynamics, learning regimes, and complexity-based measures of consciousness.
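The basic premise is easy to reproduce. The toy sketch below builds recurrent weights on a Watts-Strogatz topology and sweeps the weight scale, assuming the standard random-network criterion that the spectral radius of the weight matrix crossing 1 marks the transition to chaos in a tanh RNN; all sizes and parameters are illustrative.

```python
import numpy as np
import networkx as nx

def ws_recurrent_matrix(n=200, k=10, p=0.1, sigma=1.0, seed=0):
    """Recurrent weights on a Watts-Strogatz topology with given weight scale."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=seed))
    # Scale so the spectral radius is roughly sigma for this sparsity.
    return A * rng.normal(0.0, sigma / np.sqrt(k), size=(n, n))

def spectral_radius(W):
    return np.max(np.abs(np.linalg.eigvals(W)))

# radius < 1 suggests ordered dynamics (rich learning, per the paper);
# radius > 1 suggests the chaotic regime (lazier solutions).
for sigma in [0.5, 1.0, 1.5, 2.0]:
    print(sigma, round(spectral_radius(ws_recurrent_matrix(sigma=sigma)), 2))
```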
5. Lakshminarasimhan KJ, Xie M, Cohen JD, Sauerbrei BA, Hantman AW, Litwin-Kumar A, Escola S. Specific connectivity optimizes learning in thalamocortical loops. Cell Rep 2024;43:114059. PMID: 38602873; PMCID: PMC11104520; DOI: 10.1016/j.celrep.2024.114059.
Abstract
Thalamocortical loops have a central role in cognition and motor control, but precisely how they contribute to these processes is unclear. Recent studies showing evidence of plasticity in thalamocortical synapses indicate a role for the thalamus in shaping cortical dynamics through learning. Since signals undergo a compression from the cortex to the thalamus, we hypothesized that the computational role of the thalamus depends critically on the structure of corticothalamic connectivity. To test this, we identified the optimal corticothalamic structure that promotes biologically plausible learning in thalamocortical synapses. We found that corticothalamic projections specialized to communicate an efference copy of the cortical output benefit motor control, while communicating the modes of highest variance is optimal for working memory tasks. We analyzed neural recordings from mice performing grasping and delayed discrimination tasks and found corticothalamic communication consistent with these predictions. These results suggest that the thalamus orchestrates cortical dynamics in a functionally precise manner through structured connectivity.
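The two corticothalamic strategies contrasted above reduce to simple linear algebra. A sketch under assumed shapes (cortical activity X compressed to a few thalamic units), with "modes of highest variance" implemented as projection onto leading principal components; nothing here reproduces the paper's learning model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cortex, n_thal, T = 120, 10, 500
X = rng.standard_normal((T, n_cortex)) @ rng.standard_normal((n_cortex, n_cortex)) / 10
w_out = rng.standard_normal(n_cortex)        # assumed cortical output direction

def efference_copy(X, w_out, n_thal):
    # Thalamus receives copies of the cortical output (motor-control optimum).
    return np.tile(X @ w_out, (n_thal, 1)).T / n_thal

def top_variance_modes(X, n_thal):
    # Corticothalamic weights span the top principal components of cortical
    # activity (working-memory optimum, per the abstract).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_thal].T

thal_motor = efference_copy(X, w_out, n_thal)
thal_memory = top_variance_modes(X, n_thal)
print(thal_motor.shape, thal_memory.shape)   # (500, 10) each
```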
Affiliation(s)
- Marjorie Xie: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Jeremy D Cohen: Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Britton A Sauerbrei: Department of Neurosciences, Case Western Reserve University, Cleveland, OH 44106, USA
- Adam W Hantman: Neuroscience Center, University of North Carolina, Chapel Hill, NC 27559, USA
- Ashok Litwin-Kumar: Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Sean Escola: Department of Psychiatry, Columbia University, New York, NY 10032, USA
6. Yao Z, Sun K, He S. Synchronization in fractional-order neural networks by the energy balance strategy. Cogn Neurodyn 2024;18:701-713. PMID: 39554725; PMCID: PMC11564445; DOI: 10.1007/s11571-023-10023-7.
Abstract
To account for individual differences between neurons, a fractional-order framework is introduced in which neurons with different orders represent the individual differences that arise during cell differentiation. In this paper, the fractional-order FitzHugh-Nagumo (FHN) neural circuit is used to reproduce firing patterns. In addition, an energy balance strategy is applied to govern inter-neuronal communication: neurons with an energy imbalance exchange information, whereas the synaptic channels are blocked once energy balance is achieved. Two neurons coupled by this strategy achieve phase synchronization and phase locking, indicating that they spike either simultaneously or at a fixed interval. Similar synchronization results are obtained in a chain neuronal network, where the neurons exhibit the same firing patterns and the synchronization factor is close to 1. Notably, order diversity among the neurons introduces heterogeneity and a gradient field in the regular network, and a target wave develops over time. As the wave spreads through the network, silent and excited states appear across the whole network. The formation and diffusion of the target wave reveal how information is transmitted in the neuronal network, indicating that individual differences play an essential role in the collective behavior of neurons.
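The gating idea can be illustrated with a toy simulation. The sketch below uses classical integer-order FHN neurons (a fractional-order solver is beyond a short example) and a placeholder energy function, so it conveys only the on/off coupling logic, not the paper's model.

```python
import numpy as np

def fhn_rhs(v, w, I=0.5, a=0.7, b=0.8, tau=12.5):
    """Classical FitzHugh-Nagumo right-hand side (vectorized over neurons)."""
    return v - v**3 / 3 - w + I, (v + a - b * w) / tau

def energy(v, w):
    return 0.5 * v**2 + 0.5 * w**2       # placeholder energy, not the paper's

dt, steps, k = 0.01, 100_000, 0.1
v = np.array([0.1, -0.4])
w = np.array([0.0, 0.2])
for _ in range(steps):
    e = energy(v, w)
    gate = k if abs(e[0] - e[1]) > 1e-3 else 0.0   # couple while energies differ,
    dv, dw = fhn_rhs(v, w)                          # block once they balance
    v = v + dt * (dv + gate * (v[::-1] - v))
    w = w + dt * dw
print(abs(v[0] - v[1]))   # small difference indicates near-synchronized neurons
```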
Affiliation(s)
- Zhao Yao: School of Physics, Central South University, Changsha 410083, China
- Kehui Sun: School of Physics, Central South University, Changsha 410083, China
- Shaobo He: School of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
7. Monosov IE. Curiosity: primate neural circuits for novelty and information seeking. Nat Rev Neurosci 2024;25:195-208. PMID: 38263217; DOI: 10.1038/s41583-023-00784-9.
Abstract
For many years, neuroscientists have investigated the behavioural, computational and neurobiological mechanisms that support value-based decisions, revealing how humans and animals make choices to obtain rewards. However, many decisions are influenced by factors other than the value of physical rewards or second-order reinforcers (such as money). For instance, animals (including humans) frequently explore novel objects that have no intrinsic value solely because they are novel and they exhibit the desire to gain information to reduce their uncertainties about the future, even if this information cannot lead to reward or assist them in accomplishing upcoming tasks. In this Review, I discuss how circuits in the primate brain responsible for detecting, predicting and assessing novelty and uncertainty regulate behaviour and give rise to these behavioural components of curiosity. I also briefly discuss how curiosity-related behaviours arise during postnatal development and point out some important reasons for the persistence of curiosity across generations.
Affiliation(s)
- Ilya E Monosov: Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA; Department of Electrical Engineering, Washington University, St. Louis, MO, USA; Department of Biomedical Engineering, Washington University, St. Louis, MO, USA; Department of Neurosurgery, Washington University, St. Louis, MO, USA; Pain Center, Washington University, St. Louis, MO, USA
8. Read J, Delhaye E, Sougné J. Computational models can distinguish the contribution from different mechanisms to familiarity recognition. Hippocampus 2024;34:36-50. PMID: 37985213; DOI: 10.1002/hipo.23588.
Abstract
Familiarity is the feeling of knowing that something has already been seen in our past. Over the past decades, several attempts have been made to model familiarity using artificial neural networks. Recently, two learning algorithms successfully reproduced the functioning of the perirhinal cortex, a key structure involved in familiarity: Hebbian and anti-Hebbian learning. However, the two learning rules perform very differently, raising the question of their complementarity. In this work, we designed two distinct computational models, the Hebbian model and the anti-Hebbian model, each combining deep learning with the corresponding learning rule to reproduce familiarity on natural images. We compared the performance of both models across different simulations to highlight the inner workings of the two learning rules. We show that the anti-Hebbian model fits human behavioral data, whereas the Hebbian model fails to do so for large training set sizes. In addition, we observed that only the Hebbian model is highly sensitive to homogeneity between images. Taken together, we interpret these results in light of the distinction between absolute and relative familiarity. Our framework offers a novel way to distinguish the contributions of these familiarity mechanisms to the overall feeling of familiarity. Viewed as complementary, the two models allow us to make new testable predictions that could shed light on the familiarity phenomenon.
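The complementary signatures of the two rules show up even in a toy familiarity discriminator. A schematic sketch, assuming random feature vectors and an energy-like readout rather than the paper's deep-network features of natural images.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_seen = 256, 50
seen = rng.standard_normal((n_seen, d)) / np.sqrt(d)   # "studied" items

def train(rule="hebbian", lr=0.1):
    W = np.zeros((d, d))
    for x in seen:
        dW = lr * np.outer(x, x)
        W += dW if rule == "hebbian" else -dW    # anti-Hebbian flips the sign
    return W

def familiarity(W, x):
    return float(x @ W @ x)      # energy-like familiarity readout

for rule in ["hebbian", "anti-hebbian"]:
    W = train(rule)
    old = np.mean([familiarity(W, x) for x in seen])
    new = np.mean([familiarity(W, x)
                   for x in rng.standard_normal((50, d)) / np.sqrt(d)])
    print(rule, round(old, 3), round(new, 3))
# Hebbian: familiar items score high; anti-Hebbian: familiar items score low.
# Either sign separates old from new, but the rules degrade differently as the
# training set grows or becomes homogeneous, as the paper reports.
```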
Affiliation(s)
- John Read: GIGA Centre de Recherche du Cyclotron In Vivo Imaging, University of Liège, Liège, Belgium
- Emma Delhaye: GIGA Centre de Recherche du Cyclotron In Vivo Imaging, University of Liège, Liège, Belgium; Psychology and Cognitive Neuroscience Research Unit, University of Liège, Liège, Belgium
- Jacques Sougné: Psychology and Cognitive Neuroscience Research Unit, University of Liège, Liège, Belgium; UDI-FPLSE, University of Liège, Liège, Belgium
9. Zou X, Ji Z, Zhang T, Huang T, Wu S. Visual information processing through the interplay between fine and coarse signal pathways. Neural Netw 2023;166:692-703. PMID: 37604078; DOI: 10.1016/j.neunet.2023.07.048.
Abstract
Object recognition is often viewed as a feedforward, bottom-up process in machine learning, but in real neural systems it is a complicated process that involves the interplay between two signal pathways. One is the parvocellular pathway (P-pathway), which is slow and extracts fine features of objects; the other is the magnocellular pathway (M-pathway), which is fast and extracts coarse features. It has been suggested that the interplay between the two pathways endows the neural system with the capacity to process visual information rapidly, adaptively, and robustly. However, the underlying computational mechanism remains largely unknown. In this study, we build a two-pathway model to elucidate the computational properties associated with the interactions between the two visual pathways. Specifically, we model the two pathways using two convolutional neural networks: one mimics the P-pathway, referred to as FineNet, which is deep, has small kernels, and receives detailed visual inputs; the other mimics the M-pathway, referred to as CoarseNet, which is shallow, has large kernels, and receives blurred visual inputs. We show that CoarseNet can learn from FineNet through imitation to improve its performance, that FineNet can benefit from the feedback of CoarseNet to improve its robustness to noise, and that the two pathways interact to achieve coarse-to-fine information processing. Using visual backward masking as an example, we further demonstrate that our model can explain visual cognitive behaviors that involve the interplay between the two pathways. We hope that this study offers insight into the interaction principles between the two visual pathways.
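The architectural contrast is easy to state in code. A heavily simplified PyTorch sketch with made-up layer sizes; the imitation step is shown as a distillation-style loss, which is an assumption about how "learning from FineNet" could be wired, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineNet(nn.Module):
    """P-pathway caricature: deeper, small kernels, detailed input."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class CoarseNet(nn.Module):
    """M-pathway caricature: shallow, large kernels, blurred input."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        x = F.avg_pool2d(x, 4)          # crude blur/downsample of the input
        return self.head(self.features(x).flatten(1))

# Imitation: CoarseNet matches FineNet's softened predictions.
fine, coarse = FineNet(), CoarseNet()
imgs = torch.randn(8, 1, 32, 32)
loss = F.kl_div(F.log_softmax(coarse(imgs), dim=1),
                F.softmax(fine(imgs).detach(), dim=1), reduction="batchmean")
loss.backward()
```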
Affiliation(s)
- Xiaolong Zou: School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Beijing Academy of Artificial Intelligence, Beijing, China
- Zilong Ji: School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Cognitive Neuroscience, University College London, London, UK
- Tianqiu Zhang: School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Tiejun Huang: Beijing Academy of Artificial Intelligence, Beijing, China; School of Computer Science, Peking University, Beijing, China
- Si Wu: School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China; Beijing Academy of Artificial Intelligence, Beijing, China
10. Fang C, Aronov D, Abbott LF, Mackevicius EL. Neural learning rules for generating flexible predictions and computing the successor representation. eLife 2023;12:e80680. PMID: 36928104; PMCID: PMC10019889; DOI: 10.7554/eLife.80680.
Abstract
The predictive nature of the hippocampus is thought to be useful for memory-guided cognitive behaviors. Inspired by the reinforcement learning literature, this notion has been formalized as a predictive map called the successor representation (SR). The SR captures a number of observations about hippocampal activity. However, the algorithm does not provide a neural mechanism for how such representations arise. Here, we show that the dynamics of a recurrent neural network naturally compute the SR when the synaptic weights match the transition probability matrix. Interestingly, the predictive horizon can be flexibly modulated simply by changing the network gain. We derive simple, biologically plausible learning rules for learning the SR in a recurrent network. We test our model with realistic inputs and match hippocampal data recorded during random foraging. Taken together, our results suggest that the SR is more accessible in neural circuits than previously thought and can support a broad range of cognitive functions.
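The central claim is directly checkable: a linear recurrent network whose weights equal the transition matrix T, iterated with gain gamma, converges to rows of the closed-form SR, M = (I - gamma*T)^-1. A minimal sketch on a ring random walk (linear units assumed):

```python
import numpy as np

n, gamma = 20, 0.9
T = np.zeros((n, n))                     # random walk on a ring
for s in range(n):
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

M_closed = np.linalg.inv(np.eye(n) - gamma * T)   # closed-form SR

def sr_row_from_dynamics(s, iters=500):
    """Run the recurrent dynamics x <- input + gamma * T.T @ x to steady state."""
    inp = np.eye(n)[s]                   # one-hot input at state s
    x = np.zeros(n)
    for _ in range(iters):
        x = inp + gamma * T.T @ x        # the gain gamma sets the horizon
    return x                             # fixed point equals row s of M

print(np.allclose(sr_row_from_dynamics(0), M_closed[0], atol=1e-6))   # True
```

Lowering gamma shortens the predictive horizon without changing the weights, which is the "flexible modulation" the abstract refers to.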
Affiliation(s)
- Ching Fang: Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Dmitriy Aronov: Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- LF Abbott: Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States
- Emily L Mackevicius: Zuckerman Institute, Department of Neuroscience, Columbia University, New York, United States; Basis Research Institute, New York, United States
11. Aitken K, Mihalas S. Neural population dynamics of computing with synaptic modulations. eLife 2023;12:e83035. PMID: 36820526; PMCID: PMC10072874; DOI: 10.7554/eLife.83035.
Abstract
In addition to long-timescale rewiring, synapses in the brain are subject to significant modulation on faster timescales, endowing the brain with additional means of processing information. Despite this, models of the brain such as recurrent neural networks (RNNs) often have their weights frozen after training, relying on an internal state stored in neuronal activity to hold task-relevant information. In this work, we study the computational potential and resulting dynamics of a network that relies solely on synaptic modulation during inference to process task-relevant information: the multi-plasticity network (MPN). Because the MPN has no recurrent connections, we can study the computational capabilities and dynamical behavior contributed by synaptic modulations alone. The generality of the MPN allows our results to apply to synaptic modulation mechanisms ranging from short-term synaptic plasticity (STSP) to slower modulations such as spike-timing-dependent plasticity (STDP). We thoroughly examine the neural population dynamics of the MPN trained on integration-based tasks and compare it to known RNN dynamics, finding that the two have fundamentally different attractor structures. These differences in dynamics allow the MPN to outperform its RNN counterparts on several neuroscience-relevant tests. Training the MPN across a battery of neuroscience tasks, we find that its computational capabilities in such settings are comparable to those of networks that compute with recurrent connections. Altogether, we believe this work demonstrates the possibilities of computing with synaptic modulations and highlights important motifs of these computations so that they can be identified in brain-like systems.
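A skeletal MPN forward pass can be reconstructed from the description above: frozen feedforward weights, no recurrence, and a fast modulation matrix that carries the task memory. The Hebbian-like update and all values below are assumptions, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 50, 2
W = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)     # frozen after training
W_ro = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)

def mpn_forward(x_seq, lam=0.9, eta=0.1):
    """No recurrence: task memory lives entirely in the modulations M."""
    M = np.zeros((n_hid, n_in))
    for x in x_seq:
        h = np.tanh((W * (1.0 + M)) @ x)     # modulated effective weights
        M = lam * M + eta * np.outer(h, x)   # fast Hebbian-like modulation
    return W_ro @ h                          # readout after the sequence

x_seq = rng.standard_normal((15, n_in))      # an input stream to integrate
print(mpn_forward(x_seq))                    # decision based on the whole sequence
```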
Affiliation(s)
- Kyle Aitken: Allen Institute, MindScope Program, Seattle, United States
12. Ji-An L, Stefanini F, Benna MK, Fusi S. Face familiarity detection with complex synapses. iScience 2022;26:105856. PMID: 36636347; PMCID: PMC9829748; DOI: 10.1016/j.isci.2022.105856.
Abstract
Synaptic plasticity is a complex phenomenon involving multiple biochemical processes that operate on different timescales. Complexity can greatly increase memory capacity when the variables characterizing the synaptic dynamics have limited precision, as shown in simple memory retrieval problems involving random patterns. Here we turn to a real-world problem, face familiarity detection, and we show that synaptic complexity can be harnessed to store in memory a large number of faces that can be recognized at a later time. The number of recognizable faces grows almost linearly with the number of synapses and quadratically with the number of neurons. Complex synapses outperform simple ones characterized by a single variable, even when the total number of dynamical variables is matched. Complex and simple synapses have distinct signatures that are testable in experiments. Our results indicate that a system with complex synapses can be used in real-world tasks such as face familiarity detection.
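The complex synapses here follow the multi-variable chain of Benna and Fusi, in which a fast variable is dragged back by progressively slower ones. A compact discrete-time sketch with the geometric coupling structure assumed from that model; parameters are illustrative, not the paper's implementation.

```python
import numpy as np

class ComplexSynapse:
    """Benna-Fusi-style chain: u[0] is the visible weight; deeper variables
    change ever more slowly and pull the weight back toward old values."""

    def __init__(self, m=6):
        k = np.arange(m)
        self.C = 2.0 ** k                    # growing "capacitances" (slower levels)
        self.g = 2.0 ** (-2.0 * k[:-1])      # shrinking couplings between levels
        self.u = np.zeros(m)

    def step(self, dw=0.0, dt=0.1):
        u, flow = self.u, np.zeros_like(self.u)
        flow[:-1] += self.g * (u[1:] - u[:-1])   # pull from the deeper level
        flow[1:] += self.g * (u[:-1] - u[1:])    # symmetric back-reaction
        self.u = u + dt * (flow / self.C)
        self.u[0] += dw                          # plasticity hits the fast variable
        return self.u[0]                         # current synaptic efficacy

syn = ComplexSynapse()
for t in range(1000):
    w = syn.step(dw=0.5 if t == 0 else 0.0)      # one potentiation event, then decay
print(round(w, 3))   # residual memory retained by the slow variables
```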
Affiliation(s)
- Li Ji-An: Zuckerman Institute, Columbia University, New York, NY 10027, USA; Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA
- Fabio Stefanini: Zuckerman Institute, Columbia University, New York, NY 10027, USA
- Marcus K. Benna: Zuckerman Institute, Columbia University, New York, NY 10027, USA; Department of Neurobiology, School of Biological Sciences, University of California San Diego, La Jolla, CA 92093, USA (corresponding author)
- Stefano Fusi: Zuckerman Institute, Columbia University, New York, NY 10027, USA (corresponding author)
13. Dasgupta S, Hattori D, Navlakha S. A neural theory for counting memories. Nat Commun 2022;13:5961. PMID: 36217003; PMCID: PMC9551066; DOI: 10.1038/s41467-022-33577-2.
Abstract
Keeping track of the number of times different stimuli have been experienced is a critical computation for behavior. Here, we propose a theoretical two-layer neural circuit that stores counts of stimulus occurrence frequencies. This circuit implements a data structure, called a count sketch, that is commonly used in computer science to maintain item frequencies in streaming data. Our first model implements a count sketch using Hebbian synapses and outputs stimulus-specific frequencies. Our second model uses anti-Hebbian plasticity and only tracks frequencies within four count categories ("1-2-3-many"), which trades off the number of categories that need to be distinguished against the potential ethological value of those categories. We show how both models can robustly track stimulus occurrence frequencies, thus expanding the traditional novelty-familiarity memory axis from binary to discrete with more than two possible values. Finally, we show that an implementation of the "1-2-3-many" count sketch exists in the insect mushroom body.
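The count sketch data structure translates almost line for line. Below is a small count-min-style sketch with a Hebbian reading (each row a neuron population, each hash a random projection), plus the "1-2-3-many" quantization; the class and its details are schematic, not the paper's circuit.

```python
import numpy as np

class NeuralCountSketch:
    """Count-min sketch: r populations of w 'neurons'; Hebbian increments."""

    def __init__(self, rows=4, width=64, seed=0):
        rng = np.random.default_rng(seed)
        self.salts = rng.integers(0, 2**32, size=rows)
        self.table = np.zeros((rows, width), dtype=int)
        self.width = width

    def _cells(self, item):
        # Each salt acts like an independent random projection of the stimulus.
        return [hash((int(s), item)) % self.width for s in self.salts]

    def observe(self, item):
        for r, c in enumerate(self._cells(item)):
            self.table[r, c] += 1    # Hebbian strengthening of the active synapse

    def count(self, item):
        # The min over rows bounds the overestimate from hash collisions.
        return min(self.table[r, c] for r, c in enumerate(self._cells(item)))

    def category(self, item):
        # The anti-Hebbian variant in the paper resolves only "1-2-3-many".
        n = self.count(item)
        return str(n) if n <= 3 else "many"

cs = NeuralCountSketch()
for stim in ["odorA", "odorA", "odorB"] + ["odorC"] * 7:
    cs.observe(stim)
print(cs.count("odorA"), cs.category("odorB"), cs.category("odorC"))  # 2 1 many
```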
Affiliation(s)
- Sanjoy Dasgupta: Computer Science and Engineering Department, University of California San Diego, La Jolla, CA 92037, USA
- Daisuke Hattori: Department of Physiology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Saket Navlakha: Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
14. Zhang K, Bromberg-Martin ES, Sogukpinar F, Kocher K, Monosov IE. Surprise and recency in novelty detection in the primate brain. Curr Biol 2022;32:2160-2173.e6. DOI: 10.1016/j.cub.2022.03.064.
15. Confavreux B, Vogels TP. A familiar thought: Machines that replace us? Neuron 2022;110:361-362. PMID: 35114107; DOI: 10.1016/j.neuron.2022.01.014.
Abstract
In this issue of Neuron, Tyulmankov et al. (2022) propose a model for familiarity detection whose parameters, including those guiding plasticity, are fully machine-tuned.
Affiliation(s)
- Tim P Vogels: Institute of Science and Technology, 3400 Klosterneuburg, Austria