1
Jannesar N, Akbarzadeh-Sherbaf K, Safari S, Vahabie AH. SSTE: Syllable-Specific Temporal Encoding to FORCE-learn audio sequences with an associative memory approach. Neural Netw 2024; 177:106368. [PMID: 38761415] [DOI: 10.1016/j.neunet.2024.106368] [Received: 12/09/2023] [Revised: 03/28/2024] [Accepted: 05/05/2024] [Indexed: 05/20/2024]
Abstract
The circuitry and pathways of the brains of humans and other species have long inspired researchers and system designers to develop accurate, efficient systems that solve real-world problems in real time. We propose Syllable-Specific Temporal Encoding (SSTE) to learn vocal sequences in a reservoir of Izhikevich neurons, forming associations between exclusive input activities and their corresponding syllables in the sequence. Our model converts audio signals to cochleograms using the CAR-FAC model to simulate a brain-like auditory learning and memorization process. The reservoir is trained with a hardware-friendly approach to FORCE learning. Reservoir computing can yield associative-memory dynamics with far less computational complexity than conventional RNNs. SSTE-based learning achieves competent accuracy and stable recall of spatiotemporal sequences with fewer reservoir inputs than existing encodings proposed for similar purposes, offering resource savings. Because the encoding marks syllable onsets, recall can begin from any desired point in the sequence, making the approach particularly suitable for recalling subsets of long vocal sequences. SSTE can learn new signals without forgetting previously memorized sequences and is robust to occasional noise, a characteristic of real-world scenarios. The components of this model are configured to reduce resource consumption and computational intensity, addressing cost-efficiency issues that might arise in future implementations aiming for compact, real-time, low-power operation. Overall, this model offers a brain-inspired pattern-generation network for vocal sequences that can be extended with other bio-inspired computations to explore their potential for brain-like auditory perception.
Future designs could draw on this model to build embedded devices that learn vocal sequences and recall them on demand in real time. Such systems could acquire language and speech, operate as artificial assistants, and convert text to speech in the presence of natural noise and corruption in audio data.
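The FORCE procedure named in the abstract trains only a linear readout, by recursive least squares (RLS), while the readout is fed back into the reservoir. A minimal rate-based sketch of that loop (the paper uses spiking Izhikevich neurons and a hardware-friendly FORCE variant; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, tau = 200, 2000, 1e-3, 0.01
J = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
wf = rng.uniform(-1, 1, N)                          # fixed feedback weights
w = np.zeros(N)                                     # readout, trained by RLS
P = np.eye(N)                                       # running inverse correlation
x = 0.5 * rng.standard_normal(N)

t = np.arange(T) * dt
target = np.sin(2 * np.pi * 2 * t)                  # toy 2 Hz target signal

abs_err = np.zeros(T)
for i in range(T):
    r = np.tanh(x)
    z = w @ r                                       # readout
    x += dt / tau * (-x + J @ r + wf * z)           # readout fed back in
    e = z - target[i]
    Pr = P @ r                                      # RLS step: the core of FORCE
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= e * k
    abs_err[i] = abs(e)

print(abs_err[-200:].mean() < abs_err[:200].mean())  # error should shrink during training
```

The key design choice of FORCE is that weight updates happen at every step, so the output error stays small even while the weights are still changing.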
Affiliation(s)
- Nastaran Jannesar
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
- Saeed Safari
- High Performance Embedded Architecture Lab., School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
- Abdol-Hossein Vahabie
- Department of Psychology, Faculty of Psychology and Education, University of Tehran, Tehran, Iran; Cognitive Systems Laboratory, Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran.
2
Insanally MN, Albanna BF, Toth J, DePasquale B, Fadaei SS, Gupta T, Lombardi O, Kuchibhotla K, Rajan K, Froemke RC. Contributions of cortical neuron firing patterns, synaptic connectivity, and plasticity to task performance. Nat Commun 2024; 15:6023. [PMID: 39019848] [PMCID: PMC11255273] [DOI: 10.1038/s41467-024-49895-6] [Received: 05/19/2023] [Accepted: 06/20/2024] [Indexed: 07/19/2024]
Abstract
Neuronal responses during behavior are diverse, ranging from highly reliable 'classical' responses to irregular 'non-classically responsive' firing. While a continuum of response properties is observed across neural systems, little is known about the synaptic origins and contributions of diverse responses to network function, perception, and behavior. To capture the heterogeneous responses measured from auditory cortex of rodents performing a frequency recognition task, we use a novel task-performing spiking recurrent neural network incorporating spike-timing-dependent plasticity. Reliable and irregular units contribute differentially to task performance via output and recurrent connections, respectively. Excitatory plasticity shifts the response distribution while inhibition constrains its diversity. Together both improve task performance with full network engagement. The same local patterns of synaptic inputs predict spiking response properties of network units and auditory cortical neurons from in vivo whole-cell recordings during behavior. Thus, diverse neural responses contribute to network function and emerge from synaptic plasticity rules.
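The spike-timing-dependent plasticity incorporated into the network can be summarized by the classic pair-based rule: potentiation when a presynaptic spike precedes a postsynaptic one, depression otherwise, with exponentially decaying influence. A sketch with illustrative amplitudes and time constant (not the paper's fitted parameters):

```python
import numpy as np

def stdp_dw(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Total pair-based STDP weight change for one pre/post spike-train pair.

    Spike times are in ms; tau is the plasticity window time constant."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)    # pre before post: LTP
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)    # post before pre: LTD
    return dw

print(stdp_dw([10.0], [15.0]) > 0)   # causal pairing potentiates
print(stdp_dw([15.0], [10.0]) < 0)   # anti-causal pairing depresses
```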
Affiliation(s)
- Michele N Insanally
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA.
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA.
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Badr F Albanna
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Jade Toth
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Brian DePasquale
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Center for Systems Neuroscience, Boston University, Boston, MA, 02215, USA
- Saba Shokat Fadaei
- Skirball Institute for Biomolecular Medicine, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Otolaryngology, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Neuroscience, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Department of Physiology, New York University Grossman School of Medicine, New York, NY, 10016, USA
- Trisha Gupta
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Olivia Lombardi
- Department of Otolaryngology, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Pittsburgh Hearing Research Center, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Kishore Kuchibhotla
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21218, USA
- Kanaka Rajan
- Department of Neurobiology, Harvard Medical School, Boston, MA, 02115, USA
- Kempner Institute, Harvard University, Cambridge, MA, 02138, USA
- Robert C Froemke
- Skirball Institute for Biomolecular Medicine, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Neuroscience Institute, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Otolaryngology, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Neuroscience, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Department of Physiology, New York University Grossman School of Medicine, New York, NY, 10016, USA.
- Center for Neural Science, New York University, New York, NY, 10003, USA.
3
Wang B, Torok Z, Duffy A, Bell DG, Wongso S, Velho TAF, Fairhall AL, Lois C. Unsupervised restoration of a complex learned behavior after large-scale neuronal perturbation. Nat Neurosci 2024; 27:1176-1186. [PMID: 38684893] [DOI: 10.1038/s41593-024-01630-6] [Received: 09/16/2022] [Accepted: 03/26/2024] [Indexed: 05/02/2024]
Abstract
Reliable execution of precise behaviors requires that brain circuits be resilient to variations in neuronal dynamics. In adult songbirds with stereotyped songs, genetic perturbation of the majority of excitatory neurons in HVC, a brain region involved in song production, triggered severe degradation of the song. The song fully recovered within 2 weeks, and substantial improvement occurred even when animals were prevented from singing during the recovery period, indicating that offline mechanisms enable recovery in an unsupervised manner. Song restoration was accompanied by increased excitatory synaptic input to neighboring, unmanipulated neurons in the same brain region. A model inspired by the behavioral and electrophysiological findings suggests that unsupervised single-cell and population-level homeostatic plasticity rules can support functional restoration after large-scale disruption of networks that implement sequential dynamics. These observations suggest the existence of cellular- and systems-level restorative mechanisms that ensure behavioral resilience.
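One family of unsupervised rules consistent with this kind of recovery is multiplicative synaptic scaling, in which each neuron scales its incoming weights toward a target firing rate. A toy rate-based sketch of the mechanism (target rate, learning rate, and network statistics are illustrative, not fitted to HVC):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
W = rng.uniform(0, 0.2, (N, N))       # excitatory weights after "perturbation"
inputs = rng.uniform(0.5, 1.5, N)     # fixed presynaptic drive
target, eta = 5.0, 0.05               # homeostatic set point and learning rate

for _ in range(500):
    rates = np.maximum(W @ inputs, 0.0)                    # linear-threshold rates
    # Each neuron multiplicatively rescales all of its inputs toward the set point.
    W *= (1.0 + eta * (target - rates) / target)[:, None]

rates = np.maximum(W @ inputs, 0.0)
print(np.allclose(rates, target, atol=0.1))  # all neurons back at the set point
```

The multiplicative form preserves the relative strengths of a neuron's inputs while restoring its overall activity level, which is why scaling can recover function without supervision.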
Affiliation(s)
- Bo Wang
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- Zsofia Torok
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Alison Duffy
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- David G Bell
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Shelyn Wongso
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tarciso A F Velho
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Adrienne L Fairhall
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Computational Neuroscience Center, University of Washington, Seattle, WA, USA
- Department of Physics, University of Washington, Seattle, WA, USA
- Carlos Lois
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
4
Nicola W, Newton TR, Clopath C. The impact of spike timing precision and spike emission reliability on decoding accuracy. Sci Rep 2024; 14:10536. [PMID: 38719897] [PMCID: PMC11078995] [DOI: 10.1038/s41598-024-58524-7] [Received: 03/24/2023] [Accepted: 04/01/2024] [Indexed: 05/12/2024]
Abstract
Precisely timed and reliably emitted spikes are hypothesized to serve multiple functions, including improving the accuracy and reproducibility of encoding stimuli, memories, or behaviours across trials. When these spikes occur as a repeating sequence, they can be used to encode and decode a time series. Here, we show both analytically and in simulations that the error incurred in approximating a time series with precisely timed and reliably emitted spikes decreases linearly with the number of neurons or spikes used in the decoding. This was verified numerically with synthetically generated patterns of spikes. Further, we found that if spikes were imprecise in their timing, or unreliable in their emission, the error incurred in decoding with these spikes would decrease sub-linearly. However, if the spike precision or spike reliability increased with network size, the error incurred in decoding a time series with sequences of spikes would maintain a linear decrease with network size. The spike precision had to increase linearly with network size, while the probability of spike failure had to decrease with the square root of the network size. Finally, we identified a candidate circuit to test this scaling relationship: the repeating sequences of spikes with sub-millisecond precision in area HVC (proper name) of the zebra finch. This scaling relationship can be tested using both neural data and song-spectrogram-based recordings while taking advantage of the natural fluctuation in HVC network size due to neurogenesis.
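The linear scaling claim can be illustrated with the simplest possible decoder: precisely timed, reliable spikes that sample a signal once per spike, decoded by sample-and-hold. Doubling the spike count then roughly halves the approximation error (an illustrative construction, not the paper's decoder):

```python
import numpy as np

def decode_error(n_spikes, n_eval=10_000):
    """Mean absolute error of a sample-and-hold reconstruction of one
    period of a sine from n_spikes precisely timed, reliable spikes."""
    t = np.linspace(0.0, 1.0, n_eval, endpoint=False)
    signal = np.sin(2 * np.pi * t)
    spike_times = np.arange(n_spikes) / n_spikes      # exact, never dropped
    held = np.sin(2 * np.pi * spike_times)            # value carried by each spike
    idx = np.searchsorted(spike_times, t, side="right") - 1
    return np.mean(np.abs(signal - held[idx]))

e16, e32 = decode_error(16), decode_error(32)
print(e16 / e32)  # close to 2: doubling the spike count halves the error
```

Jittering the spike times or dropping spikes at random breaks this 1/N behavior, which is the sub-linear regime the abstract describes.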
Affiliation(s)
- Wilten Nicola
- University of Calgary, Calgary, Canada.
- Department of Cell Biology and Anatomy, Calgary, Canada.
- Hotchkiss Brain Institute, Calgary, Canada.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
5
Agnes EJ, Vogels TP. Co-dependent excitatory and inhibitory plasticity accounts for quick, stable and long-lasting memories in biological networks. Nat Neurosci 2024; 27:964-974. [PMID: 38509348] [PMCID: PMC11089004] [DOI: 10.1038/s41593-024-01597-4] [Received: 06/29/2022] [Accepted: 02/08/2024] [Indexed: 03/22/2024]
Abstract
The brain's functionality is developed and maintained through synaptic plasticity. As synapses undergo plasticity, they also affect each other. The nature of such 'co-dependency' is difficult to disentangle experimentally, because multiple synapses must be monitored simultaneously. To help understand the experimentally observed phenomena, we introduce a framework that formalizes synaptic co-dependency between different connection types. The resulting model explains how inhibition can gate excitatory plasticity while neighboring excitatory-excitatory interactions determine the strength of long-term potentiation. Furthermore, we show how the interplay between excitatory and inhibitory synapses can account for the quick rise and long-term stability of a variety of synaptic weight profiles, such as orientation tuning and dendritic clustering of co-active synapses. In recurrent neuronal networks, co-dependent plasticity produces rich and stable motor cortex-like dynamics with high input sensitivity. Our results suggest an essential role for the neighborly synaptic interaction during learning, connecting micro-level physiology with network-wide phenomena.
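The inhibitory gating of excitatory plasticity described above can be caricatured in one line: the Hebbian excitatory update is multiplied by a factor that collapses when local inhibitory input is strong. A deliberately minimal sketch (the functional form and constants are ours, not the paper's co-dependency framework):

```python
import numpy as np

def dw_exc(pre, post, inh, eta=0.01):
    """Hebbian excitatory update gated by local inhibitory input.

    pre, post: pre-/postsynaptic activity; inh: local inhibitory drive.
    Strong inhibition (large inh) suppresses potentiation."""
    gate = np.exp(-inh)
    return eta * gate * pre * post

print(dw_exc(1.0, 1.0, 0.0) > dw_exc(1.0, 1.0, 2.0))  # inhibition gates LTP
```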
Affiliation(s)
- Everton J Agnes
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK.
- Biozentrum, University of Basel, Basel, Switzerland.
- Tim P Vogels
- Centre for Neural Circuits and Behaviour, University of Oxford, Oxford, UK
- Institute of Science and Technology Austria, Klosterneuburg, Austria
6
Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang (Camb) 2024; 5:225-247. [PMID: 38645618] [PMCID: PMC11025648] [DOI: 10.1162/nol_a_00133] [Received: 09/29/2022] [Accepted: 12/18/2023] [Indexed: 04/23/2024]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It aims to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
7
Maslennikov O, Perc M, Nekorkin V. Topological features of spike trains in recurrent spiking neural networks that are trained to generate spatiotemporal patterns. Front Comput Neurosci 2024; 18:1363514. [PMID: 38463243] [PMCID: PMC10920356] [DOI: 10.3389/fncom.2024.1363514] [Received: 12/30/2023] [Accepted: 02/06/2024] [Indexed: 03/12/2024]
Abstract
In this study, we focus on training recurrent spiking neural networks to generate spatiotemporal patterns in the form of closed two-dimensional trajectories. Spike trains in the trained networks are examined in terms of their dissimilarity using the Victor-Purpura distance. We apply algebraic topology methods to the matrices obtained by rank-ordering the entries of the distance matrices, specifically calculating the persistence barcodes and Betti curves. By comparing the features of different types of output patterns, we uncover the complex relations between low-dimensional target signals and the underlying multidimensional spike trains.
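The Victor-Purpura distance used here is an edit distance on spike trains: shifting a spike by Δt costs q·|Δt|, and inserting or deleting a spike costs 1. It is computed with a standard dynamic program over the two (sorted) spike trains:

```python
import numpy as np

def victor_purpura(s1, s2, q=1.0):
    """Victor-Purpura spike-train distance via dynamic programming.

    s1, s2: sorted spike-time lists. Moving a spike by dt costs q*|dt|;
    adding or deleting a spike costs 1."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # delete all spikes of s1
    D[0, :] = np.arange(m + 1)          # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                      # delete
                          D[i, j - 1] + 1,                      # insert
                          D[i - 1, j - 1]                       # shift
                          + q * abs(s1[i - 1] - s2[j - 1]))
    return D[n, m]

print(victor_purpura([1.0, 2.0], [1.0, 2.0]))   # identical trains: 0.0
print(victor_purpura([1.0], [1.5]))             # shift by 0.5: 0.5
print(victor_purpura([1.0], []))                # delete one spike: 1.0
```

The cost parameter q sets the timescale at which the metric is sensitive to spike timing: q → 0 compares only spike counts, large q treats any timing mismatch as an insertion plus a deletion.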
Affiliation(s)
- Oleg Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Matjaž Perc
- Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung City, Taiwan
- Complexity Science Hub Vienna, Vienna, Austria
- Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Vladimir Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
8
Suárez LE, Mihalik A, Milisav F, Marshall K, Li M, Vértes PE, Lajoie G, Misic B. Connectome-based reservoir computing with the conn2res toolbox. Nat Commun 2024; 15:656. [PMID: 38253577] [PMCID: PMC10803782] [DOI: 10.1038/s41467-024-44900-4] [Received: 06/19/2023] [Accepted: 01/09/2024] [Indexed: 01/24/2024]
Abstract
The connection patterns of neural circuits form a complex network. How signaling in these circuits manifests as complex cognition and adaptive behaviour remains the central question in neuroscience. Concomitant advances in connectomics and artificial intelligence open fundamentally new opportunities to understand how connection patterns shape computational capacity in biological brain networks. Reservoir computing is a versatile paradigm that uses high-dimensional, nonlinear dynamical systems to perform computations and approximate cognitive functions. Here we present conn2res: an open-source Python toolbox for implementing biological neural networks as artificial neural networks. conn2res is modular, allowing arbitrary network architecture and dynamics to be imposed. The toolbox allows researchers to input connectomes reconstructed using multiple techniques, from tract tracing to noninvasive diffusion imaging, and to impose multiple dynamical systems, from spiking neurons to memristive dynamics. The versatility of the conn2res toolbox allows us to ask new questions at the confluence of neuroscience and artificial intelligence. By reconceptualizing function as computation, conn2res sets the stage for a more mechanistic understanding of structure-function relationships in brain networks.
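The recipe that connectome-based reservoir computing follows can be sketched in plain NumPy, without relying on the conn2res API: take a connectome's adjacency matrix as the fixed recurrent weights, drive the network with an input stream, and train only a linear readout. Here a random matrix stands in for an empirical connectome, and the task is short-term memory (recall the input from a few steps back); all sizes and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 2000

A = (rng.random((N, N)) < 0.1).astype(float)      # stand-in connectome (10% density)
W = A * rng.standard_normal((N, N))               # signed weights on its edges
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
w_in = rng.standard_normal(N)

u = rng.uniform(-1, 1, T)                         # scalar input stream
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])              # fixed nonlinear reservoir
    X[t] = x

lag = 3                                           # task: recall u(t - lag)
Y = u[:-lag]
w_out = np.linalg.lstsq(X[lag:], Y, rcond=None)[0]  # train only the readout
pred = X[lag:] @ w_out
r2 = 1 - np.sum((pred - Y) ** 2) / np.sum((Y - Y.mean()) ** 2)
print(r2 > 0.5)  # the connectome-structured reservoir retains recent inputs
```

Swapping the random `A` for an empirical structural connectome, and the tanh units for spiking or memristive dynamics, is exactly the axis of variation the toolbox exposes.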
Affiliation(s)
- Laura E Suárez
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada
- Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Agoston Mihalik
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- Filip Milisav
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada
- Kenji Marshall
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Mingze Li
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada
- Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Petra E Vértes
- Department of Psychiatry, University of Cambridge, Cambridge, UK
- Guillaume Lajoie
- Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada
- Bratislav Misic
- McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada.
9
Du Z, Gupta M, Xu F, Zhang K, Zhang J, Zhou Y, Liu Y, Wang Z, Wrachtrup J, Wong N, Li C, Chu Z. Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors. Adv Sci (Weinh) 2024; 11:e2304355. [PMID: 37939304] [PMCID: PMC10787069] [DOI: 10.1002/advs.202304355] [Received: 06/29/2023] [Revised: 09/04/2023] [Indexed: 11/10/2023]
Abstract
Despite increasing interest in developing ultrasensitive widefield diamond magnetometry for various applications, achieving high temporal resolution and high sensitivity simultaneously remains a key challenge, largely because of the transfer and processing of the massive amounts of frame-based sensor data needed to capture the widefield fluorescence intensity of spin defects in diamond. In this study, we adopt a neuromorphic vision sensor that encodes changes of fluorescence intensity into spikes during optically detected magnetic resonance (ODMR) measurements, closely resembling the operation of the human visual system. This leads to highly compressed data volume and reduced latency, as well as a vast dynamic range, high temporal resolution, and an exceptional signal-to-background ratio. After a thorough theoretical evaluation, an experiment with an off-the-shelf event camera demonstrated a 13× improvement in temporal resolution, with precision in detecting ODMR resonance frequencies comparable to the state-of-the-art, highly specialized frame-based approach. We successfully deployed this technology to monitor dynamically modulated laser heating of gold nanoparticles coated on a diamond surface, a task that is recognizably difficult with existing approaches. This development provides new insights for high-precision, low-latency widefield quantum sensing, with possibilities for integration with emerging memory devices to realize more intelligent quantum sensors.
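The sensor's operating principle is simple to state: an event ("spike") is emitted whenever the log intensity at a pixel changes by more than a contrast threshold since the last event, which is what compresses slowly varying fluorescence into a sparse stream. A single-pixel sketch of that encoding (threshold and intensity trace are illustrative):

```python
import numpy as np

def events_from_trace(intensity, theta=0.1):
    """Event-camera-style encoding of one pixel's intensity trace.

    Emits (time, polarity) whenever log intensity moves by >= theta
    relative to the reference level set at the previous event."""
    ref = np.log(intensity[0])
    out = []
    for t, val in enumerate(intensity):
        while np.log(val) - ref >= theta:     # ON events while rising
            ref += theta
            out.append((t, +1))
        while np.log(val) - ref <= -theta:    # OFF events while falling
            ref -= theta
            out.append((t, -1))
    return out

trace = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 100))  # modulated fluorescence
ev = events_from_trace(trace)
print(len(ev) > 0 and all(p in (-1, 1) for _, p in ev))
```

Because no events are generated where the signal is static, data volume scales with signal dynamics rather than with frame rate, which is the source of the latency and bandwidth gains reported in the abstract.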
Affiliation(s)
- Zhiyuan Du
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Madhav Gupta
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Feng Xu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Kai Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518000, China
- Jiahua Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Yan Zhou
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518000, China
- Yiyao Liu
- Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, 510006, China
- Zhenyu Wang
- Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, 510006, China
- Frontier Research Institute for Physics, South China Normal University, Guangzhou, 510006, China
- Jörg Wrachtrup
- 3rd Institute of Physics, Research Center SCoPE and IQST, University of Stuttgart, 70569, Stuttgart, Germany
- Max Planck Institute for Solid State Research, 70569, Stuttgart, Germany
- Ngai Wong
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Can Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Zhiqin Chu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- School of Biomedical Sciences, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Hong Kong, 999077, P. R. China
10
Capone C, Lupo C, Muratore P, Paolucci PS. Beyond spiking networks: The computational advantages of dendritic amplification and input segregation. Proc Natl Acad Sci U S A 2023; 120:e2220743120. [PMID: 38019856] [PMCID: PMC10710097] [DOI: 10.1073/pnas.2220743120] [Received: 12/07/2022] [Accepted: 10/11/2023] [Indexed: 12/01/2023]
Abstract
The brain can efficiently learn a wide range of tasks, motivating the search for biologically inspired learning rules for improving current artificial intelligence technology. Most biological models are composed of point neurons and cannot achieve state-of-the-art performance in machine learning. Recent works have proposed that input segregation (neurons receive sensory information and higher-order feedback in segregated compartments) and nonlinear dendritic computation would support error backpropagation in biological neurons. However, these approaches require propagating errors with a fine spatiotemporal structure to all the neurons, which is unlikely to be feasible in a biological network. To relax this assumption, we suggest that bursts and dendritic input segregation provide natural support for target-based learning, which propagates targets rather than errors. A coincidence mechanism between the basal and the apical compartments allows for generating high-frequency bursts of spikes. This architecture supports a burst-dependent learning rule, based on the comparison between the target bursting activity triggered by the teaching signal and the one caused by the recurrent connections. We show that this framework can be used to efficiently solve spatiotemporal tasks, such as context-dependent storage and recall of three-dimensional trajectories, and navigation tasks. Finally, we suggest that this neuronal architecture naturally allows for orchestrating "hierarchical imitation learning", enabling the decomposition of challenging long-horizon decision-making tasks into simpler subtasks. We show a possible implementation of this in a two-level network, where the high-level network produces the contextual signal for the low-level network.
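The contrast with error backpropagation is that only a target activity pattern is propagated, and each neuron moves its recurrent drive toward the burst pattern evoked by the teacher. A delta-rule caricature of that idea (a rate-based sketch with our own constants, far simpler than the paper's two-compartment bursting model):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
W = 0.1 * rng.standard_normal((N, N))    # recurrent weights
r = rng.random(N)                        # presynaptic activity
target = rng.uniform(0.2, 0.8, N)        # teacher-evoked burst pattern

for _ in range(300):
    burst = np.tanh(W @ r)                         # burst rate from recurrent input
    W += 0.05 * np.outer(target - burst, r)        # move drive toward the target

print(np.max(np.abs(np.tanh(W @ r) - target)))     # residual mismatch, near zero
```

Each row of `W` is updated using only its own neuron's target/burst mismatch and the presynaptic rates, so no structured error signal needs to travel through the network.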
Affiliation(s)
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Cosimo Lupo
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome 00185, Italy
- Paolo Muratore
- Scuola Internazionale Superiore di Studi Avanzati (SISSA), Visual Neuroscience Lab, Trieste 34136, Italy
11
Dura-Bernal S, Griffith EY, Barczak A, O'Connell MN, McGinnis T, Moreira JVS, Schroeder CE, Lytton WW, Lakatos P, Neymotin SA. Data-driven multiscale model of macaque auditory thalamocortical circuits reproduces in vivo dynamics. Cell Rep 2023; 42:113378. [PMID: 37925640] [PMCID: PMC10727489] [DOI: 10.1016/j.celrep.2023.113378] [Received: 09/07/2022] [Revised: 09/05/2023] [Accepted: 10/19/2023] [Indexed: 11/07/2023]
Abstract
We developed a detailed model of macaque auditory thalamocortical circuits, including primary auditory cortex (A1), medial geniculate body (MGB), and thalamic reticular nucleus, utilizing the NEURON simulator and NetPyNE tool. The A1 model simulates a cortical column with over 12,000 neurons and 25 million synapses, incorporating data on cell-type-specific neuron densities, morphology, and connectivity across six cortical layers. It is reciprocally connected to the MGB thalamus, which includes interneurons and core and matrix-layer-specific projections to A1. The model simulates multiscale measures, including physiological firing rates, local field potentials (LFPs), current source densities (CSDs), and electroencephalography (EEG) signals. Laminar CSD patterns, during spontaneous activity and in response to broadband noise stimulus trains, mirror experimental findings. Physiological oscillations emerge spontaneously across frequency bands comparable to those recorded in vivo. We elucidate population-specific contributions to observed oscillation events and relate them to firing and presynaptic input patterns. The model offers a quantitative theoretical framework to integrate and interpret experimental data and predict its underlying cellular and circuit mechanisms.
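Among the multiscale measures listed, the CSD is, in its standard estimate, the negative second spatial derivative of the LFP along the laminar electrode axis, scaled by tissue conductivity. A sketch on synthetic laminar LFPs (electrode spacing and conductivity values are illustrative):

```python
import numpy as np

def csd(lfp, spacing_mm=0.1, sigma=0.3):
    """Standard second-spatial-derivative CSD estimate.

    lfp: array (channels, time) of laminar LFPs; sigma: tissue
    conductivity (S/m, illustrative). Returns CSD for the interior
    channels (the finite difference loses one channel at each edge)."""
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -sigma * d2 / spacing_mm ** 2

# Synthetic LFPs: a 10 Hz oscillation with a Gaussian depth profile.
t = np.linspace(0, 0.1, 200)
depth = np.arange(8)[:, None]
lfp = np.sin(2 * np.pi * 10 * t) * np.exp(-((depth - 3.5) ** 2) / 2)
out = csd(lfp)
print(out.shape)  # (6, 200): one CSD trace per interior channel
```

Sink/source alternation in the resulting depth-time map is the laminar signature the model's CSD patterns are compared against.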
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
- Erica Y Griffith
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
- Annamaria Barczak
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Monica N O'Connell
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Tammy McGinnis
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Joao V S Moreira
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
- Charles E Schroeder
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY, USA
- William W Lytton
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Kings County Hospital Center, Brooklyn, NY, USA
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
- Samuel A Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA.
12
Chapman GW, Hasselmo ME. Predictive learning by a burst-dependent learning rule. Neurobiol Learn Mem 2023; 205:107826. PMID: 37696414; DOI: 10.1016/j.nlm.2023.107826.
Abstract
Humans and other animals are able to quickly generalize the latent dynamics of spatiotemporal sequences, often from a minimal number of previous experiences. Additionally, internal representations of external stimuli must remain stable, even in the presence of sensory noise, in order to be useful for informing behavior. In contrast, typical machine learning approaches require many thousands of samples, generalize poorly to previously unseen examples, or fail completely to predict at long timescales. Here, we propose a novel neural network module which incorporates hierarchy and recurrent feedback terms, constituting a simplified model of neocortical microcircuits. This microcircuit predicts spatiotemporal trajectories at the input layer using a temporal error minimization algorithm. We show that this module predicts further into the future with higher accuracy than traditional models. Investigating this model, we find that successive predictive models learn representations which are increasingly removed from the raw sensory space, namely successive temporal derivatives of the positional information. Next, we introduce a spiking neural network model which implements the rate model through the use of a recently proposed biological learning rule utilizing dual-compartment neurons. We show that this network performs well on the same tasks as the mean-field models by developing intrinsic dynamics that follow the dynamics of the external stimulus while coordinating transmission of higher-order dynamics. Taken as a whole, these findings suggest that hierarchical temporal abstraction of sequences, rather than feed-forward reconstruction, may be responsible for the ability of neural systems to quickly adapt to novel situations.
Affiliation(s)
- G William Chapman
- Center for Systems Neuroscience, Boston University, Boston, MA, USA.
13
Xing D, Yang Y, Zhang T, Xu B. A Brain-Inspired Approach for Probabilistic Estimation and Efficient Planning in Precision Physical Interaction. IEEE Trans Cybern 2023; 53:6248-6262. PMID: 35442901; DOI: 10.1109/tcyb.2022.3164750.
Abstract
This article presents a novel structure of spiking neural networks (SNNs) to simulate the joint function of multiple brain regions in handling precision physical interactions. This task requires efficient movement planning while accounting for contact prediction and fast radial compensation. Contact prediction demands a cognitive memory of the interaction model, and we propose a novel double recurrent network that imitates the hippocampus and captures the spatiotemporal property of the distribution. Radial contact response requires rich spatial information, and we use a cerebellum-inspired module to achieve temporally dynamic prediction. We also use a block-based feedforward network to plan movements, behaving like the prefrontal cortex. These modules are integrated to realize the joint cognitive function of multiple brain regions in prediction, control, and planning. We present an appropriate controller and planner to generate teaching signals and provide a feasible network initialization for reinforcement learning, which modifies synapses in accordance with reality. The experimental results demonstrate the validity of the proposed method.
14
Maslennikov OV, Gao C, Nekorkin VI. Internal dynamics of recurrent neural networks trained to generate complex spatiotemporal patterns. Chaos 2023; 33:093125. PMID: 37722673; DOI: 10.1063/5.0166359.
Abstract
How the complex patterns generated by neural systems are represented in individual neuronal activity is an essential problem in computational neuroscience as well as in the machine learning community. Here, using recurrent neural networks in the form of feedback reservoir computers, we identify microscopic features that give rise to spatiotemporal patterns, including multicluster and chimera states. We show how individual neural trajectories as well as whole-network activity distributions shape particular dynamical regimes. In addition, we address the question of how trained output weights contribute to the autonomous multidimensional dynamics.
Affiliation(s)
- Oleg V Maslennikov
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
- Chao Gao
- School of Artificial Intelligence, Optics and Electronics, Northwestern Polytechnical University, Xi'an, China
- Vladimir I Nekorkin
- Federal Research Center A.V. Gaponov-Grekhov Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, Russia
15
Maes A, Barahona M, Clopath C. Long- and short-term history effects in a spiking network model of statistical learning. Sci Rep 2023; 13:12939. PMID: 37558704; PMCID: PMC10412617; DOI: 10.1038/s41598-023-39108-3.
Abstract
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions; in other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought to be the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
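The mathematical device this abstract builds on, inverse transform sampling, can be illustrated outside the spiking setting: drawing u ~ Uniform(0, 1) and passing it through an inverse CDF F⁻¹ yields samples distributed according to F, so a system that learns F⁻¹ can spend more time in high-probability states. A minimal NumPy sketch (not the authors' biophysical model; the exponential distribution is just an example target):

```python
import numpy as np

# Inverse transform sampling: draw u ~ Uniform(0, 1) and map it through an
# inverse CDF; the result is distributed according to that CDF. Here the
# target is an exponential distribution with rate lam, whose inverse CDF is
# F^{-1}(u) = -ln(1 - u) / lam.

def sample_inverse_cdf(inv_cdf, n, rng):
    u = rng.uniform(0.0, 1.0, size=n)  # uniform "noise" source
    return inv_cdf(u)

lam = 2.0
rng = np.random.default_rng(0)
samples = sample_inverse_cdf(lambda u: -np.log1p(-u) / lam, 100_000, rng)
# the sample mean approaches the exponential mean 1/lam = 0.5
```

In the paper's setting, the learned mapping roughly plays the role of `inv_cdf`, with ongoing network activity standing in for the uniform source.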
Affiliation(s)
- Amadeus Maes
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, USA.
- Department of Bioengineering, Imperial College London, London, UK.
- Claudia Clopath
- Department of Bioengineering, Imperial College London, London, UK
16
Cimeša L, Ciric L, Ostojic S. Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 2023; 19:e1011315. PMID: 37549194; PMCID: PMC10461857; DOI: 10.1371/journal.pcbi.1011315.
Abstract
Recurrent network models are instrumental in investigating how behaviorally relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models, however, lack some fundamental biological constraints; in particular, they represent individual neurons as abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel nonlinear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
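The low-rank construction the abstract describes can be sketched with a toy rate network (an illustrative sketch, not the paper's integrate-and-fire model): random bulk connectivity plus a rank-one term m nᵀ/N, with the population state summarized by a scalar latent read out along n:

```python
import numpy as np

# Toy rate network with low-rank structure (a sketch, not the paper's
# spiking model): connectivity is random noise chi plus a rank-one term
# m n^T / N. The scalar latent kappa = n . phi(x) / N summarizes where the
# population sits relative to the low-dimensional structure.

rng = np.random.default_rng(1)
N, g, dt, T = 500, 0.8, 0.05, 400

chi = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # random bulk connectivity
m = rng.normal(0.0, 1.0, size=N)                    # output direction
n = rng.normal(0.0, 1.0, size=N)                    # input-selection vector
J = chi + np.outer(m, n) / N                        # rank-one + random

x = rng.normal(0.0, 1.0, size=N)                    # network state
kappa = []
for _ in range(T):
    r = np.tanh(x)                                  # firing rates
    x = x + dt * (-x + J @ r)                       # rate dynamics
    kappa.append(n @ r / N)                         # low-dimensional latent
```

In the rate-network theory, trajectories of such latents capture the dynamics along m; the paper asks how far this mean-field picture carries over to spiking networks.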
Affiliation(s)
- Ljubica Cimeša
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Lazar Ciric
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives Computationnelles, Département d’Études Cognitives, École Normale Supérieure, INSERM U960, PSL University, Paris, France
17
Xue X, Wimmer RD, Halassa MM, Chen ZS. Spiking Recurrent Neural Networks Represent Task-Relevant Neural Sequences in Rule-Dependent Computation. Cognit Comput 2023; 15:1167-1189. PMID: 37771569; PMCID: PMC10530699; DOI: 10.1007/s12559-022-09994-2.
Abstract
Background: Prefrontal cortical neurons play essential roles in performing rule-dependent tasks and working-memory-based decision making. Methods: Motivated by PFC recordings of task-performing mice, we developed an excitatory-inhibitory spiking recurrent neural network (SRNN) to perform a rule-dependent two-alternative forced choice (2AFC) task. We imposed several important biological constraints on the SRNN, and adapted spike frequency adaptation (SFA) and SuperSpike gradient methods to train the SRNN efficiently. Results: The trained SRNN produced emergent rule-specific tunings in single-unit representations, showing rule-dependent population dynamics that resembled experimentally observed data. Under varying test conditions, we manipulated the SRNN parameters or configuration in computer simulations, and we investigated the impacts of rule-coding error, delay duration, recurrent weight connectivity and sparsity, and excitation/inhibition (E/I) balance on both task performance and neural representations. Conclusions: Overall, our modeling study provides a computational framework to understand neuronal representations at a fine timescale during working memory and cognitive control, and provides new experimentally testable hypotheses for future experiments.
Affiliation(s)
- Xiaohe Xue
- Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
- Ralf D. Wimmer
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Michael M. Halassa
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Neuroscience and Physiology, New York University School of Medicine, New York, NY, USA
- Neuroscience Institute, New York University School of Medicine, New York, NY, USA
18
Liu T, Ning Y, Liu P, Zhang Y, Chua Y, Chen W, Zhang S. Modularity Facilitates Classification Performance of Spiking Neural Networks for Decoding Cortical Spike Trains. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083788; DOI: 10.1109/embc40787.2023.10340358.
Abstract
After incorporating recurrence, an important property of the biological brain, spiking neural networks (SNNs) have achieved unprecedented classification performance, but they still cannot outperform many artificial neural networks. Modularity is another crucial feature of the biological brain, and it remains unclear whether modularity can also improve the performance of SNNs. To investigate this idea, we proposed a modular SNN and compared its performance with a uniform SNN without modularity by employing both to classify cortical spike trains. For the first time, we found a significant improvement with the modular SNN. We further probed the factors influencing its performance and found that (a) the modular SNN outperformed the uniform SNN more significantly as the number of neurons in the networks increased, and (b) the performance of the modular SNNs increased as the number of modules dropped. These preliminary but novel findings suggest that modularity may help develop better artificial intelligence and brain-machine interfaces. The modular SNN may also serve as a model for the study of neuronal spike synchrony.
19
Arthur BJ, Kim CM, Chen S, Preibisch S, Darshan R. A scalable implementation of the recursive least-squares algorithm for training spiking neural networks. Front Neuroinform 2023; 17:1099510. PMID: 37441157; PMCID: PMC10333503; DOI: 10.3389/fninf.2023.1099510.
Abstract
Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also makes it possible to train models while in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
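The per-timestep recursive least-squares update that such trainers scale up can be sketched as follows (a textbook RLS update in NumPy, not the paper's optimized CPU/GPU code; the linear-regression target here is illustrative):

```python
import numpy as np

# Textbook recursive least-squares (RLS) update, the core step of
# FORCE-style training: P tracks the inverse correlation matrix of the
# activity vector r, and the readout w is corrected in proportion to the
# instantaneous error. The synthetic regression target is illustrative.

def rls_step(w, P, r, target):
    z = w @ r                    # current readout
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector
    P -= np.outer(k, Pr)         # update inverse-correlation estimate
    w += (target - z) * k        # error-driven weight change
    return z

rng = np.random.default_rng(2)
N = 50
w_true = rng.normal(size=N)      # ground-truth readout to recover
w = np.zeros(N)
P = np.eye(N) / 1e-2             # P0 = I / alpha (ridge regularization)

for _ in range(2000):
    r = rng.normal(size=N)       # stand-in for network activity
    rls_step(w, P, r, w_true @ r)
# w now matches w_true up to a small ridge-induced bias
```

The O(N²) cost of the `P @ r` and outer-product updates per step is what makes efficient parallel implementations, like the one the paper presents, necessary at scale.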
Affiliation(s)
- Benjamin J. Arthur
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Christopher M. Kim
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, United States
- Susu Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Stephan Preibisch
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
20
Kim CM, Finkelstein A, Chow CC, Svoboda K, Darshan R. Distributing task-related neural activity across a cortical network through task-independent connections. Nat Commun 2023; 14:2851. PMID: 37202424; DOI: 10.1038/s41467-023-38529-y.
Abstract
Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. Task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly-coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task-variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.
Affiliation(s)
- Christopher M Kim
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA.
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA.
- Arseny Finkelstein
- Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Carson C Chow
- Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Karel Svoboda
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Allen Institute for Neural Dynamics, Seattle, WA, USA
- Ran Darshan
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
21
Pugavko MM, Maslennikov OV, Nekorkin VI. Multitask computation through dynamics in recurrent spiking neural networks. Sci Rep 2023; 13:3997. PMID: 36899052; PMCID: PMC10006454; DOI: 10.1038/s41598-023-31110-z.
Abstract
In this work, inspired by cognitive neuroscience experiments, we propose recurrent spiking neural networks trained to perform multiple target tasks. These models are designed by treating neurocognitive activity as computation through dynamics. Trained on input-output examples, these spiking neural networks are reverse-engineered to find the dynamic mechanisms that are fundamental to their performance. We show that considering multitasking and spiking within one system provides insightful ideas on the principles of neural computation.
Affiliation(s)
- Mechislav M Pugavko
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
- Oleg V Maslennikov
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
- Vladimir I Nekorkin
- Institute of Applied Physics of the Russian Academy of Sciences, Nizhny Novgorod, 603950, Russia
22
DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. PMID: 36630961; PMCID: PMC10118067; DOI: 10.1016/j.neuron.2022.12.007.
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
23
Does Deep Learning Have Epileptic Seizures? On the Modeling of the Brain. Cognit Comput 2023. DOI: 10.1007/s12559-023-10113-y.
24
Sakemi Y, Morino K, Morie T, Aihara K. A Supervised Learning Algorithm for Multilayer Spiking Neural Networks Based on Temporal Coding Toward Energy-Efficient VLSI Processor Design. IEEE Trans Neural Netw Learn Syst 2023; 34:394-408. PMID: 34280109; DOI: 10.1109/tnnls.2021.3095068.
Abstract
Spiking neural networks (SNNs) are brain-inspired mathematical models with the ability to process information in the form of spikes. SNNs are expected to provide not only new machine-learning algorithms but also energy-efficient computational models when implemented in very-large-scale integration (VLSI) circuits. In this article, we propose a novel supervised learning algorithm for SNNs based on temporal coding. A spiking neuron in this algorithm is designed to facilitate analog VLSI implementations with analog resistive memory, by which ultrahigh energy efficiency can be achieved. We also propose several techniques to improve the performance on recognition tasks and show that the classification accuracy of the proposed algorithm is as high as that of the state-of-the-art temporal coding SNN algorithms on the MNIST and Fashion-MNIST datasets. Finally, we discuss the robustness of the proposed SNNs against variations that arise from the device manufacturing process and are unavoidable in analog VLSI implementation. We also propose a technique to suppress the effects of variations in the manufacturing process on the recognition performance.
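The temporal-coding idea can be illustrated with a minimal latency encoder (an illustrative sketch, not the authors' VLSI-oriented neuron model): each input value is represented by the time of a single spike, with stronger inputs firing earlier.

```python
import numpy as np

# Latency (time-to-first-spike) encoding, a simple form of temporal coding:
# each input value maps to a single spike time, with stronger inputs firing
# earlier. Single-spike-per-neuron codes like this are part of what makes
# temporal-coding SNNs attractive for energy-efficient hardware.

def latency_encode(x, t_max=10.0):
    """Map intensities in [0, 1] to spike times in [0, t_max]; x = 1 fires at t = 0."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

spike_times = latency_encode([0.0, 0.25, 1.0])  # brightest input spikes first
```

Downstream neurons then compute on first-spike times rather than firing rates, so each input neuron needs to emit at most one spike per stimulus.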
25
tension: A Python package for FORCE learning. PLoS Comput Biol 2022; 18:e1010722. PMID: 36534709; PMCID: PMC9810194; DOI: 10.1371/journal.pcbi.1010722.
Abstract
First-Order, Reduced and Controlled Error (FORCE) learning and its variants are widely used to train chaotic recurrent neural networks (RNNs), and outperform gradient methods on certain tasks. However, there is currently no standard software framework for FORCE learning. We present tension, an object-oriented, open-source Python package that implements a TensorFlow/Keras API for FORCE. We show how rate networks, spiking networks, and networks constrained by biological data can all be trained using a shared, easily extensible high-level API. With the same resources, our implementation outperforms a conventional RNN in loss and outperforms published FORCE implementations in runtime. Our work makes FORCE training of chaotic RNNs accessible and simple to iterate on, and facilitates modeling of how behaviors of interest emerge from neural dynamics.
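A minimal FORCE loop in the spirit of what such packages implement (a NumPy sketch of the classic rate-network setup with readout feedback, not the tension API) looks like:

```python
import numpy as np

# Toy FORCE loop (a sketch of the classic rate-network setup, not the
# tension API): a chaotic reservoir with readout feedback is trained
# online by recursive least squares so its readout z(t) tracks a target.

rng = np.random.default_rng(3)
N, dt, g = 300, 0.1, 1.5
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # chaotic recurrent weights
w_fb = rng.uniform(-1.0, 1.0, size=N)             # fixed feedback weights
w = np.zeros(N)                                   # trained readout weights
P = np.eye(N)                                     # RLS inverse-correlation
x = 0.5 * rng.normal(size=N)                      # reservoir state

t = np.arange(0.0, 120.0, dt)
target = np.sin(2 * np.pi * t / 10.0)             # periodic target signal

errors = []
for step, f in enumerate(target):
    r = np.tanh(x)
    z = w @ r                                     # readout before update
    x = x + dt * (-x + J @ r + w_fb * z)          # reservoir with feedback
    if step % 2 == 0:                             # RLS update every 2 steps
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w += (f - z) * k
    errors.append(abs(f - z))
# the tracking error shrinks as training proceeds
```

Because the readout error is clamped to small values from the first updates onward, the feedback loop keeps the chaotic reservoir near the target trajectory while the weights converge; this online, feedback-driven training is what distinguishes FORCE from offline readout fitting.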
26
George AM, Dey S, Banerjee D, Mukherjee A, Suri M. Online Time-Series Forecasting using Spiking Reservoir. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.10.067.
27
Duggins P, Eliasmith C. Constructing functional models from biophysically-detailed neurons. PLoS Comput Biol 2022; 18:e1010461. PMID: 36074765; PMCID: PMC9455888; DOI: 10.1371/journal.pcbi.1010461.
Abstract
Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called “oracle-supervised Neural Engineering Framework” (osNEF) to train biologically-detailed spiking neural networks that realize a variety of cognitively-relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58 − 99% and exponential forgetting with time constants of τ = 2.4 − 71s. These results demonstrate that osNEF can train functional brain models using biologically-detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities. 
Computational models of biologically realistic neural networks help scientists understand and recreate a wide variety of brain processes, responsible for everything from fish locomotion to human cognition. To be useful, these models must both recreate features of the brain, such as the electrical, chemical, and geometric properties of neurons, and perform useful functional operations, such as storing and retrieving information from a short term memory. Here, we develop a new method for training networks built from biologically detailed components. We simulate networks that contain a variety of complex neurons and synapses, then show that our method successfully trains them to perform a variety of cognitive operations. Most notably, we train a working memory model that contains detailed reconstructions of cortical neurons, and demonstrate that it performs a memory task with performance that is comparable to simple animals. Researchers can use our method to train detailed brain models and investigate how biological features (or deficits thereof) relate to cognition, which may provide insights into the biological basis of mental disorders such as Parkinson’s disease.
Affiliation(s)
- Peter Duggins: Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Chris Eliasmith: Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
28. Connectivity concepts in neuronal network modeling. PLoS Comput Biol 2022; 18:e1010086. [PMID: 36074778] [PMCID: PMC9455883] [DOI: 10.1371/journal.pcbi.1010086]
Abstract
Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent, and one reason is a lack of readily applicable standards and tools for model description. Our work aims to advance complete and concise descriptions of network connectivity but also to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.
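As a toy illustration of the kind of probabilistic rule such guidelines pin down, pairwise Bernoulli connectivity (each ordered pair connected independently with probability p, self-connections excluded) can be stated unambiguously in a few lines; the network size and probability below are arbitrary:

```python
import numpy as np

def pairwise_bernoulli(n, p, rng):
    """Connect each ordered pair (i, j), i != j, independently with probability p."""
    mask = rng.random((n, n)) < p
    np.fill_diagonal(mask, False)   # exclude autapses (self-connections)
    return mask

rng = np.random.default_rng(42)
n, p = 1000, 0.1
W = pairwise_bernoulli(n, p, rng)
density = W.sum() / (n * (n - 1))   # realized connection probability
```

Ambiguities in published models often hinge on exactly such details: whether pairs are ordered, whether autapses or multapses are allowed, and whether the in-degree is fixed or binomially distributed.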
29. Cramer B, Stradmann Y, Schemmel J, Zenke F. The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Trans Neural Netw Learn Syst 2022; 33:2744-2757. [PMID: 33378266] [DOI: 10.1109/tnnls.2020.3044364]
Abstract
Spiking neural networks are the basis of versatile and power-efficient information processing in the brain. Although we currently lack a detailed understanding of how these networks compute, recently developed optimization techniques allow us to instantiate increasingly complex functional spiking neural networks in silico. These methods hold the promise to build more efficient non-von-Neumann computing hardware and will offer new vistas in the quest to unravel brain circuit function. To accelerate the development of such methods, objective ways to compare their performance are indispensable. Presently, however, there are no widely accepted means for comparing the computational performance of spiking neural networks. To address this issue, we introduce two spike-based classification data sets, broadly applicable to benchmark both software and neuromorphic hardware implementations of spiking neural networks. To accomplish this, we developed a general audio-to-spiking conversion procedure inspired by neurophysiology. Furthermore, we applied this conversion to an existing and a novel speech data set. The latter is the free, high-fidelity, and word-level aligned Heidelberg digit data set that we created specifically for this study. By training a range of conventional and spiking classifiers, we show that leveraging spike timing information within these data sets is essential for good classification accuracy. These results serve as the first reference for future performance comparisons of spiking neural networks.
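The paper's audio-to-spike conversion models the auditory periphery; as a far simpler toy illustration of turning a continuous signal into spike trains, a delta-modulation encoder emits a spike each time the signal moves one threshold step up or down (the threshold and test tone below are arbitrary choices, not the authors' procedure):

```python
import numpy as np

def delta_encode(signal, threshold):
    """Emit an UP (DOWN) spike index whenever the signal rises (falls)
    by `threshold` relative to a tracked reference level."""
    ref = signal[0]
    up, down = [], []
    for t, s in enumerate(signal):
        while s - ref >= threshold:
            up.append(t)
            ref += threshold
        while ref - s >= threshold:
            down.append(t)
            ref -= threshold
    return up, down

t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 3 * t)      # 3 Hz test tone, amplitude 1
up, down = delta_encode(sig, 0.1)    # ~ total variation / threshold spikes
```

Encodings like this preserve timing information in the spike trains, which is exactly what the benchmark results show spiking classifiers must exploit.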
30. Capone C, Muratore P, Paolucci PS. Error-based or target-based? A unified framework for learning in recurrent spiking networks. PLoS Comput Biol 2022; 18:e1010221. [PMID: 35727852] [PMCID: PMC9249234] [DOI: 10.1371/journal.pcbi.1010221]
Abstract
The field of recurrent neural networks is over-populated by a variety of proposed learning rules and protocols. The scope of this work is to define a generalized framework that moves a step toward unifying this fragmented scenario. In the field of supervised learning, two opposite approaches stand out: error-based and target-based. This duality gave rise to a scientific debate on which learning framework is the most likely to be implemented in biological networks of neurons. Moreover, the existence of spikes raises the question of whether the coding of information is rate-based or spike-based. To face these questions, we propose a learning model with two main parameters, the rank of the feedback learning matrix R and the tolerance to spike timing τ⋆. We demonstrate that a low (high) rank R accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing promotes rate-based (spike-based) coding. We show that in a store-and-recall task, high ranks allow for lower MSE values, while low ranks enable a faster convergence. Our framework naturally lends itself to Behavioral Cloning and allows for efficiently solving relevant closed-loop tasks, investigating which parameters (R, τ⋆) are optimal to solve a specific task. We found that a high R is essential for tasks that require retaining memory for a long time (Button and Food). On the other hand, this is not relevant for a motor task (the 2D Bipedal Walker). In this case, we find that precise spike-based coding enables optimal performance. Finally, we show that our theoretical formulation allows for defining protocols to estimate the rank of the feedback error in biological networks. We release a PyTorch implementation of our model supporting GPU parallelization.
Learning in biological or artificial networks means changing the laws governing the network dynamics in order to behave better in a specific situation. However, there exists no consensus on what rules regulate learning in biological systems. To face these questions, we propose a novel theoretical formulation for learning with two main parameters: the number of learning constraints (R) and the tolerance to spike timing (τ⋆). We demonstrate that a low (high) rank R accounts for an error-based (target-based) learning rule, while high (low) tolerance to spike timing τ⋆ promotes rate-based (spike-based) coding. Our approach naturally lends itself to Imitation Learning (and Behavioral Cloning in particular), and we apply it to solve relevant closed-loop tasks such as the button-and-food task and the 2D Bipedal Walker. The button-and-food is a navigation task that requires retaining a long-term memory and benefits from a high R. On the other hand, the 2D Bipedal Walker is a motor task and benefits from a low τ⋆. Finally, we show that our theoretical formulation suggests protocols to deduce the structure of learning feedback in biological networks.
Affiliation(s)
- Paolo Muratore: Cognitive Neuroscience, SISSA, Trieste, Italy
31. Dynamical differential covariance recovers directional network structure in multiscale neural systems. Proc Natl Acad Sci U S A 2022; 119:e2117234119. [PMID: 35679342] [PMCID: PMC9214501] [DOI: 10.1073/pnas.2117234119]
Abstract
We sense, move, and think by dynamical interactions between neurons. It is now possible to simultaneously record from many individual neurons and brain regions. Methods for analyzing these large-scale recordings are needed that can reveal how the patterns of activity give rise to behavior. We developed dynamical differential covariance (DDC), an efficient, intuitive, and robust way to analyze these recordings, and validated it on simulations of model neural networks where the ground truth was known. It can estimate not only the presence of a connection but also the direction in which information is flowing in a network between neurons or cortical areas. We applied DDC to recordings from functional magnetic resonance imaging in humans and confirmed predicted connectivity with direct measurements. Investigating neural interactions is essential to understanding the neural basis of behavior. Many statistical methods have been used for analyzing neural activity, but estimating the direction of network interactions correctly and efficiently remains a difficult problem. Here, we derive dynamical differential covariance (DDC), a method based on dynamical network models that detects directional interactions with low bias and high noise tolerance under nonstationary conditions. Moreover, DDC scales well with the number of recording sites, and the computation required is comparable to that needed for covariance. DDC was validated and compared favorably with other methods on networks with false-positive motifs and multiscale neural simulations where the ground-truth connectivity was known. When applied to recordings of resting-state functional magnetic resonance imaging (rs-fMRI), DDC consistently detected regional interactions with strong structural connectivity in over 1,000 individual subjects obtained by diffusion MRI (dMRI). DDC is a promising family of methods for estimating connectivity that can be generalized to a wide range of dynamical models and recording techniques and to other applications where system identification is needed.
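The basic estimator is simple enough to sketch: the covariance of the estimated derivative with the state, multiplied by the inverse state covariance. On a noise-free linear system dx/dt = Ax this recovers A exactly; the 2-D harmonic oscillator below is an illustrative test of that identity, not one of the paper's benchmarks, and the paper's estimator variants handle noisy, nonstationary data:

```python
import numpy as np

def ddc(x, dt):
    """Dynamical differential covariance sketch: <dx/dt, x> @ <x, x>^-1."""
    dx = np.gradient(x, dt, axis=0)          # central-difference derivative
    xc = x - x.mean(axis=0)
    dxc = dx - dx.mean(axis=0)
    dcov = dxc.T @ xc / len(x)               # derivative-state covariance
    cov = xc.T @ xc / len(x)                 # state covariance
    return dcov @ np.linalg.inv(cov)

# Harmonic oscillator dx/dt = A x with A = [[0, 1], [-1, 0]]
t = np.linspace(0.0, 20 * np.pi, 20000)
x = np.stack([np.cos(t), -np.sin(t)], axis=1)
A_hat = ddc(x, t[1] - t[0])                  # should approximate A
```

Because dcov = A cov for linear dynamics, the product dcov @ inv(cov) returns the (directed, generally asymmetric) coupling matrix, which is what distinguishes DDC from symmetric covariance-based measures.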
32. Yi H. Efficient machine learning algorithm for electroencephalogram modeling in brain–computer interfaces. Neural Comput Appl 2022. [DOI: 10.1007/s00521-020-04861-3]
33. Singanamalla SKR, Lin CT. Spike-Representation of EEG Signals for Performance Enhancement of Brain-Computer Interfaces. Front Neurosci 2022; 16:792318. [PMID: 35444515] [PMCID: PMC9014221] [DOI: 10.3389/fnins.2022.792318]
Abstract
Brain-computer interfaces (BCIs) relying on electroencephalography (EEG)-based neuroimaging have shown prospects for real-world usage due to their portability and the option of selecting fewer channels for compactness. However, noise and artifacts often limit the capacity of BCI systems, especially for event-related potentials such as P300 and error-related negativity (ERN), whose biomarkers are present in short time segments at the time-series level. Contrary to EEG, invasive recording is less prone to noise but requires a tedious surgical procedure. But the EEG signal is the result of aggregating neuronal spiking information underneath the scalp surface, and transforming the relevant BCI task's EEG signal to a spike representation could potentially help improve BCI performance. In this study, we designed an approach using a spiking neural network (SNN), trained with surrogate-gradient descent, to generate task-related multi-channel EEG template signals of all classes. The trained model is in turn leveraged to obtain the latent spike representation for each EEG sample. Comparing the classification performance of the EEG signal and its spike representation, the proposed approach enhanced performance on the ERN dataset from 79.22 to 82.27% with naive Bayes, and for the P300 dataset the accuracy was improved from 67.73 to 69.87% using XGBoost. In addition, principal component analysis and correlation metrics were evaluated on both EEG signals and their spike representations to identify the reason for such improvement.
Affiliation(s)
- Sai Kalyan Ranga Singanamalla: Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Chin-Teng Lin (corresponding author): Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia; Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
34. Intrinsic bursts facilitate learning of Lévy flight movements in recurrent neural network models. Sci Rep 2022; 12:4951. [PMID: 35322813] [PMCID: PMC8943163] [DOI: 10.1038/s41598-022-08953-z]
Abstract
Isolated spikes and bursts of spikes are thought to provide the two major modes of information coding by neurons. Bursts are known to be crucial for fundamental processes between neuron pairs, such as neuronal communication and synaptic plasticity. Neuronal bursting also has implications for neurodegenerative diseases and mental disorders. Despite these findings on the roles of bursts, whether and how bursts have an advantage over isolated spikes in network-level computation remains elusive. Here, we demonstrate in a computational model that intrinsic bursts, rather than isolated spikes, can greatly facilitate learning of Lévy flight random walk trajectories by synchronizing burst onsets across a neural population. Lévy flight is a hallmark of optimal search strategies and appears in cognitive behaviors such as saccadic eye movements and memory retrieval. Our results suggest that bursting is crucial for sequence learning by recurrent neural networks when sequences comprise long-tailed distributed discrete jumps.
35. Ioannides G, Kourouklides I, Astolfi A. Spatiotemporal dynamics in spiking recurrent neural networks using modified-full-FORCE on EEG signals. Sci Rep 2022; 12:2896. [PMID: 35190579] [PMCID: PMC8861015] [DOI: 10.1038/s41598-022-06573-1]
Abstract
Methods for modelling the human brain as a Complex System have increased remarkably in the literature as researchers seek to understand the underlying foundations behind cognition, behaviour, and perception. Computational methods, especially Graph Theory-based methods, have recently contributed significantly to understanding the wiring connectivity of the brain, modelling it as a set of nodes connected by edges. Therefore, the brain's spatiotemporal dynamics can be holistically studied by considering a network, which consists of many neurons, represented by nodes. Various models have been proposed for modelling such neurons. A recently proposed method for training such networks, called full-FORCE, produces networks that perform tasks with fewer neurons and greater noise robustness than previous least-squares approaches (i.e., the FORCE method). In this paper, the first direct applicability of a variant of the full-FORCE method to biologically-motivated Spiking RNNs (SRNNs) is demonstrated. The SRNN is a graph consisting of modules. Each module is modelled as a Small-World Network (SWN), which is a specific type of biologically-plausible graph. So, the first direct applicability of a variant of the full-FORCE method to modular SWNs is demonstrated, evaluated through regression and information-theoretic metrics. For the first time, the aforementioned method is applied to spiking neuron models and trained on various real-life Electroencephalography (EEG) signals. To the best of the authors' knowledge, all the contributions of this paper are novel. Results show that trained SRNNs match EEG signals almost perfectly, while network dynamics can mimic the target dynamics. This demonstrates that the holistic setup of the network model and the neuron model, both more biologically plausible than in previous work, can be tuned into real biological signal dynamics.
Affiliation(s)
- Georgios Ioannides: Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Ioannis Kourouklides: Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology, 33 Saripolou Street, 3036, Limassol, Cyprus
- Alessandro Astolfi: Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
36. Toker D, Pappas I, Lendner JD, Frohlich J, Mateos DM, Muthukumaraswamy S, Carhart-Harris R, Paff M, Vespa PM, Monti MM, Sommer FT, Knight RT, D'Esposito M. Consciousness is supported by near-critical slow cortical electrodynamics. Proc Natl Acad Sci U S A 2022; 119:e2024455119. [PMID: 35145021] [PMCID: PMC8851554] [DOI: 10.1073/pnas.2024455119]
Abstract
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
Affiliation(s)
- Daniel Toker: Department of Psychology, University of California, Los Angeles, CA 90095
- Ioannis Pappas: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704; Laboratory of Neuro Imaging, Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Janna D Lendner: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Anesthesiology and Intensive Care, University Medical Center, 72076 Tübingen, Germany
- Joel Frohlich: Department of Psychology, University of California, Los Angeles, CA 90095
- Diego M Mateos: Consejo Nacional de Investigaciones Científicas y Técnicas de Argentina, C1425 Buenos Aires, Argentina; Facultad de Ciencia y Tecnología, Universidad Autónoma de Entre Ríos, E3202 Paraná, Entre Ríos, Argentina; Grupo de Análisis de Neuroimágenes, Instituto de Matemática Aplicada del Litoral, S3000 Santa Fe, Argentina
- Suresh Muthukumaraswamy: School of Pharmacy, Faculty of Medical and Health Sciences, The University of Auckland, 1010 Auckland, New Zealand
- Robin Carhart-Harris: Neuropsychopharmacology Unit, Centre for Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom; Centre for Psychedelic Research, Department of Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom
- Michelle Paff: Department of Neurological Surgery, University of California, Irvine, CA 92697
- Paul M Vespa: Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
- Martin M Monti: Department of Psychology, University of California, Los Angeles, CA 90095; Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
- Friedrich T Sommer: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94704
- Robert T Knight: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704
- Mark D'Esposito: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704
37. Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network. Proc Natl Acad Sci U S A 2021; 118:e2111821118. [PMID: 34916291] [PMCID: PMC8713766] [DOI: 10.1073/pnas.2111821118]
Abstract
Synaptic connectivity provides the foundation for our present understanding of neuronal network function, but static connectivity cannot explain learning and memory. We propose a computational role for the diversity of cortical neuronal types and their associated cell-type–specific neuromodulators in improving the efficiency of synaptic weight adjustments for task learning in neuronal networks.
Brains learn tasks via experience-driven differential adjustment of their myriad individual synaptic connections, but the mechanisms that target appropriate adjustment to particular connections remain deeply enigmatic. While Hebbian synaptic plasticity, synaptic eligibility traces, and top-down feedback signals surely contribute to solving this synaptic credit-assignment problem, alone, they appear to be insufficient. Inspired by new genetic perspectives on neuronal signaling architectures, here, we present a normative theory for synaptic learning, where we predict that neurons communicate their contribution to the learning outcome to nearby neurons via cell-type–specific local neuromodulation. Computational tests suggest that neuron-type diversity and neuron-type–specific local neuromodulation may be critical pieces of the biological credit-assignment puzzle. They also suggest algorithms for improved artificial neural network learning efficiency.
38. Büchel J, Zendrikov D, Solinas S, Indiveri G, Muir DR. Supervised training of spiking neural networks for robust deployment on mixed-signal neuromorphic processors. Sci Rep 2021; 11:23376. [PMID: 34862429] [PMCID: PMC8642544] [DOI: 10.1038/s41598-021-02779-x]
Abstract
Mixed-signal analog/digital circuits emulate spiking neurons and synapses with extremely high energy efficiency, an approach known as "neuromorphic engineering". However, analog circuits are sensitive to process-induced variation among transistors in a chip ("device mismatch"). For neuromorphic implementation of Spiking Neural Networks (SNNs), mismatch causes parameter variation between identically-configured neurons and synapses. Each chip exhibits a different distribution of neural parameters, causing deployed networks to respond differently between chips. Current solutions to mitigate mismatch based on per-chip calibration or on-chip learning entail increased design complexity, area and cost, making deployment of neuromorphic devices expensive and difficult. Here we present a supervised learning approach that produces SNNs with high robustness to mismatch and other common sources of noise. Our method trains SNNs to perform temporal classification tasks by mimicking a pre-trained dynamical system, using a local learning rule from non-linear control theory. We demonstrate our method on two tasks requiring temporal memory, and measure the robustness of our approach to several forms of noise and mismatch. We show that our approach is more robust than common alternatives for training SNNs. Our method provides robust deployment of pre-trained networks on mixed-signal neuromorphic hardware, without requiring per-device training or calibration.
Affiliation(s)
- Julian Büchel: SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dmitrii Zendrikov: Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Sergio Solinas: Department of Biomedical Science, University of Sassari, Piazza Università, 21, 07100, Sassari, Sardegna, Italy
- Giacomo Indiveri: SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland; Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057, Zurich, Switzerland
- Dylan R Muir: SynSense, Thurgauerstrasse 40, 8050, Zurich, Switzerland
39. Li Y, Kim R, Sejnowski TJ. Learning the Synaptic and Intrinsic Membrane Dynamics Underlying Working Memory in Spiking Neural Network Models. Neural Comput 2021; 33:3264-3287. [PMID: 34710902] [PMCID: PMC8662709] [DOI: 10.1162/neco_a_01409]
Abstract
Recurrent neural network (RNN) models trained to perform cognitive tasks are a useful computational tool for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals and overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties are important for encoding stimuli, and slow synaptic dynamics are needed for WM maintenance. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
Affiliation(s)
- Yinghao Li: Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, U.S.A.
- Robert Kim: Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Neurosciences Graduate Program and Medical Scientist Training Program, University of California San Diego, La Jolla, CA 92093, U.S.A.
- Terrence J Sejnowski: Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, and Institute for Neural Computation and Division of Biological Sciences, University of California San Diego, La Jolla, CA 92093, U.S.A.
40. Zhang T, Cheng X, Jia S, Poo MM, Zeng Y, Xu B. Self-backpropagation of synaptic modifications elevates the efficiency of spiking and artificial neural networks. Sci Adv 2021; 7:eabh0146. [PMID: 34669481] [PMCID: PMC8528419] [DOI: 10.1126/sciadv.abh0146]
Abstract
Many synaptic plasticity rules found in natural circuits have not been incorporated into artificial neural networks (ANNs). We showed that incorporating a nonlocal feature of synaptic plasticity found in natural neural networks, whereby synaptic modification at output synapses of a neuron backpropagates to its input synapses made by upstream neurons, markedly reduced the computational cost without affecting the accuracy of spiking neural networks (SNNs) and ANNs in supervised learning for three benchmark tasks. For SNNs, synaptic modification at output neurons generated by spike timing–dependent plasticity was allowed to self-propagate to limited upstream synapses. For ANNs, modified synaptic weights via conventional backpropagation algorithm at output neurons self-backpropagated to limited upstream synapses. Such self-propagating plasticity may produce coordinated synaptic modifications across neuronal layers that reduce computational cost.
Affiliation(s)
- Tielin Zhang: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiang Cheng: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuncheng Jia: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Mu-ming Poo: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Neuroscience, State Key Laboratory of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 201210, China
- Yi Zeng: Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Bo Xu (corresponding author): Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
Collapse
|
41
|
Perez-Nieves N, Leung VCH, Dragotti PL, Goodman DFM. Neural heterogeneity promotes robust learning. Nat Commun 2021; 12:5791. [PMID: 34608134 PMCID: PMC8490404 DOI: 10.1038/s41467-021-26022-3] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/10/2021] [Indexed: 11/24/2022] Open
Abstract
The brain is a hugely diverse, heterogeneous structure. Whether heterogeneity at the neural level plays a functional role remains unclear, and it has been relatively little explored in models, which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning with heterogeneity was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distributions of neuronal parameters in the trained networks are similar to those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes; rather, it may serve an active and important role in allowing animals to learn in changing environments.
Affiliation(s)
- Nicolas Perez-Nieves
  - Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Vincent C H Leung
  - Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Pier Luigi Dragotti
  - Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
- Dan F M Goodman
  - Department of Electrical and Electronic Engineering, Imperial College London, London, SW7 2AZ, UK
|
42
|
Transfer-RLS method and transfer-FORCE learning for simple and fast training of reservoir computing models. Neural Netw 2021; 143:550-563. [PMID: 34304003 DOI: 10.1016/j.neunet.2021.06.031] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 05/13/2021] [Accepted: 06/29/2021] [Indexed: 11/22/2022]
Abstract
Reservoir computing is a machine learning framework derived from a special type of recurrent neural network. Following recent advances in physical reservoir computing, some reservoir computing devices are thought to be promising as energy-efficient machine learning hardware for real-time information processing. To realize efficient online learning with low-power reservoir computing devices, it is beneficial to develop fast-converging learning methods with simpler operations. This study proposes a training method located midway between the recursive least squares (RLS) method and the least mean squares (LMS) method, the standard online learning methods for reservoir computing models. The RLS method converges fast but requires updates of a large matrix called the gain matrix, whereas the LMS method does not use a gain matrix but converges very slowly. The proposed method, called the transfer-RLS method, avoids updating the gain matrix in the main training phase by updating it in advance, in a pre-training phase. As a result, the transfer-RLS method can work with simpler operations than the original RLS method without sacrificing much convergence speed. We show numerically and analytically that the transfer-RLS method converges much faster than the LMS method. Furthermore, we show that a modified version of the transfer-RLS method (called transfer-FORCE learning) can be applied to first-order reduced and controlled error (FORCE) learning for a reservoir computing model with a closed loop, which is challenging to train.
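The relationship between the methods can be sketched on a toy linear regression problem. This is a minimal reconstruction from the abstract's description, with assumed dimensions, data, and forgetting factor, not the paper's implementation: a pre-training phase runs full RLS (updating the gain matrix P), and the main phase reuses the frozen P so each step needs only matrix-vector products.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)
X = rng.normal(size=(800, d))
y = X @ w_true                          # toy teacher outputs

def rls_update(w, P, x, target, lam=1.0):
    """One standard RLS step: updates both the weights and the gain matrix P."""
    Px = P @ x
    k = Px / (lam + x @ Px)             # gain vector
    w = w + k * (target - w @ x)
    P = (P - np.outer(k, Px)) / lam
    return w, P

# Pre-training phase: full RLS shapes the gain matrix P.
w, P = np.zeros(d), np.eye(d)
for x, t in zip(X[:150], y[:150]):
    w, P = rls_update(w, P, x, t)

# Main phase ("transfer-RLS" idea): P is frozen, so no matrix update
# is needed; each step costs only matrix-vector products.
w_main = np.zeros(d)
for x, t in zip(X[150:], y[150:]):
    k = P @ x / (1.0 + x @ (P @ x))
    w_main = w_main + k * (t - w_main @ x)
```

With a well-conditioned frozen P, the main phase still converges toward the teacher weights, which is the cost/speed trade-off the abstract describes.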
|
43
|
Kim CM, Chow CC. Training Spiking Neural Networks in the Strong Coupling Regime. Neural Comput 2021; 33:1199-1233. [PMID: 34496392 DOI: 10.1162/neco_a_01379] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 11/23/2020] [Indexed: 11/04/2022]
Abstract
Recurrent neural networks trained to perform complex tasks can provide insight into the dynamic mechanisms that underlie computations performed by cortical circuits. However, due to the large number of unconstrained synaptic connections, the recurrent connectivity that emerges from network training may not be biologically plausible. Therefore, it remains unknown if and how biological neural circuits implement the dynamic mechanisms proposed by the models. To narrow this gap, we developed a training scheme that, in addition to achieving learning goals, respects the structural and dynamic properties of a standard cortical circuit model: strongly coupled excitatory-inhibitory spiking neural networks. By preserving the strong mean excitatory and inhibitory coupling of the initial networks, we found that most of the trained synapses obeyed Dale's law without additional constraints, exhibited large trial-to-trial spiking variability, and operated in an inhibition-stabilized regime. We derived analytical estimates of how training and network parameters constrained the changes in mean synaptic strength during training. Our results demonstrate that training recurrent neural networks subject to strong coupling constraints can result in connectivity structure and a dynamic regime relevant to cortical circuits.
Affiliation(s)
- Christopher M Kim
  - Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases/National Institutes of Health, Bethesda, MD 20814, U.S.A.
- Carson C Chow
  - Laboratory of Biological Modeling, National Institute of Diabetes and Digestive and Kidney Diseases/National Institutes of Health, Bethesda, MD 20814, U.S.A.
|
44
|
Singanamalla SKR, Lin CT. Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces. Front Neurosci 2021; 15:651762. [PMID: 33867928 PMCID: PMC8047134 DOI: 10.3389/fnins.2021.651762] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 02/22/2021] [Indexed: 11/28/2022] Open
Abstract
With the advent of advanced machine learning methods, the performance of brain-computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, is characterized by a tedious experimental setup, frequent data loss due to artifacts, and the time-consuming collection of bulk trial recordings needed to take advantage of deep learning classifiers. Some studies have tried to address this issue by generating artificial EEG signals. However, some of these methods are limited in retaining the prominent features or biomarkers of the signal, and other deep-learning-based generative methods require a huge number of samples for training, with most such models able to handle data augmentation for only one category or class of data in any training session. There is therefore a need for a generative model that can generate multi-class synthetic EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since the EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since spiking neural networks (SNNs), a biologically closer type of artificial neural network, communicate via spiking behavior, we propose an SNN-based approach using surrogate-gradient-descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed for augmenting motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. These artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and they in turn enhanced MI classification performance.
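Surrogate-gradient descent, the training method named above, can be sketched minimally: the forward pass uses a hard spiking threshold, while the backward pass substitutes a smooth surrogate derivative so gradients can flow. The fast-sigmoid surrogate and all constants below are common illustrative choices, not necessarily this paper's.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return np.asarray(v >= threshold, dtype=float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: a fast-sigmoid surrogate for dS/dv, used in place
    of the Heaviside's zero-almost-everywhere true derivative."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

# Toy training loop: one weight w driving one neuron, pushed to make the
# neuron fire by descending the surrogate gradient of (spike - target)^2.
w, x, target, lr = 0.5, 1.0, 1.0, 0.5
for _ in range(100):
    v = w * x
    err = float(spike(v)) - target
    w -= lr * 2 * err * surrogate_grad(v) * x   # surrogate chain rule
```

Even though the true gradient of the spike is zero almost everywhere, the surrogate lets the weight climb until the neuron crosses threshold.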
Affiliation(s)
- Sai Kalyan Ranga Singanamalla
  - Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Chin-Teng Lin
  - Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
  - Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia
|
45
|
|
46
|
Cone I, Shouval HZ. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network. eLife 2021; 10:63751. [PMID: 33734085 PMCID: PMC7972481 DOI: 10.7554/elife.63751] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/16/2021] [Indexed: 11/13/2022] Open
Abstract
Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular, microcolumn-based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic 'eligibility traces'. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in a sequence is sufficient for the network to recall an entire learned representation of that sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
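A three-factor rule with eligibility traces, of the general family used here, can be caricatured in a few lines: coincident pre/post spikes charge a synapse-local trace, and the trace is converted into an actual weight change only when a delayed learning signal arrives. Everything below (the toy spike trains, time constant, and reward schedule) is an illustrative assumption, not the paper's biophysical rule.

```python
T, lr, tau_e = 200, 0.01, 20.0

w, e = 0.0, 0.0
w_history = []
for t in range(T):
    pre = (t % 5 == 0)                      # deterministic toy spike trains:
    post = (t % 5 == 0)                     # pre and post fire together
    e += -e / tau_e + float(pre and post)   # eligibility trace dynamics
    reward = 1.0 if t % 50 == 49 else 0.0   # sparse, delayed learning signal
    w += lr * reward * e                    # change gated by the third factor
    w_history.append(w)
```

The trace bridges the temporal gap between plasticity-inducing activity and the later learning signal, which is what lets such rules credit the right synapses in a sequence.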
Affiliation(s)
- Ian Cone
  - Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
  - Applied Physics, Rice University, Houston, TX, United States
- Harel Z Shouval
  - Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, United States
|
47
|
Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, Goodman DFM. Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 2021; 109:571-575. [PMID: 33600754 DOI: 10.1016/j.neuron.2021.01.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 12/16/2020] [Accepted: 01/07/2021] [Indexed: 11/25/2022]
Abstract
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Affiliation(s)
- Friedemann Zenke
  - Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Sander M Bohté
  - CWI, Amsterdam, the Netherlands
  - Swammerdam Institute for Life Sciences (SILS), University of Amsterdam, Amsterdam, the Netherlands
  - AI Department, Rijksuniversiteit Groningen, Groningen, the Netherlands
- Claudia Clopath
  - Bioengineering Department, Imperial College London, London, UK
- Julian Göltz
  - Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
- Wolfgang Maass
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Richard Naud
  - Brain and Mind Research Institute of the University of Ottawa, Department of Cellular Molecular Medicine, University of Ottawa, Ottawa, Canada
- Emre O Neftci
  - Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
  - Department of Computer Science, University of California, Irvine, Irvine, CA, USA
- Mihai A Petrovici
  - Kirchhoff Institute for Physics, Heidelberg University, Heidelberg, Germany
  - Department of Physiology, University of Bern, Bern, Switzerland
- Franz Scherr
  - Institute of Theoretical Computer Science, Graz University of Technology, Graz, Austria
- Dan F M Goodman
  - Department of Electrical and Electronic Engineering, Imperial College London, London, UK
|
48
|
Maes A, Barahona M, Clopath C. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons. PLoS Comput Biol 2021; 17:e1008866. [PMID: 33764970 PMCID: PMC8023498 DOI: 10.1371/journal.pcbi.1008866] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Revised: 04/06/2021] [Accepted: 03/08/2021] [Indexed: 11/17/2022] Open
Abstract
Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Affiliation(s)
- Amadeus Maes
  - Bioengineering Department, Imperial College London, London, United Kingdom
- Mauricio Barahona
  - Mathematics Department, Imperial College London, London, United Kingdom
- Claudia Clopath
  - Bioengineering Department, Imperial College London, London, United Kingdom
|
49
|
Muratore P, Capone C, Paolucci PS. Target spike patterns enable efficient and biologically plausible learning for complex temporal tasks. PLoS One 2021; 16:e0247014. [PMID: 33592040 PMCID: PMC7886200 DOI: 10.1371/journal.pone.0247014] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Accepted: 01/31/2021] [Indexed: 11/28/2022] Open
Abstract
Recurrent spiking neural networks (RSNNs) in the brain learn to perform a wide range of perceptual, cognitive and motor tasks very efficiently in terms of energy consumption, and their training requires very few examples. This motivates the search for biologically inspired learning rules for RSNNs, aiming to improve our understanding of brain computation and the efficiency of artificial intelligence. Several spiking models and learning rules have been proposed, but it remains a challenge to design RSNNs whose learning relies on biologically plausible mechanisms and that are capable of solving complex temporal tasks. In this paper, we derive a learning rule, local to the synapse, from a simple mathematical principle: the maximization of the likelihood for the network to solve a specific task. We propose a novel target-based learning scheme in which the learning rule derived from likelihood maximization is used to mimic a specific spatio-temporal spike pattern that encodes the solution to complex temporal tasks. This method makes the learning extremely rapid and precise, outperforming state-of-the-art algorithms for RSNNs. While error-based approaches (e.g., e-prop) optimize the internal sequence of spikes trial after trial to progressively minimize the mean squared error (MSE), we assume that a signal randomly projected from an external origin (e.g., from other brain areas) directly defines the target sequence. This facilitates the learning procedure, since the network is trained from the beginning to reproduce the desired internal sequence. We propose two versions of our learning rule: spike-dependent and voltage-dependent. We find that the latter provides remarkable benefits in terms of learning speed and robustness to noise. We demonstrate the capacity of our model to tackle several problems, such as learning multidimensional trajectories and solving the classical temporal XOR benchmark.
Finally, we show that an online approximation of the gradient ascent, in addition to guaranteeing complete locality in time and space, allows learning after very few presentations of the target output. Our model can be applied to different types of biological neurons. The analytically derived plasticity learning rule is specific to each neuron model and can produce a theoretical prediction for experimental validation.
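Stripped of spiking dynamics, the target-based idea reduces to: fix the desired output sequence first, then apply a local delta rule until the unit reproduces it. The sketch below is a drastic simplification with assumed data sizes, a hypothetical teacher used only to guarantee a realizable target, and an illustrative learning rate.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_in, lr = 20, 20, 0.1

patterns = (rng.random((T, n_in)) < 0.3).astype(float)  # input spike patterns
x = np.hstack([patterns, np.ones((T, 1))])              # constant bias input
w_teach = rng.normal(size=n_in + 1)                     # hypothetical teacher
target = (x @ w_teach > 0).astype(float)                # fixed target train

# Local delta rule: move weights until the thresholded output matches the
# pre-specified target at every time step (target defined before learning).
w = np.zeros(n_in + 1)
for _ in range(500):                                    # training epochs
    for t in range(T):
        out = float(x[t] @ w > 0)
        w += lr * (target[t] - out) * x[t]              # local delta rule

acc = float(np.mean((x @ w > 0).astype(float) == target))
```

The key contrast with error-based schemes is that the target sequence here is given in full from the start, so learning never has to discover which internal sequence solves the task.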
Affiliation(s)
- Paolo Muratore
  - SISSA—International School for Advanced Studies, Trieste, Italy
|
50
|
She X, Dash S, Kim D, Mukhopadhyay S. A Heterogeneous Spiking Neural Network for Unsupervised Learning of Spatiotemporal Patterns. Front Neurosci 2021; 14:615756. [PMID: 33519366 PMCID: PMC7841292 DOI: 10.3389/fnins.2020.615756] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 12/11/2020] [Indexed: 11/19/2022] Open
Abstract
This paper introduces a heterogeneous spiking neural network (H-SNN) as a novel feedforward SNN structure capable of learning complex spatiotemporal patterns with spike-timing-dependent plasticity (STDP) based unsupervised training. Within H-SNN, hierarchical spatial and temporal patterns are constructed with convolution connections and memory pathways containing spiking neurons with different dynamics. We demonstrate analytically the formation of long- and short-term memory in H-SNN and the distinct response functions of its memory pathways. In simulation, the network is tested on visual input of moving objects to simultaneously predict object class and motion dynamics. Results show that H-SNN achieves prediction accuracy at a similar or higher level than supervised deep neural networks (DNNs). Compared to SNNs trained with back-propagation, H-SNN effectively utilizes STDP to learn spatiotemporal patterns that generalize better to unknown motion and/or object classes encountered during inference. In addition, the improved performance is achieved with 6x fewer parameters than complex DNNs, demonstrating H-SNN as an efficient approach for applications with constrained computation resources.
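The pair-based STDP window underlying this kind of unsupervised training can be sketched directly: potentiation when the pre-synaptic spike precedes the post-synaptic spike, depression otherwise, both decaying exponentially with the spike-time difference. The amplitudes and time constant below are illustrative textbook values, not the paper's settings.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: weight change as a function of
    dt = t_post - t_pre (in ms). Pre-before-post (dt > 0) potentiates;
    post-before-pre (dt <= 0) depresses; both decay with |dt|."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

# Weight changes for a few spike-time differences (ms):
dts = np.array([-40.0, -5.0, 5.0, 40.0])
dws = stdp_dw(dts)
```

Applied to every pre/post spike pair, this local rule is what lets a network like H-SNN pick up repeating spatiotemporal structure without labels.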
Affiliation(s)
- Xueyuan She
  - Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
|