1. Quintana D, Bounds H, Veit J, Adesnik H. Balanced bidirectional optogenetics reveals the causal impact of cortical temporal dynamics in sensory perception. bioRxiv 2024:2024.05.30.596706. [PMID: 38853943] [PMCID: PMC11160799] [DOI: 10.1101/2024.05.30.596706]
Abstract
Whether the fast temporal dynamics of neural activity in brain circuits causally drive perception and cognition remains one of the most longstanding unresolved questions in neuroscience [1-6]. While some theories posit a 'timing code', in which dynamics on the millisecond timescale are central to brain function, others argue that mean firing rates over more extended periods (a 'rate code') carry most of the relevant information. Existing tools, such as optogenetics, can be used to alter the temporal structure of neural dynamics [7], but they invariably change mean firing rates, leaving the interpretation of such experiments ambiguous. Here we developed and validated a new approach based on balanced, bidirectional optogenetics that can alter the temporal structure of neural dynamics while mitigating effects on mean activity. Using this approach, we found that selectively altering cortical temporal dynamics substantially reduced performance in a sensory perceptual task. These results demonstrate that endogenous temporal dynamics in the cortex are causally required for perception and behavior. More generally, this bidirectional optogenetic approach should be broadly useful for disentangling the causal impact of different timescales of neural dynamics on behavior.
2. Fitz H, Hagoort P, Petersson KM. Neurobiological Causal Models of Language Processing. Neurobiol Lang 2024; 5:225-247. [PMID: 38645618] [PMCID: PMC11025648] [DOI: 10.1162/nol_a_00133]
Abstract
The language faculty is physically realized in the neurobiological infrastructure of the human brain. Despite significant efforts, an integrated understanding of this system remains a formidable challenge. What is missing from most theoretical accounts is a specification of the neural mechanisms that implement language function. Computational models that have been put forward generally lack an explicit neurobiological foundation. We propose a neurobiologically informed causal modeling approach which offers a framework for how to bridge this gap. A neurobiological causal model is a mechanistic description of language processing that is grounded in, and constrained by, the characteristics of the neurobiological substrate. It intends to model the generators of language behavior at the level of implementational causality. We describe key features and neurobiological component parts from which causal models can be built and provide guidelines on how to implement them in model simulations. Then we outline how this approach can shed new light on the core computational machinery for language, the long-term storage of words in the mental lexicon and combinatorial processing in sentence comprehension. In contrast to cognitive theories of behavior, causal models are formulated in the "machine language" of neurobiology which is universal to human cognition. We argue that neurobiological causal modeling should be pursued in addition to existing approaches. Eventually, this approach will allow us to develop an explicit computational neurobiology of language.
Affiliation(s)
- Hartmut Fitz
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Karl Magnus Petersson
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Faculty of Medicine and Biomedical Sciences, University of Algarve, Faro, Portugal
3. Wilson E. Adaptive Filter Model of Cerebellum for Biological Muscle Control With Spike Train Inputs. Neural Comput 2023; 35:1938-1969. [PMID: 37844325] [DOI: 10.1162/neco_a_01617]
Abstract
Prior applications of the cerebellar adaptive filter model have included a range of tasks within simulated and robotic systems. However, this has been limited to systems driven by continuous signals. Here, the adaptive filter model of the cerebellum is applied to the control of a system driven by spiking inputs by considering the problem of controlling muscle force. The performance of the standard adaptive filter algorithm is compared with the algorithm with a modified learning rule that minimizes inputs and a simple proportional-integral-derivative (PID) controller. Control performance is evaluated in terms of the number of spikes, the accuracy of spike input locations, and the accuracy of muscle force output. Results show that the cerebellar adaptive filter model can be applied without change to the control of systems driven by spiking inputs. The cerebellar algorithm results in good agreement between input spikes and force outputs and significantly improves on a PID controller. Input minimization can be used to reduce the number of spike inputs, but at the expense of a decrease in accuracy of spike input location and force output. This work extends the applications of the cerebellar algorithm and demonstrates the potential of the adaptive filter model to be used to improve functional electrical stimulation muscle control.
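The control scheme summarized in this abstract can be illustrated with a minimal sketch of a cerebellar-style adaptive filter: the output is a weighted sum over a tapped-delay basis of the input, and the weights adapt with an LMS-like decorrelation rule driven by the output error. The basis, learning rate, and toy plant below are illustrative assumptions, not the paper's actual algorithm or its spike-train interface.

```python
import numpy as np

def adaptive_filter_step(basis, w, target, lr=0.01):
    """One step of a minimal cerebellar-style adaptive filter:
    output = weighted sum of basis signals; weights adapt with an
    LMS-like rule driven by the output error (teaching signal)."""
    y = float(w @ basis)        # filter output (e.g., predicted force)
    e = target - y              # error / teaching signal
    w = w + lr * e * basis      # LMS (decorrelation) weight update
    return y, e, w

# Toy plant: the desired output is a scaled, delayed copy of the input.
rng = np.random.default_rng(0)
n_taps = 10
w = np.zeros(n_taps)
buf = np.zeros(n_taps)          # tapped-delay basis of recent inputs
for t in range(2000):
    u = rng.standard_normal()
    buf = np.roll(buf, 1)
    buf[0] = u
    target = 0.5 * buf[3]       # plant: gain 0.5, delay of 3 steps
    y, e, w = adaptive_filter_step(buf, w, target)
```

After training, the weight on the 3-step tap approaches the plant gain of 0.5 while the other taps decay toward zero, which is the sense in which the filter "learns" the plant.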
Affiliation(s)
- Emma Wilson
- School of Computing and Communications, Lancaster University, Lancaster LA1 4WA, U.K.
4. Liu S, Leung VCH, Dragotti PL. First-spike coding promotes accurate and efficient spiking neural networks for discrete events with rich temporal structures. Front Neurosci 2023; 17:1266003. [PMID: 37849889] [PMCID: PMC10577212] [DOI: 10.3389/fnins.2023.1266003]
Abstract
Spiking neural networks (SNNs) are well suited to processing asynchronous event-based data. Most existing SNNs use rate-coding schemes that focus on firing rate (FR), and so they generally ignore the spike timing in events. In contrast, methods based on temporal coding, particularly time-to-first-spike (TTFS) coding, can be accurate and efficient, but they are difficult to train. Currently, there is limited research on applying TTFS coding to real events, since traditional TTFS-based methods impose a one-spike constraint, which is not realistic for event-based data. In this study, we present a novel decision-making strategy based on first-spike (FS) coding that encodes the FS timings of the output neurons to investigate the role of first-spike timing in classifying real-world event sequences with complex temporal structures. To achieve FS coding, we propose a novel surrogate gradient learning method for discrete spike trains. In the forward pass, output spikes are encoded into discrete times to generate FS times. In the backward pass, we develop an error assignment method that propagates error from FS times to spikes through a Gaussian window, and supervised learning for spikes is then implemented through a surrogate gradient approach. Additional strategies are introduced to facilitate the training of FS timings, such as adding empty sequences and employing different parameters for different layers. We make a comprehensive comparison between FS and FR coding in the experiments. Our results show that FS coding achieves comparable accuracy to FR coding while leading to superior energy efficiency and distinct neuronal dynamics on data sequences with very rich temporal structures. Additionally, a longer time delay in the first spike leads to higher accuracy, indicating that important information is encoded in the timing of the first spike.
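The first-spike decision rule at the core of this abstract can be sketched in a few lines: the winning class is the output neuron whose first spike occurs earliest. The array shapes and function name below are illustrative assumptions, not the paper's implementation (which also covers the surrogate-gradient training of these timings).

```python
import numpy as np

def first_spike_decision(spikes):
    """Classify by time-to-first-spike: the winning class is the output
    neuron that fires earliest. `spikes` is a (neurons, time) binary
    array; neurons that never fire get an infinite first-spike time."""
    n_neurons, _ = spikes.shape
    first = np.full(n_neurons, np.inf)
    for i in range(n_neurons):
        idx = np.flatnonzero(spikes[i])
        if idx.size:
            first[i] = idx[0]          # time bin of the first spike
    return int(np.argmin(first)), first

# Neuron 1 fires first (t=2), so it wins even though neuron 0 fires
# more often -- timing, not count, decides.
spikes = np.array([[0, 0, 0, 1, 1, 1],
                   [0, 0, 1, 0, 0, 0]])
label, times = first_spike_decision(spikes)
```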
Affiliation(s)
- Siying Liu
- Communications and Signal Processing Group, Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
5. Yan Z, Zhou J, Wong WF. CQ+ Training: Minimizing Accuracy Loss in Conversion From Convolutional Neural Networks to Spiking Neural Networks. IEEE Trans Pattern Anal Mach Intell 2023; 45:11600-11611. [PMID: 37314899] [DOI: 10.1109/tpami.2023.3286121]
Abstract
Spiking neural networks (SNNs) are attractive for energy-constrained use cases due to their binarized activations, which eliminate the need for weight multiplication. However, their lag in accuracy compared to traditional convolutional neural networks (CNNs) has limited their deployment. In this paper, we propose CQ+ training (extended "clamped" and "quantized" training), an SNN-compatible CNN training algorithm that achieves state-of-the-art accuracy on both the CIFAR-10 and CIFAR-100 datasets. Using a 7-layer modified VGG model (VGG-*), we achieved 95.06% accuracy on the CIFAR-10 dataset for equivalent SNNs. The accuracy drop from converting the CNN solution to an SNN is only 0.09% when using a time step of 600. To reduce the latency, we propose a parameterized input encoding method and a threshold training method, which further reduce the time window size to 64 while still achieving an accuracy of 94.09%. For the CIFAR-100 dataset, we achieved an accuracy of 77.27% using the same VGG-* structure and a time window of 500. We also demonstrate the transformation of popular CNNs, including ResNet (basic, bottleneck, and shortcut block), MobileNet v1/2, and DenseNet, to SNNs with near-zero conversion accuracy loss and a time window size smaller than 60. The framework was developed in PyTorch and is publicly available.
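The "clamped and quantized" idea behind such conversion schemes can be sketched briefly: clip activations to a bounded range and round them to T discrete levels, so a CNN trained with this activation produces outputs that an SNN can reproduce as spike counts within a T-step time window. This is a hedged illustration of the general principle; the function name and details are assumptions, not the authors' code.

```python
import numpy as np

def clamp_quantize(x, T=64, vmax=1.0):
    """Clamped-and-quantized activation: clip to [0, vmax] (a bounded
    ReLU), then round to one of T discrete levels. An SNN integrating
    over a T-step window can only express rates k/T, so training the
    CNN with this activation narrows the conversion gap."""
    x = np.clip(x, 0.0, vmax)       # clamp negative / oversized values
    return np.round(x * T) / T      # snap to the T expressible rates

a = clamp_quantize(np.array([-0.3, 0.337, 1.7]), T=64)
```

Shrinking T (here 64, matching the reduced time window in the abstract) trades conversion fidelity for lower SNN latency.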
6. Pham MD, D'Angiulli A, Dehnavi MM, Chhabra R. From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sci 2023; 13:1316. [PMID: 37759917] [PMCID: PMC10526461] [DOI: 10.3390/brainsci13091316]
Abstract
We examine the challenging "marriage" between computational efficiency and biological plausibility, a crucial node in the domain of spiking neural networks at the intersection of neuroscience, artificial intelligence, and robotics. Through a transdisciplinary review, we retrace the historical and most recent constraining influences that these parallel fields have exerted on descriptive analysis of the brain, construction of predictive brain models, and ultimately, the embodiment of neural networks in an enacted robotic agent. We study models of spiking neural networks (SNNs) as the central means enabling autonomous and intelligent behaviors in biological systems. We then provide a critical comparison of the available hardware and software to emulate SNNs for investigating biological entities and their application in artificial systems. Neuromorphics is identified as a promising tool to embody SNNs in real physical systems, and different neuromorphic chips are compared. The concepts required for describing SNNs are dissected and contextualized in the new no man's land between cognitive neuroscience and artificial intelligence. Although there are recent reviews on the application of neuromorphic computing in various modules of the guidance, navigation, and control of robotic systems, the focus of this paper is more on closing the cognition loop in SNN-embodied robotics. We argue that biologically viable spiking neuronal models used for electroencephalogram signals are excellent candidates for furthering our knowledge of the explainability of SNNs. We complete our survey by reviewing different robotic modules that can benefit from neuromorphic hardware, e.g., perception (with a focus on vision), localization, and cognition.
We conclude that the tradeoff between symbolic computational power and biological plausibility of hardware can be best addressed by neuromorphics, whose presence in neurorobotics provides an accountable empirical testbench for investigating synthetic and natural embodied cognition. We argue this is where both theoretical and empirical future work should converge in multidisciplinary efforts involving neuroscience, artificial intelligence, and robotics.
Affiliation(s)
- Martin Do Pham
- Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada; (M.D.P.); (M.M.D.)
- Amedeo D'Angiulli
- Department of Neuroscience, Carleton University, Ottawa, ON K1S 5B6, Canada;
- Maryam Mehri Dehnavi
- Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada; (M.D.P.); (M.M.D.)
- Robin Chhabra
- Department of Mechanical and Aerospace Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
7. Drukarch B, Wilhelmus MMM. Thinking about the action potential: the nerve signal as a window to the physical principles guiding neuronal excitability. Front Cell Neurosci 2023; 17:1232020. [PMID: 37701723] [PMCID: PMC10493309] [DOI: 10.3389/fncel.2023.1232020]
Abstract
Ever since the work of Edgar Adrian, the neuronal action potential has been considered an electric signal, modeled and interpreted using concepts and theories borrowed from electronic engineering. Accordingly, the electric action potential, as the prime manifestation of neuronal excitability, serving processing and reliable "long-distance" communication of the information contained in the signal, was defined as a non-linear, self-propagating, regenerative wave of electrical activity that travels along the surface of nerve cells. Thus, in the ground-breaking theory and mathematical model of Hodgkin and Huxley (HH), which links Nernst's treatment of the electrochemistry of semi-permeable membranes to the physical laws of electricity and Kelvin's cable theory, the electrical characteristics of the action potential are presented as the result of the depolarization-induced, voltage- and time-dependent opening and closing of ion channels in the membrane, allowing the passive flow of charge, particularly in the form of Na+ and K+ ions, into and out of the neuronal cytoplasm along the respective electrochemical ion gradients. In the model, which treats the membrane as a capacitor and ion channels as resistors, these changes in ionic conductance across the membrane cause a sudden and transient alteration of the transmembrane potential, i.e., the action potential, which is then carried forward and spreads over long(er) distances by means of both active and passive conduction dependent on local current flow by diffusion of Na+ ions in the neuronal cytoplasm. However, although highly successful in predicting and explaining many of the electrical characteristics of the action potential, the HH model cannot accommodate the various non-electrical physical manifestations (mechanical, thermal and optical changes) that accompany action potential propagation, and for which there is ample experimental evidence.
As such, the electrical conception of neuronal excitability appears to be incomplete, and alternatives aiming to improve, extend or even replace it have been sought. Commonly misunderstood as to their basic premises and the physical principles they are built on, and mistakenly perceived as a threat to the generally acknowledged explanatory power of the "classical" HH framework, these attempts to present a more complete picture of neuronal physiology have met with fierce opposition from mainstream neuroscience and, as a consequence, currently remain underdeveloped and insufficiently tested. Here we present our perspective that this may be an unfortunate state of affairs, as these biophysics-informed approaches to incorporating non-electrical signs of the action potential into the modeling and explanation of the nerve signal are, in our view, well suited to foster a new, more complete and better integrated understanding of the (multi)physical nature of neuronal excitability and signal transport, and hence of neuronal function. In doing so, we emphasize attempts to derive the different physical manifestations of the action potential from one common, macroscopic, thermodynamics-based framework that treats the multiphysics of the nerve signal as the inevitable result of the collective material, i.e., physico-chemical, properties of the lipid bilayer of the neuronal membrane (in particular, the axolemma) and/or the so-called ectoplasm or membrane skeleton consisting of cytoskeletal protein polymers, in particular, actin fibrils. Potential consequences for our view of action potential physiology and its role in neuronal function are identified and discussed.
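The HH current-balance equation described in this abstract can be written out as a short sketch. The function below evaluates C dV/dt = I_ext - I_Na - I_K - I_L with conventional textbook squid-axon parameters (mV, ms, uF/cm^2 units); the gating-variable rate equations are omitted, and the parameter values are standard defaults rather than anything taken from this paper.

```python
def hh_current_balance(V, m, h, n, I_ext,
                       C=1.0, gNa=120.0, gK=36.0, gL=0.3,
                       ENa=50.0, EK=-77.0, EL=-54.4):
    """Right-hand side of the Hodgkin-Huxley current-balance equation,
    C dV/dt = I_ext - I_Na - I_K - I_L, where the Na+ and K+
    conductances are gated by m, h and n. Gating kinetics (the
    alpha/beta rate equations for m, h, n) are omitted for brevity."""
    I_Na = gNa * m**3 * h * (V - ENa)   # sodium current
    I_K = gK * n**4 * (V - EK)          # potassium current
    I_L = gL * (V - EL)                 # leak current
    return (I_ext - I_Na - I_K - I_L) / C

# Near the resting state the three ionic currents approximately
# cancel, so dV/dt is close to zero without external input.
dvdt_rest = hh_current_balance(-65.0, 0.0529, 0.5961, 0.3177, I_ext=0.0)
```

This electrical-circuit reading (membrane as capacitor, channels as resistors) is exactly what the abstract argues captures the voltage trace but not the accompanying mechanical, thermal and optical signals.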
Affiliation(s)
- Micha M. M. Wilhelmus
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, Amsterdam Neuroscience, Amsterdam, Netherlands
8. Bernáez Timón L, Ekelmans P, Kraynyukova N, Rose T, Busse L, Tchumatchenko T. How to incorporate biological insights into network models and why it matters. J Physiol 2023; 601:3037-3053. [PMID: 36069408] [DOI: 10.1113/jp282755]
Abstract
Due to the staggering complexity of the brain and its neural circuitry, neuroscientists rely on the analysis of mathematical models to elucidate its function. From Hodgkin and Huxley's detailed description of the action potential in 1952 to today, new theories and increasing computational power have opened up novel avenues to study how neural circuits implement the computations that underlie behaviour. Computational neuroscientists have developed many models of neural circuits that differ in complexity, biological realism or emergent network properties. With recent advances in experimental techniques for detailed anatomical reconstructions or large-scale activity recordings, rich biological data have become more available. The challenge when building network models is to reflect experimental results, either through a high level of detail or by finding an appropriate level of abstraction. Meanwhile, machine learning has facilitated the development of artificial neural networks, which are trained to perform specific tasks. While they have proven successful at achieving task-oriented behaviour, they are often abstract constructs that differ in many features from the physiology of brain circuits. Thus, it is unclear whether the mechanisms underlying computation in biological circuits can be investigated by analysing artificial networks that accomplish the same function but differ in their mechanisms. Here, we argue that building biologically realistic network models is crucial to establishing causal relationships between neurons, synapses, circuits and behaviour. More specifically, we advocate for network models that consider the connectivity structure and the recorded activity dynamics while evaluating task performance.
Affiliation(s)
- Laura Bernáez Timón
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Pierre Ekelmans
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Nataliya Kraynyukova
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Tobias Rose
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Tatjana Tchumatchenko
- Institute for Physiological Chemistry, University of Mainz Medical Center, Mainz, Germany
- Institute of Experimental Epileptology and Cognition Research, University of Bonn Medical Center, Bonn, Germany
9. Fabian JM, O'Carroll DC, Wiederman SD. Sparse spike trains and the limitation of rate codes underlying rapid behaviours. Biol Lett 2023; 19:20230099. [PMID: 37161293] [PMCID: PMC10170213] [DOI: 10.1098/rsbl.2023.0099]
Abstract
Animals live in dynamic worlds where they use sensorimotor circuits to rapidly process information and drive behaviours. For example, dragonflies are aerial predators that react to movements of prey within tens of milliseconds. These pursuits are likely controlled by identified neurons in the dragonfly, which have well-characterized physiological responses to moving targets. Predominantly, neural activity in these circuits is interpreted in the context of a rate code, where information is conveyed by changes in the number of spikes over a time period. However, such a description of neuronal activity is difficult to achieve in real-world, real-time scenarios. Here, we contrast a neuroscientist's post-hoc view of spiking activity with the information available to the animal in real time. We describe how the performance of a rate code is readily overestimated and outline a rate code's significant limitations in driving rapid behaviours.
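The real-time limitation discussed in this abstract is easy to make concrete: a causal rate estimate divides the spike count in a trailing window by the window length, so with sparse spike trains a short window yields coarse, jumpy estimates while a long window adds latency that rapid behaviours cannot afford. The window lengths and spike times below are illustrative assumptions.

```python
import numpy as np

def windowed_rate(spike_times, t, window=0.05):
    """Causal (real-time) firing-rate estimate at time t: count spikes
    in the trailing window (t - window, t] and divide by the window
    length. Only information available to the animal at time t is
    used, unlike a post-hoc trial-averaged rate."""
    spike_times = np.asarray(spike_times)
    count = np.sum((spike_times > t - window) & (spike_times <= t))
    return count / window

# Three spikes at 10 ms intervals (a steady 100 Hz train): a 5 ms
# window sees at most one spike and overestimates the rate, while a
# 30 ms window recovers 100 Hz only at the cost of 30 ms of history.
spikes = [0.010, 0.020, 0.030]
r_short = windowed_rate(spikes, t=0.030, window=0.005)
r_long = windowed_rate(spikes, t=0.030, window=0.030)
```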
Affiliation(s)
- Joseph M Fabian
- School of Biomedicine, The University of Adelaide, Adelaide, South Australia 5005, Australia
- Steven D Wiederman
- School of Biomedicine, The University of Adelaide, Adelaide, South Australia 5005, Australia
10. DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron 2023; 111:631-649.e10. [PMID: 36630961] [PMCID: PMC10118067] [DOI: 10.1016/j.neuron.2022.12.007]
Abstract
Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
Affiliation(s)
- Brian DePasquale
- Princeton Neuroscience Institute, Princeton University, Princeton NJ, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA.
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- L F Abbott
- Department of Neuroscience, Columbia University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Physiology and Cellular Biophysics, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- Mark M Churchland
- Department of Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA; Grossman Center for the Statistics of Mind, Columbia University, New York, NY, USA
11. Wang K, Hao X, Wang J, Deng B. Comparison and Selection of Spike Encoding Algorithms for SNN on FPGA. IEEE Trans Biomed Circuits Syst 2023; PP:129-141. [PMID: 37021893] [DOI: 10.1109/tbcas.2023.3238165]
Abstract
The information in spiking neural networks (SNNs) is carried by discrete spikes. The conversion between spiking signals and real-valued signals therefore has an important impact on the encoding efficiency and performance of SNNs, and is usually performed by spike encoding algorithms. To help select suitable spike encoding algorithms for different SNNs, this work evaluates four commonly used algorithms. The evaluation is based on FPGA implementations of the algorithms and covers calculation speed, resource consumption, accuracy, and noise tolerance, so as to better suit neuromorphic implementations of SNNs. Two real-world applications are also used to verify the evaluation results. By analyzing and comparing the results, this work summarizes the characteristics and application range of the different algorithms. In general, the sliding-window algorithm has relatively low accuracy and is suitable for observing signal trends. The pulse-width-modulation-based algorithm and the step-forward algorithm are suitable for accurate reconstruction of various signals except square waves, while Ben's Spiker algorithm can remedy this. Finally, a scoring method for spike encoding algorithm selection is proposed, which can help improve the encoding efficiency of neuromorphic SNNs.
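As an illustration of one of the algorithm families compared in this abstract, below is a common textbook formulation of step-forward (SF) encoding with its matching decoder: a running baseline emits a +1 or -1 spike whenever the signal crosses baseline +/- threshold, and the baseline steps accordingly. Details (threshold handling, initialization) may differ from the FPGA variants the paper evaluates.

```python
import numpy as np

def step_forward_encode(signal, threshold):
    """Step-forward encoding: emit +1 when the signal exceeds
    baseline + threshold, -1 when it drops below baseline - threshold,
    moving the baseline by one threshold step each time."""
    base = signal[0]
    spikes = np.zeros(len(signal), dtype=int)
    for i in range(1, len(signal)):
        if signal[i] > base + threshold:
            spikes[i] = 1
            base += threshold
        elif signal[i] < base - threshold:
            spikes[i] = -1
            base -= threshold
    return spikes

def step_forward_decode(spikes, start, threshold):
    """Reconstruct the signal by integrating the +/-threshold steps."""
    return start + threshold * np.cumsum(spikes)

sig = np.array([0.0, 0.3, 0.6, 0.5, 0.1])
sp = step_forward_encode(sig, threshold=0.25)
rec = step_forward_decode(sp, start=sig[0], threshold=0.25)
```

The reconstruction tracks the signal to within one threshold step, which is why the abstract describes SF as accurate for smooth signals but notes it struggles with abrupt square-wave edges.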
12. Riquelme JL, Hemberger M, Laurent G, Gjorgjieva J. Single spikes drive sequential propagation and routing of activity in a cortical network. eLife 2023; 12:79928. [PMID: 36780217] [PMCID: PMC9925052] [DOI: 10.7554/elife.79928]
Abstract
Single spikes can trigger repeatable firing sequences in cortical networks. The mechanisms that support reliable propagation of activity from such small events and their functional consequences remain unclear. By constraining a recurrent network model with experimental statistics from turtle cortex, we generate reliable and temporally precise sequences from single spike triggers. We find that rare strong connections support sequence propagation, while dense weak connections modulate propagation reliability. We identify sections of sequences corresponding to divergent branches of strongly connected neurons which can be selectively gated. Applying external inputs to specific neurons in the sparse backbone of strong connections can effectively control propagation and route activity within the network. Finally, we demonstrate that concurrent sequences interact reliably, generating a highly combinatorial space of sequence activations. Our results reveal the impact of individual spikes in cortical circuits, detailing how repeatable sequences of activity can be triggered, sustained, and controlled during cortical computations.
Affiliation(s)
- Juan Luis Riquelme
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany; School of Life Sciences, Technical University of Munich, Freising, Germany
- Mike Hemberger
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Gilles Laurent
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany
- Julijana Gjorgjieva
- Max Planck Institute for Brain Research, Frankfurt am Main, Germany; School of Life Sciences, Technical University of Munich, Freising, Germany
13. Nilsson M, Schelén O, Lindgren A, Bodin U, Paniagua C, Delsing J, Sandin F. Integration of neuromorphic AI in event-driven distributed digitized systems: Concepts and research directions. Front Neurosci 2023; 17:1074439. [PMID: 36875653] [PMCID: PMC9981939] [DOI: 10.3389/fnins.2023.1074439]
Abstract
Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired "neuromorphic" processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital-computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based conceptual framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which would provide virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices.
Affiliation(s)
- Mattias Nilsson, Olov Schelén, Ulf Bodin, Cristina Paniagua, Jerker Delsing, Fredrik Sandin: Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Anders Lindgren: Embedded Intelligent Systems Lab (EISLAB), Luleå University of Technology, Luleå, Sweden; Applied AI and IoT, Industrial Systems, Digital Systems, RISE Research Institutes of Sweden, Kista, Sweden
|
14
|
Neurodynamical Computing at the Information Boundaries of Intelligent Systems. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10081-9]
Abstract
Artificial intelligence has not achieved the defining features of biological intelligence, despite models boasting more parameters than there are neurons in the human brain. In this perspective article, we synthesize historical approaches to understanding intelligent systems and argue that methodological and epistemic biases in these fields can be resolved by shifting away from cognitivist brain-as-computer theories and recognizing that brains exist within large, interdependent living systems. Integrating the dynamical systems view of cognition with the massive distributed feedback of perceptual control theory highlights a theoretical gap in our understanding of nonreductive neural mechanisms. Cell assemblies, properly conceived as reentrant dynamical flows and not merely as identified groups of neurons, may fill that gap by providing a minimal supraneuronal level of organization that establishes a neurodynamical base layer for computation. By considering information streams from physical embodiment and situational embedding, we discuss this computational base layer in terms of conserved oscillatory and structural properties of cortical-hippocampal networks. Our synthesis of embodied cognition, based in dynamical systems and perceptual control, aims to bypass the neurosymbolic stalemates that have arisen in artificial intelligence, cognitive science, and computational neuroscience.
|
15
|
Lehr AB, Luboeinski J, Tetzlaff C. Neuromodulator-dependent synaptic tagging and capture retroactively controls neural coding in spiking neural networks. Sci Rep 2022; 12:17772. [PMID: 36273097; PMCID: PMC9588040; DOI: 10.1038/s41598-022-22430-7]
Abstract
Events that are important to an individual's life trigger neuromodulator release in brain areas responsible for cognitive and behavioral function. While it is well known that the presence of neuromodulators such as dopamine and norepinephrine is required for memory consolidation, the impact of neuromodulator concentration is, however, less understood. In a recurrent spiking neural network model featuring neuromodulator-dependent synaptic tagging and capture, we study how synaptic memory consolidation depends on the amount of neuromodulator present in the minutes to hours after learning. We find that the storage of rate-based and spike timing-based information is controlled by the level of neuromodulation. Specifically, we find better recall of temporal information for high levels of neuromodulation, while we find better recall of rate-coded spatial patterns for lower neuromodulation, mediated by the selection of different groups of synapses for consolidation. Hence, our results indicate that in minutes to hours after learning, the level of neuromodulation may alter the process of synaptic consolidation to ultimately control which type of information becomes consolidated in the recurrent neural network.
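The tagging-and-capture dependence on neuromodulator level described in this abstract can be caricatured in a few lines; the threshold and linear gain below are illustrative choices, not the paper's calibrated spiking-network model:

```python
def consolidate(tags, neuromodulator, theta=0.5):
    """Toy synaptic tagging-and-capture rule: a synapse's early change
    (its "tag") is converted into a late, consolidated change only when
    enough neuromodulator is present during the capture window.
    `theta` and the linear gain are hypothetical, for illustration only."""
    gain = max(0.0, neuromodulator - theta)  # capture requires neuromodulation
    return [tag * gain for tag in tags]
```

In the paper, the neuromodulator level additionally selects *which* groups of synapses consolidate (timing-coded vs. rate-coded); this sketch only shows the gating itself.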
Affiliation(s)
- Andrew B. Lehr, Jannik Luboeinski, Christian Tetzlaff: Department of Computational Neuroscience; Bernstein Center for Computational Neuroscience; and Department of Computational Synaptic Physiology, University of Göttingen, Göttingen, Germany
|
16
|
Bialas M, Mandziuk J. Spike-Timing-Dependent Plasticity With Activation-Dependent Scaling for Receptive Fields Development. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:5215-5228. [PMID: 33844634; DOI: 10.1109/tnnls.2021.3069683]
Abstract
Spike-timing-dependent plasticity (STDP) is one of the most popular and deeply biologically motivated forms of unsupervised Hebbian-type learning. In this article, we propose a variant of STDP extended by an additional activation-dependent scale factor. The consequent learning rule is an efficient algorithm, which is simple to implement and applicable to spiking neural networks (SNNs). It is demonstrated that the proposed plasticity mechanism combined with competitive learning can serve as an effective mechanism for the unsupervised development of receptive fields (RFs). Furthermore, the relationship between synaptic scaling and lateral inhibition is explored in the context of the successful development of RFs. Specifically, we demonstrate that maintaining a high level of synaptic scaling followed by its rapid increase is crucial for the development of neuronal mechanisms of selectivity. The strength of the proposed solution is assessed in classification tasks performed on the Modified National Institute of Standards and Technology (MNIST) data set with an accuracy level of 94.65% (a single network) and 95.17% (a network committee)-comparable to the state-of-the-art results of single-layer SNN architectures trained in an unsupervised manner. Furthermore, the training process leads to sparse data representation and the developed RFs have the potential to serve as local feature detectors in multilayered spiking networks. We also prove theoretically that when applied to linear Poisson neurons, our rule conserves total synaptic strength, guaranteeing the convergence of the learning process.
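The rule described above is pair-based STDP multiplied by an activation-dependent factor; a minimal sketch, in which the `scale` term is a hypothetical stand-in for the paper's exact scaling, might look like:

```python
import math

def stdp_dw(delta_t, post_activation, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change with activation-dependent scaling.

    delta_t: t_post - t_pre in ms; positive values cause potentiation.
    post_activation: running estimate of postsynaptic activity in [0, 1].
    The multiplicative `scale` is illustrative, not the published factor.
    """
    scale = 1.0 - post_activation  # highly active neurons learn less
    if delta_t >= 0:
        return scale * a_plus * math.exp(-delta_t / tau_plus)
    return -scale * a_minus * math.exp(delta_t / tau_minus)
```

Coupling the learning rate to postsynaptic activation is one way such a rule can conserve total synaptic strength across competing inputs, as the abstract claims for linear Poisson neurons.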
|
17
|
Bologna LL, Smiriglia R, Lupascu CA, Appukuttan S, Davison AP, Ivaska G, Courcol JD, Migliore M. The EBRAINS Hodgkin-Huxley Neuron Builder: An online resource for building data-driven neuron models. Front Neuroinform 2022; 16:991609. [PMID: 36225653; PMCID: PMC9549939; DOI: 10.3389/fninf.2022.991609]
Abstract
In the last decades, brain modeling has been established as a fundamental tool for understanding neural mechanisms and information processing in individual cells and circuits at different scales of observation. Building data-driven brain models requires the availability of experimental data and analysis tools, as well as neural simulation environments and, often, large-scale computing facilities. All these components are rarely found in a comprehensive framework and usually require ad hoc programming. To address this, we developed the EBRAINS Hodgkin-Huxley Neuron Builder (HHNB), a web resource for building single-cell neural models via the extraction of activity features from electrophysiological traces, the optimization of the model parameters via a genetic algorithm executed on high-performance computing facilities, and the simulation of the optimized model in an interactive framework. Thanks to its inherent characteristics, the HHNB facilitates the data-driven model-building workflow and its reproducibility, hence fostering a collaborative approach to brain modeling.
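The single-cell models the HHNB optimizes belong to the Hodgkin-Huxley family; as a reminder of what is being fit, here is a minimal forward-Euler simulation of the classic HH point neuron with textbook squid-axon parameters (the tool fits such parameters to recorded features rather than fixing them by hand, so this is only a sketch of the model class):

```python
import math

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler simulation of the classic Hodgkin-Huxley point neuron.
    i_ext in uA/cm^2; returns the list of membrane voltages (mV)."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3        # max conductances, mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.387    # reversal potentials, mV
    c_m = 1.0                                # membrane capacitance, uF/cm^2
    v, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting state
    vs = []
    for _ in range(int(t_max / dt)):
        # voltage-dependent rate functions (ms^-1)
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        # ionic currents
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        # Euler updates
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        vs.append(v)
    return vs
```

A genetic algorithm, as used by the HHNB, would search over conductances such as `g_na` and `g_k` to match features (spike count, amplitude, timing) extracted from experimental traces.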
Affiliation(s)
- Luca Leonardo Bologna (correspondence), Michele Migliore: Institute of Biophysics, National Research Council, Palermo, Italy
- Shailesh Appukuttan, Andrew P. Davison: Centre National de la Recherche Scientifique, Institut des Neurosciences Paris-Saclay, Université Paris-Saclay, Saclay, France
- Genrich Ivaska, Jean-Denis Courcol: Blue Brain Project, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
|
18
|
Zhou J, Li H, Tian M, Chen A, Chen L, Pu D, Hu J, Cao J, Li L, Xu X, Tian F, Malik M, Xu Y, Wan N, Zhao Y, Yu B. Multi-Stimuli-Responsive Synapse Based on Vertical van der Waals Heterostructures. ACS Applied Materials & Interfaces 2022; 14:35917-35926. [PMID: 35882423; DOI: 10.1021/acsami.2c08335]
Abstract
Brain-inspired intelligent systems demand diverse neuromorphic devices beyond simple functionalities. Merging biomimetic sensing with weight-updating capabilities in artificial synaptic devices represents one of the key research focuses. Here, we report a multiresponsive synapse device that integrates synaptic and optical-sensing functions. The device adopts vertically stacked graphene/h-BN/WSe2 heterostructures, including an ultrahigh-mobility readout layer, a weight-control layer, and a dual-stimuli-responsive layer. The unique structure endows synapse devices with excellent synaptic plasticity, short response time (3 μs), and excellent optical responsivity (105 A/W). To demonstrate the application in neuromorphic computing, handwritten digit recognition was simulated based on an unsupervised spiking neural network (SNN) with a precision of 90.89%, well comparable with the state-of-the-art results. Furthermore, multiterminal neuromorphic devices are demonstrated to mimic dendritic integration and photoswitching logic. Different from other synaptic devices, the research work validates multifunctional integration in synaptic devices, supporting the potential fusion of sensing and self-learning in neuromorphic networks.
Affiliation(s)
- Jiachao Zhou, Hanxi Li, Anzhe Chen, Li Chen, Jiayang Hu, Lingfei Li, Muhammad Malik, Yuda Zhao, Bin Yu: School of Micro-Nano Electronics, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou 310027, China
- Dong Pu, Xinyi Xu, Feng Tian, Yang Xu: School of Micro-Nano Electronics, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou 310027, China; Joint Institute of Zhejiang University and University of Illinois at Urbana-Champaign, Zhejiang University, Haining 314400, China
- Ming Tian, Neng Wan: Key Laboratory of MEMS of Ministry of Education, School of Electronics Science and Engineering, Southeast University, Nanjing 210096, China
- Jiehua Cao: School of Physical Science and Technology, Laboratory of Optoelectronic Materials and Detection Technology, Guangxi Key Laboratory for Relativistic Astrophysics, Guangxi University, Nanning 530004, China
|
19
|
Grillo M, Geminiani A, Alessandro C, D'Angelo E, Pedrocchi A, Casellato C. Bayesian Integration in a Spiking Neural System for Sensorimotor Control. Neural Comput 2022; 34:1893-1914. [PMID: 35896162; DOI: 10.1162/neco_a_01525]
Abstract
The brain continuously estimates the state of body and environment, with specific regions that are thought to act as Bayesian estimator, optimally integrating noisy and delayed sensory feedback with sensory predictions generated by the cerebellum. In control theory, Bayesian estimators are usually implemented using high-level representations. In this work, we designed a new spike-based computational model of a Bayesian estimator. The state estimator receives spiking activity from two neural populations encoding the sensory feedback and the cerebellar prediction, and it continuously computes the spike variability within each population as a reliability index of the signal these populations encode. The state estimator output encodes the current state estimate. We simulated a reaching task at different stages of cerebellar learning. The activity of the sensory feedback neurons encoded a noisy version of the trajectory after actual movement, with an almost constant intrapopulation spiking variability. Conversely, the activity of the cerebellar output neurons depended on the phase of the learning process. Before learning, they fired at their baseline not encoding any relevant information, and the variability was set to be higher than that of the sensory feedback (more reliable, albeit delayed). When learning was complete, their activity encoded the trajectory before the actual execution, providing an accurate sensory prediction; in this case, the variability was set to be lower than that of the sensory feedback. The state estimator model optimally integrated the neural activities of the afferent populations, so that the output state estimate was primarily driven by sensory feedback in prelearning and by the cerebellar prediction in postlearning. It was able to deal even with more complex scenarios, for example, by shifting the dominant source during the movement execution if information availability suddenly changed. 
The proposed tool will be a critical block within integrated spiking, brain-inspired control systems for simulations of sensorimotor tasks.
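At its core, the spike-based estimator above approximates classical precision-weighted (Bayesian) cue fusion, with intrapopulation spiking variability standing in for each signal's variance; the underlying arithmetic is:

```python
def integrate_estimates(x_fb, var_fb, x_pred, var_pred):
    """Precision-weighted fusion of sensory feedback and cerebellar
    prediction: the less variable (more reliable) signal dominates.
    Variables are illustrative scalar stand-ins for population codes."""
    w_fb = 1.0 / var_fb        # precision of sensory feedback
    w_pred = 1.0 / var_pred    # precision of cerebellar prediction
    x_hat = (w_fb * x_fb + w_pred * x_pred) / (w_fb + w_pred)
    var_hat = 1.0 / (w_fb + w_pred)  # fused estimate is always more precise
    return x_hat, var_hat
```

Prelearning (noisy prediction) this weighting is dominated by feedback; postlearning (precise prediction) it is dominated by the cerebellar output, mirroring the simulated reaching results.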
Affiliation(s)
- Massimo Grillo, Alessandra Pedrocchi: Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Alice Geminiani, Claudia Casellato: Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy
- Cristiano Alessandro: Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy; School of Medicine and Surgery/Sport and Exercise Science, University of Milano-Bicocca, 20126 Milan, Italy
- Egidio D'Angelo: Department of Brain and Behavioral Sciences, University of Pavia, 27100 Pavia, Italy; Brain Connectivity Center, IRCCS Mondino Foundation, 27100 Pavia, Italy
|
20
|
Abstract
The nematode worm Caenorhabditis elegans has a relatively simple neural system for analysis of information transmission from sensory organ to muscle fiber. Consequently, this study includes an example of a neural circuit from the nematode worm, and a procedure is shown for measuring its information optimality by use of a logic gate model. This approach is useful where the assumptions are applicable for a neural circuit, and also for choosing between competing mathematical hypotheses that explain the function of a neural circuit. In this latter case, the logic gate model can estimate computational complexity and distinguish which of the mathematical models require fewer computations. In addition, the concept of information optimality is generalized to other biological systems, along with an extended discussion of its role in genetic-based pathways of organisms.
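A toy version of the logic-gate approach, with purely illustrative gate assignments (not mapped to named C. elegans neurons), could look like the following; the gate count serves as the kind of computational-complexity proxy the abstract describes for comparing competing hypotheses:

```python
def gate_circuit(sensor_a, sensor_b, inhibitor):
    """Toy logic-gate model of a sensory-organ-to-muscle pathway: the
    muscle fires if either sensor is active (OR) and the inhibitory
    interneuron is silent (AND NOT). All names are hypothetical."""
    interneuron = sensor_a or sensor_b    # OR gate
    return interneuron and not inhibitor  # AND-NOT gate

def gate_count(gates):
    """Complexity proxy: a model needing fewer gates is preferred."""
    return len(gates)
```

Two candidate mathematical models of the same circuit could then be compared by `gate_count` after each is reduced to its gate-level form.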
|
21
|
Antonietti A, Geminiani A, Negri E, D'Angelo E, Casellato C, Pedrocchi A. Brain-Inspired Spiking Neural Network Controller for a Neurorobotic Whisker System. Front Neurorobot 2022; 16:817948. [PMID: 35770277; PMCID: PMC9234954; DOI: 10.3389/fnbot.2022.817948]
Abstract
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modeling the trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Human Brain Project's Neurorobotics Platform, a simulation platform offering a virtual environment in which to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behavior experimentally recorded in mice.
Affiliation(s)
- Alberto Antonietti (correspondence): Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alice Geminiani, Claudia Casellato: Neurocomputational Laboratory, Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Edoardo Negri: Neurocomputational Laboratory, University of Pavia, Pavia, Italy; Nearlab, Politecnico di Milano, Milan, Italy
- Egidio D'Angelo: Neurocomputational Laboratory, University of Pavia, Pavia, Italy; Brain Connectivity Center, IRCCS Mondino Foundation, Pavia, Italy
- Alessandra Pedrocchi: Nearlab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
|
22
|
Javanshir A, Nguyen TT, Mahmud MAP, Kouzani AZ. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput 2022; 34:1289-1328. [PMID: 35534005; DOI: 10.1162/neco_a_01499]
Abstract
Artificial neural networks (ANNs) have experienced rapid advancement owing to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs, inspired by the mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient and brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands a high amount of power and time. Therefore, hardware designers have developed neuromorphic platforms to execute SNNs in an approach that combines fast processing and low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their varied advantages, such as higher flexibility, shorter design time, and excellent stability. This review aims to describe recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
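Two building blocks that recur throughout such reviews, the leaky integrate-and-fire (LIF) neuron and Poisson rate encoding, can be sketched as follows (dimensionless units and parameter values are illustrative):

```python
import random

def lif_spike_train(input_current, t_max=100.0, dt=0.1, tau=10.0,
                    v_rest=0.0, v_thresh=1.0, r=1.0):
    """Leaky integrate-and-fire neuron, the workhorse model of most SNNs.
    Integrates input current, fires and resets at threshold.
    Returns the list of spike times (ms)."""
    v, spikes, t = v_rest, [], 0.0
    while t < t_max:
        v += dt * (-(v - v_rest) + r * input_current) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest  # reset after spike
        t += dt
    return spikes

def rate_encode(value, t_max=100.0, dt=1.0, max_rate=0.1, rng=None):
    """Rate (Poisson) encoding: a value in [0, 1] becomes a spike train
    whose per-step firing probability is proportional to the value."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [t * dt for t in range(int(t_max / dt))
            if rng.random() < value * max_rate]
```

Exactly this pair, a neuron model plus an input-encoding scheme, is what neuromorphic platforms (digital, analog, or FPGA-based) implement in hardware.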
Affiliation(s)
- Thanh Thi Nguyen: School of Information Technology, Deakin University (Burwood Campus), Burwood, VIC 3125, Australia
- M A Parvez Mahmud, Abbas Z Kouzani: School of Engineering, Deakin University, Geelong, VIC 3216, Australia
|
23
|
Pestell N, Griffith T, Lepora NF. Artificial SA-I and RA-I afferents for tactile sensing of ridges and gratings. J R Soc Interface 2022; 19:20210822. [PMID: 35382575; PMCID: PMC8984303; DOI: 10.1098/rsif.2021.0822]
Abstract
For robot touch to reach the capabilities of human touch, artificial tactile sensors may require transduction principles like those of natural tactile afferents. Here we propose that a biomimetic tactile sensor (the TacTip) could provide suitable artificial analogues of the tactile skin dynamics, afferent responses and population encoding. Our three-dimensionally printed sensor skin is based on the physiology of the dermal-epidermal interface with an underlying mesh of biomimetic intermediate ridges and dermal papillae, comprising inner pins tipped with markers. Slowly adapting SA-I activity is modelled by marker displacements and rapidly adapting RA-I activity by marker speeds. We test the biological plausibility of these artificial population codes with three classic experiments used for natural touch: (1a) responses to normal pressure to test adaptation of single afferents and spatial modulation across the population; (1b) responses to bars, edges and gratings to compare with measurements from monkey primary afferents; and (2) discrimination of grating orientation to compare with human perceptual performance. Our results show a match between artificial and natural touch at single afferent, population and perceptual levels. As expected, natural skin is more sensitive, which raises a challenge to fabricate a biomimetic fingertip that demonstrates human sensitivity using the transduction principles of human touch.
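The artificial population codes above reduce to simple kinematic readouts of the TacTip's internal markers: SA-I activity from marker displacement, RA-I activity from marker speed. A schematic version (2-D marker coordinates and the time step are illustrative):

```python
def sa_i_response(rest_positions, current_positions):
    """SA-I analogue: sustained displacement of each marker/pin from its
    rest position, which persists during static pressure."""
    return [((x - x0)**2 + (y - y0)**2) ** 0.5
            for (x0, y0), (x, y) in zip(rest_positions, current_positions)]

def ra_i_response(prev_positions, current_positions, dt):
    """RA-I analogue: marker speed, high at contact onset/offset and
    near zero during a static hold."""
    return [((x - px)**2 + (y - py)**2) ** 0.5 / dt
            for (px, py), (x, y) in zip(prev_positions, current_positions)]
```

This separation reproduces the classic slowly-adapting vs. rapidly-adapting distinction: pressing and holding keeps SA-I output high while RA-I output returns to zero.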
Affiliation(s)
- Nicholas Pestell, Thom Griffith, Nathan F Lepora: Department of Engineering Mathematics and Bristol Robotics Laboratory, University of Bristol, Bristol BS8 1QU, UK
|
24
|
Introducing principles of synaptic integration in the optimization of deep neural networks. Nat Commun 2022; 13:1885. [PMID: 35393422; PMCID: PMC8989917; DOI: 10.1038/s41467-022-29491-2]
Abstract
Plasticity circuits in the brain are known to be influenced by the distribution of the synaptic weights through the mechanisms of synaptic integration and local regulation of synaptic strength. However, the complex interplay of stimulation-dependent plasticity with local learning signals is disregarded by most of the artificial neural network training algorithms devised so far. Here, we propose a novel biologically inspired optimizer for artificial and spiking neural networks that incorporates key principles of synaptic plasticity observed in cortical dendrites: GRAPES (Group Responsibility for Adjusting the Propagation of Error Signals). GRAPES implements a weight-distribution-dependent modulation of the error signal at each node of the network. We show that this biologically inspired mechanism leads to a substantial improvement of the performance of artificial and spiking networks with feedforward, convolutional, and recurrent architectures; it mitigates catastrophic forgetting; and it is optimally suited for dedicated hardware implementations. Overall, our work indicates that reconciling neurophysiology insights with machine intelligence is key to boosting the performance of neural networks. Tasks involving continual learning and adaptation to real-time scenarios remain challenging for artificial neural networks, in contrast to real brains. The authors propose a brain-inspired optimizer based on mechanisms of synaptic integration and strength regulation that improves the performance of both artificial and spiking neural networks.
|
25
|
Steinmetz ST, Layton OW, Powell NV, Fajen BR. A Dynamic Efficient Sensory Encoding Approach to Adaptive Tuning in Neural Models of Optic Flow Processing. Front Comput Neurosci 2022; 16:844289. [PMID: 35431848; PMCID: PMC9011806; DOI: 10.3389/fncom.2022.844289]
Abstract
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that model performance with dynamic tuning yielded more accurate, shorter latency heading estimates compared to the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
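One standard way to realize efficient sensory encoding of a drifting stimulus distribution, used here as an illustrative stand-in for the paper's tuning-curve update, is to place unit centers at the quantiles of recently observed stimuli (histogram equalization), so each unit responds equally often:

```python
def retune_centers(recent_stimuli, n_units):
    """Dynamic efficient coding sketch: tuning-curve centers sit at the
    quantiles of the recent stimulus history, concentrating resolution
    where stimulus values (e.g., optic-flow speeds) are currently dense.
    A sliding-window update rule would call this repeatedly."""
    s = sorted(recent_stimuli)
    # i-th center at the ((i + 0.5) / n_units)-quantile of recent values
    return [s[min(len(s) - 1, int((i + 0.5) / n_units * len(s)))]
            for i in range(n_units)]
```

Re-running this as the window of recent optic-flow speeds drifts keeps the population sensitive, which is the mechanism credited for the shorter-latency, more accurate heading estimates.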
Affiliation(s)
- Scott T. Steinmetz (correspondence), Nathaniel V. Powell, Brett R. Fajen: Cognitive Science Department, Rensselaer Polytechnic Institute, Troy, NY, United States
- Oliver W. Layton: Computer Science Department, Colby College, Waterville, ME, United States
|
26
|
An STDP-based encoding method for associative and composite data. Sci Rep 2022; 12:4666. [PMID: 35304537; PMCID: PMC8933433; DOI: 10.1038/s41598-022-08469-6]
Abstract
Spike-timing-dependent plasticity (STDP) is a biological process of synaptic modification driven by the order and timing of firing between neurons. One neurodynamical role of STDP is to form a macroscopic geometrical structure in the neuronal state space in response to a periodic input (Susman et al., Nat. Commun. 10(1):1-9, 2019; Yoon & Kim, STDP-based associative memory formation and retrieval, arXiv:2107.02429v2, 2021). In this work, we propose a practical memory model based on STDP that can store and retrieve high-dimensional associative data. The model combines STDP dynamics with an encoding scheme for distributed representations and can handle multiple composite data items in a continuous manner. In the auto-associative memory task, where a group of images is continuously streamed to the model, the images are successfully retrieved from an oscillating neural state whenever a proper cue is given. In the second task, which deals with semantic memories embedded from sentences, the results show that words can recall multiple sentences simultaneously or one exclusively, depending on their grammatical relations.
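The model builds on the standard pair-based STDP window, in which the sign and size of a weight change depend on the relative timing of pre- and postsynaptic spikes. A minimal sketch of that window (parameter values are illustrative, not those of the proposed model):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.10, a_minus=0.12, tau=20.0):
    """Pair-based STDP window. delta_t = t_post - t_pre in ms:
    pre-before-post (delta_t > 0) potentiates, post-before-pre
    depresses, both decaying exponentially with |delta_t|."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

dw = stdp_dw([5.0, -5.0, 50.0])
```

Tightly ordered spike pairs produce large changes; widely separated pairs barely move the weight.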
27. Yu Q, Li S, Tang H, Wang L, Dang J, Tan KC. Toward Efficient Processing and Learning With Spikes: New Approaches for Multispike Learning. IEEE Transactions on Cybernetics 2022; 52:1364-1376. [PMID: 32356771; DOI: 10.1109/tcyb.2020.2984888] [Citation(s) in RCA: 3]
Abstract
Spikes are the currency of information transmission and processing in central nervous systems. They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency attracts increasing attention in the field of neuromorphic computing. However, efficient processing and learning of discrete spikes remains a challenging problem. In this article, we make our contributions toward this direction. A simplified spiking neuron model is first introduced, with the effects of both synaptic input and firing output on the membrane potential modeled with an impulse function. An event-driven scheme is then presented to further improve processing efficiency. Based on this neuron model, we propose two new multispike learning rules which demonstrate better performance than other baselines on various tasks, including association, classification, and feature detection. In addition to efficiency, our learning rules demonstrate high robustness against strong noise of different types. They can also be generalized to different spike coding schemes for the classification task; notably, a single neuron is capable of solving multicategory classifications with our learning rules. In the feature detection task, we re-examine the ability of unsupervised spike-timing-dependent plasticity, present its limitations, and report a new phenomenon of lost selectivity. In contrast, our proposed learning rules reliably solve the task over a wide range of conditions without specific constraints. Moreover, our rules not only detect features but also discriminate between them. The improved performance of our methods makes them a preferable choice for neuromorphic computing.
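As a rough illustration of the event-driven style of processing described here (not the authors' exact neuron model or learning rules), a membrane potential with impulse-like updates can be advanced only when input spikes arrive, instead of on a fixed time grid:

```python
import math

def run_neuron(events, tau=10.0, theta=1.0):
    """Event-driven simplified neuron. `events` is a time-sorted list of
    (time_ms, synaptic_weight). The membrane potential decays
    exponentially between events and is updated only when an event
    arrives; crossing the threshold emits an output spike and subtracts
    theta (an impulse-like effect of the firing output)."""
    v, t_last, out_spikes = 0.0, 0.0, []
    for t, w in events:
        v *= math.exp(-(t - t_last) / tau)   # decay since last event
        v += w                               # impulse from the input spike
        if v >= theta:
            out_spikes.append(t)
            v -= theta
        t_last = t
    return out_spikes

# Near-coincident inputs cross threshold; widely spaced ones decay away.
coincident = run_neuron([(1.0, 0.6), (2.0, 0.6)])
sparse = run_neuron([(1.0, 0.6), (100.0, 0.6)])
```

No computation is spent on the silent intervals between events, which is the efficiency argument for event-driven schemes.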
28. Jia S, Li X, Huang T, Liu JK, Yu Z. Representing the dynamics of high-dimensional data with non-redundant wavelets. Patterns 2022; 3:100424. [PMID: 35510192; PMCID: PMC9058841; DOI: 10.1016/j.patter.2021.100424] [Citation(s) in RCA: 0] [Received: 08/26/2021; Accepted: 12/09/2021]
Abstract
A crucial question in data science is how to extract meaningful information embedded in high-dimensional data into a low-dimensional set of features that can represent the original data at different levels. Wavelet analysis is a pervasive method for decomposing time-series signals into a few levels with detailed temporal resolution. However, the resulting wavelet features are intertwined and over-represented across levels for each sample and across different samples within one population. Here, using neuroscience data comprising simulated spikes, experimental spikes, calcium imaging signals, and human electrocorticography signals, we leveraged conditional mutual information between wavelet features for feature selection. The selected features were verified to be meaningful by decoding stimulus or condition with high accuracy from only a small feature set. These results provide a new way of using wavelet analysis to extract essential features of the dynamics of spatiotemporal neural data, which can then support machine learning models built on representative features.
- WCMI can extract meaningful information from high-dimensional data
- Extracted features from neural signals are non-redundant
- Simple decoders can read out these features with high accuracy
One of the essential questions in data science is how to extract meaningful information from high-dimensional data. A useful approach is to represent the data with a few features that retain the crucial information. The defining property of spatiotemporal data is its ever-changing dynamics in time. Wavelet analysis, a classical method for disentangling time series, can capture these temporal dynamics in detail. Here, we leveraged conditional mutual information between wavelet features to select a small subset of non-redundant features. We demonstrated the efficiency and effectiveness of the selected features on various types of neuroscience data with different sampling frequencies, at the level of single cells, cell populations, and coarse-scale brain activity. Our results shed new light on representing the dynamics of spatiotemporal data with a few fundamental features extracted by wavelet analysis, which may have wide implications for other types of data with rich temporal dynamics.
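The core selection idea can be sketched generically: greedily pick features that maximize conditional mutual information (CMI) with the label given the features already chosen, so a redundant duplicate of a chosen feature contributes nothing. This is a simplified plug-in version on discretized features, not the paper's full wavelet pipeline:

```python
import numpy as np

def entropy(*cols):
    """Joint entropy in bits of one or more discrete columns."""
    counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)[1]
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def cmi(x, y, z):
    """Conditional mutual information I(X; Y | Z) from joint entropies."""
    return entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)

def greedy_select(features, label, k):
    """Greedily pick k features, each maximizing CMI with the label given
    the features already chosen (folded into one conditioning variable)."""
    chosen, context = [], np.zeros_like(label)
    for _ in range(k):
        scores = [cmi(f, label, context) if i not in chosen else -np.inf
                  for i, f in enumerate(features)]
        best = int(np.argmax(scores))
        chosen.append(best)
        # fold the chosen feature into the conditioning variable
        context = context * (features[best].max() + 1) + features[best]
    return chosen

rng = np.random.default_rng(1)
label = rng.integers(0, 2, 300)
informative = label.copy()
redundant = label.copy()          # exact duplicate of `informative`
noise = rng.integers(0, 2, 300)
picked = greedy_select([informative, redundant, noise], label, 2)
```

Once the informative feature is chosen, the duplicate's conditional information drops to zero, so it is never selected again, which is the non-redundancy property the abstract emphasizes.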
29. Input rate encoding and gain control in dendrites of neocortical pyramidal neurons. Cell Rep 2022; 38:110382. [PMID: 35172157; PMCID: PMC8967317; DOI: 10.1016/j.celrep.2022.110382] [Citation(s) in RCA: 5] [Received: 04/26/2021; Accepted: 01/23/2022]
Abstract
Elucidating how neurons encode network activity is essential to understanding how the brain processes information. Neocortical pyramidal cells receive excitatory input onto spines distributed along dendritic branches. Local dendritic branch nonlinearities can boost the response to spatially clustered and synchronous input, but how this translates into the integration of patterns of ongoing activity remains unclear. To examine dendritic integration under naturalistic stimulus regimes, we use two-photon glutamate uncaging to repeatedly activate multiple dendritic spines at random intervals. In the proximal dendrites of two populations of layer 5 pyramidal neurons in the mouse motor cortex, spatially restricted synchrony is not a prerequisite for dendritic boosting. Branches encode afferent inputs with distinct rate sensitivities depending upon cell and branch type. Thus, inputs distributed along a dendritic branch can recruit supralinear boosting and the window of this nonlinearity may provide a mechanism by which dendrites can preferentially amplify slow-frequency network oscillations.
30. Xu Z, Zhou X, Xu Y, Wu W. Removing nonlinear misalignment in neuronal spike trains using the Fisher-Rao registration framework. J Neurosci Methods 2022; 367:109436. [PMID: 34890697; DOI: 10.1016/j.jneumeth.2021.109436] [Citation(s) in RCA: 0] [Received: 07/03/2021; Accepted: 12/02/2021]
Abstract
BACKGROUND: The temporal precision of neural spike train data is critically important for understanding functional mechanisms in the nervous system. However, the timing variability of spiking activity can be highly nonlinear in practical observations due to behavioral variability or unobserved/unobservable cognitive states.
NEW METHOD: In this study, we propose to adopt a powerful nonlinear method, referred to as Fisher-Rao Registration (FRR), to remove such nonlinear phase variability in discrete neuronal spike trains. We also develop a smoothing procedure for the discrete spike train data in order to use the FRR framework.
COMPARISON WITH EXISTING METHODS: We systematically compare the FRR with state-of-the-art linear and nonlinear methods in terms of model efficiency and effectiveness.
RESULTS: We show that the FRR has superior performance, with the advantages well illustrated on simulated and real experimental data.
CONCLUSIONS: The FRR framework provides better alignment for understanding the temporal variability in neuronal spike trains.
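The smoothing step can be illustrated generically: a discrete spike train is turned into a smooth, normalised intensity function that a registration framework can then align. Kernel choice and bandwidth here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def smooth_spike_train(spike_times, grid, bandwidth=0.05):
    """Gaussian-kernel smoothing of discrete spike times onto `grid`,
    normalised to unit area so trains with different spike counts are
    comparable as density functions."""
    diffs = grid[:, None] - np.asarray(spike_times, dtype=float)[None, :]
    f = np.exp(-0.5 * (diffs / bandwidth) ** 2).sum(axis=1)
    area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoid rule
    return f / area

grid = np.linspace(0.0, 1.0, 201)
density = smooth_spike_train([0.25, 0.5, 0.75], grid)
```

The resulting density is a continuous function of time, which is the kind of object that elastic alignment methods operate on.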
Affiliation(s)
- Zishen Xu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Xinyu Zhou, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Yiqi Xu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
- Wei Wu, Department of Statistics, Florida State University, 117 N Woodward Ave., Tallahassee, FL 32306-4330, USA
31. Recent advances in machine learning for maximal oxygen uptake (VO2 max) prediction: A review. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2022.100863] [Citation(s) in RCA: 1]
32. Timsit Y, Sergeant-Perthuis G. Towards the Idea of Molecular Brains. Int J Mol Sci 2021; 22(21):11868. [PMID: 34769300; PMCID: PMC8584932; DOI: 10.3390/ijms222111868] [Citation(s) in RCA: 14] [Received: 09/22/2021; Accepted: 10/28/2021]
Abstract
How can single cells without nervous systems perform complex behaviours such as habituation, associative learning and decision making, which are considered the hallmark of animals with a brain? Are there molecular systems that underlie cognitive properties equivalent to those of the brain? This review follows the development of the idea of molecular brains from Darwin’s “root brain hypothesis”, through bacterial chemotaxis, to the recent discovery of neuron-like r-protein networks in the ribosome. By combining a structural biology view with a Bayesian brain approach, this review explores the evolutionary labyrinth of information processing systems across scales. Ribosomal protein networks open a window into what were probably the earliest signalling systems to emerge before the radiation of the three kingdoms. While ribosomal networks are characterised by long-lasting interactions between their protein nodes, cell signalling networks are essentially based on transient interactions. As a corollary, while signals propagated in persistent networks may be ephemeral, networks whose interactions are transient constrain signals diffusing into the cytoplasm to be durable in time, such as post-translational modifications of proteins or second messenger synthesis. The duration and nature of the signals, in turn, implies different mechanisms for the integration of multiple signals and decision making. Evolution then reinvented networks with persistent interactions with the development of nervous systems in metazoans. Ribosomal protein networks and simple nervous systems display architectural and functional analogies whose comparison could suggest scale invariance in information processing. At the molecular level, the significant complexification of eukaryotic ribosomal protein networks is associated with a burst in the acquisition of new conserved aromatic amino acids. 
Knowing that aromatic residues play a critical role in allosteric receptors and channels, this observation suggests a general role of π systems and their interactions with charged amino acids in multiple signal integration and information processing. We think that these findings may provide the molecular basis for designing future computers with organic processors.
Affiliation(s)
- Youri Timsit (corresponding author), Aix Marseille Université, Université de Toulon, CNRS, IRD, MIO UM110, 13288 Marseille, France; Research Federation for the Study of Global Ocean Systems Ecology and Evolution, FR2022/Tara GOSEE, 3 rue Michel-Ange, 75016 Paris, France
- Grégoire Sergeant-Perthuis, Institut de Mathématiques de Jussieu—Paris Rive Gauche (IMJ-PRG), UMR 7586, CNRS-Université Paris Diderot, 75013 Paris, France
33. Blazek PJ, Lin MM. Explainable neural networks that simulate reasoning. Nature Computational Science 2021; 1:607-618. [PMID: 38217134; DOI: 10.1038/s43588-021-00132-w] [Citation(s) in RCA: 6] [Received: 12/22/2020; Accepted: 08/16/2021]
Abstract
The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.
Affiliation(s)
- Paul J Blazek, Green Center for Systems Biology, Department of Bioinformatics, and Department of Biophysics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Milo M Lin, Green Center for Systems Biology, Department of Bioinformatics, Department of Biophysics, and Center for Alzheimer's and Neurodegenerative Diseases, University of Texas Southwestern Medical Center, Dallas, TX, USA
34. Santos-Mayo A, Moratti S, de Echegaray J, Susi G. A Model of the Early Visual System Based on Parallel Spike-Sequence Detection, Showing Orientation Selectivity. Biology 2021; 10(8):801. [PMID: 34440033; PMCID: PMC8389551; DOI: 10.3390/biology10080801] [Citation(s) in RCA: 1] [Received: 07/08/2021; Accepted: 08/16/2021]
Abstract
Simple Summary
A computational model of primates’ early visual processing, showing orientation selectivity, is presented. The system integrates two key elements: (1) a neuromorphic spike-decoding structure that closely resembles the circuitry between layers IV and II/III of the primary visual cortex, both in topology and operation; (2) plasticity of intrinsic excitability, to embed recent findings about the operation of the same area. The model is proposed as a tool for the analysis and reproduction of the orientation selectivity phenomenon, whose underlying neuronal-level computational mechanisms are today the subject of intense scrutiny. In response to rotated Gabor patches, the model exhibits realistic orientation tuning curves and reproduces responses similar to those found in neurophysiological recordings from the primary visual cortex under the same task, considering different stages of the network. This demonstrates its ability to capture the mechanisms underlying the evoked response in the primary visual cortex. Our tool is available online and can be expanded to other experiments, using a dedicated software library developed by the authors, to elucidate the computational mechanisms underlying orientation selectivity.
Abstract
Since the first half of the twentieth century, numerous studies have been conducted on how the visual cortex encodes basic image features. One of the hallmarks of basic feature extraction is the phenomenon of orientation selectivity, whose underlying neuronal-level computational mechanisms remain partially unclear despite intensive investigation. In this work we present a reduced visual system model (RVSM) of the first level of scene analysis, involving the retina, the lateral geniculate nucleus and the primary visual cortex (V1), showing orientation selectivity. The detection core of the RVSM is the neuromorphic spike-decoding structure MNSD, which is able to learn and recognize parallel spike sequences and closely resembles the neuronal microcircuits of V1 in both topology and operation. This structure is equipped with plasticity of intrinsic excitability to embed recent findings about V1 operation. The RVSM, which embeds 81 groups of MNSD arranged in 4 oriented columns, is tested with sets of rotated Gabor patches as input. Finally, synthetic visual evoked activity generated by the RVSM is compared with real neurophysiological signals from V1: (1) postsynaptic activity of human subjects obtained by magnetoencephalography and (2) spiking activity of macaques obtained by multi-tetrode arrays. The system is implemented using the NEST simulator. The results attest to a good level of resemblance between the model response and real neurophysiological recordings. As the RVSM is available online, and the model parameters can be customized by the user, we propose it as a tool to elucidate the computational mechanisms underlying orientation selectivity.
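Rotated Gabor patches of the kind used as input here follow the standard Gabor definition: a sinusoidal grating at a given orientation under a Gaussian envelope. A sketch with illustrative parameters, not the exact stimuli of the study:

```python
import numpy as np

def gabor_patch(size=64, theta=0.0, wavelength=8.0, sigma=8.0, phase=0.0):
    """Gabor patch: a grating at orientation `theta` (radians)
    multiplied by an isotropic Gaussian envelope."""
    r = np.arange(size) - size / 2
    x, y = np.meshgrid(r, r)
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate the grating axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength + phase)

patch = gabor_patch(theta=np.pi / 4)   # 45-degree orientation
```

Sweeping `theta` over a set of angles yields the rotated stimulus set against which orientation tuning curves can be measured.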
Affiliation(s)
- Alejandro Santos-Mayo, Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain; Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Stephan Moratti, Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain; Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain; Laboratory of Clinical Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain
- Javier de Echegaray, Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain; Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain
- Gianluca Susi (corresponding author), Laboratory of Cognitive and Computational Neuroscience, Center for Biomedical Technology, Technical University of Madrid, 28040 Madrid, Spain; Department of Experimental Psychology, Faculty of Psychology, Complutense University of Madrid, 28040 Madrid, Spain; Department of Civil Engineering and Computer Science, University of Rome “Tor Vergata”, 00133 Rome, Italy
35. Comşa IM, Versari L, Fischbacher T, Alakuijala J. Spiking Autoencoders With Temporal Coding. Front Neurosci 2021; 15:712667. [PMID: 34483829; PMCID: PMC8414972; DOI: 10.3389/fnins.2021.712667] [Citation(s) in RCA: 4] [Received: 05/21/2021; Accepted: 07/06/2021]
Abstract
Spiking neural networks with temporal coding schemes process information based on the relative timing of neuronal spikes. In supervised learning tasks, temporal coding allows learning through backpropagation with exact derivatives, and achieves accuracies on par with conventional artificial neural networks. Here we introduce spiking autoencoders with temporal coding and pulses, trained using backpropagation to store and reconstruct images with high fidelity from compact representations. We show that spiking autoencoders with a single layer are able to effectively represent and reconstruct images from the neuromorphically-encoded MNIST and FMNIST datasets. We explore the effect of different spike time target latencies, data noise levels and embedding sizes, as well as the classification performance from the embeddings. The spiking autoencoders achieve results similar to or better than conventional non-spiking autoencoders. We find that inhibition is essential in the functioning of the spiking autoencoders, particularly when the input needs to be memorised for a longer time before the expected output spike times. To reconstruct images with a high target latency, the network learns to accumulate negative evidence and to use the pulses as excitatory triggers for producing the output spikes at the required times. Our results highlight the potential of spiking autoencoders as building blocks for more complex biologically-inspired architectures. We also provide open-source code for the model.
36. Isbister JB, Reyes-Puerta V, Sun JJ, Horenko I, Luhmann HJ. Clustering and control for adaptation uncovers time-warped spike time patterns in cortical networks in vivo. Sci Rep 2021; 11:15066. [PMID: 34326363; PMCID: PMC8322153; DOI: 10.1038/s41598-021-94002-0] [Citation(s) in RCA: 3] [Received: 12/07/2020; Accepted: 06/29/2021]
Abstract
How information in the nervous system is encoded by patterns of action potentials (i.e. spikes) remains an open question. Multi-neuron patterns of single spikes are a prime candidate for spike time encoding but their temporal variability requires further characterisation. Here we show how known sources of spike count variability affect stimulus-evoked spike time patterns between neurons separated over multiple layers and columns of adult rat somatosensory cortex in vivo. On subsets of trials (clusters) and after controlling for stimulus-response adaptation, spike time differences between pairs of neurons are “time-warped” (compressed/stretched) by trial-to-trial changes in shared excitability, explaining why fixed spike time patterns and noise correlations are seldom reported. We show that predicted cortical state is correlated between groups of 4 neurons, introducing the possibility of spike time pattern modulation by population-wide trial-to-trial changes in excitability (i.e. cortical state). Under the assumption of state-dependent coding, we propose an improved potential encoding capacity.
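The compression/stretching of spike time patterns can be illustrated with a toy estimator: fit a single warp factor mapping a reference pattern onto a given trial by least squares through the origin. This is only a cartoon of the idea; the paper's clustering and adaptation-control analysis is considerably more involved:

```python
import numpy as np

def fit_warp_factor(reference_times, trial_times):
    """Least-squares scalar k minimising ||k * reference - trial||:
    the shared compression (k < 1) or stretch (k > 1) applied to a
    spike time pattern on this trial."""
    r = np.asarray(reference_times, dtype=float)
    t = np.asarray(trial_times, dtype=float)
    return float(r @ t / (r @ r))

reference = np.array([10.0, 25.0, 40.0])      # ms after stimulus onset
stretched_trial = reference * 1.2             # e.g. lower shared excitability
k = fit_warp_factor(reference, stretched_trial)
```

Under such a warp, absolute spike times shift trial to trial while the relative pattern is preserved, which is why fixed spike time patterns are seldom seen without accounting for it.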
Affiliation(s)
- James B Isbister, Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, Department of Experimental Psychology, University of Oxford, Oxford, UK; The Blue Brain Project, École Polytechnique Fédérale de Lausanne, 1202 Geneva, Switzerland
- Vicente Reyes-Puerta, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
- Jyh-Jang Sun, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany; NERF, Kapeldreef 75, 3001 Leuven, Belgium; imec, Remisebosweg 1, 3001 Leuven, Belgium
- Illia Horenko, Faculty of Informatics, Università della Svizzera italiana, Via G. Buffi 13, 6900 Lugano, Switzerland
- Heiko J Luhmann, Institute of Physiology, University Medical Center, Johannes Gutenberg University, Mainz, Germany
37. Movement Analysis for Neurological and Musculoskeletal Disorders Using Graph Convolutional Neural Network. Future Internet 2021. [DOI: 10.3390/fi13080194] [Citation(s) in RCA: 3]
Abstract
Using optical motion capture and wearable sensors is a common way to analyze impaired movement in individuals with neurological and musculoskeletal disorders. However, these systems are expensive and often require highly trained professionals to identify specific impairments. In this work, we proposed a graph convolutional neural network that mimics the intuition of physical therapists to identify patient-specific impairments from video of a patient. In addition, two modeling approaches are compared: a graph convolutional network applied solely to skeleton input data, and a graph convolutional network combined with a 1-dimensional convolutional neural network (1D-CNN). Experiments on the dataset showed that the proposed method not only improves the correlation of the predicted gait measures with the ground truth values (speed = 0.791, gait deviation index (GDI) = 0.792) but also enables faster training with fewer parameters. In conclusion, the proposed method shows the possibility of using video-based data to assess neurological and musculoskeletal disorders with acceptable accuracy instead of depending on expensive and labor-intensive optical motion capture systems.
38. Gerum RC, Schilling A. Integration of Leaky-Integrate-and-Fire Neurons in Standard Machine Learning Architectures to Generate Hybrid Networks: A Surrogate Gradient Approach. Neural Comput 2021; 33:2827-2852. [PMID: 34280298; DOI: 10.1162/neco_a_01424] [Citation(s) in RCA: 4] [Received: 09/08/2020; Accepted: 04/26/2021]
Abstract
Up to now, modern machine learning (ML) has been based on approximating big data sets with high-dimensional functions, taking advantage of huge computational resources. We show that biologically inspired neuron models such as the leaky-integrate-and-fire (LIF) neuron provide novel and efficient ways of information processing. They can be integrated in machine learning models and are a potential target to improve ML performance. Thus, we have derived simple update rules for LIF units to numerically integrate the differential equations. We apply a surrogate gradient approach to train the LIF units via backpropagation. We demonstrate that tuning the leak term of the LIF neurons can be used to run the neurons in different operating modes, such as simple signal integrators or coincidence detectors. Furthermore, we show that the constant surrogate gradient, in combination with tuning the leak term of the LIF units, can be used to achieve the learning dynamics of more complex surrogate gradients. To prove the validity of our method, we applied it to established image data sets (the Oxford 102 flower data set, MNIST), implemented various network architectures, used several input data encodings and demonstrated that the method is suitable to achieve state-of-the-art classification performance. We provide our method as well as further surrogate gradient methods to train spiking neural networks via backpropagation as an open-source KERAS package to make it available to the neuroscience and machine learning community. To increase the interpretability of the underlying effects and thus make a small step toward opening the black box of machine learning, we provide interactive illustrations, with the possibility of systematically monitoring the effects of parameter changes on the learning characteristics.
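A minimal sketch of a discrete LIF update rule and a constant surrogate gradient in the spirit described (plain NumPy rather than the authors' open-source Keras package; parameter values and the surrogate window are illustrative):

```python
import numpy as np

def lif_step(v, i_in, leak=0.9, v_th=1.0):
    """One discrete-time LIF update: leaky integration, threshold, reset
    to zero. A leak near 1 makes the unit a signal integrator; a small
    leak makes it a coincidence detector."""
    v = leak * v + i_in
    spikes = (v >= v_th).astype(float)
    v = v * (1.0 - spikes)              # reset to zero where a spike occurred
    return v, spikes

def constant_surrogate_grad(v, v_th=1.0, width=0.5):
    """Constant surrogate for the non-differentiable spike function:
    gradient 1 within a window around threshold, 0 outside."""
    return (np.abs(v - v_th) < width).astype(float)

v = np.zeros(2)
v, s1 = lif_step(v, np.array([0.5, 1.2]))   # second unit crosses threshold
v, s2 = lif_step(v, np.array([0.6, 0.0]))   # first unit now accumulates enough
```

The forward pass uses the hard threshold; during backpropagation the surrogate replaces its zero-almost-everywhere derivative so gradients can flow through spiking units.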
Affiliation(s)
- Richard C Gerum, Department of Physics and Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
- Achim Schilling, Experimental Otolaryngology, Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group at the Chair of English Philology and Linguistics, Friedrich-Alexander University Erlangen-Nürnberg, 91054 Erlangen, Germany; Laboratoire Neurosciences Sensorielles et Cognitives, Aix-Marseille University, 13331 Marseille, France
39. Downer JD, Bigelow J, Runfeldt MJ, Malone BJ. Temporally precise population coding of dynamic sounds by auditory cortex. J Neurophysiol 2021; 126:148-169. [PMID: 34077273; DOI: 10.1152/jn.00709.2020] [Citation(s) in RCA: 5]
Abstract
Fluctuations in the amplitude envelope of complex sounds provide critical cues for hearing, particularly for speech and animal vocalizations. Responses to amplitude modulation (AM) in the ascending auditory pathway have chiefly been described for single neurons. How neural populations might collectively encode and represent information about AM remains poorly characterized, even in primary auditory cortex (A1). We modeled population responses to AM based on data recorded from A1 neurons in awake squirrel monkeys and evaluated how accurately single trial responses to modulation frequencies from 4 to 512 Hz could be decoded as functions of population size, composition, and correlation structure. We found that a population-based decoding model that simulated convergent, equally weighted inputs was highly accurate and remarkably robust to the inclusion of neurons that were individually poor decoders. By contrast, average rate codes based on convergence performed poorly; effective decoding using average rates was only possible when the responses of individual neurons were segregated, as in classical population decoding models using labeled lines. The relative effectiveness of dynamic rate coding in auditory cortex was explained by shared modulation phase preferences among cortical neurons, despite heterogeneity in rate-based modulation frequency tuning. Our results indicate significant population-based synchrony in primary auditory cortex and suggest that robust population coding of the sound envelope information present in animal vocalizations and speech can be reliably achieved even with indiscriminate pooling of cortical responses. These findings highlight the importance of firing rate dynamics in population-based sensory coding. NEW & NOTEWORTHY: Fundamental questions remain about population coding in primary auditory cortex (A1). In particular, issues of spike timing in models of neural populations have been largely ignored. We find that spike timing in response to sound envelope fluctuations is highly similar across neuron populations in A1. This property of shared envelope phase preference allows a simple population model involving unweighted convergence of neuronal responses to classify amplitude modulation frequencies with high accuracy.
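The pooling scheme described in this abstract can be illustrated with a toy simulation. The sketch below is not the authors' decoding model: it assumes an illustrative population (20 phase-aligned Poisson neurons, 50 Hz base rate, full modulation depth) and decodes the AM frequency from the indiscriminately pooled spike counts via their dominant spectral peak.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, T = 1e-3, 1.0                      # 1 ms bins, 1 s trial
N_NEURONS, BASE_RATE, DEPTH = 20, 50.0, 1.0

def pooled_trial(f_mod):
    """Pooled spike counts of a phase-aligned population driven at f_mod Hz."""
    t = np.arange(0.0, T, DT)
    rate = BASE_RATE * (1.0 + DEPTH * np.sin(2 * np.pi * f_mod * t))
    return rng.poisson(rate * DT * N_NEURONS)   # indiscriminate pooling

def decode_am(counts):
    """Decode the modulation frequency as the dominant spectral peak."""
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(counts.size, DT)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin

for f in (4.0, 16.0, 64.0):
    assert decode_am(pooled_trial(f)) == f
```

Because all simulated neurons share a phase preference, their pooled response keeps a clean envelope-locked rhythm; a population with scrambled phase preferences would wash this signal out, which is the intuition behind the abstract's finding.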
Collapse
Affiliation(s)
- Joshua D Downer
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- James Bigelow
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Melissa J Runfeldt
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Brian J Malone
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California; Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, California
Collapse
|
40
|
Inagaki T, Inaba K, Leleu T, Honjo T, Ikuta T, Enbutsu K, Umeki T, Kasahara R, Aihara K, Takesue H. Collective and synchronous dynamics of photonic spiking neurons. Nat Commun 2021; 12:2325. [PMID: 33893296 PMCID: PMC8065174 DOI: 10.1038/s41467-021-22576-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 03/16/2021] [Indexed: 02/02/2023] Open
Abstract
Nonlinear dynamics of spiking neural networks have recently attracted much interest as an approach to understand possible information processing in the brain and apply it to artificial intelligence. Since information can be processed by collective spiking dynamics of neurons, the fine control of spiking dynamics is desirable for neuromorphic devices. Here we show that photonic spiking neurons implemented with paired nonlinear optical oscillators can be controlled to generate two modes of bio-realistic spiking dynamics by changing optical-pump amplitude. When the photonic neurons are coupled in a network, the interaction between them induces an effective change in the pump amplitude depending on the order parameter that characterizes synchronization. The experimental results show that the effective change causes spontaneous modification of the spiking modes and firing rates of clustered neurons, and such collective dynamics can be utilized to realize efficient heuristics for solving NP-hard combinatorial optimization problems.
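The "order parameter that characterizes synchronization" in this abstract is, in the standard coupled-oscillator formalism, the Kuramoto order parameter. A minimal sketch of computing it from a set of oscillator phases (not the authors' photonic implementation):

```python
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]: 1 = full synchrony, ~0 = incoherent."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

synchronized = np.full(100, 0.3)                 # all oscillators at one phase
incoherent = 2 * np.pi * np.arange(100) / 100    # phases spread uniformly

assert order_parameter(synchronized) > 0.99
assert order_parameter(incoherent) < 0.01
```

In the paper's network, a quantity of this kind feeds back into the effective pump amplitude, which is what lets synchronization switch the neurons between spiking modes.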
Collapse
Affiliation(s)
- Takahiro Inagaki
- NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan.
- Kensuke Inaba
- NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Timothée Leleu
- Institute of Industrial Science, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo, 153-8505, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institute for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Toshimori Honjo
- NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Takuya Ikuta
- NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Koji Enbutsu
- NTT Device Technology Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Takeshi Umeki
- NTT Device Technology Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Ryoichi Kasahara
- NTT Device Technology Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
- Kazuyuki Aihara
- Institute of Industrial Science, The University of Tokyo, 4-6-1, Komaba, Meguro-ku, Tokyo, 153-8505, Japan
- International Research Center for Neurointelligence, The University of Tokyo Institute for Advanced Study, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan
- Hiroki Takesue
- NTT Basic Research Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, Kanagawa, 243-0198, Japan
Collapse
|
41
|
Iyer LR, Chua Y, Li H. Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain. Front Neurosci 2021; 15:608567. [PMID: 33841072 PMCID: PMC8027306 DOI: 10.3389/fnins.2021.608567] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 03/01/2021] [Indexed: 11/26/2022] Open
Abstract
A major characteristic of spiking neural networks (SNNs) over conventional artificial neural networks (ANNs) is their ability to spike, enabling them to use spike timing for coding and efficient computing. In this paper, we assess whether neuromorphic datasets recorded from static images can evaluate the ability of SNNs to use spike timing in their computations. We analyzed N-MNIST, N-Caltech101, and DvsGesture along these lines, but focus our study on N-MNIST. First, we evaluate whether additional information is encoded in the time domain in a neuromorphic dataset. We show that an ANN trained with backpropagation on frame-based versions of N-MNIST and N-Caltech101 images achieves 99.23% and 78.01% accuracy, respectively. These results are comparable to the state of the art, showing that an algorithm that works purely on spatial data can classify these datasets. Second, we compare N-MNIST and DvsGesture on two STDP algorithms: RD-STDP, which can classify only spatial data, and STDP-tempotron, which classifies spatiotemporal data. We demonstrate that RD-STDP performs very well on N-MNIST, while STDP-tempotron performs better on DvsGesture. Since DvsGesture has a temporal dimension, it requires STDP-tempotron, while N-MNIST can be adequately classified by an algorithm that works on spatial data alone. This shows that precise spike timings are not important in N-MNIST, which therefore does not highlight the ability of SNNs to classify temporal data. The conclusions of this paper raise an open question: what dataset can evaluate the ability of SNNs to classify temporal data?
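The frame-based ANN comparison in this abstract amounts to collapsing the event stream into a count image, discarding timestamps entirely. A minimal sketch (the event layout and the 34×34 sensor resolution are assumptions about the dataset format, not taken from the abstract):

```python
import numpy as np

def events_to_frame(events, shape=(34, 34)):
    """Collapse an (x, y, t, polarity) event stream into a single count frame.

    Timestamps are ignored entirely: a classifier trained on this frame can
    only exploit spatial structure, never spike timing.
    """
    frame = np.zeros(shape, dtype=np.int64)
    np.add.at(frame, (events[:, 1], events[:, 0]), 1)  # index as (row=y, col=x)
    return frame

# Three events, two of them at the same pixel.
events = np.array([[5, 7, 120, 1], [5, 7, 480, 0], [9, 3, 950, 1]])
frame = events_to_frame(events)
assert frame.sum() == 3 and frame[7, 5] == 2
```

If such frames alone suffice for near state-of-the-art accuracy, as the paper reports, the dataset cannot be probing an SNN's use of spike timing.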
Collapse
Affiliation(s)
- Laxmi R. Iyer
- Neuromorphic Computing, Institute for Infocomm Research, A*STAR, Singapore, Singapore
- Yansong Chua
- Neuromorphic Computing, Institute for Infocomm Research, A*STAR, Singapore, Singapore
- Haizhou Li
- Neuromorphic Computing, Institute for Infocomm Research, A*STAR, Singapore, Singapore
- Huawei Technologies Co., Ltd., Shenzhen, China
Collapse
|
42
|
Abstract
Dragonflies visually detect prey and conspecifics, rapidly pursuing these targets via acrobatic flight. Over many decades, studies have investigated the elaborate neuronal circuits proposed to underlie this rapid behaviour. A subset of dragonfly visual neurons exhibit exquisite tuning to small moving targets, even when these are presented against cluttered backgrounds. In prior work, these neuronal responses were quantified by computing the rate of spikes fired during an analysis window of interest. However, neuronal systems can utilize a variety of coding principles to signal information, so a spike train's information content is not necessarily encapsulated by spike rate alone. One example is burst coding, where neurons fire rapid bursts of spikes followed by a period of inactivity. Here we show that the most-studied target-detecting neuron in dragonflies, CSTMD1, responds to moving targets with a series of spike bursts. This spiking activity differs from that of other identified visual neurons in the dragonfly, indicative of different physiological mechanisms underlying CSTMD1's spike generation. Burst codes present several advantages and disadvantages compared to other coding approaches. We propose functional implications of CSTMD1's burst coding and show that spike bursts enhance the robustness of target-evoked responses.
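A common operationalization of the burst coding described here is to segment a spike train by inter-spike interval (ISI): runs of spikes with short ISIs count as one burst. The sketch below uses an illustrative 20 ms ISI criterion, which is an assumption, not the study's analysis parameter.

```python
def detect_bursts(spike_times, max_isi=0.02, min_spikes=2):
    """Group spikes into bursts: runs whose inter-spike intervals stay below
    max_isi (seconds). Isolated spikes are discarded via min_spikes."""
    groups, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            current.append(t)          # continue the current burst
        else:
            groups.append(current)     # gap too long: close the burst
            current = [t]
    groups.append(current)
    return [g for g in groups if len(g) >= min_spikes]

spikes = [0.000, 0.005, 0.010, 0.100, 0.200, 0.204, 0.208, 0.212]
bursts = detect_bursts(spikes)
assert [len(b) for b in bursts] == [3, 4]   # one triplet burst, one 4-spike burst
```

Counting bursts rather than spikes is one way a downstream reader could exploit the robustness the abstract attributes to burst-evoked responses.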
Collapse
|
43
|
Yu Q, Yao Y, Wang L, Tang H, Dang J, Tan KC. Robust Environmental Sound Recognition With Sparse Key-Point Encoding and Efficient Multispike Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:625-638. [PMID: 32203038 DOI: 10.1109/tnnls.2020.2978764] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The capability for environmental sound recognition (ESR) can determine the fitness of individuals, enabling them to avoid dangers or pursue opportunities when critical sound events occur. The fundamental principles by which biological systems achieve this remarkable ability remain poorly understood. The practical importance of ESR has also attracted increasing research attention, but the chaotic and nonstationary nature of environmental sounds continues to make it a challenging task. In this article, we propose a spike-based framework for the ESR task from a more brain-like perspective. Our framework is a unifying system that consistently integrates three major functional parts: sparse encoding, efficient learning, and robust readout. We first introduce a simple sparse encoding in which key points are used for feature representation, and demonstrate its generalization to both spike- and nonspike-based systems. We then evaluate the learning properties of different learning rules in detail, adding our own contributions for improvement. Our results highlight the advantages of multispike learning, providing a selection reference for various spike-based developments. Finally, we combine the multispike readout with the other parts to form a complete ESR system. Experimental results show that our framework performs best among the compared baseline approaches. In addition, our spike-based framework has several advantageous characteristics, including early decision making, learning from small datasets, and ongoing dynamic processing. Our framework is the first attempt to apply the multispike characteristics of biological neurons to ESR, and its performance may draw further research effort toward pushing the boundaries of the spike-based paradigm.
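A common reading of "key points" for sparse acoustic encoding is local maxima of a time-frequency map, each of which can then trigger a spike. The sketch below is a guess at that flavor of encoding, not the paper's actual algorithm; the 8-neighborhood criterion and threshold are illustrative.

```python
import numpy as np

def key_points(spec, threshold=0.0):
    """Sparse key-point encoding: keep only cells that are strict local maxima
    of a time-frequency map (8-neighborhood) and exceed a noise threshold."""
    padded = np.pad(spec, 1, constant_values=-np.inf)
    points = []
    for i in range(spec.shape[0]):
        for j in range(spec.shape[1]):
            patch = padded[i:i + 3, j:j + 3]
            if spec[i, j] > threshold and spec[i, j] == patch.max() \
                    and (patch == spec[i, j]).sum() == 1:
                points.append((i, j))
    return points

spec = np.zeros((5, 5))
spec[2, 3] = 4.0          # one clear spectral peak
spec[0, 0] = 1.0          # a second, smaller peak
assert key_points(spec, threshold=0.5) == [(0, 0), (2, 3)]
```

Each surviving (frequency, time) point can be emitted as a spike at its time coordinate, giving a sparse spatiotemporal representation for the downstream multispike learner.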
Collapse
|
44
|
Spike Train Coactivity Encodes Learned Natural Stimulus Invariances in Songbird Auditory Cortex. J Neurosci 2020; 41:73-88. [PMID: 33177068 DOI: 10.1523/jneurosci.0248-20.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Revised: 10/30/2020] [Accepted: 10/31/2020] [Indexed: 11/21/2022] Open
Abstract
The capacity for sensory systems to encode relevant information that is invariant to many stimulus changes is central to normal, real-world, cognitive function. This invariance is thought to be reflected in the complex spatiotemporal activity patterns of neural populations, but our understanding of population-level representational invariance remains coarse. Applied topology is a promising tool to discover invariant structure in large datasets. Here, we use topological techniques to characterize and compare the spatiotemporal pattern of coactive spiking within populations of simultaneously recorded neurons in the secondary auditory region caudal medial neostriatum of European starlings (Sturnus vulgaris). We show that the pattern of population spike train coactivity carries stimulus-specific structure that is not reducible to that of individual neurons. We then introduce a topology-based similarity measure for population coactivity that is sensitive to invariant stimulus structure and show that this measure captures invariant neural representations tied to the learned relationships between natural vocalizations. This demonstrates one mechanism whereby emergent stimulus properties can be encoded in population activity, and shows the potential of applied topology for understanding invariant representations in neural populations.SIGNIFICANCE STATEMENT Information in neural populations is carried by the temporal patterns of spikes. We applied novel mathematical tools from the field of algebraic topology to quantify the structure of these temporal patterns. We found that, in a secondary auditory region of a songbird, these patterns reflected invariant information about a learned stimulus relationship. These results demonstrate that topology provides a novel approach for characterizing neural responses that is sensitive to invariant relationships that are critical for the perception of natural stimuli.
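The topological machinery in this paper starts from patterns of coactive spiking. A minimal precursor to that analysis, and only the first step of it, is a binned pairwise coactivity matrix; the persistent-homology layer built on top is well beyond this sketch, and the bin width here is an illustrative assumption.

```python
import numpy as np

def coactivity(spike_trains, t_max, bin_width):
    """Pairwise coactivity: fraction of time bins in which both neurons spike."""
    n_bins = int(np.ceil(t_max / bin_width))
    edges = np.arange(n_bins + 1) * bin_width
    active = np.array([np.histogram(st, bins=edges)[0] > 0
                       for st in spike_trains]).astype(float)
    return active @ active.T / n_bins

trains = [[0.01, 0.11, 0.21], [0.02, 0.12], [0.35]]
C = coactivity(trains, t_max=0.4, bin_width=0.1)
assert np.allclose(C, C.T)            # symmetric by construction
assert np.isclose(C[0, 1], 0.5)       # neurons 0 and 1 co-active in 2 of 4 bins
```

Matrices like this define the weighted graphs (and higher-order simplices) on which the paper's topological similarity measure operates.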
Collapse
|
45
|
IoT-Oriented Design of an Associative Memory Based on Impulsive Hopfield Neural Network with Rate Coding of LIF Oscillators. ELECTRONICS 2020. [DOI: 10.3390/electronics9091468] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The smart devices in the Internet of Things (IoT) need more effective data storage, as well as support for Artificial Intelligence (AI) methods such as neural networks (NNs). This study presents the design of a new associative memory in the form of an impulsive Hopfield network based on leaky integrate-and-fire (LIF) RC oscillators with frequency control and hybrid analog–digital coding. Two variants of the network schemes have been developed, in which the spiking frequencies of the oscillators are controlled either by supply currents or by variable resistances. The principle of operation of impulsive networks based on these schemes is presented, and the recognition dynamics is analyzed using simple two-dimensional grayscale images as an example. A fast digital recognition method is proposed that uses the zero-crossing thresholds of the neurons' output voltages. The time scale of this method is compared with the execution time of some network algorithms on IoT devices for moderate data amounts. The proposed Hopfield algorithm uses rate coding to expand the capabilities of neuromorphic engineering, including the design of new IoT hardware circuits.
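The discrete Hopfield dynamics that the oscillator network implements in hardware can be sketched in a few lines. This is the textbook Hebbian store/recall scheme, not the paper's LIF rate-coded circuit; pattern sizes and the corruption level are illustrative.

```python
import numpy as np

def store(patterns):
    """Hebbian weight matrix for ±1 patterns, with no self-connections."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, n_steps=10):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    x = np.array(x, dtype=float)
    for _ in range(n_steps):
        nxt = np.where(W @ x >= 0, 1.0, -1.0)
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

p1 = np.ones(16)
p2 = np.tile([1.0, -1.0], 8)          # orthogonal to p1
W = store([p1, p2])
corrupted = p1.copy()
corrupted[:3] *= -1                   # flip three bits
assert np.array_equal(recall(W, corrupted), p1)
```

In the paper's scheme, each ±1 state is instead carried by an oscillator's spiking frequency (rate coding), and the sign update is realized by analog coupling between oscillators.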
Collapse
|
46
|
Tan C, Šarlija M, Kasabov N. Spiking Neural Networks: Background, Recent Development and the NeuCube Architecture. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10322-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
47
|
Kheradpisheh SR, Masquelier T. Temporal Backpropagation for Spiking Neural Networks with One Spike per Neuron. Int J Neural Syst 2020; 30:2050027. [DOI: 10.1142/s0129065720500276] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
We propose a new supervised learning rule for multilayer spiking neural networks (SNNs) that use a form of temporal coding known as rank-order coding. With this coding scheme, all neurons fire exactly one spike per stimulus, but the firing order carries information. In particular, in the readout layer, the first neuron to fire determines the class of the stimulus. We derive a new learning rule for this sort of network, named S4NN, akin to traditional error backpropagation yet based on latencies. We show how approximate error gradients can be computed backward in a feedforward network with any number of layers. This approach reaches state-of-the-art performance for supervised, fully connected multilayer SNNs: test accuracy of 97.4% on the MNIST dataset and 99.2% on the Caltech Face/Motorbike dataset. Yet the neuron model we use, the non-leaky integrate-and-fire neuron, is much simpler than those used in all previous works. The source code of the proposed S4NN is publicly available at https://github.com/SRKH/S4NN .
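The first-to-fire readout described here can be sketched event-by-event with non-leaky integrate-and-fire units: input spikes are delivered in latency order, and the class is whichever output neuron crosses threshold first. This is a minimal illustration of the coding scheme, not the S4NN training rule; the weights, threshold, and tie-breaking convention are assumptions.

```python
import numpy as np

def first_spike_class(input_times, W, threshold):
    """Event-driven readout: deliver input spikes in latency order and return
    the index of the first output neuron whose potential crosses threshold."""
    V = np.zeros(W.shape[0])
    for i in np.argsort(input_times):      # earlier spike = processed first
        V += W[:, i]                       # non-leaky integrate-and-fire
        fired = np.flatnonzero(V >= threshold)
        if fired.size:
            return int(fired[np.argmax(V[fired])])  # break ties by potential
    return int(np.argmax(V))               # nobody fired: most depolarized wins

W = np.array([[1.0, 1.0, 0.0],             # output 0 listens to inputs 0 and 1
              [0.0, 1.0, 1.0]])            # output 1 listens to inputs 1 and 2
assert first_spike_class([0.1, 0.2, 0.9], W, threshold=2.0) == 0
assert first_spike_class([0.9, 0.2, 0.1], W, threshold=2.0) == 1
```

S4NN's contribution is learning weights like `W` by backpropagating errors defined on these first-spike latencies rather than on rates.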
Collapse
Affiliation(s)
- Saeed Reza Kheradpisheh
- Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran
Collapse
|
48
|
Montangie L, Miehl C, Gjorgjieva J. Autonomous emergence of connectivity assemblies via spike triplet interactions. PLoS Comput Biol 2020; 16:e1007835. [PMID: 32384081 PMCID: PMC7239496 DOI: 10.1371/journal.pcbi.1007835] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 05/20/2020] [Accepted: 03/31/2020] [Indexed: 01/08/2023] Open
Abstract
Non-random connectivity can emerge without structured external input driven by activity-dependent mechanisms of synaptic plasticity based on precise spiking patterns. Here we analyze the emergence of global structures in recurrent networks based on a triplet model of spike timing dependent plasticity (STDP), which depends on the interactions of three precisely-timed spikes, and can describe plasticity experiments with varying spike frequency better than the classical pair-based STDP rule. We derive synaptic changes arising from correlations up to third-order and describe them as the sum of structural motifs, which determine how any spike in the network influences a given synaptic connection through possible connectivity paths. This motif expansion framework reveals novel structural motifs under the triplet STDP rule, which support the formation of bidirectional connections and ultimately the spontaneous emergence of global network structure in the form of self-connected groups of neurons, or assemblies. We propose that under triplet STDP assembly structure can emerge without the need for externally patterned inputs or assuming a symmetric pair-based STDP rule common in previous studies. The emergence of non-random network structure under triplet STDP occurs through internally-generated higher-order correlations, which are ubiquitous in natural stimuli and neuronal spiking activity, and important for coding. We further demonstrate how neuromodulatory mechanisms that modulate the shape of the triplet STDP rule or the synaptic transmission function differentially promote structural motifs underlying the emergence of assemblies, and quantify the differences using graph theoretic measures. Emergent non-random connectivity structures in different brain regions are tightly related to specific patterns of neural activity and support diverse brain functions. 
For instance, self-connected groups of neurons, known as assemblies, have been proposed to represent functional units in brain circuits and can emerge even without patterned external instruction. Here we investigate the emergence of non-random connectivity in recurrent networks using a particular plasticity rule, triplet STDP, which relies on the interaction of spike triplets and can capture higher-order statistical dependencies in neural activity. We derive the evolution of the synaptic strengths in the network and explore the conditions for the self-organization of connectivity into assemblies. We demonstrate key differences of the triplet STDP rule compared to the classical pair-based rule in terms of how assemblies are formed, including the realistic asymmetric shape and influence of novel connectivity motifs on network plasticity driven by higher-order correlations. Assembly formation depends on the specific shape of the STDP window and synaptic transmission function, pointing towards an important role of neuromodulatory signals on formation of intrinsically generated assemblies.
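The triplet STDP rule underlying this analysis (Pfister–Gerstner style) can be stated compactly with two presynaptic traces and two postsynaptic traces; the slow traces make the weight update depend on spike triplets rather than pairs. The sketch below uses the standard trace time constants but illustrative amplitudes, and is a generic rendering of the rule rather than this paper's derivation.

```python
import numpy as np

def triplet_stdp(pre, post, A2p=5e-3, A3p=6e-3, A2m=7e-3, A3m=2e-3,
                 tau_p=16.8e-3, tau_m=33.7e-3, tau_x=101e-3, tau_y=125e-3):
    """Weight change for one synapse under a triplet STDP rule.

    r1/r2 are presynaptic traces, o1/o2 postsynaptic traces. Triplet terms
    read the slow traces (r2, o2) at their value just before the current
    spike's increment. Amplitudes are illustrative, not fitted values.
    """
    events = sorted([(t, 'pre') for t in pre] + [(t, 'post') for t in post])
    r1 = r2 = o1 = o2 = 0.0
    t_last, w = 0.0, 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= np.exp(-dt / tau_p); r2 *= np.exp(-dt / tau_x)
        o1 *= np.exp(-dt / tau_m); o2 *= np.exp(-dt / tau_y)
        if kind == 'pre':
            w -= o1 * (A2m + A3m * r2)   # depression, gated by the post trace
            r1 += 1.0; r2 += 1.0
        else:
            w += r1 * (A2p + A3p * o2)   # potentiation, gated by the pre trace
            o1 += 1.0; o2 += 1.0
        t_last = t
    return w

assert triplet_stdp(pre=[0.0], post=[0.01]) > 0    # pre -> post pairing potentiates
assert triplet_stdp(pre=[0.01], post=[0.0]) < 0    # post -> pre pairing depresses
```

The A3 terms are what couple plasticity to third-order spike correlations, which is the ingredient the paper shows can drive assembly formation without patterned input.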
Collapse
Affiliation(s)
- Lisandro Montangie
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Christoph Miehl
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
- Julijana Gjorgjieva
- Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
- Technical University of Munich, School of Life Sciences, Freising, Germany
Collapse
|
49
|
Hong C, Wei X, Wang J, Deng B, Yu H, Che Y. Training Spiking Neural Networks for Cognitive Tasks: A Versatile Framework Compatible With Various Temporal Codes. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:1285-1296. [PMID: 31247574 DOI: 10.1109/tnnls.2019.2919662] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Recent studies have demonstrated the effectiveness of supervised learning in spiking neural networks (SNNs). A trainable SNN provides a valuable tool not only for engineering applications but also for theoretical neuroscience studies. Here, we propose a modified SpikeProp learning algorithm, which ensures better learning stability for SNNs and provides more diverse network structures and coding schemes. Specifically, we designed a spike gradient threshold rule to solve the well-known gradient exploding problem in SNN training. In addition, regulation rules on firing rates and connection weights are proposed to control the network activity during training. Based on these rules, biologically realistic features such as lateral connections, complex synaptic dynamics, and sparse activities are included in the network to facilitate neural computation. We demonstrate the versatility of this framework by implementing three well-known temporal codes for different types of cognitive tasks, namely, handwritten digit recognition, spatial coordinate transformation, and motor sequence generation. Several important features observed in experimental studies, such as selective activity, excitatory-inhibitory balance, and weak pairwise correlation, emerged in the trained model. This agreement between experimental and computational results further confirmed the importance of these features in neural function. This work provides a new framework, in which various neural behaviors can be modeled and the underlying computational mechanisms can be studied.
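The "spike gradient threshold rule" for taming SpikeProp's exploding gradients is not spelled out in this abstract; in spirit it resembles norm-based gradient capping, sketched below as a generic stand-in under that assumption, with an illustrative cap value.

```python
import numpy as np

def threshold_gradient(grad, max_norm=1.0):
    """Rescale a gradient whose norm exceeds max_norm; leave small ones intact.

    In SpikeProp-style training, near-threshold membrane trajectories can
    yield enormous d(spike time)/d(weight) terms; a cap like this keeps a
    single pathological synapse from destabilizing the whole update.
    """
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

exploding = np.array([30.0, 40.0])             # norm 50, far above the cap
assert np.isclose(np.linalg.norm(threshold_gradient(exploding)), 1.0)
tame = np.array([0.3, 0.4])                    # norm 0.5, passes through
assert np.array_equal(threshold_gradient(tame), tame)
```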
Collapse
|
50
|
Walsh DA, Brown JT, Randall AD. Neurophysiological alterations in the nucleus reuniens of a mouse model of Alzheimer's disease. Neurobiol Aging 2019; 88:1-10. [PMID: 32065917 DOI: 10.1016/j.neurobiolaging.2019.12.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Revised: 11/30/2019] [Accepted: 12/06/2019] [Indexed: 02/06/2023]
Abstract
Recently, increased neuronal activity in nucleus reuniens (Re) has been linked to hyperexcitability within hippocampal-thalamo-cortical networks in the J20 mouse model of amyloidopathy. Here, in vitro whole-cell patch-clamp recordings were used to compare aged, pathology-bearing J20 mice and wild-type controls, to examine whether altered intrinsic electrophysiological properties could contribute to the amyloidopathy-associated Re hyperactivity. A greater proportion of Re neurons displayed hyperpolarized membrane potentials in J20 mice, without changes to the incidence or frequency of spontaneous action potentials. Re neurons recorded from J20 mice did not exhibit increased action potential generation in response to depolarizing current stimuli, but showed an increased propensity to rebound burst following hyperpolarizing current stimuli. Increased rebound firing did not appear to result from alterations to T-type Ca2+ channels. Finally, in J20 mice, there was an ~8% reduction in spike width, similar to what has been reported in CA1 pyramidal neurons from multiple amyloidopathy mouse models. We conclude that alterations to the intrinsic properties of Re neurons may contribute to the hippocampal-thalamo-cortical hyperexcitability observed under pathological beta-amyloid load.
Collapse
Affiliation(s)
- Darren A Walsh
- Institute of Biomedical and Clinical Sciences, University of Exeter Medical School, Hatherly Laboratory, Exeter, UK
- Jon T Brown
- Institute of Biomedical and Clinical Sciences, University of Exeter Medical School, Hatherly Laboratory, Exeter, UK
- Andrew D Randall
- Institute of Biomedical and Clinical Sciences, University of Exeter Medical School, Hatherly Laboratory, Exeter, UK
Collapse
|