1
Cotteret M, Greatorex H, Ziegler M, Chicca E. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput 2024; 36:549-595. [PMID: 38457766] [DOI: 10.1162/neco_a_01638]
Abstract
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors. We show that the model is robust to imprecise and noisy weights, and so a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs could exist as a distributed computational primitive in biological neural networks.
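To give a concrete feel for the attractor dynamics this construction builds on, the sketch below is a minimal classical Hopfield recall loop over dense bipolar random vectors. It illustrates only pattern cleanup, not the authors' FSM transition rules; all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 20                        # neurons, stored patterns

# Dense bipolar (+1/-1) random state vectors, as in the dense setting above
patterns = rng.choice([-1, 1], size=(K, N))

# Classic Hebbian outer-product storage (zero self-coupling)
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, max_steps=50):
    """Synchronous sign-threshold updates until the state stops changing."""
    s = cue.copy()
    for _ in range(max_steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Corrupt 20% of a stored pattern and let the attractor dynamics clean it up
cue = patterns[3].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1
print("overlap with stored pattern after recall:", recall(cue) @ patterns[3] / N)
```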
Affiliation(s)
- Madison Cotteret
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Hugh Greatorex
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Martin Ziegler
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Elisabetta Chicca
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
2
Abstract
The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies - from perception to motor control - represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations.
Affiliation(s)
- Chiara Bartolozzi
- Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, via San Quirico 19D, 16163, Genova, Italy.
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
- Elisa Donati
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Winterthurerstr. 190, 8057, Zurich, Switzerland
3
Quiroga MDM, Morris AP, Krekelberg B. Short-Term Attractive Tilt Aftereffects Predicted by a Recurrent Network Model of Primary Visual Cortex. Front Syst Neurosci 2019; 13:67. [PMID: 31780906] [PMCID: PMC6857575] [DOI: 10.3389/fnsys.2019.00067]
Abstract
Adaptation is a multi-faceted phenomenon that is of interest in terms of both its function and its potential to reveal underlying neural processing. Many behavioral studies have shown that after exposure to an oriented adapter the perceived orientation of a subsequent test is repulsed away from the orientation of the adapter. This is the well-known Tilt Aftereffect (TAE). Recently, we showed that the dynamics of recurrently connected networks may contribute substantially to the neural changes induced by adaptation, especially on short time scales. Here we extended the network model and made the novel behavioral prediction that the TAE should be attractive, not repulsive, on a time scale of a few hundred milliseconds. Our experiments, using a novel adaptation protocol that specifically targeted adaptation on a short time scale, confirmed this prediction. These results support our hypothesis that recurrent network dynamics may contribute to short-term adaptation. More broadly, they show that understanding the neural processing of visual inputs that change on the time scale of a typical fixation requires a detailed analysis of not only the intrinsic properties of neurons, but also the slow and complex dynamics that emerge from their recurrent connectivity. We argue that this is but one example of how even simple recurrent networks can underlie surprisingly complex information processing and support rudimentary forms of memory, spatio-temporal integration, and signal amplification.
Affiliation(s)
- Maria Del Mar Quiroga
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, United States
- Behavioral and Neural Sciences Graduate Program, Rutgers University, Newark, NJ, United States
- Adam P Morris
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, United States
- Neuroscience Program, Department of Physiology, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, NJ, United States
4
Neural dynamics of spreading attentional labels in mental contour tracing. Neural Netw 2019; 119:113-138. [PMID: 31404805] [DOI: 10.1016/j.neunet.2019.07.016]
Abstract
Behavioral and neural data suggest that visual attention spreads along contour segments to bind them into a unified object representation. Such attentional labeling segregates the target contour from distractors in a process known as mental contour tracing. A recurrent competitive map is developed to simulate the dynamics of mental contour tracing. In the model, local excitation opposes global inhibition and enables enhanced activity to propagate on the path offered by the contour. The extent of local excitatory interactions is modulated by the output of the multi-scale contour detection network, which constrains the speed of activity spreading in a scale-dependent manner. Furthermore, an L-junction detection network enables tracing to switch direction at the L-junctions, but not at the X- or T-junctions, thereby preventing spillover to a distractor contour. Computer simulations reveal that the model exhibits a monotonic increase in tracing time as a function of the distance to be traced. Also, the speed of tracing increases with decreasing proximity to the distractor contour and with the reduced curvature of the contours. The proposed model demonstrates how an elaborated version of the winner-takes-all network can implement a complex cognitive operation such as contour tracing.
5
Kreiser R, Aathmani D, Qiao N, Indiveri G, Sandamirskaya Y. Organizing Sequential Memory in a Neuromorphic Device Using Dynamic Neural Fields. Front Neurosci 2018; 12:717. [PMID: 30524218] [PMCID: PMC6262404] [DOI: 10.3389/fnins.2018.00717]
Abstract
Neuromorphic Very Large Scale Integration (VLSI) devices emulate the activation dynamics of biological neuronal networks using either mixed-signal analog/digital or purely digital electronic circuits. Using analog circuits in silicon to physically emulate the functionality of biological neurons and synapses enables faithful modeling of neural and synaptic dynamics at ultra low power consumption in real-time, and thus may serve as a computational substrate for a new generation of efficient neural controllers for artificial intelligent systems. Although one of the main advantages of neural networks is their ability to perform on-line learning, only a small number of neuromorphic hardware devices implement this feature on-chip. In this work, we use a reconfigurable on-line learning spiking (ROLLS) neuromorphic processor chip to build a neuronal architecture for sequence learning. The proposed neuronal architecture uses the attractor properties of winner-takes-all (WTA) dynamics to cope with mismatch and noise in the ROLLS analog computing elements, and it uses its on-chip plasticity features to store sequences of states. We demonstrate, with a proof-of-concept feasibility study, how this architecture can store, replay, and update sequences of states induced by external inputs. Controlled by the attractor dynamics and an explicit destabilizing signal, the items in a sequence can last for varying amounts of time and thus reliable sequence learning and replay can be robustly implemented in a real sensorimotor system.
Affiliation(s)
- Raphaela Kreiser
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Aathmani
- The School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Yulia Sandamirskaya
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
6
Rutishauser U, Slotine JJ, Douglas RJ. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks. Neural Comput 2018; 30:1359-1393. [PMID: 29566357] [PMCID: PMC5930080] [DOI: 10.1162/neco_a_01074]
Abstract
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the planar four-color graph coloring, maximum independent set, and sudoku CSPs on this substrate and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
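For orientation, below is a rate-based sketch of the kind of cooperative-competitive (winner-take-all) module described above, built from nonsaturating linear-threshold units sharing a single inhibitory unit. The gains, time step, and iteration count are invented for illustration; they are not the parameters or constraint-embedding rules of the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def soft_wta(inputs, alpha=1.2, beta1=3.0, beta2=0.25, dt=0.01, steps=5000):
    """Excitatory linear-threshold units with self-excitation `alpha`, all
    driving a shared inhibitory unit (gain `beta1`) that feeds back with
    gain `beta2`. Parameters are illustrative, not taken from the paper."""
    x = np.zeros_like(inputs, dtype=float)    # excitatory firing rates
    y = 0.0                                   # shared inhibitory rate
    for _ in range(steps):
        x += dt * (-x + relu(inputs + alpha * x - beta2 * y))
        y += dt * (-y + relu(beta1 * x.sum()))
    return x

# The unit with the strongest input wins and is amplified; the rest go to zero
print(np.round(soft_wta(np.array([1.0, 1.1, 0.9])), 3))
```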
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, U.S.A., and Cedars-Sinai Medical Center, Departments of Neurosurgery, Neurology and Biomedical Sciences, Los Angeles, CA 90048, U.S.A.
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A.
- Rodney J Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich 8057, Switzerland
7
Marić M, Domijan D. A Neurodynamic Model of Feature-Based Spatial Selection. Front Psychol 2018; 9:417. [PMID: 29643826] [PMCID: PMC5883145] [DOI: 10.3389/fpsyg.2018.00417]
Abstract
Huang and Pashler (2007) suggested that feature-based attention creates a special form of spatial representation, which is termed a Boolean map. It partitions the visual scene into two distinct and complementary regions: selected and not selected. Here, we developed a model of a recurrent competitive network that is capable of state-dependent computation. It selects multiple winning locations based on a joint top-down cue. We augmented a model of the WTA circuit that is based on linear-threshold units with two computational elements: dendritic non-linearity that acts on the excitatory units and activity-dependent modulation of synaptic transmission between excitatory and inhibitory units. Computer simulations showed that the proposed model could create a Boolean map in response to a featured cue and elaborate it using the logical operations of intersection and union. In addition, it was shown that in the absence of top-down guidance, the model is sensitive to bottom-up cues such as saliency and abrupt visual onset.
Affiliation(s)
- Mateja Marić
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
- Dražen Domijan
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
8
Sadeh S, Silver RA, Mrsic-Flogel TD, Muir DR. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population. J Neurosci 2017; 37:12050-12067. [PMID: 29074575] [PMCID: PMC5719979] [DOI: 10.1523/jneurosci.0963-17.2017]
Abstract
Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms.

SIGNIFICANCE STATEMENT: Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an "inhibition-stabilized network" or "ISN" regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question.
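The paradoxical response at the heart of this argument can be seen in a two-population rate model. The sketch below uses invented weights placed in an inhibition-stabilized regime and is far simpler than the large network models discussed in the paper.

```python
import numpy as np

# Illustrative inhibition-stabilized regime: recurrent excitation alone is
# unstable (W_EE > 1) and is held in check by inhibitory feedback
W_EE, W_EI, W_IE, W_II = 2.0, 1.5, 2.5, 1.0

def steady_state(g_E, g_I, dt=0.01, steps=20000):
    """Euler-integrate a two-population threshold-linear rate model."""
    r_E, r_I = 1.0, 1.0
    for _ in range(steps):
        r_E += dt * (-r_E + max(0.0, W_EE * r_E - W_EI * r_I + g_E))
        r_I += dt * (-r_I + max(0.0, W_IE * r_E - W_II * r_I + g_I))
    return round(r_E, 3), round(r_I, 3)

print("baseline          (r_E, r_I):", steady_state(g_E=2.0, g_I=1.0))
# Adding drive to the inhibitory population *lowers* its steady-state rate:
# the paradoxical signature of an inhibition-stabilized network
print("inhibition driven (r_E, r_I):", steady_state(g_E=2.0, g_I=1.5))
```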
Affiliation(s)
- Sadra Sadeh
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom, and
| | - R Angus Silver
- Department of Neuroscience, Physiology, and Pharmacology, University College London, WC1E 6BT London, United Kingdom, and
9
Abstract
Recurrent neural network architectures can have useful computational properties, with complex temporal dynamics and input-sensitive attractor states. However, evaluation of recurrent dynamic architectures requires solving systems of differential equations, and the number of evaluations required to determine their response to a given input can vary with the input or can be indeterminate altogether in the case of oscillations or instability. In feedforward networks, by contrast, only a single pass through the network is needed to determine the response to a given input. Modern machine learning systems are designed to operate efficiently on feedforward architectures. We hypothesized that two-layer feedforward architectures with simple, deterministic dynamics could approximate the responses of single-layer recurrent network architectures. By identifying the fixed-point responses of a given recurrent network, we trained two-layer networks to directly approximate the fixed-point response to a given input. These feedforward networks then embodied useful computations, including competitive interactions, information transformations, and noise rejection. Our approach was able to find useful approximations to recurrent networks, which can then be evaluated in linear and deterministic time complexity.
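An assumed-details illustration of the approach described here: relax a small threshold-linear recurrent network to its fixed point for many random inputs, then train a two-layer feedforward network to reproduce those fixed points. The network sizes, nonlinearity, and plain gradient-descent training below are generic placeholders, not the architectures or procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, H = 10, 64          # recurrent units, hidden units of the feedforward net

# A weakly coupled random recurrent network (spectral radius kept small so
# its fixed point is unique and the relaxation below converges)
W = rng.normal(scale=0.2 / np.sqrt(N), size=(N, N))

def fixed_point(inp, steps=500, dt=0.1):
    r = np.zeros(N)
    for _ in range(steps):
        r += dt * (-r + np.maximum(0.0, W @ r + inp))
    return r

# Training set: (input, relaxed fixed-point response) pairs
X = rng.normal(size=(2000, N))
Y = np.array([fixed_point(x) for x in X])

# Fit a generic two-layer ReLU network by full-batch gradient descent
W1, b1 = rng.normal(scale=0.1, size=(N, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(H, N)), np.zeros(N)
lr = 1e-2
for epoch in range(500):
    Hid = np.maximum(0.0, X @ W1 + b1)        # forward pass
    err = Hid @ W2 + b2 - Y                   # prediction error
    gW2, gb2 = Hid.T @ err / len(X), err.mean(0)
    dH = (err @ W2.T) * (Hid > 0)             # backprop through the ReLU layer
    gW1, gb1 = X.T @ dH / len(X), dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
print("final mean squared error:", round(float((err ** 2).mean()), 4))
```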
Affiliation(s)
- Dylan R Muir
- Biozentrum, University of Basel, Basel 4056, Switzerland
10
Quiroga MDM, Morris AP, Krekelberg B. Adaptation without Plasticity. Cell Rep 2017; 17:58-68. [PMID: 27681421] [DOI: 10.1016/j.celrep.2016.08.089]
Abstract
Sensory adaptation is a phenomenon in which neurons are affected not only by their immediate input but also by the sequence of preceding inputs. In visual cortex, for example, neurons shift their preferred orientation after exposure to an oriented stimulus. This adaptation is traditionally attributed to plasticity. We show that a recurrent network generates tuning curve shifts observed in cat and macaque visual cortex, even when all synaptic weights and intrinsic properties in the model are fixed. This demonstrates that, in a recurrent network, adaptation on timescales of hundreds of milliseconds does not require plasticity. Given the ubiquity of recurrent connections, this phenomenon likely contributes to responses observed across cortex and shows that plasticity cannot be inferred solely from changes in tuning on these timescales. More broadly, our findings show that recurrent connections can endow a network with a powerful mechanism to store and integrate recent contextual information.
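As a toy illustration of this point (not the authors' cat/macaque V1 model), the ring network below has completely fixed weights, yet the decoded response to a brief test grating is biased by a preceding adapter because recurrent activity driven by the adapter has not yet died away. All parameters are invented.

```python
import numpy as np

N = 180
theta = np.arange(N) * np.pi / 90          # preferred orientations on a ring (0..180 deg)

# Fixed recurrent weights: orientation-tuned excitation plus uniform inhibition.
# Nothing in this network is plastic; all weights stay constant throughout.
d = theta[:, None] - theta[None, :]
W = (4.0 * np.exp(np.cos(d) - 1.0) - 1.5) / N

def stim(deg):
    return np.exp(np.cos(theta - deg * np.pi / 90) - 1.0)

def run(r, inp, T, dt=0.01, tau=0.05):
    for _ in range(int(T / dt)):
        r = r + dt / tau * (-r + np.maximum(0.0, W @ r + inp))
    return r

def decoded_deg(r):
    return (np.angle(np.sum(r * np.exp(1j * theta))) % (2 * np.pi)) * 90 / np.pi

# Residual recurrent activity from the adapter biases the brief test response,
# even though no synapse has changed
r = run(np.zeros(N), stim(70), T=0.3)       # adapter at 70 degrees
r = run(r, stim(90), T=0.15)                # brief test at 90 degrees
print("decoded test orientation, with adapter   :", round(decoded_deg(r), 1))

r0 = run(np.zeros(N), stim(90), T=0.15)     # same test, no adapter
print("decoded test orientation, without adapter:", round(decoded_deg(r0), 1))
```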
Affiliation(s)
- Maria Del Mar Quiroga
- Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ 07102, USA; Behavioral and Neural Sciences Graduate Program, Rutgers University-Newark, Newark, NJ 07102, USA
- Adam P Morris
- Department of Physiology, Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, VIC 3800, Australia
- Bart Krekelberg
- Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, Newark, NJ 07102, USA
11
Chen Y. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks. Front Comput Neurosci 2017; 11:20. [PMID: 28484384] [PMCID: PMC5399521] [DOI: 10.3389/fncom.2017.00020]
Abstract
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fire for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.
12
Kamiński J, Sullivan S, Chung JM, Ross IB, Mamelak AN, Rutishauser U. Persistently active neurons in human medial frontal and medial temporal lobe support working memory. Nat Neurosci 2017; 20:590-601. [PMID: 28218914] [PMCID: PMC5374017] [DOI: 10.1038/nn.4509]
Abstract
Persistent neural activity is a putative mechanism for the maintenance of working memories. Persistent activity relies on the activity of a distributed network of areas, but the differential contribution of each area remains unclear. We recorded single neurons in the human medial frontal cortex and medial temporal lobe while subjects held up to three items in memory. We found persistently active neurons in both areas. Persistent activity of hippocampal and amygdala neurons was stimulus-specific, formed stable attractors and was predictive of memory content. Medial frontal cortex persistent activity, on the other hand, was modulated by memory load and task set but was not stimulus-specific. Trial-by-trial variability in persistent activity in both areas was related to memory strength, because it predicted the speed and accuracy by which stimuli were remembered. This work reveals, in humans, direct evidence for a distributed network of persistently active neurons supporting working memory maintenance.
Affiliation(s)
- Jan Kamiński
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, USA
- Shannon Sullivan
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Jeffrey M Chung
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Ian B Ross
- Department of Neurosurgery, Huntington Memorial Hospital, Pasadena, California, USA
- Adam N Mamelak
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Ueli Rutishauser
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, USA
13
Fusi S, Miller EK, Rigotti M. Why neurons mix: high dimensionality for higher cognition. Curr Opin Neurobiol 2016; 37:66-74. [PMID: 26851755] [DOI: 10.1016/j.conb.2016.01.010]
Abstract
Neurons often respond to diverse combinations of task-relevant variables. This form of mixed selectivity plays an important computational role which is related to the dimensionality of the neural representations: high-dimensional representations with mixed selectivity allow a simple linear readout to generate a huge number of different potential responses. In contrast, neural representations based on highly specialized neurons are low dimensional and they preclude a linear readout from generating several responses that depend on multiple task-relevant variables. Here we review the conceptual and theoretical framework that explains the importance of mixed selectivity and the experimental evidence that recorded neural representations are high-dimensional. We end by discussing the implications for the design of future experiments.
Affiliation(s)
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University College of Physicians and Surgeons, USA.
- Earl K Miller
- The Picower Institute for Learning and Memory & Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
- Mattia Rigotti
- IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA
14
Binas J, Indiveri G, Pfeiffer M. Local structure supports learning of deterministic behavior in recurrent neural networks. BMC Neurosci 2015. [PMCID: PMC4698769] [DOI: 10.1186/1471-2202-16-s1-p195]
15
Qiao N, Mostafa H, Corradi F, Osswald M, Stefanini F, Sumislawska D, Indiveri G. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Front Neurosci 2015; 9:141. [PMID: 25972778] [PMCID: PMC4413675] [DOI: 10.3389/fnins.2015.00141]
Abstract
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Affiliation(s)
- Ning Qiao
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Hesham Mostafa
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Federico Corradi
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Marc Osswald
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Fabio Stefanini
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Sumislawska
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
16
Muir DR, Mrsic-Flogel T. Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks. Phys Rev E Stat Nonlin Soft Matter Phys 2015; 91:042808. [PMID: 25974548] [DOI: 10.1103/physreve.91.042808]
Abstract
The eigenvalue spectrum of the matrix of directed weights defining a neural network model is informative of several stability and dynamical properties of network activity. Existing results for eigenspectra of sparse asymmetric random matrices neglect spatial or other constraints in determining entries in these matrices, and so are of partial applicability to cortical-like architectures. Here we examine a parameterized class of networks that are defined by sparse connectivity, with connection weighting modulated by physical proximity (i.e., asymmetric Euclidean random matrices), modular network partitioning, and functional specificity within the excitatory population. We present a set of analytical constraints that apply to the eigenvalue spectra of associated weight matrices, highlighting the relationship between connectivity rules and classes of network dynamics.
Affiliation(s)
- Dylan R Muir
- Biozentrum, University of Basel, 4056 Basel, Switzerland
17
Marx S, Gruenhage G, Walper D, Rutishauser U, Einhäuser W. Competition with and without priority control: linking rivalry to attention through winner-take-all networks with memory. Ann N Y Acad Sci 2015; 1339:138-53. [PMID: 25581077] [PMCID: PMC4376592] [DOI: 10.1111/nyas.12575]
Abstract
Competition is ubiquitous in perception. For example, items in the visual field compete for processing resources, and attention controls their priority (biased competition). The inevitable ambiguity in the interpretation of sensory signals yields another form of competition: distinct perceptual interpretations compete for access to awareness. Rivalry, where two equally likely percepts compete for dominance, explicates the latter form of competition. Building upon the similarity between attention and rivalry, we propose to model rivalry by a generic competitive circuit that is widely used in the attention literature: a winner-take-all (WTA) network. Specifically, we show that a network of two coupled WTA circuits replicates three common hallmarks of rivalry: the distribution of dominance durations, their dependence on input strength ("Levelt's propositions"), and the effects of stimulus removal (blanking). This model introduces a form of memory by forming discrete states and explains experimental data better than competitive models of rivalry without memory. This result supports the crucial role of memory in rivalry specifically and in competitive processes in general. Our approach unifies the seemingly distinct phenomena of rivalry, memory, and attention in a single model with competition as the common underlying principle.
Affiliation(s)
- Svenja Marx
- Neurophysics, Philipp-University of Marburg, Marburg, Germany
- Gina Gruenhage
- Bernstein Center for Computational Neurosciences, Berlin, Germany
- Daniel Walper
- Neurophysics, Philipp-University of Marburg, Marburg, Germany
- Ueli Rutishauser
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, California
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California
18
Rutishauser U, Slotine JJ, Douglas R. Computation in dynamically bounded asymmetric systems. PLoS Comput Biol 2015; 11:e1004039. [PMID: 25617645] [PMCID: PMC4305289] [DOI: 10.1371/journal.pcbi.1004039]
Abstract
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.
Affiliation(s)
- Ueli Rutishauser
- Computation and Neural Systems, California Institute of Technology, Pasadena, California, United States of America
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, California, United States of America
- Departments of Neurosurgery, Neurology and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, United States of America
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Rodney Douglas
- Institute of Neuroinformatics, University and ETH Zurich, Zurich, Switzerland
19
Bauer R, Zubler F, Pfister S, Hauri A, Pfeiffer M, Muir DR, Douglas RJ. Developmental self-construction and -configuration of functional neocortical neuronal networks. PLoS Comput Biol 2014; 10:e1003994. [PMID: 25474693] [PMCID: PMC4256067] [DOI: 10.1371/journal.pcbi.1003994]
Abstract
The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together in order to construct such functional networks. Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive co-operative ('winner-take-all', WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.

Models of learning in artificial neural networks generally assume that the neurons and approximate network are given, and then learning tunes the synaptic weights. By contrast, we address the question of how an entire functional neuronal network containing many differentiated neurons and connections can develop from only a single progenitor cell. We chose a winner-take-all network as the developmental target, because it is a computationally powerful circuit, and a candidate motif of neocortical networks. The key aspect of this challenge is that the developmental mechanisms must be locally autonomous as in Biology: They cannot depend on global knowledge or supervision. We have explored this developmental process by simulating in physical detail the fundamental biological behaviors, such as cell proliferation, neurite growth and synapse formation that give rise to the structural connectivity observed in the superficial layers of the neocortex. These differentiated, approximately connected neurons then adapt their synaptic weights homeostatically to obtain a uniform electrical signaling activity before going on to organize themselves according to the fundamental correlations embedded in a noisy wave-like input signal. In this way the precursor expands itself through development and unsupervised learning into winner-take-all functionality and orientation selectivity in a biologically plausible manner.
Affiliation(s)
- Roman Bauer
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- School of Computing Science, Newcastle University, Newcastle upon Tyne, United Kingdom
- Frédéric Zubler
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Department of Neurology, Inselspital Bern, Bern University Hospital, University of Bern, Bern, Switzerland
- Sabina Pfister
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Andreas Hauri
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Dylan R. Muir
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Biozentrum, University of Basel, Basel, Switzerland
- Rodney J. Douglas
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
20
Stefanini F, Neftci EO, Sheik S, Indiveri G. PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems. Front Neuroinform 2014; 8:73. [PMID: 25232314] [PMCID: PMC4152885] [DOI: 10.3389/fninf.2014.00073]
Abstract
Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures thus preventing a rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardware description. The high-level front-end that comes with PyNCS includes tools to define neural network models as well as to create, monitor and analyze spiking data. Here we report the design philosophy behind the PyNCS framework and describe its implementation. We demonstrate its functionality with two representative case studies, one using an event-based neuromorphic vision sensor, and one using a set of multi-neuron devices for carrying out a cognitive decision-making task involving state-dependent computation. PyNCS, already applicable to a wide range of existing spike-based neuromorphic setups, will accelerate the development of hybrid software/hardware neuromorphic systems, thanks to its code flexibility. The code is open-source and available online at https://github.com/inincs/pyNCS.
Affiliation(s)
- Fabio Stefanini
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Emre O Neftci
- Department of Bioengineering, Institute for Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Sadique Sheik
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Giacomo Indiveri
- Department of Information Technology and Electrical Engineering, Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
21
Binas J, Rutishauser U, Indiveri G, Pfeiffer M. Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity. Front Comput Neurosci 2014; 8:68. [PMID: 25071538] [PMCID: PMC4086298] [DOI: 10.3389/fncom.2014.00068]
Abstract
Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal-restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions, and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can adjust their parameters for stable WTA function autonomously while respecting anatomical constraints on neural wiring.
Affiliation(s)
- Jonathan Binas
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Ueli Rutishauser
- Department of Neurosurgery and Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Computation and Neural Systems Program, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
22
Muir DR, Cook M. Anatomical constraints on lateral competition in columnar cortical architectures. Neural Comput 2014; 26:1624-66. [PMID: 24877732] [DOI: 10.1162/neco_a_00613]
Abstract
Competition is a well-studied and powerful mechanism for information processing in neuronal networks, providing noise rejection, signal restoration, decision making and associative memory properties, with relatively simple requirements for network architecture. Models based on competitive interactions have been used to describe the shaping of functional properties in visual cortex, as well as the development of functional maps in columnar cortex. These models require competition within a cortical area to occur on a wider spatial scale than cooperation, usually implemented by lateral inhibitory connections having a longer range than local excitatory connections. However, measurements of cortical anatomy reveal that the spatial extent of inhibition is in fact more restricted than that of excitation. Relatively few models reflect this, and it is unknown whether lateral competition can occur in cortical-like networks that have a realistic spatial relationship between excitation and inhibition. Here we analyze simple models for cortical columns and perform simulations of larger models to show how the spatial scales of excitation and inhibition can interact to produce competition through disynaptic inhibition. Our findings give strong support to the direct coupling effect: that the presence of competition across the cortical surface is predicted well by the anatomy of direct excitatory and inhibitory coupling and that multisynaptic network effects are negligible. This implies that for networks with short-range inhibition and longer-range excitation, the spatial extent of competition is even narrower than the range of inhibitory connections. Our results suggest the presence of network mechanisms that focus on intra- rather than intercolumn competition in neocortex, highlighting the need for both new models and direct experimental characterizations of lateral inhibition and competition in columnar cortex.
Affiliation(s)
- Dylan R Muir
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
23
Mostafa H, Indiveri G. Sequential activity in asymmetrically coupled winner-take-all circuits. Neural Comput 2014; 26:1973-2004. [PMID: 24877737] [DOI: 10.1162/neco_a_00619]
Abstract
Understanding the sequence generation and learning mechanisms used by recurrent neural networks in the nervous system is an important problem that has been studied extensively. However, most of the models proposed in the literature are either not compatible with neuroanatomy and neurophysiology experimental findings, or are not robust to noise and rely on fine tuning of the parameters. In this work, we propose a novel model of sequence learning and generation that is based on the interactions among multiple asymmetrically coupled winner-take-all (WTA) circuits. The network architecture is consistent with mammalian cortical connectivity data and uses realistic neuronal and synaptic dynamics that give rise to noise-robust patterns of sequential activity. The novel aspect of the network we propose lies in its ability to produce robust patterns of sequential activity that can be halted, resumed, and readily modulated by external input, and in its ability to make use of realistic plastic synapses to learn and reproduce the arbitrary input-imposed sequential patterns. Sequential activity takes the form of a single activity bump that stably propagates through multiple WTA circuits along one of a number of possible paths. Because the network can be configured to either generate spontaneous sequences or wait for external inputs to trigger a transition in the sequence, it provides the basis for creating state-dependent perception-action loops. We first analyze a rate-based approximation of the proposed spiking network to highlight the relevant features of the network dynamics and then show numerical simulation results with spiking neurons, realistic conductance-based synapses, and spike-timing dependent plasticity (STDP) rules to validate the rate-based model.
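A heavily abstracted, rate-based caricature of the mechanism described here, with invented parameters: each sequence state is collapsed to a single saturating unit, states compete through shared inhibition, and an asymmetric forward coupling that is effective only during a brief external 'go' pulse moves the activity bump one state onward. The actual model uses spiking neurons, conductance-based synapses, and full multi-neuron WTA circuits.

```python
import numpy as np

K = 5                                  # sequence states (abstracted WTA winners)
alpha, beta, gamma = 1.6, 0.5, 1.0     # self-excitation, shared inhibition, forward coupling

W = alpha * np.eye(K) - beta * np.ones((K, K))   # within-circuit competition
W_fwd = np.eye(K, k=-1) * gamma                  # state i drives state i+1

def step(x, go, dt=0.05):
    drive = W @ x + go * (W_fwd @ x)
    return x + dt * (-x + np.clip(drive, 0.0, 1.0))   # saturating threshold-linear rate

x = np.zeros(K)
x[0] = 1.0                                       # activity bump starts on state 0
for t in range(3000):
    go = 1.0 if t % 1000 < 60 else 0.0           # brief external 'go' pulses
    x = step(x, go)
    if t % 1000 == 999:
        # between pulses the bump is held by the attractor; each pulse advances it
        print("active state:", int(np.argmax(x)), "rates:", np.round(x, 2))
```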
Affiliation(s)
- Hesham Mostafa
- Institute for Neuroinformatics, University of Zurich and ETH Zurich, Zurich 8057, Switzerland
24
Maoz U, Rutishauser U, Kim S, Cai X, Lee D, Koch C. Predeliberation activity in prefrontal cortex and striatum and the prediction of subsequent value judgment. Front Neurosci 2013; 7:225. [PMID: 24324396] [PMCID: PMC3840801] [DOI: 10.3389/fnins.2013.00225]
Abstract
Rational, value-based decision-making mandates selecting the option with highest subjective expected value after appropriate deliberation. We examined activity in the dorsolateral prefrontal cortex (DLPFC) and striatum of monkeys deciding between smaller, immediate rewards and larger, delayed ones. We previously found neurons that modulated their activity in this task according to the animal's choice, while it deliberated (choice neurons). Here we found neurons whose spiking activities were predictive of the spatial location of the selected target (spatial-bias neurons) or the size of the chosen reward (reward-bias neurons) before the onset of the cue presenting the decision-alternatives, and thus before rational deliberation could begin. Their predictive power increased as the values the animals associated with the two decision alternatives became more similar. The ventral striatum (VS) preferentially contained spatial-bias neurons; the caudate nucleus (CD) preferentially contained choice neurons. In contrast, the DLPFC contained significant numbers of all three neuron types, but choice neurons were not preferentially also bias neurons of either kind there, nor were spatial-bias neurons preferentially also choice neurons, and vice versa. We suggest a simple winner-take-all (WTA) circuit model to account for the dissociation of choice and bias neurons. The model reproduced our results and made additional predictions that were borne out empirically. Our data are compatible with the hypothesis that the DLPFC and striatum harbor dissociated neural populations that represent choices and predeliberation biases that are combined after cue onset; the bias neurons have a weaker effect on the ultimate decision than the choice neurons, so their influence is progressively apparent for trials where the values associated with the decision alternatives are increasingly similar.
Affiliation(s)
- Uri Maoz
- Division of Biology, California Institute of Technology, Pasadena, CA, USA
25
Edelman GM, Gally JA. Reentry: a key mechanism for integration of brain function. Front Integr Neurosci 2013; 7:63. [PMID: 23986665] [PMCID: PMC3753453] [DOI: 10.3389/fnint.2013.00063]
Abstract
Reentry in nervous systems is the ongoing bidirectional exchange of signals along reciprocal axonal fibers linking two or more brain areas. The hypothesis that reentrant signaling serves as a general mechanism to couple the functioning of multiple areas of the cerebral cortex and thalamus was first proposed in 1977 and 1978 (Edelman, 1978). A review of the amount and diversity of supporting experimental evidence accumulated since then suggests that reentry is among the most important integrative mechanisms in vertebrate brains (Edelman, 1993). Moreover, these data prompt testable hypotheses regarding mechanisms that favor the development and evolution of reentrant neural architectures.
26
Abstract
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a "soft state machine" running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina.
Collapse
|
27
|
McKinstry JL, Edelman GM. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device. Front Neurorobot 2013; 7:10. [PMID: 23760804 PMCID: PMC3674315 DOI: 10.3389/fnbot.2013.00010] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2013] [Accepted: 05/20/2013] [Indexed: 11/24/2022] Open
Abstract
Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing-dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device to respond to visual input by autonomously generating temporal sequences of motor actions.
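A stripped-down, single-area rate model already illustrates how winner-take-all attractors combined with slow adaptation and asymmetric "successor" connections can step through patterns in a fixed order. The sketch below is only loosely inspired by the architecture described above (the paper uses two reentrantly coupled areas of spiking neurons with synaptic plasticity), and every parameter is illustrative.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

n = 3                                          # three patterns: A -> B -> C -> A ...
w_next = np.roll(np.eye(n), 1, axis=0)         # successor weights (B listens to A, ...)
x = np.array([1.0, 0.0, 0.0])                  # WTA activity, pattern A initially wins
a = np.zeros(n)                                # slow adaptation under the active pattern
dt, tau_a, labels, last = 0.05, 3.0, "ABC", 0

for t in range(1500):
    drive = (1.4 * x                           # self-excitation sustains the winner
             + 1.5 * (w_next @ a)              # delayed 'successor' drive via adaptation
             - 1.1 * (x.sum() - x)             # shared WTA inhibition
             - 2.0 * a)                        # fatigue eventually releases the winner
    x = np.clip(x + dt * (-x + relu(drive)), 0.0, 1.0)
    a += dt / tau_a * (x - a)
    w = int(np.argmax(x))
    if w != last and x[w] > 0.5:
        print("switched to pattern", labels[w])
        last = w
```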
Collapse
|
28
|
Nessler B, Pfeiffer M, Buesing L, Maass W. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comput Biol 2013; 9:e1003037. [PMID: 23633941 PMCID: PMC3636028 DOI: 10.1371/journal.pcbi.1003037] [Citation(s) in RCA: 112] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2012] [Accepted: 03/04/2013] [Indexed: 11/24/2022] Open
Abstract
The principles by which networks of neurons compute, and how spike-timing-dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex.
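The core idea, a soft winner-take-all whose stochastic winner moves its weights toward the current input and thereby performs an online, EM-like fit of an implicit generative model, can be illustrated with a rate-based analogue. The sketch below fits a small mixture of Bernoullis; it is not the paper's STDP rule, and the data, learning rate, and network size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: binary input patterns generated from two hidden causes (prototypes).
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], dtype=float)
def sample():
    k = rng.integers(2)
    return (rng.random(6) < 0.8 * protos[k] + 0.1).astype(float)

K, D, eta = 2, 6, 0.05
W = rng.normal(0, 0.1, (K, D))        # log-odds weights of each WTA neuron
b = np.zeros(K)                       # excitability terms acting as log-priors

for _ in range(3000):
    x = sample()
    u = W @ x + b                               # membrane-like activations
    p = np.exp(u - u.max()); p /= p.sum()       # soft WTA = posterior over causes
    k = rng.choice(K, p=p)                      # stochastic winner (only it 'spikes')
    # Hebbian-style updates: move the winner's weights toward the input and
    # the excitabilities toward the neurons' win frequencies (an online EM-like step).
    W[k] += eta * (x - 1.0 / (1.0 + np.exp(-W[k])))
    b += eta * ((np.arange(K) == k) - np.exp(b) / np.exp(b).sum())

# Learned prototypes: each row should approach one of the two generating patterns.
print(np.round(1.0 / (1.0 + np.exp(-W)), 2))
```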
Collapse
Affiliation(s)
- Bernhard Nessler
- Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria.
Collapse
|
29
|
Genot AJ, Fujii T, Rondelez Y. Computing with competition in biochemical networks. PHYSICAL REVIEW LETTERS 2012; 109:208102. [PMID: 23215526 DOI: 10.1103/physrevlett.109.208102] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2012] [Revised: 08/28/2012] [Indexed: 06/01/2023]
Abstract
Cells rely on limited resources such as enzymes or transcription factors to process signals and make decisions. However, independent cellular pathways often compete for a common molecular resource. Competition is difficult to analyze because of its nonlinear global nature, and its role remains unclear. Here we show how decision pathways such as transcription networks may exploit competition to process information. Competition for one resource leads to the recognition of convex sets of patterns, whereas competition for several resources (overlapping or cascaded regulons) allows even more general pattern recognition. Competition also generates surprising couplings, such as correlating species that share no resource but a common competitor. The mechanism we propose relies on three primitives that are ubiquitous in cells: multi-input motifs, competition for a resource, and positive feedback loops.
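A minimal numerical sketch of the resource-competition primitive (with illustrative concentrations and association constants, not values from the paper): when binding is tight and the shared resource scarce, the higher-affinity target captures almost all of it, a winner-take-all-like outcome, whereas similar affinities give a graded split.

```python
import numpy as np

def free_resource(totals, Ks, R_total, iters=200):
    """Bisection for the free concentration of a shared resource R.

    Each target i binds R with association constant Ks[i]; the bound amount
    follows a simple binding isotherm, and R_free is fixed by mass conservation.
    """
    lo, hi = 0.0, R_total
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        bound = totals * Ks * mid / (1.0 + Ks * mid)
        if mid + bound.sum() > R_total:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def shares(totals, Ks, R_total):
    R_free = free_resource(totals, Ks, R_total)
    return totals * Ks * R_free / (1.0 + Ks * R_free)

totals = np.array([1.0, 1.0])                                  # two equally abundant targets
print(shares(totals, Ks=np.array([1e4, 1e2]), R_total=0.5))    # strong vs weak binder
print(shares(totals, Ks=np.array([1e4, 9e3]), R_total=0.5))    # two similar binders
```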
Collapse
Affiliation(s)
- Anthony J Genot
- LIMMS/CNRS-IIS, Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Tokyo 153-8505, Japan
Collapse
|
30
|
Dyer EL, Rutishauser U, Baraniuk RG. Group sparse coding with a collection of winner-take-all networks. BMC Neurosci 2012. [PMCID: PMC3403521 DOI: 10.1186/1471-2202-13-s1-p184] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
|
31
|
Abstract
In 1873 Camillo Golgi discovered his eponymous stain, which he called la reazione nera. By adding to it the concepts of the Neuron Doctrine and the Law of Dynamic Polarisation, Santiago Ramon y Cajal was able to link the individual Golgi-stained neurons he saw down his microscope into circuits. This was revolutionary and we have all followed Cajal's winning strategy for over a century. We are now on the verge of a new revolution, which offers the prize of a far more comprehensive description of neural circuits and their operation. The hope is that we will exploit the power of computer vision algorithms and modern molecular biological techniques to rapidly acquire reconstructions of single neurons and synaptic circuits, and to control the function of selected types of neurons. Only one item is now conspicuous by its absence: the 21st century equivalent of the concepts of the Neuron Doctrine and the Law of Dynamic Polarisation. Without their equivalent we will inevitably struggle to make sense of our 21st century observations within the 19th and 20th century conceptual framework we have inherited.
Collapse
Affiliation(s)
- Rodney J Douglas
- Institute of Neuroinformatics, UZH/ETH, Winterthurerstrasse 190, 8057 Zürich, Switzerland
Collapse
|
32
|
Rutishauser U, Slotine JJ, Douglas RJ. Competition through selective inhibitory synchrony. Neural Comput 2012; 24:2033-52. [PMID: 22509969 DOI: 10.1162/neco_a_00304] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily because their axons and dendrites are colocalized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this colocalization assumption is not valid. In this letter, we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron and even across different cortical areas. We prove by nonlinear contraction analysis and demonstrate by simulation that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
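A rate-based caricature of this mechanism is sketched below. The paper couples inhibitory neurons through spike synchrony and proves stability by contraction analysis; here synchrony is crudely replaced by letting both modules feel the averaged inhibitory rate, and all gains are illustrative. With independent inhibition each module selects its own winner, whereas with coupled inhibition the two modules behave as a single WTA with one global winner.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

def run(inputs, couple_inhibition, steps=4000, dt=0.01):
    """Two WTA modules, each with 2 excitatory units and 1 local inhibitory unit."""
    x = np.zeros(4)        # excitatory rates: [module0 a, module0 b, module1 a, module1 b]
    h = np.zeros(2)        # one inhibitory rate per module
    for _ in range(steps):
        inh = h.copy()
        if couple_inhibition:              # 'synchronized' inhibition: both modules
            inh[:] = h.mean()              # feel the same inhibitory signal
        drive = inputs + 1.3 * x - 2.0 * np.repeat(inh, 2)
        x = np.clip(x + dt * (-x + relu(drive)), 0.0, 2.0)
        h += dt * (-h + np.array([x[:2].sum(), x[2:].sum()]))
    return np.round(x, 2)

inputs = np.array([1.0, 0.4, 0.9, 0.3])    # unit 0 is the strongest overall
print("independent WTAs:", run(inputs, couple_inhibition=False))
print("coupled inhibition:", run(inputs, couple_inhibition=True))
```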
Collapse
Affiliation(s)
- Ueli Rutishauser
- Department of Neural Systems, Max Planck Institute for Brain Research, Frankfurt am Main, Hessen 60528, Germany.
Collapse
|
33
|
Neftci E, Chicca E, Indiveri G, Douglas R. A Systematic Method for Configuring VLSI Networks of Spiking Neurons. Neural Comput 2011; 23:2457-97. [DOI: 10.1162/neco_a_00182] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software-simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits a seamless integration of software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most importantly, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
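The calibration idea, sweep a circuit bias, measure the resulting population activity, fit a mapping, and invert it to realize a desired model parameter, can be sketched generically. The code below uses a made-up "hardware" response function and a plain least-squares fit purely for illustration; it is not the authors' procedure or chip interface.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the chip: mean population rate as a (hidden) function of a bias
# voltage, measured with trial-to-trial variability.  Purely illustrative.
def measure_rate(bias_v, trials=20):
    true_rate = 85.0 * (bias_v - 0.30)            # hidden hardware characteristic
    return np.maximum(true_rate, 0.0) + rng.normal(0, 1.5, trials)

# Step 1: sweep the bias and record population activity.
biases = np.linspace(0.32, 0.60, 8)
rates = np.array([measure_rate(b).mean() for b in biases])

# Step 2: fit a simple linear mapping rate = a * bias + c (least squares).
a, c = np.polyfit(biases, rates, 1)

# Step 3: invert the mapping to find the bias realizing a rate demanded by the model.
target_rate = 20.0
bias_for_target = (target_rate - c) / a
print(f"fitted mapping: rate = {a:.1f} * bias + {c:.1f}")
print(f"set bias to about {bias_for_target:.3f} V for a {target_rate:.0f} Hz target")
```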
Collapse
Affiliation(s)
- Emre Neftci
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Elisabetta Chicca
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Giacomo Indiveri
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| | - Rodney Douglas
- Institute of Neuroinformatics, ETH, and University of Zurich, Zurich 8057, Switzerland
| |
Collapse
|
34
|
Destexhe A. Intracellular and computational evidence for a dominant role of internal network activity in cortical computations. Curr Opin Neurobiol 2011; 21:717-25. [PMID: 21715156 DOI: 10.1016/j.conb.2011.06.002] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2011] [Revised: 05/28/2011] [Accepted: 06/03/2011] [Indexed: 11/28/2022]
Abstract
The mammalian cerebral cortex is characterized by intense spontaneous activity that varies with brain region, age, and behavioral state. Classically, the cortex is considered to be driven by the senses, a paradigm that corresponds well to experiments in quiescent or deeply anesthetized states. In awake animals, however, the spontaneous activity cannot be dismissed as 'background noise': it is of comparable, or even higher, amplitude than evoked sensory responses. Recent evidence suggests that this internal activity is not only dominant but also shares many properties with the responses to natural sensory inputs, suggesting that the spontaneous activity is not independent of the sensory input. Such evidence is reviewed here, with an emphasis on intracellular and computational aspects. Statistical measures, such as the spike-triggered average of synaptic conductances, show that the impact of the internal network state on spiking activity is major in awake animals. Thus, cortical activity cannot be considered as driven by the senses; rather, sensory inputs seem to modulate and modify the internal dynamics of the cerebral cortex. This view offers an attractive interpretation not only of dreaming activity (the absence of sensory input) but also of several mental disorders.
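As a concrete example of the kind of statistical measure mentioned above, the snippet below computes a spike-triggered average of a synthetic conductance trace; the recording, spike statistics, and window length are all invented for illustration and are not data from the review.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 0.001                                   # 1 ms resolution
t = np.arange(0, 60.0, dt)                   # 60 s of synthetic recording
g = rng.normal(10.0, 2.0, t.size)            # stand-in synaptic conductance trace (nS)
g = np.convolve(g, np.ones(20) / 20, mode="same")   # smooth it a little

# Synthetic spikes: more likely when the conductance has just risen (for illustration).
drive = np.concatenate([[0.0], np.diff(g)])
spike_prob = np.clip(0.002 + 0.02 * np.maximum(drive, 0.0), 0.0, 1.0)
spikes = np.where(rng.random(t.size) < spike_prob)[0]

window = int(0.2 / dt)                       # look 200 ms back from each spike
valid = spikes[spikes >= window]
sta = np.mean([g[s - window:s] for s in valid], axis=0)

print(f"{valid.size} spikes; conductance rises by "
      f"{sta[-1] - sta[0]:.2f} nS over the 200 ms before a spike (on average)")
```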
Collapse
Affiliation(s)
- Alain Destexhe
- Unité de Neurosciences, Information et Complexité, CNRS, 91198 Gif-sur-Yvette, France.
| |
Collapse
|
35
|
Zhu J. A multifactor winner-take-all dynamics. Neural Comput 2011; 23:1835-61. [PMID: 21492007 DOI: 10.1162/neco_a_00136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Perceptual systems often have to disentangle different factors from mixed observations. If each factor is represented by a set of variables, each standing for a discrete value of the factor, the factor values underlying an observation can be extracted by a winner-take-all (WTA) mechanism over the direct product of the factors. Search in the product space, however, is expensive. It is computationally attractive to work on the marginal factors. In this letter we study the dynamics of a multifactor system modeled by a number of interacting WTA dynamics, one for each factor. We give theoretical results on the stable fixed points of this system and show experimental results on invariant object recognition.
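The marginal-factor idea can be sketched as two coupled soft winner-take-all updates, each competing over the evidence obtained by projecting the observation through the other factor's current estimate. This discrete-time caricature is not the letter's dynamical system; the factor sizes, softmax gain, and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two factors, e.g. identity (4 values) and position (5 values).  An observation is
# a noisy version of the outer product of one identity pattern and one position pattern.
ident = np.eye(4)[2]                      # true identity = 2
pos = np.eye(5)[1]                        # true position = 1
obs = np.outer(ident, pos) + 0.3 * rng.random((4, 5))

def soft_wta(v, gain=8.0):
    e = np.exp(gain * (v - v.max()))
    return e / e.sum()

# Coupled marginal WTAs: each factor estimate competes over the evidence obtained
# by projecting the observation through the other factor's current estimate.
p_ident = np.full(4, 0.25)
p_pos = np.full(5, 0.2)
for _ in range(20):
    p_ident = soft_wta(obs @ p_pos)       # evidence for identity, given position belief
    p_pos = soft_wta(obs.T @ p_ident)     # evidence for position, given identity belief

print("identity:", int(np.argmax(p_ident)), " position:", int(np.argmax(p_pos)))
```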
Collapse
Affiliation(s)
- Junmei Zhu
- Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany.
| |
Collapse
|
36
|
Rutishauser U, Douglas RJ, Slotine JJ. Collective stability of networks of winner-take-all circuits. Neural Comput 2010; 23:735-73. [PMID: 21162667 DOI: 10.1162/neco_a_00091] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support these sophisticated computations and maintaining overall circuit stability. The issue of stability is all the more intriguing when one considers that the WTAs are expected to be densely distributed through the superficial layers and that they are at least partially interconnected. We consider how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large, stable networks. We use nonlinear contraction theory to establish conditions for stability in the fully nonlinear case and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multistable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, that is composed of small modules expressing competition through shared inhibition.
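A crude numerical stand-in for the contraction argument: simulate one linear-threshold WTA module with gains chosen inside a stable regime (values invented for illustration, not the letter's derived bounds) and check that trajectories started from different initial conditions converge to the same state.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

# One soft-WTA module: two excitatory units x1, x2 and one inhibitory unit y.
alpha, beta1, beta2 = 1.2, 2.5, 1.0   # recurrent excitation, inhibition, exc->inh gains

def step(z, inp, dt=0.01):
    x, y = z[:2], z[2]
    dx = -x + relu(alpha * x - beta1 * y + inp)
    dy = -y + relu(beta2 * x.sum())
    return z + dt * np.concatenate([dx, [dy]])

inp = np.array([1.0, 0.8])
z_a = np.array([0.0, 0.0, 0.0])       # two different initial conditions
z_b = np.array([0.5, 1.5, 0.2])
for k in range(3000):
    z_a, z_b = step(z_a, inp), step(z_b, inp)
    if k % 1000 == 0:
        print(f"step {k}: distance between trajectories = {np.linalg.norm(z_a - z_b):.4f}")
print("final state:", np.round(z_a, 2))
```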
Collapse
Affiliation(s)
- Ueli Rutishauser
- Department of Neural Systems and Coding, Max Planck Institute for Brain Research, Frankfurt am Main, Hessen 60528, Germany
Collapse
|
37
|
Neuromorphic sensory systems. Curr Opin Neurobiol 2010; 20:288-95. [DOI: 10.1016/j.conb.2010.03.007] [Citation(s) in RCA: 210] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2010] [Revised: 03/22/2010] [Accepted: 03/24/2010] [Indexed: 11/17/2022]
|
38
|
Haider B, McCormick DA. Rapid neocortical dynamics: cellular and network mechanisms. Neuron 2009; 62:171-89. [PMID: 19409263 PMCID: PMC3132648 DOI: 10.1016/j.neuron.2009.04.008] [Citation(s) in RCA: 321] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2008] [Revised: 04/12/2009] [Accepted: 04/13/2009] [Indexed: 01/07/2023]
Abstract
The highly interconnected local and large-scale networks of the neocortical sheet rapidly and dynamically modulate their functional connectivity according to behavioral demands. This basic operating principle of the neocortex is mediated by the continuously changing flow of excitatory and inhibitory synaptic barrages that not only control participation of neurons in networks but also define the networks themselves. The rapid control of neuronal responsiveness via synaptic bombardment is a fundamental property of cortical dynamics that may provide the basis of diverse behaviors, including sensory perception, motor integration, working memory, and attention.
Collapse
Affiliation(s)
- Bilal Haider
- Department of Neurobiology, Kavli Institute for Neuroscience, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06510, USA
| | - David A. McCormick
- Department of Neurobiology, Kavli Institute for Neuroscience, Yale University School of Medicine, 333 Cedar Street, New Haven, CT 06510, USA
| |
Collapse
|
39
|
Artificial Cognitive Systems: From VLSI Networks of Spiking Neurons to Neuromorphic Cognition. Cognit Comput 2009. [DOI: 10.1007/s12559-008-9003-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|