1. Cotteret M, Greatorex H, Ziegler M, Chicca E. Vector Symbolic Finite State Machines in Attractor Neural Networks. Neural Comput 2024; 36:549-595. PMID: 38457766. DOI: 10.1162/neco_a_01638.
Abstract
Hopfield attractor networks are robust distributed models of human memory, but they lack a general mechanism for effecting state-dependent attractor transitions in response to input. We propose construction rules such that an attractor network may implement an arbitrary finite state machine (FSM), where states and stimuli are represented by high-dimensional random vectors and all state transitions are enacted by the attractor network's dynamics. Numerical simulations show the capacity of the model, in terms of the maximum size of implementable FSM, to be linear in the size of the attractor network for dense bipolar state vectors and approximately quadratic for sparse binary state vectors. We show that the model is robust to imprecise and noisy weights, and so a prime candidate for implementation with high-density but unreliable devices. By endowing attractor networks with the ability to emulate arbitrary FSMs, we propose a plausible path by which FSMs could exist as a distributed computational primitive in biological neural networks.
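As a loose illustration of the core mechanism (not the authors' construction rules), the sketch below stores a few dense bipolar random vectors in a Hopfield network via the standard outer-product rule and shows attractor dynamics cleaning up a corrupted state. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                       # neurons
patterns = rng.choice([-1, 1], size=(5, N))    # dense bipolar state vectors

# Hebbian outer-product weights storing the patterns as attractors
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0)

def recall(x, steps=20):
    # synchronous sign-threshold updates until the state settles
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# corrupt one stored state and let the attractor dynamics restore it
noisy = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)
print(int(np.sum(recovered == patterns[0])))   # close to N
```

The paper's contribution is the additional machinery (stimulus-conditioned heteroassociative terms) that drives transitions *between* such attractors; this toy only shows the within-attractor cleanup that makes those states robust.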
Affiliation(s)
- Madison Cotteret
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Hugh Greatorex
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
- Martin Ziegler
- Micro- and Nanoelectronic Systems, Institute of Micro- and Nanotechnologies (IMN) MacroNano, Technische Universität Ilmenau, 98693 Ilmenau, Germany
- Elisabetta Chicca
- Bio-Inspired Circuits and Systems Lab, Zernike Institute for Advanced Materials, and Groningen Cognitive Systems and Materials Center, University of Groningen, 9747 AG Groningen, Netherlands
2. Breffle J, Mokashe S, Qiu S, Miller P. Multistability in neural systems with random cross-connections. Biol Cybern 2023; 117:485-506. PMID: 38133664. DOI: 10.1007/s00422-023-00981-w.
Abstract
Neural circuits with multiple discrete attractor states could support a variety of cognitive tasks according to both empirical data and model simulations. We assess the conditions for such multistability in neural systems using a firing rate model framework, in which clusters of similarly responsive neurons are represented as single units, which interact with each other through independent random connections. We explore the range of conditions in which multistability arises via recurrent input from other units while individual units, typically with some degree of self-excitation, lack sufficient self-excitation to become bistable on their own. We find many cases of multistability-defined as the system possessing more than one stable fixed point-in which stable states arise via a network effect, allowing subsets of units to maintain each other's activity because their net input to each other when active is sufficiently positive. In terms of the strength of within-unit self-excitation and standard deviation of random cross-connections, the region of multistability depends on the response function of units. Indeed, multistability can arise with zero self-excitation, purely through zero-mean random cross-connections, if the response function rises supralinearly at low inputs from a value near zero at zero input. We simulate and analyze finite systems, showing that the probability of multistability can peak at intermediate system size, and connect with other literature analyzing similar systems in the infinite-size limit. We find regions of multistability with a bimodal distribution for the number of active units in a stable state. Finally, we find evidence for a log-normal distribution of sizes of attractor basins, which produces Zipf's Law when enumerating the proportion of trials within which random initial conditions lead to a particular stable state of the system.
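A minimal sketch of the key ingredient, mutual excitation supporting a second stable state even though no unit is bistable on its own, can be built with two firing-rate units, zero self-excitation, and a response function that rises supralinearly from zero. This is a hand-crafted toy, not the paper's random-network model; the function and weights below are illustrative.

```python
import numpy as np

def f(x):
    # supralinear near zero (quadratic), saturating at 1
    return np.minimum(np.maximum(x, 0.0) ** 2, 1.0)

W = np.array([[0.0, 1.5],
              [1.5, 0.0]])   # no self-excitation, only cross-connections

def settle(r, dt=0.1, steps=2000):
    # Euler integration of dr/dt = -r + f(W r)
    for _ in range(steps):
        r = r + dt * (-r + f(W @ r))
    return r

low  = settle(np.array([0.1, 0.1]))   # weak start: activity decays to 0
high = settle(np.array([0.8, 0.8]))   # strong start: mutually sustained
print(low.round(3), high.round(3))
```

Both the quiescent state and the co-active state are stable fixed points, so the system is multistable purely through cross-connections, the same qualitative effect the paper analyzes at scale with random weights.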
Affiliation(s)
- Jordan Breffle
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Subhadra Mokashe
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Siwei Qiu
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Paul Miller
- Neuroscience Program, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Volen National Center for Complex Systems, Brandeis University, 415 South St, Waltham, MA, 02454, USA
- Department of Biology, Brandeis University, 415 South St, Waltham, MA, 02454, USA
3. Köksal Ersöz E, Chossat P, Krupa M, Lavigne F. Dynamic branching in a neural network model for probabilistic prediction of sequences. J Comput Neurosci 2022; 50:537-557. PMID: 35948839. DOI: 10.1007/s10827-022-00830-y.
Abstract
An important function of the brain is to predict which stimulus is likely to occur based on the perceived cues. The present research studied the branching behavior of a computational network model of populations of excitatory and inhibitory neurons, both analytically and through simulations. Results show how synaptic efficacy, retroactive inhibition and short-term synaptic depression determine the dynamics of selection between different branches predicting sequences of stimuli of different probabilities. Further results show that changes in the probability of the different predictions depend on variations of neuronal gain. Such variations allow the network to optimize the probability of its predictions to changing probabilities of the sequences without changing synaptic efficacy.
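The gain result can be illustrated with a toy soft competition between branches: holding synaptic efficacies fixed and varying only a gain parameter reshapes the selection probabilities. This is a softmax caricature, not the paper's excitatory/inhibitory population model with synaptic depression; names and values are illustrative.

```python
import numpy as np

def branch_probs(efficacies, gain):
    # soft winner-take-all over competing branches; the gain parameter
    # rescales the effective contrast between fixed synaptic efficacies
    z = gain * np.asarray(efficacies, dtype=float)
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

w = [1.0, 0.8, 0.5]                  # fixed efficacies of three branches
print(branch_probs(w, gain=1.0))     # relatively even selection
print(branch_probs(w, gain=5.0))     # higher gain sharpens selection
```

The point mirrors the abstract's conclusion: prediction probabilities can track changing sequence statistics through gain modulation alone, without rewriting the synaptic weights.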
Affiliation(s)
- Elif Köksal Ersöz
- Univ Rennes, INSERM, LTSI - UMR 1099, Campus Beaulieu, Rennes, F-35000, France
- Project Team MathNeuro, INRIA-CNRS-UNS, 2004 route des Lucioles-BP 93, Sophia Antipolis, 06902, France
- Pascal Chossat
- Project Team MathNeuro, INRIA-CNRS-UNS, 2004 route des Lucioles-BP 93, Sophia Antipolis, 06902, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Campus Valrose, Nice, 06300, France
- Martin Krupa
- Project Team MathNeuro, INRIA-CNRS-UNS, 2004 route des Lucioles-BP 93, Sophia Antipolis, 06902, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Campus Valrose, Nice, 06300, France
- Frédéric Lavigne
- Université Côte d'Azur, CNRS-BCL, Campus Saint Jean d'Angely, Nice, 06300, France