1. Vidal-Saez MS, Vilarroya O, Garcia-Ojalvo J. Biological computation through recurrence. Biochem Biophys Res Commun 2024; 728:150301. PMID: 38971000; DOI: 10.1016/j.bbrc.2024.150301.
Abstract
One of the defining features of living systems is their adaptability to changing environmental conditions. This requires organisms to extract temporal and spatial features of their environment, and use that information to compute the appropriate response. In the last two decades, a growing body of work, mainly coming from the machine learning and computational neuroscience fields, has shown that such complex information processing can be performed by recurrent networks. Temporal computations arise in these networks through the interplay between the external stimuli and the network's internal state. In this article we review our current understanding of how recurrent networks can be used by biological systems, from cells to brains, for complex information processing. Rather than focusing on sophisticated, artificial recurrent architectures such as long short-term memory (LSTM) networks, here we concentrate on simpler network structures and learning algorithms that can be expected to have been found by evolution. We also review studies showing evidence of naturally occurring recurrent networks in living organisms. Lastly, we discuss some relevant evolutionary aspects concerning the emergence of this natural computation paradigm.
Affiliation(s)
- María Sol Vidal-Saez: Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Dr Aiguader 88, 08003 Barcelona, Spain
- Oscar Vilarroya: Department of Psychiatry and Legal Medicine, Universitat Autònoma de Barcelona, 08193 Cerdanyola del Vallès, Spain; Hospital del Mar Medical Research Institute (IMIM), Dr Aiguader 88, 08003 Barcelona, Spain
- Jordi Garcia-Ojalvo: Department of Medicine and Life Sciences, Universitat Pompeu Fabra, Dr Aiguader 88, 08003 Barcelona, Spain
2. Goto Y, Kitajo K. Selective consistency of recurrent neural networks induced by plasticity as a mechanism of unsupervised perceptual learning. PLoS Comput Biol 2024; 20:e1012378. PMID: 39226313; PMCID: PMC11398647; DOI: 10.1371/journal.pcbi.1012378.
Abstract
Understanding how the brain achieves relatively consistent information processing despite the inherent variability of its activity is a major challenge in neuroscience. Recently, it has been reported that the consistency of neural responses to repeatedly presented stimuli is enhanced implicitly, in an unsupervised way, resulting in improved perceptual consistency. Here, we propose the term "selective consistency" to describe this input-dependent consistency and hypothesize that it is acquired in a self-organizing manner through plasticity within the neural system. To test this, we investigated whether a reservoir-based plastic model could acquire selective consistency to repeated stimuli. We used white-noise sequences randomly generated in each trial, together with reference white-noise sequences that were presented multiple times. The results showed that the plastic network acquired selective consistency rapidly, with as few as five exposures to a stimulus, even for white noise. The acquisition of selective consistency could occur independently of performance optimization, as the network's time-series prediction accuracy for the reference stimuli did not improve with repeated exposure and optimization. Furthermore, the network achieved selective consistency only when operating in the region between order and chaos. These findings suggest that the neural system can acquire selective consistency in a self-organizing manner and that this may serve as a mechanism for certain types of learning.
Affiliation(s)
- Yujin Goto: Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan
- Keiichi Kitajo: Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan; Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan
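The inter-trial consistency the abstract describes can be illustrated on a static (non-plastic) echo-state reservoir: drive the same network with the same frozen noise stimulus from different internal initial states and correlate the resulting state trajectories. This is a minimal sketch of the consistency measure only, with parameters of my choosing; it does not reproduce the paper's plastic model.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 100, 500, 200

# Static reservoir scaled to spectral radius 0.9 (echo-state regime)
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0, 1, N)
u = rng.normal(0, 1, T)          # one frozen white-noise stimulus

def run(x0):
    """Drive the reservoir with the same stimulus from initial state x0."""
    x, traj = x0, []
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        traj.append(x.copy())
    return np.array(traj)

# Two "trials": identical stimulus, different internal initial conditions
a = run(rng.normal(0, 1, N))
b = run(rng.normal(0, 1, N))

# Inter-trial consistency after washout: mean per-unit correlation
post_a, post_b = a[washout:], b[washout:]
consistency = np.mean([np.corrcoef(post_a[:, i], post_b[:, i])[0, 1]
                       for i in range(N)])
```

In the stable regime the trajectories converge despite different initial states, so consistency approaches 1; the paper's point is that plasticity can make this convergence selective for repeated stimuli.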
3. Koch D, Nandan A, Ramesan G, Koseska A. Biological computations: Limitations of attractor-based formalisms and the need for transients. Biochem Biophys Res Commun 2024; 720:150069. PMID: 38754165; DOI: 10.1016/j.bbrc.2024.150069.
Abstract
Living systems, from single cells to higher vertebrates, receive a continuous stream of non-stationary inputs that they sense, e.g., via cell surface receptors or sensory organs. By integrating this time-varying, multi-sensory, and often noisy information with memory, using complex molecular or neuronal networks, they generate a variety of responses beyond simple stimulus-response associations, including avoidance behavior, lifelong learning, and social interactions. In a broad sense, these processes can be understood as a type of biological computation. Taking as a basis generic features of biological computations, such as real-time responsiveness or robustness and flexibility of the computation, we highlight the limitations of the current attractor-based framework for understanding computations in biological systems. We argue that frameworks based on transient dynamics away from attractors are better suited to describe the computations performed by neuronal and signaling networks. In particular, we discuss how quasi-stable transient dynamics arising from ghost states that emerge at criticality hold promise for developing an integrated framework of computation that can help us understand how living systems actively process information and learn from their continuously changing environment.
Affiliation(s)
- Daniel Koch: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Akhilesh Nandan: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Gayathri Ramesan: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
- Aneta Koseska: Lise Meitner Group Cellular Computations and Learning, Max Planck Institute for Neurobiology of Behaviour - Caesar, Bonn, Germany
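The "ghost"-induced quasi-stable transients mentioned in the abstract have a textbook one-dimensional illustration: after a saddle-node bifurcation in x' = x² + ε, the vanished fixed points leave a slow passage ("ghost") near x = 0 whose duration grows as ε → 0, scaling like ε^(-1/2). This toy integration is illustrative only and is not taken from the paper.

```python
import numpy as np

def time_near_ghost(eps, dt=1e-3, x0=-1.0, x_end=1.0):
    """Integrate x' = eps + x**2 by forward Euler and count the time
    spent in |x| < 0.1, the slow 'ghost' region left behind after the
    two fixed points of x' = x**2 - c collided and disappeared."""
    x, t_slow = x0, 0.0
    while x < x_end:
        if abs(x) < 0.1:
            t_slow += dt
        x += dt * (eps + x * x)
    return t_slow

# Smaller eps (closer to criticality) -> much longer transient
t_small, t_large = time_near_ghost(1e-4), time_near_ghost(1e-2)
```

Analytically the time in the ghost region is (2/√ε)·arctan(0.1/√ε), so decreasing ε from 1e-2 to 1e-4 lengthens the transient by more than an order of magnitude; the same slowing-down underlies the long-lived transients the review discusses in high-dimensional networks.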
4. Zhang QR, Ouyang WL, Wang XM, Yang F, Chen JG, Wen ZX, Liu JX, Wang G, Liu Q, Liu FC. Dynamic memristor for physical reservoir computing. Nanoscale 2024; 16:13847-13860. PMID: 38984618; DOI: 10.1039/d4nr01445f.
Abstract
Reservoir computing (RC) has attracted considerable attention for its efficient handling of temporal signals and low training cost. As a nonlinear dynamical system, a reservoir maps low-dimensional inputs into a high-dimensional space, allowing classification to be implemented with a simple linear readout layer. Memristors exhibit complex dynamic characteristics arising from their internal physical processes, which makes them an ideal choice for implementing physical reservoir computing (PRC) systems. This review focuses on memristor-based PRC systems, explaining the resistive switching mechanism at the device level and emphasizing the tunability of memristor dynamics. The development of memristor-based reservoir computing systems is highlighted, along with discussions of the challenges facing this field and potential future research directions.
Affiliation(s)
- Qi-Rui Zhang: Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313099, China; School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Wei-Lun Ouyang: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Xue-Mei Wang: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Fan Yang: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jian-Gang Chen: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhi-Xing Wen: Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313099, China; School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jia-Xin Liu: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Ge Wang: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Qing Liu: School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Fu-Cai Liu: Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou 313099, China; School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054, China
5. Correa A, Ponzi A, Calderón VM, Migliore R. Pathological cell assembly dynamics in a striatal MSN network model. Front Comput Neurosci 2024; 18:1410335. PMID: 38903730; PMCID: PMC11188713; DOI: 10.3389/fncom.2024.1410335.
Abstract
Under normal conditions the principal cells of the striatum, medium spiny neurons (MSNs), show structured cell assembly activity patterns that alternate sequentially over exceedingly long timescales of many minutes. Understanding this activity is important because it is characteristically disrupted in multiple pathologies, such as Parkinson's disease and dyskinesia, and the disruption is thought to be caused by alterations in the MSN-to-MSN lateral inhibitory connections and in the strength and distribution of cortical excitation to MSNs. To understand how these long timescales arise, we extended a previous network model of MSN cells to include synapses with short-term plasticity, with parameters taken from a recent detailed striatal connectome study. We first confirmed the presence of sequentially switching cell clusters using the nonlinear dimensionality reduction technique Uniform Manifold Approximation and Projection (UMAP). We found that the network could generate non-stationary activity patterns varying extremely slowly, on the order of minutes, under biologically realistic conditions. Next, we used simulation-based inference (SBI) to train a deep network to map features of the cell assembly activity generated by the MSN network to MSN network parameters. We then used the trained SBI model to estimate MSN network parameters from ex vivo brain slice calcium imaging data. We found that the best-fit network parameters were very close to their physiologically observed values, whereas network parameters estimated from Parkinsonian, decorticated, and dyskinetic ex vivo slice preparations were different. Our work may provide a pipeline for diagnosing basal ganglia pathology from spiking data, as well as for designing pharmacological treatments.
Affiliation(s)
- Astrid Correa: Institute of Biophysics, National Research Council, Palermo, Italy
- Adam Ponzi: Institute of Biophysics, National Research Council, Palermo, Italy; Center for Human Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Sapporo, Japan
- Vladimir M. Calderón: Department of Developmental Neurobiology and Neurophysiology, Neurobiology Institute, National Autonomous University of Mexico, Querétaro, Mexico
- Rosanna Migliore: Institute of Biophysics, National Research Council, Palermo, Italy
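The fit-parameters-from-features workflow in the abstract (simulate, summarize, infer) can be sketched without the paper's neural density estimator. The stand-in below uses rejection ABC on a toy AR(1) "simulator" with a made-up parameter and summary features; everything here is a schematic illustration of the inference loop, not the paper's MSN model or its SBI implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=400):
    """Toy stand-in for the network simulator: an AR(1) process whose
    lag-1 autocorrelation is set by the parameter theta."""
    x = np.zeros(n)
    noise = rng.normal(0, 1, n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + noise[t]
    return x

def features(x):
    # Summary features: variance and lag-1 autocorrelation
    return np.array([np.var(x), np.corrcoef(x[:-1], x[1:])[0, 1]])

theta_true = 0.8
obs = features(simulate(theta_true))   # "observed" data features

# Rejection ABC: keep the prior draws whose simulated features
# fall closest to the observed features
thetas = rng.uniform(0, 0.95, 2000)
dists = np.array([np.linalg.norm(features(simulate(th)) - obs)
                  for th in thetas])
posterior = thetas[dists < np.quantile(dists, 0.02)]   # best 2%
theta_hat = posterior.mean()
```

Neural SBI replaces the rejection step with a learned density estimator, which is what makes the approach tractable for expensive network simulators like the MSN model.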
6. Koster F, Yanchuk S, Ludge K. Master memory function for delay-based reservoir computers with single-variable dynamics. IEEE Trans Neural Netw Learn Syst 2024; 35:7712-7725. PMID: 36399593; DOI: 10.1109/tnnls.2022.3220532.
Abstract
We show that many delay-based reservoir computers considered in the literature can be characterized by a universal master memory function (MMF). Once computed for two independent parameters, this function provides linear memory capacity for any delay-based single-variable reservoir with small inputs. Moreover, we propose an analytical description of the MMF that enables its efficient and fast computation. Our approach can be applied not only to single-variable delay-based reservoirs governed by known dynamical rules, such as the Mackey-Glass or Stuart-Landau-like systems, but also to reservoirs whose dynamical model is not available.
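The linear memory capacity the abstract refers to is conventionally defined (following Jaeger) as MC = Σ_k r²_k, where r²_k is the squared correlation between a delay-k linear reconstruction and the input u(t-k). The sketch below computes it for a generic software echo-state reservoir rather than the delay-based single-variable systems the paper treats; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, washout, max_delay = 100, 2000, 200, 30

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.1, 0.1, N)     # small inputs, linear memory regime
u = rng.uniform(-1, 1, T)            # i.i.d. input sequence

# Collect driven reservoir states
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# For each delay k, train a ridge readout to reconstruct u(t - k)
# and accumulate the squared correlation into the memory capacity
MC = 0.0
Xw = X[washout:]
for k in range(1, max_delay + 1):
    target = u[washout - k:T - k]
    w = np.linalg.solve(Xw.T @ Xw + 1e-6 * np.eye(N), Xw.T @ target)
    pred = Xw @ w
    MC += np.corrcoef(pred, target)[0, 1] ** 2
```

Each r²_k lies in [0, 1], so MC is bounded by the number of delays probed (and, more fundamentally, by the reservoir size); the paper's master memory function predicts this quantity analytically for delay-based reservoirs.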
7. Terada Y, Toyoizumi T. Chaotic neural dynamics facilitate probabilistic computations through sampling. Proc Natl Acad Sci U S A 2024; 121:e2312992121. PMID: 38648479; PMCID: PMC11067032; DOI: 10.1073/pnas.2312992121.
Abstract
Cortical neurons exhibit highly variable responses over trials and time. Theoretical work posits that this variability potentially arises from chaotic network dynamics of recurrently connected neurons. Here, we demonstrate that chaotic neural dynamics, formed through synaptic learning, allow networks to perform sensory cue integration in a sampling-based implementation. We show that the emergent chaotic dynamics provide neural substrates for generating samples not only of a static variable but also of a dynamical trajectory, and that generic recurrent networks acquire these abilities with a biologically plausible learning rule through trial and error. Furthermore, the networks generalize their experience of stimulus-evoked samples to inference when part or all of the sensory information is missing, which suggests a computational role for spontaneous activity as a representation of the priors, as well as a tractable biological computation of marginal distributions. These findings suggest that chaotic neural dynamics may underlie the brain's function as a Bayesian generative model.
Affiliation(s)
- Yu Terada: Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan; Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093; The Institute for Physics of Intelligence, The University of Tokyo, Tokyo 113-0033, Japan
- Taro Toyoizumi: Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama 351-0198, Japan; Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
8. Namiki W, Nishioka D, Tsuchiya T, Higuchi T, Terabe K. Magnetization vector rotation reservoir computing operated by redox mechanism. Nano Lett 2024; 24:4383-4392. PMID: 38513213; DOI: 10.1021/acs.nanolett.3c05029.
Abstract
Physical reservoir computing is a promising way to develop efficient artificial intelligence using physical devices that exhibit nonlinear dynamics. Although magnetic materials have advantages in miniaturization, the need for a magnetic field and a large electric current results in high power consumption and a complex device structure. To resolve these issues, we propose a redox-based physical reservoir utilizing the planar Hall effect and anisotropic magnetoresistance, phenomena described by different nonlinear functions of the magnetization vector that do not require an applied magnetic field. The expressive power of this reservoir, based on a compact all-solid-state redox transistor, is higher than that of previous physical reservoirs. The normalized mean square error of the reservoir on a second-order nonlinear equation task was 1.69 × 10⁻³, lower than that of a memristor array (3.13 × 10⁻³), even though the number of reservoir nodes was fewer than half that of the memristor array.
Affiliation(s)
- Wataru Namiki: Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan
- Daiki Nishioka: Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan; Department of Applied Physics, Faculty of Science, Tokyo University of Science, 6-3-1 Niijuku, Katsushika, Tokyo 125-8585, Japan
- Takashi Tsuchiya: Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan
- Tohru Higuchi: Department of Applied Physics, Faculty of Science, Tokyo University of Science, 6-3-1 Niijuku, Katsushika, Tokyo 125-8585, Japan
- Kazuya Terabe: Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba, Ibaraki 305-0044, Japan
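The "second-order nonlinear equation task" in PRC benchmarking is commonly taken to be y(k) = 0.4·y(k−1) + 0.4·y(k−1)·y(k−2) + 0.6·u(k)³ + 0.1; that exact form is an assumption here, since the abstract does not state it. The sketch below runs the task on a small software reservoir and computes the NMSE metric the abstract quotes; the device physics is of course not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)
T, N = 1000, 50

# Second-order nonlinear benchmark (assumed standard form)
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for k in range(2, T):
    y[k] = 0.4 * y[k - 1] + 0.4 * y[k - 1] * y[k - 2] \
           + 0.6 * u[k] ** 3 + 0.1

# Small echo-state reservoir as a software stand-in for the device
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, N)
x = np.zeros(N)
X = np.zeros((T, N))
for k in range(T):
    x = np.tanh(W @ x + w_in * u[k])
    X[k] = x

# Ridge-regression readout trained on the first half,
# NMSE evaluated on the held-out second half
split = T // 2
A = X[2:split]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y[2:split])
pred = X[split:] @ w
nmse = np.mean((pred - y[split:]) ** 2) / np.var(y[split:])
```

NMSE normalizes the squared error by the target variance, so values of order 10⁻³, as reported for the redox device, mean the readout explains essentially all of the target's variability.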
9. Yadav A, Pasupa K, Loo CK, Liu X. Optimizing echo state networks for continuous gesture recognition in mobile devices: A comparative study. Heliyon 2024; 10:e27108. PMID: 38562498; PMCID: PMC10982987; DOI: 10.1016/j.heliyon.2024.e27108.
Abstract
Continuous gesture recognition can be used to enhance human-computer interaction. This can be accomplished by capturing human movement with the inertial measurement units in smartphones and using machine learning algorithms to predict the intended gestures. Echo state networks (ESNs) consist of a fixed internal reservoir that generates rich and diverse nonlinear dynamics in response to input signals, capturing temporal dependencies within the signal. This makes ESNs well suited for time-series prediction tasks such as continuous gesture recognition; however, their application to gesture recognition has not been rigorously explored. In this study, we sought to enhance the efficacy of ESN models in continuous gesture recognition by exploring diverse model structures, fine-tuning hyperparameters, and experimenting with various training approaches. We used three training schemes based on the leave-one-out cross-validation (LOOCV) protocol to investigate performance in real-world scenarios with different levels of data availability: leaving out the data of one user for testing (F1-score: 0.89), leaving out a fraction of data from all users for testing (F1-score: 0.96), and training and testing on a single user with LOOCV (F1-score: 0.99). These results outperform the long short-term memory (LSTM) performance from past research (F1-score: 0.87) while maintaining a low training time of approximately 13 seconds, compared with 63 seconds for the LSTM model. Additionally, we explored the behaviour space of the ESN models using memory capacity, kernel rank, and generalization rank. Our results demonstrate that ESNs can be optimized to achieve high performance on gesture recognition in mobile devices at multiple levels of data availability. These findings highlight the practical ability of ESNs to enhance human-computer interaction.
Affiliation(s)
- Alok Yadav: School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, Bangkok, 10520, Thailand
- Kitsuchart Pasupa: School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, Bangkok, 10520, Thailand
- Chu Kiong Loo: Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur, 50603, Malaysia
- Xiaofeng Liu: College of IoT Engineering, Hohai University, Changzhou, 213022, China
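The ESN-plus-linear-readout recipe the study tunes can be sketched end to end on toy data. The "gestures" below are synthetic one-channel waveforms standing in for IMU traces, and the feature choice (per-unit mean and standard deviation of reservoir states) is mine; none of this reproduces the paper's datasets or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 80, 100

W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

def reservoir_features(seq):
    """Run the reservoir over a sequence; summarize each unit's
    response by its mean and standard deviation (after a warm-up)."""
    x = np.zeros(N)
    states = []
    for v in seq:
        x = np.tanh(W @ x + w_in * v)
        states.append(x)
    S = np.array(states[20:])
    return np.concatenate([S.mean(axis=0), S.std(axis=0)])

def make_gesture(label):
    """Toy stand-in for an IMU trace: class-specific frequency + noise."""
    t = np.arange(T)
    freq = 0.03 if label == 0 else 0.15
    return np.sin(2 * np.pi * freq * t) + 0.2 * rng.normal(0, 1, T)

labels = rng.integers(0, 2, 60)
feats = np.array([reservoir_features(make_gesture(c)) for c in labels])

# Ridge readout: first 40 sequences train, last 20 test
Xtr, ytr = feats[:40], 2.0 * labels[:40] - 1.0
w = np.linalg.solve(Xtr.T @ Xtr + 1.0 * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
pred = (feats[40:] @ w > 0).astype(int)
accuracy = np.mean(pred == labels[40:])
```

Only the readout weights are trained, which is why the paper can report training times of seconds where an LSTM needs a full backpropagation-through-time loop.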
10. Metzner C, Yamakou ME, Voelkl D, Schilling A, Krauss P. Quantifying and maximizing the information flux in recurrent neural networks. Neural Comput 2024; 36:351-384. PMID: 38363658; DOI: 10.1162/neco_a_01651.
Abstract
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x. Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
Affiliation(s)
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Biophysics Lab, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Marius E Yamakou: Department of Data Science, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Dennis Voelkl: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
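The tractable proxy the paper proposes, the root-mean-square average of pairwise Pearson correlations, is straightforward to compute from a recorded state trajectory. The sketch below generates states from a toy stochastic binary network with parallel Glauber-style updates (my simplification of a Boltzmann machine, with arbitrary weight statistics) and evaluates the proxy.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 20, 5000

# Toy free-running stochastic binary network
W = rng.normal(0, 0.5, (N, N))
np.fill_diagonal(W, 0.0)
s = rng.integers(0, 2, N).astype(float)
states = np.zeros((T, N))
for t in range(T):
    h = W @ s
    p = 1.0 / (1.0 + np.exp(-h))            # P(s_i = 1 | current state)
    s = (rng.random(N) < p).astype(float)   # parallel stochastic update
    states[t] = s

# RMS-averaged Pearson correlation over distinct neuron pairs
C = np.corrcoef(states.T)
iu = np.triu_indices(N, k=1)
rms_corr = np.sqrt(np.mean(C[iu] ** 2))
```

Squaring before averaging means positively and negatively correlated pairs both contribute, which is what lets this single scalar track the magnitude of the information flux regardless of correlation sign.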
11. Suárez LE, Mihalik A, Milisav F, Marshall K, Li M, Vértes PE, Lajoie G, Misic B. Connectome-based reservoir computing with the conn2res toolbox. Nat Commun 2024; 15:656. PMID: 38253577; PMCID: PMC10803782; DOI: 10.1038/s41467-024-44900-4.
Abstract
The connection patterns of neural circuits form a complex network. How signaling in these circuits manifests as complex cognition and adaptive behaviour remains the central question in neuroscience. Concomitant advances in connectomics and artificial intelligence open fundamentally new opportunities to understand how connection patterns shape computational capacity in biological brain networks. Reservoir computing is a versatile paradigm that uses high-dimensional, nonlinear dynamical systems to perform computations and approximate cognitive functions. Here we present conn2res: an open-source Python toolbox for implementing biological neural networks as artificial neural networks. conn2res is modular, allowing arbitrary network architecture and dynamics to be imposed. The toolbox allows researchers to input connectomes reconstructed using multiple techniques, from tract tracing to noninvasive diffusion imaging, and to impose multiple dynamical systems, from spiking neurons to memristive dynamics. The versatility of the conn2res toolbox allows us to ask new questions at the confluence of neuroscience and artificial intelligence. By reconceptualizing function as computation, conn2res sets the stage for a more mechanistic understanding of structure-function relationships in brain networks.
Affiliation(s)
- Laura E Suárez: McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada; Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Agoston Mihalik: Department of Psychiatry, University of Cambridge, Cambridge, UK
- Filip Milisav: McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada
- Kenji Marshall: Department of Bioengineering, Stanford University, Stanford, CA, USA
- Mingze Li: McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada; Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada
- Petra E Vértes: Department of Psychiatry, University of Cambridge, Cambridge, UK
- Guillaume Lajoie: Mila, Quebec Artificial Intelligence Institute, Montreal, QC, Canada; Department of Mathematics and Statistics, Université de Montréal, Montreal, QC, Canada
- Bratislav Misic: McConnell Brain Imaging Centre, Montréal Neurological Institute, McGill University, Montréal, QC, Canada
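The core recipe behind connectome-based reservoir computing, take an empirical connectivity matrix, rescale it to a target spectral radius, and impose chosen local dynamics on it, can be written in a few lines of numpy. This is a sketch of the idea only, with a synthetic random graph standing in for a real connectome; it does not use or reproduce the conn2res API.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 60

# Stand-in for an empirical connectome: sparse symmetric weighted graph
A = rng.random((N, N)) * (rng.random((N, N)) < 0.2)
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)

# Rescale connectivity to a target spectral radius (echo-state regime)
alpha = 0.9
W = alpha * A / np.max(np.abs(np.linalg.eigvals(A)))

# Impose simple nonlinear rate dynamics on the scaled connectome
w_in = rng.uniform(-1, 1, N)
u = rng.uniform(-1, 1, 500)
x = np.zeros(N)
traj = np.zeros((500, N))
for t in range(500):
    x = np.tanh(W @ x + w_in * u[t])
    traj[t] = x

rho = np.max(np.abs(np.linalg.eigvals(W)))   # should equal alpha
```

Keeping the empirical topology while only rescaling its overall gain is what allows the toolbox to ask how wiring, rather than weight magnitude, shapes computational capacity; any task-specific readout is then trained on `traj`.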
12. Gast R, Solla SA, Kennedy A. Neural heterogeneity controls computations in spiking neural networks. Proc Natl Acad Sci U S A 2024; 121:e2311885121. PMID: 38198531; PMCID: PMC10801870; DOI: 10.1073/pnas.2311885121.
Abstract
The brain is composed of complex networks of interacting neurons that express considerable heterogeneity in their physiology and spiking characteristics. How does this neural heterogeneity influence macroscopic neural dynamics, and how might it contribute to neural computation? In this work, we use a mean-field model to investigate computation in heterogeneous neural networks, by studying how the heterogeneity of cell spiking thresholds affects three key computational functions of a neural population: the gating, encoding, and decoding of neural signals. Our results suggest that heterogeneity serves different computational functions in different cell types. In inhibitory interneurons, varying the degree of spike threshold heterogeneity allows them to gate the propagation of neural signals in a reciprocally coupled excitatory population. Whereas homogeneous interneurons impose synchronized dynamics that narrow the dynamic repertoire of the excitatory neurons, heterogeneous interneurons act as an inhibitory offset while preserving excitatory neuron function. Spike threshold heterogeneity also controls the entrainment properties of neural networks to periodic input, thus affecting the temporal gating of synaptic inputs. Among excitatory neurons, heterogeneity increases the dimensionality of neural dynamics, improving the network's capacity to perform decoding tasks. Conversely, homogeneous networks suffer in their capacity for function generation, but excel at encoding signals via multistable dynamic regimes. Drawing from these findings, we propose intra-cell-type heterogeneity as a mechanism for sculpting the computational properties of local circuits of excitatory and inhibitory spiking neurons, permitting the same canonical microcircuit to be tuned for diverse computational tasks.
Affiliation(s)
- Richard Gast: Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611; Aligning Science Across Parkinson's Collaborative Research Network, Chevy Chase, MD 20815
- Sara A. Solla: Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611
- Ann Kennedy: Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611; Aligning Science Across Parkinson's Collaborative Research Network, Chevy Chase, MD 20815
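The desynchronizing effect of threshold heterogeneity is easy to demonstrate directly: a population of identical leaky integrate-and-fire neurons under a common drive spikes in lockstep, while spreading the spike thresholds spreads the spike times. This toy simulation (my parameters, not the paper's mean-field model) compares the two cases with a simple synchrony index.

```python
import numpy as np

rng = np.random.default_rng(8)
N, T, dt = 100, 2000, 0.1

def run(thresholds):
    """LIF population under identical slowly varying drive;
    returns the population spike count per time step."""
    v = np.zeros(N)
    counts = np.zeros(T)
    drive = 1.5 + 0.5 * np.sin(2 * np.pi * np.arange(T) * dt / 10.0)
    for t in range(T):
        v = v + dt * (-v + drive[t])       # leaky integration
        spiked = v >= thresholds
        counts[t] = spiked.sum()
        v[spiked] = 0.0                    # reset spiking units
    return counts

homo = run(np.full(N, 1.0))                # identical thresholds
hetero = run(rng.uniform(0.6, 1.4, N))     # heterogeneous thresholds

# Synchrony index: Fano factor of the population spike count
sync_homo = homo.var() / max(homo.mean(), 1e-9)
sync_hetero = hetero.var() / max(hetero.mean(), 1e-9)
```

In the homogeneous case the count per step is either 0 or N, giving a large Fano factor; heterogeneity spreads spikes across time steps, which is the "widened dynamic repertoire" mechanism the paper analyzes for interneuron populations.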
13. Gemo E, Spiga S, Brivio S. SHIP: a computational framework for simulating and validating novel technologies in hardware spiking neural networks. Front Neurosci 2024; 17:1270090. PMID: 38264497; PMCID: PMC10804805; DOI: 10.3389/fnins.2023.1270090.
Abstract
Investigations in the field of spiking neural networks (SNNs) encompass diverse, yet overlapping, scientific disciplines. Examples range from purely neuroscientific investigations, researches on computational aspects of neuroscience, or applicative-oriented studies aiming to improve SNNs performance or to develop artificial hardware counterparts. However, the simulation of SNNs is a complex task that can not be adequately addressed with a single platform applicable to all scenarios. The optimization of a simulation environment to meet specific metrics often entails compromises in other aspects. This computational challenge has led to an apparent dichotomy of approaches, with model-driven algorithms dedicated to the detailed simulation of biological networks, and data-driven algorithms designed for efficient processing of large input datasets. Nevertheless, material scientists, device physicists, and neuromorphic engineers who develop new technologies for spiking neuromorphic hardware solutions would find benefit in a simulation environment that borrows aspects from both approaches, thus facilitating modeling, analysis, and training of prospective SNN systems. This manuscript explores the numerical challenges deriving from the simulation of spiking neural networks, and introduces SHIP, Spiking (neural network) Hardware In PyTorch, a numerical tool that supports the investigation and/or validation of materials, devices, small circuit blocks within SNN architectures. SHIP facilitates the algorithmic definition of the models for the components of a network, the monitoring of states and output of the modeled systems, and the training of the synaptic weights of the network, by way of user-defined unsupervised learning rules or supervised training techniques derived from conventional machine learning. 
SHIP offers a valuable tool for researchers and developers in the field of hardware-based spiking neural networks, enabling efficient simulation and validation of novel technologies.
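Component models of the kind SHIP composes boil down to state-update equations stepped in discrete time. As a rough, hypothetical illustration of that modeling style (plain Python, not SHIP's actual PyTorch-based API; all constants are made-up), here is a leaky integrate-and-fire unit stepped by explicit Euler integration:

```python
def lif_step(v, i_in, tau=20.0, dt=1.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane voltage, whether the unit spiked)."""
    v = v + dt / tau * (-v + i_in)
    if v >= v_th:
        return v_reset, True
    return v, False

def run(currents):
    """Simulate a spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i_in in currents:
        v, s = lif_step(v, i_in)
        spikes.append(s)
    return spikes

# A constant supra-threshold current makes the unit fire periodically.
spikes = run([1.5] * 100)
```

A simulator in the spirit described above wraps many such stateful components, wires their outputs to each other's inputs, and exposes the states for monitoring and the weights for training.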
Affiliation(s)
- Emanuele Gemo: CNR–IMM, Unit of Agrate Brianza, Agrate Brianza, Italy

14
Pan W, Zhao F, Zeng Y, Han B. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep 2023; 13:16924. [PMID: 37805632 PMCID: PMC10560283 DOI: 10.1038/s41598-023-43488-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 09/25/2023] [Indexed: 10/09/2023] Open
Abstract
The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The spiking neural network based Liquid State Machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs focuses on particular perspectives, such as high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices; it still draws little in-depth inspiration from the learning and structural evolution mechanisms of the brain. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with respect to its separation property. For brain-inspired learning of the LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experiments on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and allows it to adapt flexibly to rule reversal. This work explores how evolution can help design more appropriate network architectures, and how multi-scale neuroplasticity principles can be coordinated to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
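The rule described combines a local BCM term, whose sliding threshold tracks recent postsynaptic activity, with a global dopamine factor. A toy sketch of that combination (illustrative only; the constants and exact trace definitions are assumptions, not the paper's equations):

```python
def bcm_da_update(w, pre, post, theta, dopamine, lr=0.01, tau_theta=0.9):
    """One DA-BCM-style weight update: a local BCM term, whose sign depends on
    whether the postsynaptic rate exceeds the sliding threshold theta, gated
    by a global dopamine factor (negative dopamine reverses the update)."""
    dw = lr * dopamine * pre * post * (post - theta)
    theta = tau_theta * theta + (1 - tau_theta) * post ** 2  # sliding threshold
    return w + dw, theta

# Sustained correlated activity under positive dopamine strengthens the
# synapse until the sliding threshold catches up with the postsynaptic rate.
w, theta = 0.5, 0.0
for _ in range(50):
    w, theta = bcm_da_update(w, pre=1.0, post=1.0, theta=theta, dopamine=1.0)
```

The sliding threshold is what stabilizes pure BCM; the dopamine factor is what lets a global reward signal (e.g. after rule reversal) re-steer the locally driven changes.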
Affiliation(s)
- Wenxuan Pan: Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Feifei Zhao: Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yi Zeng: Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bing Han: Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China

15
Kawai Y, Park J, Asada M. Reservoir computing using self-sustained oscillations in a locally connected neural network. Sci Rep 2023; 13:15532. [PMID: 37726352 PMCID: PMC10509144 DOI: 10.1038/s41598-023-42812-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 09/14/2023] [Indexed: 09/21/2023] Open
Abstract
Understanding how the structural organization of neural networks influences their computational capabilities is of great interest to both the machine learning and neuroscience communities. In our previous work, we introduced a learning system called the reservoir of basal dynamics (reBASICS), which features a modular neural architecture (many small random neural networks) capable of reducing the chaoticity of neural activity and of producing stable, self-sustained limit-cycle activity. These limit cycles are integrated by weighted linear summation, and arbitrary time series are learned by modulating the summation weights. Despite its excellent learning performance, interpreting a modular structure of isolated small networks as a brain network has posed a significant challenge. Here, we investigate experimentally how local connectivity, a well-known characteristic of brain networks, contributes to reducing the chaoticity of the neural system and generates self-sustained limit cycles. Moreover, we report the learning performance of the locally connected reBASICS on two tasks: a motor timing task and learning of the Lorenz time series. Although its performance was inferior to that of modular reBASICS, locally connected reBASICS could learn time series tens of seconds long even though the time constant of its neural units was ten milliseconds. This work indicates that the locality of connectivity in neural networks may contribute to the generation of stable self-sustained oscillations for learning arbitrary long-term time series, as well as to the economy of wiring cost.
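The network class studied here can be sketched compactly: leaky tanh rate units whose recurrent weights are nonzero only for near neighbours on a ring. The sketch below (plain Python; sizes, gain, and time step are arbitrary assumptions, not the paper's parameters) builds such a locally connected network and iterates its dynamics; probing chaoticity would then amount to evolving a slightly perturbed copy alongside and tracking the distance between the two trajectories.

```python
import math, random

random.seed(0)

def make_local_weights(n, k, scale=1.2):
    """Ring topology: unit i connects only to its k nearest neighbours
    on either side, with Gaussian weights."""
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for d in range(1, k + 1):
            for j in ((i - d) % n, (i + d) % n):
                w[i][j] = random.gauss(0.0, scale / math.sqrt(2 * k))
    return w

def step(x, w, tau=0.1, dt=0.01):
    """Euler step of leaky rate units: tau dx/dt = -x + tanh(W x)."""
    n = len(x)
    return [x[i] + dt / tau * (-x[i] + math.tanh(sum(w[i][j] * x[j]
            for j in range(n)))) for i in range(n)]

n = 60
w = make_local_weights(n, k=3)
x = [random.uniform(-1, 1) for _ in range(n)]
for _ in range(500):
    x = step(x, w)
```

Because tanh is bounded, the leaky dynamics keep every unit's activity inside [-1, 1]; a linear readout trained on such states is what turns the oscillations into a target time series.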
Affiliation(s)
- Yuji Kawai: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan
- Jihoon Park: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan
- Minoru Asada: Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Suita, Osaka, 565-0871, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Osaka, 565-0871, Japan; International Professional University of Technology in Osaka, Kita-ku, Osaka, 530-0001, Japan; Chubu University Academy of Emerging Sciences, Chubu University, Kasugai, Aichi, 487-8501, Japan

16
Yamamoto H, Spitzner FP, Takemuro T, Buendía V, Murota H, Morante C, Konno T, Sato S, Hirano-Iwata A, Levina A, Priesemann V, Muñoz MA, Zierenberg J, Soriano J. Modular architecture facilitates noise-driven control of synchrony in neuronal networks. SCIENCE ADVANCES 2023; 9:eade1755. [PMID: 37624893 PMCID: PMC10456864 DOI: 10.1126/sciadv.ade1755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 07/21/2023] [Indexed: 08/27/2023]
Abstract
High-level information processing in the mammalian cortex requires both segregated processing in specialized circuits and integration across multiple circuits. One possible way to implement these seemingly opposing demands is by flexibly switching between states with different levels of synchrony. However, the mechanisms behind the control of complex synchronization patterns in neuronal networks remain elusive. Here, we use precision neuroengineering to manipulate and stimulate networks of cortical neurons in vitro, in combination with an in silico model of spiking neurons and a mesoscopic model of stochastically coupled modules to show that (i) a modular architecture enhances the sensitivity of the network to noise delivered as external asynchronous stimulation and that (ii) the persistent depletion of synaptic resources in stimulated neurons is the underlying mechanism for this effect. Together, our results demonstrate that the inherent dynamical state in structured networks of excitable units is determined by both its modular architecture and the properties of the external inputs.
Affiliation(s)
- Hideaki Yamamoto: Research Institute of Electrical Communication (RIEC), Tohoku University, Sendai, Japan; Graduate School of Engineering, Tohoku University, Sendai, Japan
- F. Paul Spitzner: Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Taiki Takemuro: Research Institute of Electrical Communication (RIEC), Tohoku University, Sendai, Japan; Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Victor Buendía: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Computer Science, University of Tübingen, Tübingen, Germany; Departamento de Electromagnetismo y Física de la Materia, Universidad de Granada, Granada, Spain
- Hakuba Murota: Research Institute of Electrical Communication (RIEC), Tohoku University, Sendai, Japan; Graduate School of Engineering, Tohoku University, Sendai, Japan
- Carla Morante: Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain; Universitat de Barcelona Institute of Complex Systems (UBICS), Barcelona, Spain
- Tomohiro Konno: Graduate School of Pharmaceutical Sciences, Tohoku University, Sendai, Japan
- Shigeo Sato: Research Institute of Electrical Communication (RIEC), Tohoku University, Sendai, Japan; Graduate School of Engineering, Tohoku University, Sendai, Japan
- Ayumi Hirano-Iwata: Research Institute of Electrical Communication (RIEC), Tohoku University, Sendai, Japan; Graduate School of Engineering, Tohoku University, Sendai, Japan; Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan; Advanced Institute for Materials Research (WPI-AIMR), Tohoku University, Sendai, Japan
- Anna Levina: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Computer Science, University of Tübingen, Tübingen, Germany
- Viola Priesemann: Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Institute for the Dynamics of Complex Systems, University of Göttingen, Göttingen, Germany
- Miguel A. Muñoz: Departamento de Electromagnetismo y Física de la Materia, Universidad de Granada, Granada, Spain; Instituto Carlos I de Física Teórica y Computacional, Universidad de Granada, Granada, Spain
- Jordi Soriano: Departament de Física de la Matèria Condensada, Universitat de Barcelona, Barcelona, Spain; Universitat de Barcelona Institute of Complex Systems (UBICS), Barcelona, Spain

17
Janarek J, Drogosz Z, Grela J, Ochab JK, Oświęcimka P. Investigating structural and functional aspects of the brain's criticality in stroke. Sci Rep 2023; 13:12341. [PMID: 37524891 PMCID: PMC10390586 DOI: 10.1038/s41598-023-39467-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 07/26/2023] [Indexed: 08/02/2023] Open
Abstract
This paper addresses the question of the brain's critical dynamics after an injury such as a stroke. It is hypothesized that the healthy brain operates near a phase transition (critical point), which provides optimal conditions for information transmission and responses to inputs. If structural damage caused the critical point to disappear and thus made self-organized criticality unachievable, it would offer a theoretical explanation for the post-stroke impairment of brain function. Here, however, we demonstrate using network models of the brain that the dynamics remain critical even after a stroke. In cases where the average size of the second-largest cluster of active nodes, a commonly used indicator of criticality, shows anomalous behavior, the anomaly results from the loss of network integrity, quantifiable within graph theory, and not from genuinely non-critical dynamics. We propose a simple model of an artificial stroke that explains this anomaly. This interpretation of the results is confirmed by an analysis of real connectomes acquired from post-stroke patients and a control group. The results presented refer to neurobiological data; however, the conclusions reached apply to a broad class of complex systems that admit a critical state.
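The criticality indicator discussed here, the size of the second-largest cluster of active nodes, is a purely graph-theoretic quantity. A minimal sketch of its computation (breadth-first search over the subgraph of active nodes; the toy graph is made up for illustration):

```python
from collections import deque

def cluster_sizes(adj, active):
    """Sizes of the connected clusters of active nodes, largest first,
    found by BFS restricted to the active subgraph."""
    seen, sizes = set(), []
    for start in active:
        if start in seen:
            continue
        seen.add(start)
        q, size = deque([start]), 0
        while q:
            u = q.popleft()
            size += 1
            for v in adj.get(u, ()):
                if v in active and v not in seen:
                    seen.add(v)
                    q.append(v)
        sizes.append(size)
    return sorted(sizes, reverse=True)

# Toy graph: a chain 0-1-2-3 plus an isolated pair 4-5; node 3 is inactive,
# so the active nodes split into a cluster of 3 and a cluster of 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
sizes = cluster_sizes(adj, active={0, 1, 2, 4, 5})
second_largest = sizes[1] if len(sizes) > 1 else 0
```

Averaging `second_largest` over many activity snapshots is what yields the indicator; a lesion that fragments the graph changes this quantity even if the node dynamics themselves stay critical, which is the anomaly the entry describes.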
Affiliation(s)
- Jakub Janarek: Institute of Theoretical Physics, Jagiellonian University, 30-348, Kraków, Poland
- Zbigniew Drogosz: Institute of Theoretical Physics, Jagiellonian University, 30-348, Kraków, Poland
- Jacek Grela: Institute of Theoretical Physics, Jagiellonian University, 30-348, Kraków, Poland; Mark Kac Center for Complex Systems Research, Jagiellonian University, 30-348, Kraków, Poland
- Jeremi K Ochab: Institute of Theoretical Physics, Jagiellonian University, 30-348, Kraków, Poland; Mark Kac Center for Complex Systems Research, Jagiellonian University, 30-348, Kraków, Poland
- Paweł Oświęcimka: Institute of Theoretical Physics, Jagiellonian University, 30-348, Kraków, Poland; Mark Kac Center for Complex Systems Research, Jagiellonian University, 30-348, Kraków, Poland; Complex Systems Theory Department, Institute of Nuclear Physics, Polish Academy of Sciences, 31-342, Kraków, Poland

18
Xu J, Zhao T, Chang P, Wang C, Wang A. Photonic reservoir computing with a silica microsphere cavity. OPTICS LETTERS 2023; 48:3653-3656. [PMID: 37450717 DOI: 10.1364/ol.495073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 06/15/2023] [Indexed: 07/18/2023]
Abstract
We experimentally demonstrate a photonic reservoir computing (RC) system using a passive silica microsphere cavity. The microsphere cavity exhibits a consistent nonlinear response to non-return-to-zero and multilevel signals due to strong interference between numerous whispering-gallery modes in the "over-coupling" state. Because the long photon lifetime inside the microsphere cavity provides a memory of past inputs, this photonic reservoir does not require a delayed feedback loop. We evaluate the generalization property of the RC system and obtain a correlation coefficient of 0.923. In addition, we obtain a normalized mean square error (NMSE) of 0.06 for the Santa Fe chaotic time-series prediction task and a symbol error rate (SER) of 0.02 at a signal-to-noise ratio (SNR) of 12 dB for the nonlinear channel equalization task. Moreover, a microsphere cavity with a higher quality factor can provide a larger memory capacity. The use of the silica microsphere cavity as a small-volume passive device in a reservoir opens a new avenue toward low-power, integrated RC systems.
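For reference, the NMSE figure quoted for the Santa Fe task is conventionally the mean square prediction error normalized by the variance of the target series (the exact normalization convention varies slightly between papers; this is one common choice):

```python
def nmse(target, prediction):
    """Normalized mean square error: MSE divided by the target variance.
    0 means a perfect prediction; 1 means no better than predicting the mean."""
    n = len(target)
    mean_t = sum(target) / n
    mse = sum((t - p) ** 2 for t, p in zip(target, prediction)) / n
    var_t = sum((t - mean_t) ** 2 for t in target) / n
    return mse / var_t

target = [0.0, 1.0, 2.0, 3.0]
perfect = nmse(target, target)                       # exactly 0.0
biased = nmse(target, [t + 0.5 for t in target])     # constant-offset error
```

Under this convention, an NMSE of 0.06 means the residual error is 6% of the signal's variance.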
19
Kloucek MB, Machon T, Kajimura S, Royall CP, Masuda N, Turci F. Biases in inverse Ising estimates of near-critical behavior. Phys Rev E 2023; 108:014109. [PMID: 37583208 DOI: 10.1103/physreve.108.014109] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 04/27/2023] [Indexed: 08/17/2023]
Abstract
Inverse Ising inference allows pairwise interactions of complex binary systems to be reconstructed from empirical correlations. Typical estimators used for this inference, such as pseudo-likelihood maximization (PLM), are biased. Using the Sherrington-Kirkpatrick model as a benchmark, we show that these biases are large in critical regimes close to phase boundaries, and they may alter the qualitative interpretation of the inferred model. In particular, we show that the small-sample bias causes models inferred through PLM to appear closer to criticality than one would expect from the data. Data-driven methods to correct this bias are explored and applied to a functional magnetic resonance imaging data set from neuroscience. Our results indicate that additional care should be taken when attributing criticality to real-world data sets.
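Pseudo-likelihood maximization fits each spin's couplings by maximizing the conditional likelihood of that spin given all the others, which reduces to logistic-regression-style gradient ascent. A small sketch for one spin of a binary (±1) system (the learning rate, epoch count, and toy data are illustrative assumptions, not the paper's setup):

```python
import math

def plm_fit_row(samples, r, lr=0.05, epochs=200):
    """Fit the field h and couplings J[j] for spin r by gradient ascent on
    the pseudo-likelihood P(s_r | rest) = 1 / (1 + exp(-2 s_r (h + sum_j J_j s_j)))."""
    n = len(samples[0])
    h = 0.0
    J = [0.0] * n                       # J[r] (self-coupling) stays at 0
    m = len(samples)
    for _ in range(epochs):
        gh, gJ = 0.0, [0.0] * n
        for s in samples:
            field = h + sum(J[j] * s[j] for j in range(n) if j != r)
            p = 1.0 / (1.0 + math.exp(-2.0 * s[r] * field))
            g = 2.0 * s[r] * (1.0 - p)  # d(log P)/d(field)
            gh += g
            for j in range(n):
                if j != r:
                    gJ[j] += g * s[j]
        h += lr * gh / m
        for j in range(n):
            J[j] += lr * gJ[j] / m
    return h, J

# Toy data: two mostly aligned spins -> a positive inferred coupling.
samples = [(1, 1)] * 4 + [(-1, -1)] * 4 + [(1, -1), (-1, 1)]
h, J = plm_fit_row(samples, r=0)
```

The small-sample bias the entry describes arises precisely here: with few samples, the maximizer of this conditional likelihood systematically overshoots, pushing the inferred model toward the critical regime.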
Affiliation(s)
- Maximilian B Kloucek: School of Physics, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom; Bristol Centre for Functional Nanomaterials, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom
- Thomas Machon: School of Physics, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom
- Shogo Kajimura: Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto 606-8585, Japan
- C Patrick Royall: Gulliver UMR CNRS 7083, ESPCI Paris, Université PSL, 75005 Paris, France
- Naoki Masuda: Department of Mathematics, State University of New York at Buffalo, Buffalo, New York 14260-2900, USA; Computational and Data-Enabled Science and Engineering Program, State University of New York at Buffalo, Buffalo, New York 14260-5030, USA
- Francesco Turci: School of Physics, HH Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom

20
López C. Artificial Intelligence and Advanced Materials. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2208683. [PMID: 36560859 DOI: 10.1002/adma.202208683] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/01/2022] [Indexed: 06/09/2023]
Abstract
Artificial intelligence (AI) is gaining strength, and materials science can both contribute to and profit from it. In a simultaneous progress race, new materials, systems, and processes can be devised and optimized thanks to machine learning (ML) techniques, and such progress can be turned into innovative computing platforms. Future materials scientists will profit from understanding how ML can boost the conception of advanced materials. This review covers computation from its fundamentals to the directions it has taken and the repercussions it has produced, in order to account for the origins, procedures, and applications of AI. ML and its methods are reviewed to provide basic knowledge of their implementation and potential. The materials and systems used to implement AI with electric charges are facing serious competition from other information-carrying and processing agents. The impact these techniques have on the inception of new advanced materials is so deep that a new paradigm is developing, in which implicit knowledge is mined to conceive materials and systems for functions, instead of finding applications for already-found materials. How far this trend can be carried is hard to fathom, as exemplified by the power to discover unheard-of materials or physical laws buried in data.
Affiliation(s)
- Cefe López: Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Científicas (CSIC), Calle Sor Juana Inés de la Cruz 3, Madrid, 28049, Spain; Donostia International Physics Centre (DIPC), Paseo Manuel de Lardizábal 4, San Sebastián, 20018, Spain

21
Li SS, Li J, Zou X, Zhang L, Jiang L, Pan W, Yan L. Photonic reservoir computing using a self-injection locked semiconductor laser under narrowband optical feedback. OPTICS LETTERS 2023; 48:2006-2009. [PMID: 37058628 DOI: 10.1364/ol.485755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 03/08/2023] [Indexed: 06/19/2023]
Abstract
Photonic time-delay reservoir computing (TDRC) using a self-injection locked semiconductor laser under optical feedback from a narrowband apodized fiber Bragg grating (AFBG) is proposed and numerically demonstrated. The narrowband AFBG suppresses the laser's relaxation oscillation and provides self-injection locking in both the weak and strong feedback regimes; conventional optical feedback, by contrast, provides locking only in the weak feedback regime. The TDRC based on self-injection locking is first evaluated in terms of computational ability and memory capacity, then benchmarked on time-series prediction and channel-equalization tasks. Good computing performance is achieved in both the weak and strong feedback regimes. Interestingly, the strong feedback regime broadens the usable range of feedback strength and improves robustness to feedback phase variations in the benchmark tests.
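In time-delay reservoir computing of this kind, a single physical nonlinearity is time-multiplexed: each scalar input is spread over "virtual nodes" by a fixed input mask, and each node also receives feedback from its own state one delay-loop round trip earlier. A minimal numerical sketch (the tanh nonlinearity and all constants are generic stand-ins for the laser dynamics, not a laser model):

```python
import math, random

random.seed(1)

def tdrc_states(inputs, n_virtual=20, eta=0.5, feedback=0.8):
    """Time-multiplexed reservoir: each scalar input u is distributed over
    n_virtual virtual nodes by a fixed random mask; every node responds
    nonlinearly to its masked input plus delayed feedback of its own state
    from one round trip earlier."""
    mask = [random.uniform(-1, 1) for _ in range(n_virtual)]
    delay = [0.0] * n_virtual           # node states one round trip ago
    states = []
    for u in inputs:
        new = [math.tanh(eta * mask[k] * u + feedback * delay[k])
               for k in range(n_virtual)]
        delay = new
        states.append(new)
    return states

states = tdrc_states([math.sin(0.3 * t) for t in range(50)])
```

A linear readout trained on `states` (e.g. by ridge regression) completes the reservoir computer; the feedback strength plays the role of the optical feedback regime studied in the entry.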
22
Grosu GF, Hopp AV, Moca VV, Bârzan H, Ciuparu A, Ercsey-Ravasz M, Winkel M, Linde H, Mureșan RC. The fractal brain: scale-invariance in structure and dynamics. Cereb Cortex 2023; 33:4574-4605. [PMID: 36156074 PMCID: PMC10110456 DOI: 10.1093/cercor/bhac363] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 08/09/2022] [Accepted: 08/10/2022] [Indexed: 11/12/2022] Open
Abstract
The past 40 years have witnessed extensive research on fractal structure and scale-free dynamics in the brain. Although considerable progress has been made, a comprehensive picture has yet to emerge, and needs further linking to a mechanistic account of brain function. Here, we review these concepts, connecting observations across different levels of organization, from both a structural and functional perspective. We argue that, paradoxically, the level of cortical circuits is the least understood from a structural point of view and perhaps the best studied from a dynamical one. We further link observations about scale-freeness and fractality with evidence that the environment provides constraints that may explain the usefulness of fractal structure and scale-free dynamics in the brain. Moreover, we discuss evidence that behavior exhibits scale-free properties, likely emerging from similarly organized brain dynamics, enabling an organism to thrive in an environment that shares the same organizational principles. Finally, we review the sparse evidence for and try to speculate on the functional consequences of fractality and scale-freeness for brain computation. These properties may endow the brain with computational capabilities that transcend current models of neural computation and could hold the key to unraveling how the brain constructs percepts and generates behavior.
Affiliation(s)
- George F Grosu: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Vasile V Moca: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania
- Harald Bârzan: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Andrei Ciuparu: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Electronics, Telecommunications and Information Technology, Technical University of Cluj-Napoca, Str. Memorandumului 28, 400114 Cluj-Napoca, Romania
- Maria Ercsey-Ravasz: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Faculty of Physics, Babes-Bolyai University, Str. Mihail Kogalniceanu 1, 400084 Cluj-Napoca, Romania
- Mathias Winkel: Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Helmut Linde: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania; Merck KGaA, Frankfurter Straße 250, 64293 Darmstadt, Germany
- Raul C Mureșan: Department of Experimental and Theoretical Neuroscience, Transylvanian Institute of Neuroscience, Str. Ploiesti 33, 400157 Cluj-Napoca, Romania

23
van de Ven B, Alegre-Ibarra U, Lemieszczuk PJ, Bobbert PA, Ruiz Euler HC, van der Wiel WG. Dopant network processing units as tuneable extreme learning machines. FRONTIERS IN NANOTECHNOLOGY 2023. [DOI: 10.3389/fnano.2023.1055527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023] Open
Abstract
Inspired by the highly efficient information processing of the brain, which is based on the chemistry and physics of biological tissue, any material system and its physical properties could in principle be exploited for computation. However, it is not always obvious how to use a material system’s computational potential to the fullest. Here, we operate a dopant network processing unit (DNPU) as a tuneable extreme learning machine (ELM) and combine the principles of artificial evolution and ELM to optimise its computational performance on a non-linear classification benchmark task. We find that, for this task, there is an optimal, hybrid operation mode (“tuneable ELM mode”) in between the traditional ELM computing regime with a fixed DNPU and linearly weighted outputs (“fixed-ELM mode”) and the regime where the outputs of the non-linear system are directly tuned to generate the desired output (“direct-output mode”). We show that the tuneable ELM mode reduces the number of parameters needed to perform a formant-based vowel recognition benchmark task. Our results emphasise the power of analog in-matter computing and underline the importance of designing specialised material systems to optimally utilise their physical properties for computation.
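The "fixed-ELM mode" described above, a fixed nonlinear system with linearly weighted outputs, corresponds to the classic extreme learning machine: a random, untrained hidden layer followed by a ridge-regressed linear readout. A self-contained sketch on the XOR problem (the hidden size, ridge constant, and hand-rolled solver are illustrative choices, not the DNPU setup):

```python
import math, random

random.seed(2)

def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting
    (adequate for the small systems used here)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def elm_fit(xs, ys, n_hidden=6, ridge=1e-6):
    """Fixed ELM: random untrained tanh hidden layer + ridge-regressed
    linear readout. Returns a prediction function."""
    d = len(xs[0])
    w_in = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_hidden)]
    b_in = [random.gauss(0, 1) for _ in range(n_hidden)]

    def features(x):
        return [math.tanh(sum(w_in[k][j] * x[j] for j in range(d)) + b_in[k])
                for k in range(n_hidden)] + [1.0]   # trailing 1.0 = output bias

    H = [features(x) for x in xs]
    p = n_hidden + 1
    # Normal equations with ridge: (H^T H + ridge * I) w = H^T y
    hth = [[sum(h[i] * h[j] for h in H) + (ridge if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    hty = [sum(H[s][i] * ys[s] for s in range(len(H))) for i in range(p)]
    w_out = solve(hth, hty)
    return lambda x: sum(wo * f for wo, f in zip(w_out, features(x)))

# XOR is not linearly separable in input space, but it is in the random
# feature space of the hidden layer.
xs = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [-1.0, 1.0, 1.0, -1.0]
predict = elm_fit(xs, ys)
```

The "tuneable ELM mode" of the entry sits between this scheme (only the readout is trained) and directly tuning the nonlinear system itself.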
24
Haruna J, Toshio R, Nakano N. Path integral approach to universal dynamics of reservoir computers. Phys Rev E 2023; 107:034306. [PMID: 37073052 DOI: 10.1103/physreve.107.034306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 02/06/2023] [Indexed: 04/20/2023]
Abstract
In this work, we characterize the reservoir computer (RC) by its network structure, especially the probability distribution of its random coupling constants. First, based on the path-integral method, we clarify the universal behavior of random network dynamics in the thermodynamic limit, which depends only on the asymptotic behavior of the second cumulant generating functions of the network coupling constants. This result enables us to classify random networks into several universality classes according to the distribution function of the coupling constants chosen for the networks. Interestingly, this classification turns out to be closely related to the distribution of eigenvalues of the random coupling matrix. We also comment on the relation between our theory and some practical choices of random connectivity in the RC. Subsequently, we investigate the relationship between the RC's computational power and the network parameters for several universality classes. We perform numerical simulations to evaluate the phase diagrams of the steady reservoir states, common-signal-induced synchronization, and the computational power in chaotic time-series inference tasks. As a result, we clarify the close relationship between these quantities, in particular a remarkable computational performance near the phase transitions, which is realized even near a nonchaotic transition boundary. These results may provide a new perspective on design principles for the RC.
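The connection to the eigenvalue distribution of the random coupling matrix can be probed numerically. As one hedged illustration (the symmetric ensemble and all sizes here are assumptions for the sketch; the networks in the entry need not be symmetric): for a Wigner-type symmetric matrix with entry variance 1/n, the spectrum fills [-2, 2] as n grows, so power iteration should recover a spectral radius near 2.

```python
import math, random

random.seed(3)

def spectral_radius(a, iters=150):
    """Estimate the spectral radius of a symmetric matrix by power
    iteration, reading off the magnitude of the Rayleigh quotient."""
    n = len(a)
    x = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(iters):
        y = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    ax = [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]
    return abs(sum(x[i] * ax[i] for i in range(n)))

# Wigner-type coupling matrix: symmetric, Gaussian entries of variance 1/n.
n = 100
a = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        a[i][j] = a[j][i] = random.gauss(0, 1.0 / math.sqrt(n))
lam = spectral_radius(a)
```

Changing the entry distribution (e.g. to a heavy-tailed one) changes the limiting spectrum, which is the kind of distinction the universality classes in the entry capture.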
Affiliation(s)
- Junichi Haruna: Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Riki Toshio: Department of Physics, Kyoto University, Kyoto 606-8502, Japan
- Naoto Nakano: Graduate School of Advanced Mathematical Sciences, Meiji University, Tokyo 164-8525, Japan

25
Chakraborty B, Mukhopadhyay S. Heterogeneous recurrent spiking neural network for spatio-temporal classification. Front Neurosci 2023; 17:994517. [PMID: 36793542 PMCID: PMC9922697 DOI: 10.3389/fnins.2023.994517] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 01/04/2023] [Indexed: 02/01/2023] Open
Abstract
Spiking neural networks (SNNs) are often touted as brain-inspired learning models for the third wave of artificial intelligence. Although recent SNNs trained with supervised backpropagation show classification accuracy comparable to that of deep networks, the performance of unsupervised-learning-based SNNs remains much lower. This paper presents a heterogeneous recurrent spiking neural network (HRSNN) with unsupervised learning for the spatio-temporal classification of video activity recognition tasks on RGB (KTH, UCF11, UCF101) and event-based (DVS128 Gesture) datasets. Using the novel unsupervised HRSNN model, we observed an accuracy of 94.32% on the KTH dataset, 79.58% and 77.53% on the UCF11 and UCF101 datasets, respectively, and 96.54% on the event-based DVS128 Gesture dataset. The key novelty of the HRSNN is that its recurrent layer consists of heterogeneous neurons with varying firing/relaxation dynamics, trained via heterogeneous spike-timing-dependent plasticity (STDP) with varying learning dynamics for each synapse. We show that this combination of heterogeneity in architecture and learning method outperforms current homogeneous spiking neural networks. We further show that the HRSNN can achieve performance similar to that of state-of-the-art backpropagation-trained supervised SNNs, but with less computation (fewer neurons and sparse connectivity) and less training data.
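Pair-based STDP with per-synapse trace time constants, the kind of learning heterogeneity described above, can be sketched with two exponentially decaying spike traces (all constants are illustrative assumptions, not the HRSNN parameters):

```python
import math

def stdp_run(pre_spikes, post_spikes, tau_pre, tau_post,
             a_plus=0.1, a_minus=0.12, dt=1.0, t_max=100):
    """Pair-based STDP with per-synapse trace time constants.
    pre_spikes/post_spikes are sets of spike times; returns the net dw."""
    x_pre = x_post = 0.0     # exponentially decaying spike traces
    dw, t = 0.0, 0.0
    while t <= t_max:
        x_pre *= math.exp(-dt / tau_pre)
        x_post *= math.exp(-dt / tau_post)
        if t in pre_spikes:
            x_pre += 1.0
            dw -= a_minus * x_post      # post-before-pre: depression
        if t in post_spikes:
            x_post += 1.0
            dw += a_plus * x_pre        # pre-before-post: potentiation
        t += dt
    return dw

# Pre fires just before post: net potentiation.
dw_pot = stdp_run({10.0}, {15.0}, tau_pre=20.0, tau_post=20.0)
# Post fires just before pre: net depression.
dw_dep = stdp_run({15.0}, {10.0}, tau_pre=20.0, tau_post=20.0)
# A shorter ('heterogeneous') post trace shrinks the depression window.
dw_dep_short = stdp_run({15.0}, {10.0}, tau_pre=20.0, tau_post=5.0)
```

Giving each synapse its own `tau_pre`/`tau_post` is what makes the plasticity itself heterogeneous, on top of the heterogeneity of the neuron dynamics.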
26
Boriskov P, Velichko A, Shilovsky N, Belyaev M. Bifurcation and Entropy Analysis of a Chaotic Spike Oscillator Circuit Based on the S-Switch. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1693. [PMID: 36421548 PMCID: PMC9689857 DOI: 10.3390/e24111693] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Revised: 11/13/2022] [Accepted: 11/17/2022] [Indexed: 06/16/2023]
Abstract
This paper presents a model and experimental study of a chaotic spike oscillator based on a leaky integrate-and-fire (LIF) neuron, which has a switching element with an S-type current-voltage characteristic (S-switch). The oscillator generates spikes of the S-switch in the form of chaotic pulse-position modulation, driven by feedback with the rate-coding instability of the LIF neuron. The oscillator model, with a piecewise function for the S-switch, has resistive feedback through a second-order filter. The oscillator circuit is built from four operational amplifiers and two field-effect transistors (MOSFETs) that form an S-switch based on a Schmitt trigger, an active RC filter, and a matching amplifier. We investigate the bifurcation diagrams of the model and the circuit and calculate the entropy of the oscillations. For the analog circuit, the "regular oscillation-chaos" transition is analysed in a series of tests initiated by a step voltage in the matching amplifier. Entropy values are used to estimate the average time for the transition of oscillations to chaos and the degree of signal correlation of the transition mode across tests. The results can be applied in various reservoir computing applications, for example in choosing and configuring the reservoir circuits of the LogNNet network.
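The interplay of a hysteretic S-type switch with capacitive charging can be illustrated by a toy relaxation oscillator: a constant current charges a capacitor until the switch turns on and discharges it, with hysteresis between the on and off thresholds. This is a deliberately simplified, non-chaotic caricature of the circuit described, with made-up component values, intended only to show where the pulse train comes from:

```python
def s_switch_oscillator(steps=4000, dt=1e-3, v_on=1.0, v_off=0.2,
                        i_in=1.0, r_on=0.1, c=1.0):
    """Relaxation oscillation from a hysteretic S-type switch: a constant
    current charges the capacitor; when v reaches v_on the switch closes and
    discharges it through r_on until v falls below v_off (Schmitt-trigger-like
    hysteresis). Returns the switching (spike) times."""
    v, on = 0.0, False
    spike_times = []
    for k in range(steps):
        i_switch = v / r_on if on else 0.0   # switch conducts only when ON
        v += dt / c * (i_in - i_switch)
        if not on and v >= v_on:
            on = True
            spike_times.append(k * dt)
        elif on and v <= v_off:
            on = False
    return spike_times

spike_times = s_switch_oscillator()
```

In the actual circuit, feedback through the second-order filter modulates the effective thresholds, which is what converts this regular pulse train into chaotic pulse-position modulation.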
|
27
|
Zeng K, Linot AJ, Graham MD. Data-driven control of spatiotemporal chaos with reduced-order neural ODE-based models and reinforcement learning. Proc Math Phys Eng Sci 2022. [DOI: 10.1098/rspa.2022.0297]
Abstract
Deep reinforcement learning (RL) is a data-driven method capable of discovering complex control strategies for high-dimensional systems, making it promising for flow control applications. In particular, the present work is motivated by the goal of reducing energy dissipation in turbulent flows, and the example considered is the spatiotemporally chaotic dynamics of the Kuramoto–Sivashinsky equation (KSE). A major challenge associated with RL is that substantial training data must be generated by repeatedly interacting with the target system, making it costly when the system is computationally or experimentally expensive. We mitigate this challenge in a data-driven manner by combining dimensionality reduction via an autoencoder with a neural ODE framework to obtain a low-dimensional dynamical model from just a limited data set. We substitute this data-driven reduced-order model (ROM) in place of the true system during RL training to efficiently estimate the optimal policy, which can then be deployed on the true system. For the KSE actuated with localized forcing (‘jets’) at four locations, we demonstrate that we are able to learn a ROM that accurately captures the actuated dynamics as well as the underlying natural dynamics just from snapshots of the KSE experiencing random actuations. Using this ROM and a control objective of minimizing dissipation and power cost, we extract a control policy from it using deep RL. We show that the ROM-based control strategy translates well to the true KSE and highlight that the RL agent discovers and stabilizes an underlying forced equilibrium solution of the KSE system. We show that this forced equilibrium captured in the ROM and discovered through RL is related to an existing known equilibrium solution of the natural KSE.
Affiliation(s)
- Kevin Zeng: Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Alec J. Linot: Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Michael D. Graham: Department of Chemical and Biological Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
|
28
|
Braccini M, Roli A, Barbieri E, Kauffman SA. On the Criticality of Adaptive Boolean Network Robots. Entropy (Basel) 2022; 24:1368. [PMID: 37420388] [DOI: 10.3390/e24101368]
Abstract
Systems poised at a dynamical critical regime, between order and disorder, have been shown capable of exhibiting complex dynamics that balance robustness to external perturbations and rich repertoires of responses to inputs. This property has been exploited in artificial network classifiers, and preliminary results have also been attained in the context of robots controlled by Boolean networks. In this work, we investigate the role of dynamical criticality in robots undergoing online adaptation, i.e., robots that adapt some of their internal parameters to improve a performance metric over time during their activity. We study the behavior of robots controlled by random Boolean networks, which are either adapted in their coupling with robot sensors and actuators or in their structure or both. We observe that robots controlled by critical random Boolean networks have higher average and maximum performance than that of robots controlled by ordered and disordered nets. Notably, in general, adaptation by change of couplings produces robots with slightly higher performance than those adapted by changing their structure. Moreover, we observe that when adapted in their structure, ordered networks tend to move to the critical dynamical regime. These results provide further support to the conjecture that critical regimes favor adaptation and indicate the advantage of calibrating robot control systems at dynamical critical states.
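The "ordered / critical / disordered" regimes of a random Boolean network can be probed with a Derrida-style one-step perturbation test: flip a single node and measure how far the difference spreads in one update (spread below 1 is ordered, near 1 critical, above 1 chaotic). A minimal sketch — the sizes, the bias `p`, and the trial counts are illustrative; `p = 0.5` with `k = 2` is the classic critical point:

```python
import random

def make_rbn(n, k, p, seed=0):
    """Random Boolean network: each node reads k random inputs through a
    random truth table whose entries are 1 with bias p."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[1 if rng.random() < p else 0 for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    return [tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(len(state))]

def sensitivity(n=100, k=2, p=0.5, trials=200, seed=0):
    """Average one-step spread of a single-bit flip (Derrida-style probe)."""
    rng = random.Random(seed)
    inputs, tables = make_rbn(n, k, p, seed)
    total = 0
    for _ in range(trials):
        s = [rng.randrange(2) for _ in range(n)]
        t = s[:]
        t[rng.randrange(n)] ^= 1                       # flip one node
        s1, t1 = step(s, inputs, tables), step(t, inputs, tables)
        total += sum(a != b for a, b in zip(s1, t1))
    return total / trials
```

For `k = 2` the expected spread is `2 * 2p(1-p)`, so a strongly biased table (small `p`) yields an ordered network while `p = 0.5` sits at criticality — the regime the paper finds most adaptable.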
Affiliation(s)
- Michele Braccini: Department of Computer Science and Engineering, Università di Bologna, Campus of Cesena, I-47521 Cesena, Italy
- Andrea Roli: Department of Computer Science and Engineering, Università di Bologna, Campus of Cesena, I-47521 Cesena, Italy; European Centre for Living Technology, I-30123 Venezia, Italy
- Edoardo Barbieri: Department of Computer Science and Engineering, Università di Bologna, Campus of Cesena, I-47521 Cesena, Italy
|
29
|
Beggs JM. Addressing skepticism of the critical brain hypothesis. Front Comput Neurosci 2022; 16:703865. [PMID: 36185712] [PMCID: PMC9520604] [DOI: 10.3389/fncom.2022.703865]
Abstract
The hypothesis that living neural networks operate near a critical phase transition point has received substantial discussion. This “criticality hypothesis” is potentially important because experiments and theory show that optimal information processing and health are associated with operating near the critical point. Despite the promise of this idea, there have been several objections to it. While earlier objections have already been addressed, the more recent critiques of Touboul and Destexhe have not yet been fully met. The purpose of this paper is to describe their objections and offer responses. Their first objection is that the well-known Brunel model for cortical networks does not display a peak in mutual information near its phase transition, in apparent contradiction to the criticality hypothesis. In response I show that it does have such a peak near the phase transition point, provided it is not strongly driven by random inputs. Their second objection is that even simple models like a coin flip can satisfy multiple criteria of criticality. This suggests that the emergent criticality claimed to exist in cortical networks is just the consequence of a random walk put through a threshold. In response I show that while such processes can produce many signatures of criticality, these signatures (1) do not emerge from collective interactions, (2) do not support information processing, and (3) do not have long-range temporal correlations. Because experiments show these three features are consistently present in living neural networks, such random walk models are inadequate. Nevertheless, I conclude that these objections have been valuable for refining research questions and should always be welcomed as a part of the scientific process.
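The collective signature at issue can be illustrated with the simplest critical model, a branching process: at branching ratio σ = 1 avalanche sizes become heavy-tailed, while a subcritical process (σ < 1) produces only small events. A minimal sketch — the two-offspring binomial scheme, the cap, and the sample counts are illustrative:

```python
import random

def avalanche_size(sigma, rng, cap=10_000):
    """Total activity of one avalanche in a branching process.

    Each active unit independently activates up to 2 offspring, each with
    probability sigma / 2, so the mean branching ratio is sigma.
    """
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(1 for _ in range(2 * active) if rng.random() < sigma / 2)
    return size

rng = random.Random(1)
critical = [avalanche_size(1.0, rng) for _ in range(2000)]     # sigma = 1: heavy tail
subcritical = [avalanche_size(0.5, rng) for _ in range(2000)]  # sigma < 1: small events
```

At σ = 1 the size distribution approaches the mean-field P(S) ∝ S^(−3/2) law, so rare system-spanning avalanches appear, whereas at σ = 0.5 sizes stay exponentially small — the kind of collective, interaction-driven statistic the reply contrasts with thresholded random walks.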
Affiliation(s)
- John M. Beggs: Department of Physics, Indiana University Bloomington, Bloomington, IN, United States; Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
|
30
|
Feketa P, Meurer T, Kohlstedt H. Structural plasticity driven by task performance leads to criticality signatures in neuromorphic oscillator networks. Sci Rep 2022; 12:15321. [PMID: 36096910] [PMCID: PMC9468161] [DOI: 10.1038/s41598-022-19386-z]
Abstract
Oscillator networks are rapidly becoming one of the most promising vehicles for energy-efficient computing due to their intrinsic parallelism of execution. The criticality property of oscillator-based networks is regarded as essential for performing complex tasks. There are numerous bio-inspired synaptic and structural plasticity mechanisms available, especially for spiking neural networks, which can drive the network toward criticality. However, there is no solid connection between these self-adaptation mechanisms and task performance, and it is not clear how and why particular self-adaptation mechanisms contribute to the solution of the task, although their relation to criticality is understood. Here we propose an evolutionary approach to structural plasticity that relies solely on task performance and does not contain any task-independent adaptation mechanisms, which usually contribute toward the criticality of the network. As a driver for the structural plasticity, we use a direct binary search guided by the performance of the classification task, which can be interpreted as an interaction of the network with the environment. Remarkably, such interaction with the environment brings the network to criticality, although this property was not part of the objectives of the employed structural plasticity mechanism. This observation confirms a duality of criticality and task performance, and legitimizes internal activity-dependent plasticity mechanisms from the viewpoint of evolution as mechanisms contributing to task performance, but following the dual route. Finally, we analyze the trained network against task-independent information-theoretic measures and identify the interconnection graph's entropy as an essential ingredient for classification task performance and the network's criticality.
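The driver described here — a direct binary search over the interconnection graph guided only by task performance — reduces to: flip one connection bit, keep the flip only if measured performance improves. A toy sketch; the `performance` function below is a hypothetical stand-in for the paper's classification accuracy, not its actual objective:

```python
import random

rng = random.Random(0)
n_edges = 64
# hypothetical "useful wiring" pattern; matching it stands in for task accuracy
target = [rng.randrange(2) for _ in range(n_edges)]

def performance(mask):
    """Toy stand-in for the classification accuracy of the wired network."""
    return sum(m == t for m, t in zip(mask, target)) / n_edges

mask = [rng.randrange(2) for _ in range(n_edges)]
best = performance(mask)
# direct binary search: flip one connection bit at a time,
# keep the flip only if measured task performance improves
for i in rng.sample(range(n_edges), n_edges):
    mask[i] ^= 1
    p = performance(mask)
    if p > best:
        best = p
    else:
        mask[i] ^= 1   # revert the flip
```

No criticality measure appears anywhere in the loop — which is exactly why the paper's observation that such performance-only search nevertheless drives the network toward criticality is notable.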
Affiliation(s)
- Petro Feketa: Chair of Automation and Control, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science (KiNSIS), Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
- Thomas Meurer: Chair of Automation and Control, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science (KiNSIS), Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
- Hermann Kohlstedt: Chair of Nanoelectronics, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science (KiNSIS), Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
|
31
|
O'Byrne J, Jerbi K. How critical is brain criticality? Trends Neurosci 2022; 45:820-837. [PMID: 36096888] [DOI: 10.1016/j.tins.2022.08.007]
Abstract
Criticality is the singular state of complex systems poised at the brink of a phase transition between order and randomness. Such systems display remarkable information-processing capabilities, evoking the compelling hypothesis that the brain may itself be critical. This foundational idea is now drawing renewed interest thanks to high-density data and converging cross-disciplinary knowledge. Together, these lines of inquiry have shed light on the intimate link between criticality, computation, and cognition. Here, we review these emerging trends in criticality neuroscience, highlighting new data pertaining to the edge of chaos and near-criticality, and making a case for the distance to criticality as a useful metric for probing cognitive states and mental illness. This unfolding progress in the field contributes to establishing criticality theory as a powerful mechanistic framework for studying emergent function and its efficiency in both biological and artificial neural networks.
Affiliation(s)
- Jordan O'Byrne: Cognitive and Computational Neuroscience Lab, Psychology Department, University of Montreal, Montreal, Quebec, Canada
- Karim Jerbi: Cognitive and Computational Neuroscience Lab, Psychology Department, University of Montreal, Montreal, Quebec, Canada; MILA (Quebec Artificial Intelligence Institute), Montreal, Quebec, Canada; UNIQUE Center (Quebec Neuro-AI Research Center), Montreal, Quebec, Canada
|
32
|
Optimal echo state network parameters based on behavioural spaces. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.06.008]
|
33
|
Ivanov VA, Michmizos KP. Astrocytes Learn to Detect and Signal Deviations from Critical Brain Dynamics. Neural Comput 2022; 34:2047-2074. [PMID: 36027803] [DOI: 10.1162/neco_a_01532]
Abstract
Astrocytes are nonneuronal brain cells that were recently shown to actively communicate with neurons and are implicated in memory, learning, and regulation of cognitive states. Interestingly, these information processing functions are also closely linked to the brain's ability to self-organize at a critical phase transition. Investigating the mechanistic link between astrocytes and critical brain dynamics remains beyond the reach of cellular experiments, but it becomes increasingly approachable through computational studies. We developed a biologically plausible computational model of astrocytes to analyze how astrocyte calcium waves can respond to changes in underlying network dynamics. Our results suggest that astrocytes detect synaptic activity and signal directional changes in neuronal network dynamics using the frequency of their calcium waves. We show that this function may be facilitated by receptor scaling plasticity, which enables astrocytes to learn the approximate information content of input synaptic activity. This resulted in a computationally simple, information-theoretic model, which we demonstrate replicates the signaling functionality of the biophysical astrocyte model with receptor scaling. Our findings provide several experimentally testable hypotheses that offer insight into the regulatory role of astrocytes in brain information processing.
Affiliation(s)
- Vladimir A. Ivanov: Computational Brain Lab, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, U.S.A.
- Konstantinos P. Michmizos: Computational Brain Lab, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, U.S.A.
|
34
|
Gradient-based learning drives robust representations in recurrent neural networks by balancing compression and expansion. Nat Mach Intell 2022. [DOI: 10.1038/s42256-022-00498-0]
|
35
|
Simulation platform for pattern recognition based on reservoir computing with memristor networks. Sci Rep 2022; 12:9868. [PMID: 35701445] [PMCID: PMC9197854] [DOI: 10.1038/s41598-022-13687-z]
Abstract
Memristive systems and devices are potentially available for implementing reservoir computing (RC) systems applied to pattern recognition. However, the computational ability of memristive RC systems depends on intertwined factors such as system architectures and the physical properties of memristive elements, which complicates identifying the key factors behind system performance. Here we develop a simulation platform for RC with memristor device networks, which enables testing different system designs for performance improvement. Numerical simulations show that memristor-network-based RC systems can yield high computational performance, comparable to that of state-of-the-art methods, in three time-series classification tasks. We demonstrate that excellent and robust computation under device-to-device variability can be achieved by appropriately setting network structures, the nonlinearity of the memristors, and pre/post-processing, which increases the potential for reliable computation with unreliable component devices. Our results contribute to establishing a design guide for memristive reservoirs toward the realization of energy-efficient machine learning hardware.
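The essentials of such a reservoir system — a fixed random recurrent network plus a trained linear readout — can be sketched as follows. This is a generic echo state reservoir with a `tanh` nonlinearity standing in for the memristive device response; the delay-recall task and all sizes are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T, delay = 100, 500, 3

# task: reproduce the input signal from `delay` steps ago (short-term memory)
u = rng.uniform(-0.5, 0.5, T)
target = np.roll(u, delay)

W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (echo state property)

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    # tanh stands in for the memristive device nonlinearity
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# only the linear readout is trained, here by ridge regression
washout = 50
X, y = states[washout:], target[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
nrmse = np.sqrt(np.mean((X @ w_out - y) ** 2)) / np.std(y)
```

Because the reservoir itself is never trained, device-to-device variability enters only through the fixed random weights — which is why, as the abstract argues, architecture, nonlinearity, and pre/post-processing choices dominate performance.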
|
36
|
Teuscher C. Revisiting the edge of chaos: Again? Biosystems 2022; 218:104693. [DOI: 10.1016/j.biosystems.2022.104693]
|
37
|
Metzner C, Krauss P. Dynamics and Information Import in Recurrent Neural Networks. Front Comput Neurosci 2022; 16:876315. [PMID: 35573264] [PMCID: PMC9091337] [DOI: 10.3389/fncom.2022.876315]
Abstract
Recurrent neural networks (RNNs) are complex dynamical systems, capable of ongoing activity without any driving input. The long-term behavior of free-running RNNs, described by periodic, chaotic and fixed point attractors, is controlled by the statistics of the neural connection weights, such as the density d of non-zero connections, or the balance b between excitatory and inhibitory connections. However, for information processing purposes, RNNs need to receive external input signals, and it is not clear which of the dynamical regimes is optimal for this information import. We use both the average correlations C and the mutual information I between the momentary input vector and the next system state vector as quantitative measures of information import and analyze their dependence on the balance and density of the network. Remarkably, both resulting phase diagrams C(b, d) and I(b, d) are highly consistent, pointing to a link between the dynamical systems and the information-processing approach to complex systems. Information import is maximal not at the "edge of chaos," which is optimally suited for computation, but surprisingly in the low-density chaotic regime and at the border between the chaotic and fixed point regime. Moreover, we find a completely new type of resonance phenomenon, which we call "Import Resonance" (IR), where the information import shows a maximum, i.e., a peak-like dependence on the coupling strength between the RNN and its external input. IR complements previously found Recurrence Resonance (RR), where correlation and mutual information of successive system states peak for a certain amplitude of noise added to the system. Both IR and RR can be exploited to optimize information processing in artificial neural networks and might also play a crucial role in biological neural systems.
Affiliation(s)
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
|
38
|
Khoshkhou M, Montakhab A. Optimal reinforcement learning near the edge of a synchronization transition. Phys Rev E 2022; 105:044312. [PMID: 35590577] [DOI: 10.1103/physreve.105.044312]
Abstract
Recent experimental and theoretical studies have indicated that the putative criticality of cortical dynamics may correspond to a synchronization phase transition. The critical dynamics near such a critical point needs further investigation specifically when compared to the critical behavior near the standard absorbing state phase transition. Since the phenomena of learning and self-organized criticality (SOC) at the edge of synchronization transition can emerge jointly in spiking neural networks due to the presence of spike-timing dependent plasticity (STDP), it is tempting to ask the following: what is the relationship between synchronization and learning in neural networks? Further, does learning benefit from SOC at the edge of synchronization transition? In this paper, we intend to address these important issues. Accordingly, we construct a biologically inspired model of a cognitive system which learns to perform stimulus-response tasks. We train this system using a reinforcement learning rule implemented through dopamine-modulated STDP. We find that the system exhibits a continuous transition from synchronous to asynchronous neural oscillations upon increasing the average axonal time delay. We characterize the learning performance of the system and observe that it is optimized near the synchronization transition. We also study neuronal avalanches in the system and provide evidence that optimized learning is achieved in a slightly supercritical state.
Affiliation(s)
- Mahsa Khoshkhou: Department of Physics, College of Sciences, Shiraz University, Shiraz 71946-84795, Iran
- Afshin Montakhab: Department of Physics, College of Sciences, Shiraz University, Shiraz 71946-84795, Iran
|
39
|
Sun J, Yang W, Zheng T, Xiong X, Guo X, Zou X. Enhancing the Recognition Task Performance of MEMS Resonator-Based Reservoir Computing System via Nonlinearity Tuning. Micromachines (Basel) 2022; 13:317. [PMID: 35208441] [PMCID: PMC8875144] [DOI: 10.3390/mi13020317]
Abstract
Reservoir computing (RC) is a potential neuromorphic paradigm for physically realizing artificial intelligence systems in the Internet of Things society, owing to its well-known low training cost and compatibility with nonlinear devices. Micro-electro-mechanical system (MEMS) resonators exhibiting rich nonlinear dynamics and fading-memory behavior are promising candidates for high-performance hardware RC. Previously, we presented a non-delay-based RC using a single micromechanical resonator with hybrid nonlinear dynamics. Here, we introduce a nonlinearity-tuning strategy to analyze the computing properties (processing speed and recognition accuracy) of the presented RC. We numerically and experimentally analyze the influence of the hybrid nonlinear dynamics using an image classification task. Specifically, we study the transient nonlinear saturation phenomenon by fitting quality factors under different vacuum levels, and search for the optimal operating point (the edge of chaos) using static bifurcation analysis and dynamic vibration numerical models of the Duffing nonlinearity. Under the optimal operating conditions, we experimentally achieved a high classification accuracy of (93 ± 1)% on the handwritten digit recognition benchmark, several times faster than previous work, profiting from the high signal-to-noise ratio (quality factor) and the nonlinearity of the dynamical variables.
Affiliation(s)
- Jie Sun: The State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Wuhao Yang: School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Tianyi Zheng: The State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Xingyin Xiong: School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiaowei Guo: The State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Xudong Zou: The State Key Laboratory of Transducer Technology, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China; QILU Aerospace Information Research Institute, Jinan 250101, China
|
40
|
Toker D, Pappas I, Lendner JD, Frohlich J, Mateos DM, Muthukumaraswamy S, Carhart-Harris R, Paff M, Vespa PM, Monti MM, Sommer FT, Knight RT, D'Esposito M. Consciousness is supported by near-critical slow cortical electrodynamics. Proc Natl Acad Sci U S A 2022; 119:e2024455119. [PMID: 35145021] [PMCID: PMC8851554] [DOI: 10.1073/pnas.2024455119]
Abstract
Mounting evidence suggests that during conscious states, the electrodynamics of the cortex are poised near a critical point or phase transition and that this near-critical behavior supports the vast flow of information through cortical networks during conscious states. Here, we empirically identify a mathematically specific critical point near which waking cortical oscillatory dynamics operate, which is known as the edge-of-chaos critical point, or the boundary between stability and chaos. We do so by applying the recently developed modified 0-1 chaos test to electrocorticography (ECoG) and magnetoencephalography (MEG) recordings from the cortices of humans and macaques across normal waking, generalized seizure, anesthesia, and psychedelic states. Our evidence suggests that cortical information processing is disrupted during unconscious states because of a transition of low-frequency cortical electric oscillations away from this critical point; conversely, we show that psychedelics may increase the information richness of cortical activity by tuning low-frequency cortical oscillations closer to this critical point. Finally, we analyze clinical electroencephalography (EEG) recordings from patients with disorders of consciousness (DOC) and show that assessing the proximity of slow cortical oscillatory electrodynamics to the edge-of-chaos critical point may be useful as an index of consciousness in the clinical setting.
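The 0-1 test referenced here maps a scalar series φ(j) onto a 2-D trajectory p(n) = Σ φ(j)cos(jc), q(n) = Σ φ(j)sin(jc) and asks whether the mean-square displacement of (p, q) grows with lag (K ≈ 1, chaotic) or stays bounded (K ≈ 0, regular). A minimal sketch of the correlation-method variant with a single fixed c — the published test medians K over many random c, and the paper applies a further modified version:

```python
import numpy as np

def zero_one_test(phi, c=1.7):
    """Gottwald-Melnikov 0-1 test (correlation method): K ~ 1 chaotic, K ~ 0 regular."""
    n = len(phi)
    j = np.arange(1, n + 1)
    p = np.cumsum(phi * np.cos(j * c))     # project the series onto a rotating frame
    q = np.cumsum(phi * np.sin(j * c))
    ncut = n // 10
    M = np.array([np.mean((p[k:] - p[:-k]) ** 2 + (q[k:] - q[:-k]) ** 2)
                  for k in range(1, ncut)])  # mean-square displacement vs lag
    return float(np.corrcoef(np.arange(1, ncut), M)[0, 1])

# chaotic observable: logistic map at r = 4
x, logi = 0.3, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    logi.append(x)

K_chaos = zero_one_test(np.array(logi))
K_regular = zero_one_test(np.sin(0.2 * np.arange(2000)))
```

For the chaotic map the (p, q) walk diffuses and M grows almost linearly in the lag, while for the periodic signal it stays bounded — the stable/chaotic boundary the study tracks in cortical oscillations.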
Affiliation(s)
- Daniel Toker: Department of Psychology, University of California, Los Angeles, CA 90095
- Ioannis Pappas: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704; Laboratory of Neuro Imaging, Stevens Institute for Neuroimaging and Informatics, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033
- Janna D. Lendner: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Anesthesiology and Intensive Care, University Medical Center, 72076 Tübingen, Germany
- Joel Frohlich: Department of Psychology, University of California, Los Angeles, CA 90095
- Diego M. Mateos: Consejo Nacional de Investigaciones Científicas y Técnicas de Argentina, C1425 Buenos Aires, Argentina; Facultad de Ciencia y Tecnología, Universidad Autónoma de Entre Ríos, E3202 Paraná, Entre Ríos, Argentina; Grupo de Análisis de Neuroimágenes, Instituto de Matemática Aplicada del Litoral, S3000 Santa Fe, Argentina
- Suresh Muthukumaraswamy: School of Pharmacy, Faculty of Medical and Health Sciences, The University of Auckland, 1010 Auckland, New Zealand
- Robin Carhart-Harris: Neuropsychopharmacology Unit, Centre for Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom; Centre for Psychedelic Research, Department of Psychiatry, Imperial College London, London SW7 2AZ, United Kingdom
- Michelle Paff: Department of Neurological Surgery, University of California, Irvine, CA 92697
- Paul M. Vespa: Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
- Martin M. Monti: Department of Psychology, University of California, Los Angeles, CA 90095; Brain Injury Research Center, Department of Neurosurgery, University of California, Los Angeles, CA 90095
- Friedrich T. Sommer: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94704
- Robert T. Knight: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704
- Mark D'Esposito: Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94704; Department of Psychology, University of California, Berkeley, CA 94704
|
41
|
Khajeh R, Fumarola F, Abbott LF. Sparse balance: Excitatory-inhibitory networks with small bias currents and broadly distributed synaptic weights. PLoS Comput Biol 2022; 18:e1008836. [PMID: 35139071] [PMCID: PMC8827417] [DOI: 10.1371/journal.pcbi.1008836]
Abstract
Cortical circuits generate excitatory currents that must be cancelled by strong inhibition to assure stability. The resulting excitatory-inhibitory (E-I) balance can generate spontaneous irregular activity but, in standard balanced E-I models, this requires that an extremely strong feedforward bias current be included along with the recurrent excitation and inhibition. The absence of experimental evidence for such large bias currents inspired us to examine an alternative regime that exhibits asynchronous activity without requiring unrealistically large feedforward input. In these networks, irregular spontaneous activity is supported by a continually changing sparse set of neurons. To support this activity, synaptic strengths must be drawn from high-variance distributions. Unlike standard balanced networks, these sparse balance networks exhibit robust nonlinear responses to uniform inputs and non-Gaussian input statistics. Interestingly, the speed, not the size, of synaptic fluctuations dictates the degree of sparsity in the model. In addition to simulations, we provide a mean-field analysis to illustrate the properties of these networks.
Affiliation(s)
- Ramin Khajeh
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York City, New York, United States of America
| | - Francesco Fumarola
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York City, New York, United States of America
- Laboratory for Neural Computation and Adaptation, RIKEN Center for Brain Science, Saitama, Japan
| | - LF Abbott
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York City, New York, United States of America
| |
|
42
|
Krishnamurthy K, Can T, Schwab DJ. Theory of Gating in Recurrent Neural Networks. Phys Rev X 2022; 12:011011. [PMID: 36545030 PMCID: PMC9762509 DOI: 10.1103/physrevx.12.011011] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative interaction, is ubiquitous in real neurons and is also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel marginally stable state, in which the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, in which inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike in additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing ML practitioners with a map for principled parameter-initialization choices.
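The timescale-control mechanism can be illustrated with a minimal update-gated rate network, h ← (1 − z)·h + z·tanh(Wh): a nearly closed gate (z → 0) freezes the state into integrator-like behavior, while z = 1 recovers a plain additive RNN step. This is a hedged sketch with illustrative sizes and gate values, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Update-gated rate network: the gate z sets the effective timescale.
N = 100
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))

def drift(z, h0, steps=20):
    """Distance travelled by the state under gate value z."""
    h = h0.copy()
    for _ in range(steps):
        h = (1.0 - z) * h + z * np.tanh(W @ h)
    return np.linalg.norm(h - h0)

h0 = rng.normal(size=N)
drift_slow = drift(z=0.002, h0=h0)   # nearly closed gate: long timescale
drift_fast = drift(z=1.0, h0=h0)     # open gate: ordinary additive RNN step
print(f"state drift after 20 steps: z=0.002 -> {drift_slow:.2f}, z=1.0 -> {drift_fast:.2f}")
```

The small-gate run barely moves the state over the same number of steps, which is the sense in which the gate stretches the network's intrinsic timescale.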
Affiliation(s)
- Kamesh Krishnamurthy
- Joseph Henry Laboratories of Physics and PNI, Princeton University, Princeton, New Jersey 08544, USA
| | - Tankut Can
- Institute for Advanced Study, Princeton, New Jersey 08540, USA
| | - David J. Schwab
- Initiative for Theoretical Sciences, Graduate Center, CUNY, New York, New York 10016, USA
| |
|
43
|
Yang K, Yang JJ, Huang R, Yang Y. Nonlinearity in Memristors for Neuromorphic Dynamic Systems. Small Sci 2021. [DOI: 10.1002/smsc.202100049] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Affiliation(s)
- Ke Yang
- Department of Micro/nanoelectronics, Peking University, Beijing 100871, China
| | - J. Joshua Yang
- Electrical and Computer Engineering Department, University of Southern California, Los Angeles, CA 90089, USA
| | - Ru Huang
- Department of Micro/nanoelectronics, Peking University, Beijing 100871, China
- Center for Brain Inspired Chips, Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- Center for Brain Inspired Intelligence, Chinese Institute for Brain Research (CIBR), Beijing 102206, China
| | - Yuchao Yang
- Department of Micro/nanoelectronics, Peking University, Beijing 100871, China
- Center for Brain Inspired Chips, Institute for Artificial Intelligence, Peking University, Beijing 100871, China
- Center for Brain Inspired Intelligence, Chinese Institute for Brain Research (CIBR), Beijing 102206, China
| |
|
44
|
Psychiatric Illnesses as Disorders of Network Dynamics. Biol Psychiatry Cogn Neurosci Neuroimaging 2021; 6:865-876. [DOI: 10.1016/j.bpsc.2020.01.001] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Accepted: 01/06/2020] [Indexed: 01/05/2023]
|
45
|
Suárez LE, Richards BA, Lajoie G, Misic B. Learning function from structure in neuromorphic networks. Nat Mach Intell 2021. [DOI: 10.1038/s42256-021-00376-1] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
46
|
Morales GB, Muñoz MA. Optimal Input Representation in Neural Systems at the Edge of Chaos. Biology (Basel) 2021; 10:702. [PMID: 34439935 PMCID: PMC8389338 DOI: 10.3390/biology10080702] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 07/16/2021] [Accepted: 07/19/2021] [Indexed: 11/16/2022]
Abstract
Shedding light on how biological systems represent, process and store information in noisy environments is a key and challenging goal. A stimulating, though controversial, hypothesis posits that operating in dynamical regimes near the edge of a phase transition, i.e., at criticality or the "edge of chaos", can provide information-processing living systems with important operational advantages, creating, e.g., an optimal trade-off between robustness and flexibility. Here, we elaborate on a recent theoretical result, which establishes that the spectrum of covariance matrices of neural networks representing complex inputs in a robust way needs to decay as a power law of the rank, with an exponent close to unity, a result that has indeed been experimentally verified in neurons of the mouse visual cortex. Aiming to understand and mimic these results, we construct an artificial neural network and train it to classify images. We find that the best performance in this task is obtained when the network operates near the critical point, at which the eigenspectrum of the covariance matrix follows the very same statistics as actual neurons do. Thus, we conclude that operating near criticality can also have, besides its usually alleged virtues, the advantage of allowing for flexible, robust and efficient input representations.
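The central quantity in this abstract, the rank-ordered eigenvalue spectrum of a neural covariance matrix and its decay exponent (close to unity in mouse visual cortex), can be sketched on synthetic data. The sizes and fitting range below are illustrative assumptions; we build "responses" with a 1/rank spectrum and recover the exponent by log-log regression.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_stimuli = 300, 2000
ranks = np.arange(1, n_neurons + 1)
target_spectrum = ranks ** -1.0                  # lambda_n ~ n^(-1), exponent 1

# Responses with the desired covariance: scale white rows by sqrt(lambda_n).
Z = rng.normal(size=(n_neurons, n_stimuli))
X = np.sqrt(target_spectrum)[:, None] * Z

cov = X @ X.T / n_stimuli
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues

# Fit the decay exponent over intermediate ranks, where the ends are noisy.
sel = slice(10, 200)
alpha = -np.polyfit(np.log(ranks[sel]), np.log(eigvals[sel]), 1)[0]
print(f"estimated decay exponent: {alpha:.2f}")
```

The recovered slope sits near the built-in exponent of 1, the value the theoretical result singles out as the boundary between robust and unstable input representations.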
|
47
|
Zhu R, Hochstetter J, Loeffler A, Diaz-Alvarez A, Nakayama T, Lizier JT, Kuncic Z. Information dynamics in neuromorphic nanowire networks. Sci Rep 2021; 11:13047. [PMID: 34158521 PMCID: PMC8219687 DOI: 10.1038/s41598-021-92170-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 05/31/2021] [Indexed: 12/18/2022] Open
Abstract
Neuromorphic systems composed of self-assembled nanowires exhibit a range of neural-like dynamics arising from the interplay of their synapse-like electrical junctions and their complex network topology. Additionally, various information-processing tasks have been demonstrated with neuromorphic nanowire networks. Here, we investigate how these unique systems process information through information-theoretic metrics. In particular, transfer entropy (TE) and active information storage (AIS) are employed to investigate dynamical information flow and short-term memory in nanowire networks. In addition to finding that the topologically central parts of networks contribute the most to the information flow, our results also reveal that TE and AIS are maximized when the network transitions from a quiescent to an active state. The performance of neuromorphic networks in memory and learning tasks is demonstrated to depend on their internal dynamical states as well as on their topological structure. Optimal performance is found when these networks are pre-initialised to the transition state where TE and AIS are maximal. Furthermore, an optimal range of information-processing resources (i.e. connectivity density) is identified for performance. Overall, our results demonstrate that information dynamics is a valuable tool to study and benchmark neuromorphic systems.
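For readers unfamiliar with the metric, pairwise transfer entropy TE(X → Y) can be sketched with a simple plug-in estimator on binary time series (history length 1). This toy stand-in for the estimators used on nanowire data illustrates the key asymmetry: a driven channel shows positive TE, the reverse direction does not.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)

def transfer_entropy(x, y):
    """Plug-in TE(X -> Y) in bits for binary series, history length 1:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs = Counter(zip(y[1:], y[:-1]))
    hist = Counter(zip(y[:-1], x[:-1]))
    margin = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / hist[(y0, x0)]
        p_cond_self = pairs[(y1, y0)] / margin[y0]
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Driven pair: y copies the previous x with 90% probability.
T = 20000
x = rng.integers(0, 2, size=T)
y = np.empty(T, dtype=int)
y[0] = 0
flip = rng.random(T) < 0.1
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(f"TE(x->y) = {te_xy:.3f} bits, TE(y->x) = {te_yx:.5f} bits")
```

In practice, the nanowire studies rely on dedicated estimation toolkits rather than a hand-rolled counter, but the directional picture is the same: information flows from driver to driven node.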
Affiliation(s)
- Ruomin Zhu
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia.
| | - Joel Hochstetter
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
| | - Alon Loeffler
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
| | - Adrian Diaz-Alvarez
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan
| | - Tomonobu Nakayama
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan
- Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Japan
| | - Joseph T Lizier
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
| | - Zdenka Kuncic
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia.
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan.
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia.
- Sydney Nano Institute, The University of Sydney, Sydney, NSW, 2006, Australia.
| |
|
48
|
Talamini J, Medvet E, Nichele S. Criticality-Driven Evolution of Adaptable Morphologies of Voxel-Based Soft-Robots. Front Robot AI 2021; 8:673156. [PMID: 34222354 PMCID: PMC8247470 DOI: 10.3389/frobt.2021.673156] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 05/27/2021] [Indexed: 11/24/2022] Open
Abstract
The paradigm of voxel-based soft robots has made it possible to shift complexity from the control algorithm to the robot morphology itself. The bodies of voxel-based soft robots are extremely versatile and more adaptable than those of traditional robots, since they consist of many simple components that can be freely assembled. Nonetheless, it is still not clear which factors are responsible for the adaptability of the morphology, which we define as the ability to cope with tasks requiring different skills. In this work, we propose a task-agnostic approach for automatically designing adaptable soft robotic morphologies in simulation, based on the concept of criticality. Criticality is a property of dynamical systems close to a phase transition between the ordered and the chaotic regime. Our hypotheses are that 1) morphologies can be optimized to exhibit critical dynamics and 2) robots with those morphologies are no worse, on a set of different tasks, than robots with handcrafted morphologies. We introduce a measure of criticality in the context of voxel-based soft robots based on avalanche analysis, a technique often used to assess criticality in biological and artificial neural networks. We let the robot morphologies evolve toward criticality by measuring how close their avalanche distribution is to a power-law distribution. We then validate the impact of this approach on actual adaptability by measuring the resulting robots' performance on three different tasks designed to require different skills. The validation results confirm that criticality is indeed a good indicator of the adaptability of a soft robotic morphology, and therefore a promising approach for guiding the design of more adaptive voxel-based soft robots.
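The avalanche analysis this abstract relies on can be illustrated with a standard surrogate: a branching process whose avalanche-size distribution approaches the power law P(s) ~ s^(-3/2) at the critical branching ratio m = 1, and gains an exponential cutoff away from it. The parameters below are illustrative; the robots' own activity avalanches would replace this surrogate.

```python
import numpy as np

rng = np.random.default_rng(4)

def avalanche_sizes(m, n_avalanches=20000, cap=10000):
    """Sizes of avalanches from a Poisson branching process with ratio m."""
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < cap:
            active = rng.poisson(m * active)   # offspring of this generation
            size += active
        sizes.append(min(size, cap))
    return np.array(sizes)

critical = avalanche_sizes(m=1.0)
subcritical = avalanche_sizes(m=0.7)

# Closeness to a power law shows up in the tail weight: critical dynamics
# produce far more large avalanches than subcritical dynamics.
tail_crit = np.mean(critical >= 100)
tail_sub = np.mean(subcritical >= 100)
print(f"P(size >= 100): critical = {tail_crit:.3f}, subcritical = {tail_sub:.5f}")
```

A fitness function like the paper's would score a morphology by how closely its empirical avalanche distribution matches the heavy-tailed, critical case.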
Affiliation(s)
- Jacopo Talamini
- Evolutionary Robotics and Artificial Life Lab, Department of Engineering and Architecture, University of Trieste, Trieste, Italy
| | - Eric Medvet
- Evolutionary Robotics and Artificial Life Lab, Department of Engineering and Architecture, University of Trieste, Trieste, Italy
| | - Stefano Nichele
- Department of Computer Science, Artificial Intelligence Lab, Oslo Metropolitan University, Oslo, Norway
- Department of Holistic Systems, Simula Metropolitan Center for Digital Engineering, Oslo, Norway
| |
|
49
|
Gu L, Wu R. Robust cortical criticality and diverse dynamics resulting from functional specification. Phys Rev E 2021; 103:042407. [PMID: 34005915 DOI: 10.1103/physreve.103.042407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 03/23/2021] [Indexed: 11/07/2022]
Abstract
Despite the recognition of the layered structure and evident criticality of the cortex, how the specification of input, output, and computational layers affects self-organized criticality has not been explored much. By constructing heterogeneous structures with a well-accepted model of leaky neurons, we find that such specification can lead to robust criticality that is rather insensitive to the strength of external stimuli. This naturally unifies adaptation to strong inputs without extra synaptic plasticity mechanisms. A low degree of recurrence constitutes an alternative explanation for subcriticality, beyond high-frequency inputs. Unlike fully recurrent networks, where external stimuli always render subcriticality, the dynamics of networks with sufficient feedforward connections can be driven to criticality and supercriticality. These findings indicate that functional and structural specification, and their interplay with external stimuli, are of crucial importance for the network dynamics. The robust criticality puts forward networks of leaky neurons as promising platforms for realizing artificial neural networks that work in the vicinity of critical points.
Affiliation(s)
- Lei Gu
- Department of Physics and Astronomy, University of California, Irvine, California 92697, USA
| | - Ruqian Wu
- Department of Physics and Astronomy, University of California, Irvine, California 92697, USA
| |
|
50
|
Mandal S, Shrimali MD. Achieving criticality for reservoir computing using environment-induced explosive death. Chaos 2021; 31:031101. [PMID: 33810729 DOI: 10.1063/5.0038881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 02/08/2021] [Indexed: 06/12/2023]
Abstract
Networks of oscillators coupled via a common environment have been widely studied due to their great abundance in nature. We exploit the occurrence of explosive oscillation quenching in a network of non-identical oscillators, coupled to each other indirectly via an environment, for efficient reservoir computing. At the very edge of the explosive transition, the reservoir achieves criticality, maximizing its information-processing capacity. The efficiency of the reservoir in different configurations is determined by its computational accuracy on different tasks. We analyze the dependence of accuracy on the dynamical behavior of the reservoir in terms of an order parameter symbolizing the desynchronization of the system. We find that the reservoir achieves criticality in the steady-state region right at the edge of the hysteresis area. By computing the entropy of the reservoir for different tasks, we confirm that maximum accuracy corresponds to the edge of chaos, or edge of stability, for this reservoir.
Affiliation(s)
- Swarnendu Mandal
- Department of Physics, Central University of Rajasthan, NH-8, Bandar Sindri, Ajmer 305 817, India
| | - Manish Dev Shrimali
- Department of Physics, Central University of Rajasthan, NH-8, Bandar Sindri, Ajmer 305 817, India
| |
|