1
Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. PMID: 35782090; PMCID: PMC9243340; DOI: 10.3389/fncom.2022.859874. Citations in RCA: 5; impact index per article: 2.5. Received 2022-01-21; accepted 2022-05-10. Open access.
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy in which receptive fields become increasingly complex and the coding increasingly sparse. Nowadays, ANNs outperform humans in controlled pattern-recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, non-reflex brain actions. They also enable a significant reduction in energy consumption. However, training SNNs is a challenging problem that strongly limits their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack 2D or 3D arrays of plastic synaptic contacts on a chip, directly processing analog information. Memristive devices are thus good candidates for implementing in-memory and in-sensor computing. Memristive SNNs could then diverge from the ANN development path and build their own niche: cognitive, or reflective, computations.
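The in-memory computing principle the abstract refers to can be made concrete with a short sketch: a memristive crossbar computes a vector-matrix product in a single analog step (Ohm's law per device, Kirchhoff's current law per column), and the column currents then drive leaky integrate-and-fire (LIF) neurons. This is a minimal illustrative sketch under idealized assumptions (no device noise or line resistance), not the authors' implementation; the conductance range, the current-to-drive gain, and all neuron parameters below are arbitrary choices made for readability.

```python
# Idealized memristive crossbar feeding LIF neurons -- illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 4
G = rng.uniform(1e-6, 1e-4, size=(n_in, n_out))  # device conductances (S), assumed range

def crossbar_currents(v_in, G):
    """Column currents of an ideal crossbar: I_j = sum_i V_i * G_ij."""
    return v_in @ G

# LIF neurons driven by the crossbar currents (all parameters illustrative)
dt, tau, v_th, v_reset = 1e-3, 20e-3, 1.0, 0.0
r_gain = 1e4                                     # converts current to membrane drive (a.u.)
v_mem = np.zeros(n_out)

for _ in range(100):
    v_in = rng.uniform(0.0, 0.5, size=n_in)      # analog input voltages
    i_out = crossbar_currents(v_in, G)           # one-step analog multiply-accumulate
    v_mem += dt / tau * (-v_mem + r_gain * i_out)
    spikes = v_mem >= v_th
    v_mem[spikes] = v_reset                      # reset membrane after a spike
```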
Affiliation(s)
- Valeri A. Makarov
- Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Correspondence: Valeri A. Makarov
- Sergey A. Lobov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Sergey Shchanikov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Department of Information Technologies, Vladimir State University, Vladimir, Russia
- Alexey Mikhaylov
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Viktor B. Kazantsev
- Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
2
Score Prediction of Sports Events Based on Parallel Self-Organizing Nonlinear Neural Network. Comput Intell Neurosci 2022; 2022:4882309. PMID: 35075357; PMCID: PMC8783733; DOI: 10.1155/2022/4882309. Citations in RCA: 0; impact index per article: 0. Received 2021-11-19; accepted 2021-12-22.
Abstract
This paper introduces the basic concepts and main characteristics of parallel self-organizing networks and analyzes and predicts their behavior using neural networks and their hybrid models. First, we train on historical data of the parallel self-organizing network to capture its regularities and development trend, then use the discovered regularities to predict new data and compare the predictions with the true values. Second, the paper takes the prediction of chaotic parallel self-organizing networks as the main research line and neural networks as the main research method. Building on a summary and analysis of traditional neural networks, it first proposes to unify and jointly optimize the phase-space reconstruction parameters and the neural network structure parameters, and then proposes dividing the phase space into multiple subspaces. A multi-neural-network method is adopted to track and predict the local trajectory of the chaotic attractor in each subspace with high precision, improving the overall forecasting performance. In the experiments, short-term and longer-term prediction tests were performed on the chaotic parallel self-organizing network. The results show that the accuracy improves greatly both in simulation and on real observed data. When predicting the parallel self-organizing network, the minimum error of the self-organizing difference model is 0.3691, that of the self-organizing autoregressive neural network is 0.008, and that of the neural network is 0.0081. In the parallel self-organizing prediction of sports event scores, the errors of the above models are 0.0174, 0.0081, 0.0135, and 0.0381, respectively.
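The two ingredients described above, phase-space reconstruction by delay embedding and partitioning the reconstructed space into subspaces with a local one-step predictor per subspace, can be sketched as follows. This is a hedged illustration, not the paper's model: the logistic-map surrogate series, the embedding parameters, the median split, and the linear local models are all assumptions (the paper uses neural networks as local predictors).

```python
# Delay embedding + per-subspace local predictors -- illustrative sketch only.
import numpy as np

# surrogate chaotic series: logistic map
x = np.empty(2000); x[0] = 0.3
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

m, tau = 3, 1                        # assumed embedding dimension and delay
N = len(x) - (m - 1) * tau - 1
X = np.column_stack([x[i * tau : i * tau + N] for i in range(m)])  # embedded states
y = x[(m - 1) * tau + 1 : (m - 1) * tau + 1 + N]                   # next value to predict

# split the reconstructed phase space into two subspaces by a median threshold
split = np.median(X[:, 0])
preds = np.empty(N)
for mask in (X[:, 0] <= split, X[:, 0] > split):
    A = np.column_stack([X[mask], np.ones(mask.sum())])
    w, *_ = np.linalg.lstsq(A, y[mask], rcond=None)  # local linear model per subspace
    preds[mask] = A @ w

print("one-step RMSE:", np.sqrt(np.mean((preds - y) ** 2)))
```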
3
Lobov SA, Zharinov AI, Makarov VA, Kazantsev VB. Spatial Memory in a Spiking Neural Network with Robot Embodiment. Sensors 2021; 21(8):2678. PMID: 33920246; PMCID: PMC8070389; DOI: 10.3390/s21082678. Citations in RCA: 6; impact index per article: 2.0. Received 2021-02-19; revised 2021-04-06; accepted 2021-04-07.
Abstract
Cognitive maps and spatial memory are fundamental paradigms of brain functioning. Here, we present a spiking neural network (SNN) capable of generating an internal representation of the external environment and implementing spatial memory. The SNN initially has a non-specific architecture, which is then shaped by Hebbian-type synaptic plasticity. The network receives stimuli at specific loci, while memory retrieval operates as a functional SNN response in the form of population bursts. The SNN function is explored through its embodiment in a robot moving in an arena with safe and dangerous zones. To validate the results and calculate information characteristics, including learning curves, we propose a measure of the global network memory based on the synaptic vector field approach. We show that after training, the SNN can effectively control the robot's cognitive behavior, allowing it to avoid dangerous regions in the arena. However, the learning is not perfect: the robot eventually visits dangerous areas. Such behavior, also observed in animals, enables relearning in time-evolving environments. If a dangerous zone moves to another place, the SNN remaps the positive and negative areas, escaping the catastrophic interference phenomenon known for some AI architectures. Thus, the robot adapts to a changing world.
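The Hebbian-type plasticity that shapes the network can be written down as pair-based spike-timing-dependent plasticity (STDP), a standard formalization of Hebbian learning in SNNs. Treat this as an assumption for illustration: the amplitudes and time constants below are generic textbook values, not parameters taken from the paper.

```python
# Pair-based STDP weight update -- a generic Hebbian-type rule, illustrative values.
import numpy as np

a_plus, a_minus = 0.01, 0.012       # potentiation / depression amplitudes (assumed)
tau_plus, tau_minus = 20.0, 20.0    # STDP time constants in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre (ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: causal pairing, potentiate
        return a_plus * np.exp(-dt / tau_plus)
    else:         # post fires before pre: anti-causal pairing, depress
        return -a_minus * np.exp(dt / tau_minus)

# accumulate over all spike pairs and keep the weight bounded
w = 0.5
for t_pre in [10.0, 30.0, 55.0]:
    for t_post in [12.0, 28.0, 60.0]:
        w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
print("weight after pairing:", w)
```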
Affiliation(s)
- Sergey A. Lobov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
- Correspondence:
- Alexey I. Zharinov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Valeri A. Makarov
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Instituto de Matemática Interdisciplinar, Facultad de Ciencias Matemáticas, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Victor B. Kazantsev
- Neurotechnology Department, Lobachevsky State University of Nizhny Novgorod, 23 Gagarin Ave., 603950 Nizhny Novgorod, Russia; (A.I.Z.); (V.A.M.); (V.B.K.)
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, 1 Universitetskaya Str., 420500 Innopolis, Russia
- Center For Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, 14 Nevsky Str., 236016 Kaliningrad, Russia
- Lab of Neurocybernetics, Russian State Scientific Center for Robotics and Technical Cybernetics, 21 Tikhoretsky Ave., 194064 St. Petersburg, Russia
4
Villacorta-Atienza JA, Calvo Tapia C, Díez-Hermano S, Sánchez-Jiménez A, Lobov S, Krilova N, Murciano A, López-Tolsa GE, Pellón R, Makarov VA. Static internal representation of dynamic situations reveals time compaction in human cognition. J Adv Res 2020; 28:111-125. PMID: 33364049; PMCID: PMC7753960; DOI: 10.1016/j.jare.2020.08.008. Citations in RCA: 4; impact index per article: 1.0. Received 2020-04-12; revised 2020-08-05; accepted 2020-08-11. Open access.
Abstract
Introduction: The human brain has evolved under the constraint of survival in complex dynamic situations. It makes fast and reliable decisions based on internal representations of the environment. Whereas the neural mechanisms involved in the internal representation of space are becoming known, spatiotemporal cognition as a whole remains a challenge. Growing experimental evidence suggests that brain mechanisms devoted to spatial cognition may also participate in spatiotemporal information processing.
Objectives: The time compaction hypothesis postulates that the brain represents both static and dynamic situations as purely static maps. Such an internal reduction of external complexity allows humans to process time-changing situations efficiently in real time. According to time compaction, there may be a deep inner similarity between the representations of conventional static and dynamic visual stimuli. Here, we test the hypothesis and report the first experimental evidence of time compaction in humans.
Methods: We engaged human subjects in a discrimination-learning task consisting of the classification of static and dynamic visual stimuli. If time compaction induces a hidden correspondence between static and dynamic stimuli, the learning performance should be modulated. We studied such a modulation experimentally and with a computational model.
Results: The collected data validated the predicted learning modulation and confirmed that time compaction is a salient cognitive strategy adopted by the human brain to process time-changing situations. Mathematical modelling supported the finding. We also found that men are more prone to exploit time compaction, consistent with the hypothesis's framing of it as a cognitive basis for survival.
Conclusions: The static internal representation of dynamic situations is a human cognitive mechanism involved in decision-making and strategy planning to cope with time-changing environments. The finding opens a new avenue to understanding how humans efficiently interact with our dynamic world and thrive in nature.
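The core of the time compaction hypothesis, reducing a dynamic situation to a single static map, can be illustrated with a toy computation: mark a location as "effectively occupied" if a moving object will be there at about the time the agent could first reach it. This is our illustrative reading of the idea, not the authors' computational model; the grid size, speeds, straight-line object motion, and tolerance thresholds are all assumptions.

```python
# Toy "time compaction": fold a moving obstacle into one static hazard map.
import numpy as np

size, v_agent = 40, 1.0                           # grid cells; agent speed (cells/step), assumed
agent = np.array([0.0, 0.0])
obj_pos0, obj_vel = np.array([30.0, 5.0]), np.array([-1.0, 0.5])  # assumed object motion

static_map = np.zeros((size, size), dtype=bool)
ys, xs = np.mgrid[0:size, 0:size]
arrival = np.hypot(xs - agent[0], ys - agent[1]) / v_agent  # earliest arrival time per cell

for t in range(2 * size):                         # march the object through time
    obj = obj_pos0 + t * obj_vel
    # cells the agent reaches around time t that the object occupies at t are hazards
    hit = (np.hypot(xs - obj[0], ys - obj[1]) < 1.5) & (np.abs(arrival - t) < 0.5)
    static_map |= hit                             # accumulate into one static map

print("cells marked as effectively occupied:", int(static_map.sum()))
```

The dynamic scene is thereby handled with purely spatial machinery: a planner can treat `static_map` exactly like a map of fixed obstacles.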
Affiliation(s)
- José Antonio Villacorta-Atienza
- B.E.E. Department, Faculty of Biology, Complutense University of Madrid, Spain
- Institute of Interdisciplinary Mathematics, Complutense University of Madrid, Spain
- Carlos Calvo Tapia
- Institute of Interdisciplinary Mathematics, Complutense University of Madrid, Spain
- Sergio Díez-Hermano
- B.E.E. Department, Faculty of Biology, Complutense University of Madrid, Spain
- Abel Sánchez-Jiménez
- B.E.E. Department, Faculty of Biology, Complutense University of Madrid, Spain
- Institute of Interdisciplinary Mathematics, Complutense University of Madrid, Spain
- Sergey Lobov
- Neural Network Technologies Lab, Lobachevsky State University of Nizhny Novgorod, Russia
- Nadia Krilova
- Neural Network Technologies Lab, Lobachevsky State University of Nizhny Novgorod, Russia
- Antonio Murciano
- B.E.E. Department, Faculty of Biology, Complutense University of Madrid, Spain
- Gabriela E López-Tolsa
- Department of Basic Psychology, Faculty of Psychology, National Distance Education University, Spain
- Ricardo Pellón
- Department of Basic Psychology, Faculty of Psychology, National Distance Education University, Spain
- Valeri A Makarov
- Institute of Interdisciplinary Mathematics, Complutense University of Madrid, Spain
- Neural Network Technologies Lab, Lobachevsky State University of Nizhny Novgorod, Russia
5
Calvo Tapia C, Tyukin I, Makarov VA. Universal principles justify the existence of concept cells. Sci Rep 2020; 10:7889. PMID: 32398873; PMCID: PMC7217959; DOI: 10.1038/s41598-020-64466-7. Citations in RCA: 3; impact index per article: 0.8. Received 2019-09-24; accepted 2020-04-16. Open access.
Abstract
The widespread consensus holds that the emergence of abstract concepts in the human brain, such as a "table", requires a complex, perfectly orchestrated interaction of myriads of neurons. However, this is not what converging experimental evidence suggests. Single neurons, the so-called concept cells (CCs), may be responsible for complex tasks performed by humans. This finding, with deep implications for neuroscience and the theory of neural networks, has had no solid theoretical grounds so far. Our recent advances in the stochastic separability of high-dimensional data provide the basis to validate the existence of CCs. Here, starting from a few first principles, we lay out biophysical foundations showing that CCs are not only possible but highly likely in brain structures such as the hippocampus. Three fundamental conditions, fulfilled by the human brain, ensure high cognitive functionality of single cells: a hierarchical feedforward organization of large laminar neuronal strata, a suprathreshold number of synaptic entries to principal neurons in the strata, and a magnitude of synaptic plasticity adequate for each neuronal stratum. We illustrate the approach on a simple example of acquiring "musical memory" and show how the concept of musical notes can emerge.
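The stochastic separability effect the paper builds on can be checked numerically: in high dimension, a randomly chosen point x is, with probability approaching one, separable from every other point y of a large i.i.d. sample by the simple Fisher-type hyperplane test <x, y> < <x, x>. The sketch below is illustrative rather than the paper's derivation; the uniform-in-a-cube distribution, sample size, dimensions, and trial count are arbitrary assumptions.

```python
# Numerical check of stochastic (Fisher) separability in high dimension.
import numpy as np

rng = np.random.default_rng(1)

def fraction_separable(n_points, dim, trials=20):
    """Fraction of trials in which one random point passes the Fisher test
    <x, y> < <x, x> against all other points of a uniform sample in a cube."""
    ok = 0
    for _ in range(trials):
        X = rng.uniform(-1.0, 1.0, size=(n_points, dim))
        x, others = X[0], X[1:]
        if np.all(others @ x < x @ x):
            ok += 1
    return ok / trials

for dim in (2, 10, 100, 1000):
    print(f"dim={dim:5d}: separable fraction = {fraction_separable(5000, dim):.2f}")
```

In low dimension the test almost always fails for a sample this large, while in high dimension it almost always succeeds, which is the effect that makes single-cell (concept-cell) selectivity statistically plausible.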
Affiliation(s)
- Carlos Calvo Tapia
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid, 28040, Spain
- Ivan Tyukin
- Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom
- Valeri A Makarov
- Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid, 28040, Spain
- Lobachevsky University of Nizhny Novgorod, Gagarin Ave. 23, Nizhny Novgorod, 603950, Russia