1. Elmoznino E, Bonner MF. High-performing neural network models of visual cortex benefit from high latent dimensionality. PLoS Comput Biol 2024; 20:e1011792. PMID: 38198504; PMCID: PMC10805290; DOI: 10.1371/journal.pcbi.1011792.
Abstract
Geometric descriptions of deep neural networks (DNNs) have the potential to uncover core representational principles of computational models in neuroscience. Here we examined the geometry of DNN models of visual cortex by quantifying the latent dimensionality of their natural image representations. A popular view holds that optimal DNNs compress their representations onto low-dimensional subspaces to achieve invariance and robustness, which suggests that better models of visual cortex should have lower dimensional geometries. Surprisingly, we found a strong trend in the opposite direction: neural networks with high-dimensional image subspaces tended to have better generalization performance when predicting cortical responses to held-out stimuli in both monkey electrophysiology and human fMRI data. Moreover, we found that high dimensionality was associated with better performance when learning new categories of stimuli, suggesting that higher dimensional representations are better suited to generalize beyond their training domains. These findings suggest a general principle whereby high-dimensional geometry confers computational benefits to DNN models of visual cortex.
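A standard way to quantify the latent dimensionality of such representations is the participation ratio of the eigenspectrum of the activation covariance. A minimal sketch, assuming this metric (the paper's exact estimator may differ):

```python
import numpy as np

def participation_ratio(activations):
    """Effective (latent) dimensionality of a response matrix.

    activations: (n_stimuli, n_units) array of unit responses.
    Returns PR = (sum_i l_i)**2 / sum_i l_i**2, where l_i are the
    eigenvalues of the covariance matrix: 1 when all variance lies
    on one axis, n_units when the spectrum is flat (isotropic).
    """
    X = activations - activations.mean(axis=0)   # center each unit
    eig = np.linalg.eigvalsh(np.cov(X.T))        # covariance eigenvalues
    eig = np.clip(eig, 0.0, None)                # clip numerical negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

# Toy check: a 10-D latent signal randomly projected into 512 units
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 10))
responses = latent @ rng.normal(size=(10, 512))
print(participation_ratio(responses))  # ~9-10, close to the latent dim
```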
Affiliation(s)
- Eric Elmoznino: Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
- Michael F. Bonner: Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
2. Makarov VA, Muñoz R, Herreras O, Makarova J. Correlation dimension of high-dimensional and high-definition experimental time series. Chaos 2023; 33:123114. PMID: 38079645; DOI: 10.1063/5.0168400.
Abstract
The correlation dimension (CD) is a nonlinear measure of the complexity of invariant sets. First introduced for describing low-dimensional chaotic attractors, it was later extended to the analysis of experimental electroencephalographic (EEG), magnetoencephalographic (MEG), and local field potential (LFP) recordings. However, its direct application to high-dimensional (dozens of signals) and high-definition (kHz sampling rate) 2HD data has yielded controversial results. We show that the need for an exponentially long data sample is the main difficulty in dealing with 2HD data. We then provide a novel method for estimating the CD that enables an orders-of-magnitude reduction in the required sample size. The approach decomposes raw data into statistically independent components and estimates the CD for each of them separately. In addition, the method offers insight into the interplay between the complexities of the contributing components, which can be related to different anatomical pathways and brain regions. The latter opens new approaches to a deeper interpretation of experimental data. Finally, we illustrate the method with synthetic data and LFPs recorded in the hippocampus of a rat.
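For orientation, the classical Grassberger-Procaccia estimator that this work extends fits the slope of the log correlation sum against log radius on delay-embedded data. A minimal sketch (the embedding parameters below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(x, dim=5, tau=10):
    """Grassberger-Procaccia CD estimate for a scalar time series.

    Delay-embeds x into `dim` dimensions with lag `tau`, computes the
    correlation sum C(r) (fraction of point pairs closer than r), and
    returns the slope of log C(r) versus log r over small r.
    """
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)
    d = pdist(emb)                                   # all pairwise distances
    r = np.logspace(np.log10(np.percentile(d, 1)),
                    np.log10(np.percentile(d, 50)), 20)
    C = np.array([(d < ri).mean() for ri in r])      # correlation sum
    slope, _ = np.polyfit(np.log(r), np.log(C), 1)
    return slope

# Sanity check: a pure sine traces a closed 1-D curve, so CD should be ~1
t = np.linspace(0, 60, 3000)
print(correlation_dimension(np.sin(t)))
```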
Affiliation(s)
- Valeri A Makarov: Department of Applied Mathematics and Mathematical Analysis, Universidad Complutense de Madrid, Plaza de las Ciencias 3, Madrid 28040, Spain
- Ricardo Muñoz: Department of Applied Mathematics and Mathematical Analysis, Universidad Complutense de Madrid, Plaza de las Ciencias 3, Madrid 28040, Spain; Department of Translational Neuroscience, Cajal Institute, CSIC, Av. Doctor Arce 37, Madrid 28002, Spain
- Oscar Herreras: Department of Translational Neuroscience, Cajal Institute, CSIC, Av. Doctor Arce 37, Madrid 28002, Spain
- Julia Makarova: Department of Translational Neuroscience, Cajal Institute, CSIC, Av. Doctor Arce 37, Madrid 28002, Spain
3. Dunin-Barkowski W, Gorban A. Editorial: Toward and beyond human-level AI, volume II. Front Neurorobot 2023; 16:1120167. PMID: 36687208; PMCID: PMC9853958; DOI: 10.3389/fnbot.2022.1120167.
Affiliation(s)
- Witali Dunin-Barkowski: Department of Neuroinformatics, Center for Optical Neural Technologies, Scientific Research Institute for System Analysis, Russian Academy of Sciences, Moscow, Russia
- Alexander Gorban: Department of Mathematics, University of Leicester, Leicester, United Kingdom; Scientific and Educational Mathematical Center "Mathematics of Future Technology," Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
4. Lysov M, Maximova I, Vasiliev E, Getmanskaya A, Turlapov V. Entropy as a High-Level Feature for XAI-Based Early Plant Stress Detection. Entropy (Basel) 2022; 24:1597. PMID: 36359687; PMCID: PMC9689005; DOI: 10.3390/e24111597.
Abstract
This article is devoted to searching for high-level explainable features that can remain explainable for a wide class of objects or phenomena and become an integral part of explainable AI (XAI). The present study involved a 25-day experiment on early diagnosis of wheat stress, using drought stress as an example. The state of the plants was periodically monitored via thermal infrared (TIR) and hyperspectral image (HSI) cameras. A single-layer perceptron (SLP)-based classifier was used as the main instrument in the XAI study. To make the SLP input explainable, the direct HSI was replaced by images of six popular vegetation indices and three HSI channels (R630, G550, and B480; collectively referred to as indices), along with the TIR image. Furthermore, in the explainability analysis, each of the 10 images was replaced by its 6 statistical features: min, max, mean, std, max-min, and entropy. For explainability of the SLP output, seven output neurons corresponding to the key states of the plants were chosen. The inner layer of the SLP comprised 15 neurons: 10 corresponding to the indices and 5 reserve neurons. The classification capabilities of all 60 features and 10 indices of the SLP classifier were studied. The study found that entropy is the earliest high-level stress feature for all indices, and that entropy and an entropy-like feature (max-min), paired with one of the other statistical features, can provide 100% (or near-100%) accuracy for most indices, serving as an integral part of XAI.
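The six per-image statistics are straightforward to reproduce; entropy here is read as the Shannon entropy of the image's intensity histogram. A minimal sketch (the 256-bin histogram is an assumption; the paper's binning may differ):

```python
import numpy as np

def image_features(img, bins=256):
    """The six per-image statistics used as explainable SLP inputs:
    min, max, mean, std, max-min, and histogram (Shannon) entropy."""
    v = np.asarray(img, dtype=float).ravel()
    hist, _ = np.histogram(v, bins=bins)
    p = hist[hist > 0] / v.size                   # nonzero bin probabilities
    return {"min": v.min(), "max": v.max(), "mean": v.mean(),
            "std": v.std(), "max-min": v.max() - v.min(),
            "entropy": -(p * np.log2(p)).sum()}   # Shannon entropy in bits

# Example on a synthetic vegetation-index image
rng = np.random.default_rng(1)
print(image_features(rng.normal(0.5, 0.1, size=(64, 64))))
```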
5. Network-based dimensionality reduction of high-dimensional, low-sample-size datasets. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109180.
6. Makarov VA, Lobov SA, Shchanikov S, Mikhaylov A, Kazantsev VB. Toward Reflective Spiking Neural Networks Exploiting Memristive Devices. Front Comput Neurosci 2022; 16:859874. PMID: 35782090; PMCID: PMC9243340; DOI: 10.3389/fncom.2022.859874.
Abstract
The design of modern convolutional artificial neural networks (ANNs) composed of formal neurons copies the architecture of the visual cortex. Signals proceed through a hierarchy in which receptive fields become increasingly complex and coding becomes sparser. Nowadays, ANNs outperform humans in controlled pattern-recognition tasks yet remain far behind in cognition. In part, this is due to limited knowledge about the higher echelons of the brain hierarchy, where neurons actively generate predictions about what will happen next, i.e., information processing jumps from reflex to reflection. In this study, we forecast that spiking neural networks (SNNs) can achieve the next qualitative leap. Reflective SNNs may take advantage of their intrinsic dynamics and mimic complex, non-reflex brain actions. They also enable a significant reduction in energy consumption. However, training SNNs is a challenging problem that strongly limits their deployment. We then briefly overview new insights provided by the concept of a high-dimensional brain, which has been put forward to explain the potential power of single neurons in higher brain stations and deep SNN layers. Finally, we discuss the prospect of implementing neural networks in memristive systems. Such systems can densely pack on a chip 2D or 3D arrays of plastic synaptic contacts that directly process analog information. Thus, memristive devices are a good candidate for implementing in-memory and in-sensor computing. Memristive SNNs can then diverge from the development path of ANNs and build their own niche of cognitive, or reflective, computations.
Affiliation(s)
- Valeri A. Makarov: Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain; Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Sergey A. Lobov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia; Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Sergey Shchanikov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Department of Information Technologies, Vladimir State University, Vladimir, Russia
- Alexey Mikhaylov: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Viktor B. Kazantsev: Department of Neurotechnologies, Research Institute of Physics and Technology, Laboratory of Stochastic Multistable Systems, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia; Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
7. Altan E, Solla SA, Miller LE, Perreault EJ. Estimating the dimensionality of the manifold underlying multi-electrode neural recordings. PLoS Comput Biol 2021; 17:e1008591. PMID: 34843461; PMCID: PMC8659648; DOI: 10.1371/journal.pcbi.1008591.
Abstract
It is generally accepted that the number of neurons in a given brain area far exceeds the number of neurons needed to carry any specific function controlled by that area. For example, motor areas of the human brain contain tens of millions of neurons that control the activation of tens or at most hundreds of muscles. This massive redundancy implies the covariation of many neurons, which constrains the population activity to a low-dimensional manifold within the space of all possible patterns of neural activity. To gain a conceptual understanding of the complexity of the neural activity within a manifold, it is useful to estimate its dimensionality, which quantifies the number of degrees of freedom required to describe the observed population activity without significant information loss. While there are many algorithms for dimensionality estimation, we do not know which are well suited for analyzing neural activity. The objective of this study was to evaluate the efficacy of several representative algorithms for estimating the dimensionality of linearly and nonlinearly embedded data. We generated synthetic neural recordings with known intrinsic dimensionality and used them to test the algorithms' accuracy and robustness. We emulated some of the important challenges associated with experimental data by adding noise, altering the nature of the embedding of the low-dimensional manifold within the high-dimensional recordings, varying the dimensionality of the manifold, and limiting the amount of available data. We demonstrated that linear algorithms overestimate the dimensionality of nonlinear, noise-free data. In cases of high noise, most algorithms overestimated the dimensionality. We thus developed a denoising algorithm based on deep learning, the "Joint Autoencoder", which significantly improved subsequent dimensionality estimation. Critically, we found that all algorithms failed when the intrinsic dimensionality was high (above 20) or when the amount of data used for estimation was low. Based on the challenges we observed, we formulated a pipeline for estimating the dimensionality of experimental neural data.
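One of the findings reported here, that linear algorithms overestimate the dimensionality of nonlinearly embedded data, is easy to see with a PCA variance-threshold estimator on a toy manifold. A small sketch (the 95% cutoff is a common convention, not the paper's criterion):

```python
import numpy as np

def pca_dim(X, var_threshold=0.95):
    """Linear dimensionality estimate: number of principal components
    needed to capture var_threshold of the total variance."""
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # descending eigenvalues
    frac = np.cumsum(eig) / eig.sum()
    return int(np.searchsorted(frac, var_threshold)) + 1

# A helix: a 1-D manifold embedded nonlinearly in a 3-D recording space
t = np.linspace(0, 4 * np.pi, 2000)
helix = np.stack([np.cos(t), np.sin(t), 0.2 * t], axis=1)
print(pca_dim(helix))  # 3: the linear estimate overshoots the true dim of 1
```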
Affiliation(s)
- Ege Altan: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America
- Sara A. Solla: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Physics and Astronomy, Northwestern University, Evanston, Illinois, United States of America
- Lee E. Miller: Department of Neuroscience, Northwestern University, Chicago, Illinois, United States of America; Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America; Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
- Eric J. Perreault: Department of Biomedical Engineering, Northwestern University, Evanston, Illinois, United States of America; Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, Illinois, United States of America; Shirley Ryan AbilityLab, Chicago, Illinois, United States of America
8. Gorban AN, Grechuk B, Mirkes EM, Stasenko SV, Tyukin IY. High-Dimensional Separability for One- and Few-Shot Learning. Entropy (Basel) 2021; 23:1090. PMID: 34441230; PMCID: PMC8392747; DOI: 10.3390/e23081090.
Abstract
This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. Such corrections should be quick and non-iterative. To solve this problem without modifying a legacy AI system, we propose special 'external' devices, correctors. An elementary corrector consists of two parts: a classifier that separates situations with a high risk of error from situations in which the legacy AI system works well, and a new decision to be recommended in situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and its outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correcting a small number of errors can be very simple. According to the blessing-of-dimensionality effects, even simple and robust Fisher discriminants can be used for one-shot learning of AI correctors. Stochastic separation theorems provide the mathematical basis for this one-shot learning. However, as the number of correctors needed grows, the cluster structure of the data becomes important and a new family of stochastic separation theorems is required. We reject the classical hypothesis of the regularity of the data distribution and assume that the data can have a rich fine-grained structure with many clusters and corresponding peaks in the probability density. New stochastic separation theorems for data with fine-grained structure are formulated and proved. On the basis of these theorems, multi-correctors for granular data are proposed. The advantages of the multi-corrector technology are demonstrated by examples of correcting errors and learning new classes of objects with a deep convolutional neural network on the CIFAR-10 dataset. The key problems of non-classical high-dimensional data analysis are reviewed together with the basic preprocessing steps, including the correlation transformation, supervised Principal Component Analysis (PCA), semi-supervised PCA, transfer component analysis, and a new domain-adaptation PCA.
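A minimal sketch of an elementary corrector built from a one-shot Fisher discriminant, the mechanism the abstract describes (the data, dimensions, and midpoint threshold below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def fisher_corrector(errors, bulk, reg=1e-3):
    """One-shot Fisher discriminant separating error sample(s) from bulk.
    errors: (k, d) array (k may be 1); bulk: (n, d) array.
    Returns (w, b): flag input x as risky when w @ x > b."""
    mu_e, mu_b = errors.mean(axis=0), bulk.mean(axis=0)
    cov = np.cov(bulk.T) + reg * np.eye(bulk.shape[1])  # within-class scatter
    w = np.linalg.solve(cov, mu_e - mu_b)               # Fisher direction
    b = w @ (mu_e + mu_b) / 2                           # midpoint threshold
    return w, b

# In d=200, a single 'error' point is separable from 10,000 random points
# with high probability - the stochastic separation effect in action.
rng = np.random.default_rng(2)
bulk = rng.normal(size=(10_000, 200))
error = rng.normal(size=(1, 200))
w, b = fisher_corrector(error, bulk)
print((bulk @ w > b).mean())   # fraction of bulk falsely flagged: ~0
print(error[0] @ w > b)        # the error point itself is flagged: True
```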
Affiliation(s)
- Alexander N. Gorban: Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK; Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603105 Nizhni Novgorod, Russia
- Bogdan Grechuk: Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK
- Evgeny M. Mirkes: Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK; Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603105 Nizhni Novgorod, Russia
- Sergey V. Stasenko: Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603105 Nizhni Novgorod, Russia
- Ivan Y. Tyukin: Department of Mathematics, University of Leicester, Leicester LE1 7RH, UK; Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky University, 603105 Nizhni Novgorod, Russia; Department of Geoscience and Petroleum, Norwegian University of Science and Technology, 7491 Trondheim, Norway
9. Blessing of dimensionality at the edge and geometry of few-shot learning. Inf Sci (N Y) 2021. DOI: 10.1016/j.ins.2021.01.022.
10. Sidorov S, Zolotykh N. Linear and Fisher Separability of Random Points in the d-Dimensional Spherical Layer and Inside the d-Dimensional Cube. Entropy (Basel) 2020; 22:E1281. PMID: 33287049; PMCID: PMC7712262; DOI: 10.3390/e22111281.
Abstract
Stochastic separation theorems play important roles in high-dimensional data analysis and machine learning. It turns out that in a high-dimensional space, any point of a random set of points can, with high probability, be separated from the other points by a hyperplane, even if the number of points is exponential in the dimension. This and similar facts can be used for constructing correctors for artificial intelligence systems, for determining the intrinsic dimensionality of data, and for explaining various natural intelligence phenomena. In this paper, we refine the estimates of the number of points and of the probability in stochastic separation theorems, thereby strengthening some results obtained earlier. We propose bounds for linear and Fisher separability when the points are drawn randomly, independently, and uniformly from a d-dimensional spherical layer or from the d-dimensional cube. These results allow us to better outline the limits of applicability of the stochastic separation theorems.
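A quick Monte Carlo illustration of the effect these theorems quantify, using one common form of the Fisher-separability criterion, <x, y> < alpha * <x, x> on centered data (the value alpha = 0.9, the sample sizes, and the uniform cube are illustrative assumptions, not the paper's bounds):

```python
import numpy as np

def fisher_separable_fraction(n=1000, d=50, alpha=0.9, seed=3):
    """Fraction of n points, uniform in the centered unit cube in R^d,
    that are Fisher-separable from every other point in the sample."""
    X = np.random.default_rng(seed).uniform(-0.5, 0.5, size=(n, d))
    G = X @ X.T                                   # Gram matrix <x_i, x_j>
    norms = np.diag(G)                            # squared norms <x_i, x_i>
    bad = G >= alpha * norms[:, None]             # row i: violations of x_i
    np.fill_diagonal(bad, False)                  # ignore self-pairs
    return 1.0 - bad.any(axis=1).mean()

for d in (5, 20, 50, 100):
    print(d, fisher_separable_fraction(d=d))
# the separable fraction climbs rapidly toward 1 as the dimension d grows
```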
Affiliation(s)
- Nikolai Zolotykh: Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, 603950 Nizhni Novgorod, Russia
11. Mirkes EM, Allohibi J, Gorban A. Fractional Norms and Quasinorms Do Not Help to Overcome the Curse of Dimensionality. Entropy (Basel) 2020; 22:E1105. PMID: 33286874; PMCID: PMC7597215; DOI: 10.3390/e22101105.
Abstract
The curse of dimensionality causes well-known and widely discussed problems for machine learning methods. There is a hypothesis that using the Manhattan distance, and even fractional lp quasinorms (for p less than 1), can help to overcome the curse of dimensionality in classification problems. In this study, we systematically test this hypothesis. We illustrate that fractional quasinorms have a greater relative contrast and coefficient of variation than the Euclidean norm l2, but show that this difference decays with increasing space dimension. The concentration of distances shows qualitatively the same behaviour for all tested norms and quasinorms, and a greater relative contrast does not imply better classification quality. For different databases, the best (and worst) performance was achieved under different norms (quasinorms). A systematic comparison shows that the difference in the performance of kNN classifiers for lp at p = 0.5, 1, and 2 is statistically insignificant. Analysis of the curse and blessing of dimensionality requires a careful definition of data dimensionality, which rarely coincides with the number of attributes. We systematically examined several intrinsic dimensions of the data.
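The decay of relative contrast with dimension is easy to reproduce empirically for the lp family, including fractional p. A small sketch (the uniform distribution and sample size are illustrative choices, not the paper's benchmark setup):

```python
import numpy as np

def relative_contrast(d, p, n=100, seed=4):
    """RC = (D_max - D_min) / D_min over pairwise l_p 'distances'
    of n points drawn uniformly from the unit cube [0, 1]^d."""
    X = np.random.default_rng(seed).uniform(size=(n, d))
    diff = np.abs(X[:, None, :] - X[None, :, :])      # (n, n, d) differences
    D = (diff ** p).sum(axis=-1) ** (1.0 / p)         # l_p; works for p < 1
    D = D[np.triu_indices(n, k=1)]                    # unique pairs only
    return (D.max() - D.min()) / D.min()

for d in (2, 10, 100, 1000):
    print(d, [round(relative_contrast(d, p), 2) for p in (0.5, 1, 2)])
# RC is larger for smaller p at any fixed d, yet every column decays as
# d grows: fractional quasinorms delay, but do not escape, the curse
```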
Affiliation(s)
- Evgeny M. Mirkes: School of Mathematics and Actuarial Science, University of Leicester, Leicester LE1 7HR, UK; Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky State University, 603105 Nizhny Novgorod, Russia
- Jeza Allohibi: School of Mathematics and Actuarial Science, University of Leicester, Leicester LE1 7HR, UK; Department of Mathematics, Taibah University, Janadah Bin Umayyah Road, Tayba, Medina 42353, Saudi Arabia
- Alexander Gorban: School of Mathematics and Actuarial Science, University of Leicester, Leicester LE1 7HR, UK; Laboratory of Advanced Methods for High-Dimensional Data Analysis, Lobachevsky State University, 603105 Nizhny Novgorod, Russia
12. Calvo Tapia C, Tyukin I, Makarov VA. Universal principles justify the existence of concept cells. Sci Rep 2020; 10:7889. PMID: 32398873; PMCID: PMC7217959; DOI: 10.1038/s41598-020-64466-7.
Abstract
A widespread consensus holds that the emergence of abstract concepts in the human brain, such as a "table", requires the complex, perfectly orchestrated interaction of myriads of neurons. However, this is not what converging experimental evidence suggests. Single neurons, the so-called concept cells (CCs), may be responsible for complex tasks performed by humans. This finding, with deep implications for neuroscience and the theory of neural networks, has so far lacked solid theoretical grounds. Our recent advances in the stochastic separability of high-dimensional data provide the basis to validate the existence of CCs. Here, starting from a few first principles, we lay out biophysical foundations showing that CCs are not only possible but highly likely in brain structures such as the hippocampus. Three fundamental conditions, fulfilled by the human brain, ensure high cognitive functionality of single cells: a hierarchical feedforward organization of large laminar neuronal strata, a suprathreshold number of synaptic entries to principal neurons in the strata, and a magnitude of synaptic plasticity adequate for each neuronal stratum. We illustrate the approach with a simple example of acquiring "musical memory" and show how the concept of musical notes can emerge.
Affiliation(s)
- Carlos Calvo Tapia: Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid 28040, Spain
- Ivan Tyukin: Department of Mathematics, University of Leicester, University Road, Leicester LE1 7RH, United Kingdom
- Valeri A Makarov: Instituto de Matemática Interdisciplinar, Faculty of Mathematics, Universidad Complutense de Madrid, Plaza de Ciencias 3, Madrid 28040, Spain; Lobachevsky University of Nizhny Novgorod, Gagarin Ave. 23, Nizhny Novgorod 603950, Russia
13. Calvo Tapia C, Villacorta-Atienza JA, Díez-Hermano S, Khoruzhko M, Lobov S, Potapov I, Sánchez-Jiménez A, Makarov VA. Semantic Knowledge Representation for Strategic Interactions in Dynamic Situations. Front Neurorobot 2020; 14:4. PMID: 32116635; PMCID: PMC7031254; DOI: 10.3389/fnbot.2020.00004.
Abstract
Evolved living beings can anticipate the consequences of their actions in complex multilevel dynamic situations. This ability relies on abstracting the meaning of an action. The underlying brain mechanisms of such semantic processing of information are poorly understood. Here we show how our novel concept, known as time compaction, provides a natural way of representing semantic knowledge of actions in time-changing situations. As a testbed, we model a fencing scenario with a subject deciding between attack and defense strategies. The semantic content of each action in terms of lethality, versatility, and imminence is then structured as a spatial (static) map representing a particular fencing (dynamic) situation. The model allows deploying a variety of cognitive strategies in a fast and reliable way. We validate the approach in virtual reality and by using a real humanoid robot.
Affiliation(s)
- Carlos Calvo Tapia: Facultad de CC. Matemáticas, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Sergio Díez-Hermano: Biomathematics Unit, Faculty of Biology, Complutense University of Madrid, Madrid, Spain
- Sergey Lobov: N. I. Lobachevsky State University, Nizhny Novgorod, Russia
- Ivan Potapov: N. I. Lobachevsky State University, Nizhny Novgorod, Russia
- Abel Sánchez-Jiménez: Biomathematics Unit, Faculty of Biology, Complutense University of Madrid, Madrid, Spain
- Valeri A. Makarov: Facultad de CC. Matemáticas, Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Madrid, Spain; N. I. Lobachevsky State University, Nizhny Novgorod, Russia