1. Hoang NL, Taniguchi T, Hagiwara Y, Taniguchi A. Emergent communication of multimodal deep generative models based on Metropolis-Hastings naming game. Front Robot AI 2024; 10:1290604. [PMID: 38356917] [PMCID: PMC10864618] [DOI: 10.3389/frobt.2023.1290604]
Abstract
Deep generative models (DGM) are increasingly employed in emergent communication systems, but their application to multimodal data remains limited. This study proposes a novel model that combines a multimodal DGM with the Metropolis-Hastings (MH) naming game, enabling two agents to attend jointly to a shared subject and develop common vocabularies. The model is shown to handle multimodal data, even when some modalities are missing. Integrating the MH naming game with multimodal variational autoencoders (VAE) allows agents to form perceptual categories and exchange signs within multimodal contexts. Moreover, fine-tuning the weight ratio to favor the modality that the model could learn and categorize more readily improved communication. Our evaluation of three multimodal approaches, mixture-of-experts (MoE), product-of-experts (PoE), and mixture-of-products-of-experts (MoPoE), suggests that this choice affects the latent spaces that serve as the agents' internal representations. Our results from experiments with the MNIST + SVHN and Multimodal165 datasets indicate that combining the Gaussian mixture model (GMM), the PoE multimodal VAE, and the MH naming game substantially improved information sharing, knowledge formation, and data reconstruction.
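As a rough illustration of the expert-combination schemes compared in this abstract (our own sketch, not the authors' code), the following shows how a product of diagonal Gaussian experts fuses two unimodal posteriors, with precisions adding, versus a mixture of experts, which samples from one modality's posterior at a time. All function names are illustrative assumptions.

```python
import numpy as np

def poe(mus, vars_):
    """Product of diagonal Gaussian experts: precisions add,
    means are precision-weighted. Returns (mu, var) of the product."""
    precisions = 1.0 / np.asarray(vars_)
    var = 1.0 / precisions.sum(axis=0)
    mu = var * (np.asarray(mus) * precisions).sum(axis=0)
    return mu, var

def moe_sample(mus, vars_, rng):
    """Mixture of experts: pick one modality's posterior uniformly, then sample."""
    k = rng.integers(len(mus))
    return rng.normal(mus[k], np.sqrt(vars_[k]))

# Two modalities' approximate posteriors over a 1-D latent
mu, var = poe([np.array([0.0]), np.array([2.0])],
              [np.array([1.0]), np.array([1.0])])
# With equal variances the PoE mean is the average and the variance halves
print(mu, var)  # [1.] [0.5]

rng = np.random.default_rng(0)
print(moe_sample([np.array([0.0]), np.array([2.0])],
                 [np.array([1.0]), np.array([1.0])], rng))
```

MoPoE, as its name suggests, mixes PoE fusions over subsets of modalities, interpolating between the two behaviors above.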
Affiliation(s)
- Nguyen Le Hoang
- Graduate School of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Tadahiro Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Yoshinobu Hagiwara
- Research Organization of Science and Technology, Ritsumeikan University, Kusatsu, Shiga, Japan
- Akira Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
2. Taniguchi A, Fukawa A, Yamakawa H. Hippocampal formation-inspired probabilistic generative model. Neural Netw 2022; 151:317-335. [DOI: 10.1016/j.neunet.2022.04.001]
3. Taniguchi T, Yamakawa H, Nagai T, Doya K, Sakagami M, Suzuki M, Nakamura T, Taniguchi A. A whole brain probabilistic generative model: Toward realizing cognitive architectures for developmental robots. Neural Netw 2022; 150:293-312. [PMID: 35339010] [DOI: 10.1016/j.neunet.2022.02.026]
Abstract
Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence (AGI), is the holy grail of the artificial intelligence (AI) field. Furthermore, a computational model that enables an artificial system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes an approach to develop a cognitive architecture by integrating elemental cognitive modules to enable the training of the modules as a whole. This approach is based on two ideas: (1) brain-inspired AI, learning human brain architecture to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive architecture to develop a cognitive system for developmental robots by integrating PGMs. The proposed development framework is called a whole brain PGM (WB-PGM), which differs fundamentally from existing cognitive architectures in that it can learn continuously through a system based on sensory-motor information. In this paper, we describe the rationale for WB-PGM, the current status of PGM-based elemental cognitive modules, their relationship with the human brain, the approach to the integration of the cognitive modules, and future challenges. Our findings can serve as a reference for brain studies. As PGMs describe explicit informational relationships between variables, WB-PGM provides interpretable guidance from computational sciences to brain science. By providing such information, researchers in neuroscience can provide feedback to researchers in AI and robotics on what the current models lack with reference to the brain. Further, it can facilitate collaboration among researchers in neuro-cognitive sciences as well as AI and robotics.
Affiliation(s)
- Hiroshi Yamakawa
- The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan; The Whole Brain Architecture Initiative, 2-19-21 Nishikoiwa, Edogawa-ku, Tokyo, Japan; RIKEN, 6-2-3 Furuedai, Suita, Osaka, Japan
- Takayuki Nagai
- Osaka University, 1-3 Machikane-yama, Toyonaka, Osaka, Japan
- Kenji Doya
- Okinawa Institute of Science and Technology Graduate University, 1919-1 Tancha, Onna-son, Kunigami, Okinawa, Japan
- Masahiro Suzuki
- The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Tomoaki Nakamura
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo, Japan
4.
Affiliation(s)
- Tatsuya Sakai
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Takayuki Nagai
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Artificial Intelligence Exploration Research Center, University of Electro-Communications, Tokyo, Japan
5. Hieida C, Nagai T. Survey and perspective on social emotions in robotics. Adv Robot 2022. [DOI: 10.1080/01691864.2021.2012512]
Affiliation(s)
- Chie Hieida
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- Takayuki Nagai
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Suita, Japan
- The University of Electro-Communications, Chofu, Japan
6. Marge M, Espy-Wilson C, Ward NG, Alwan A, Artzi Y, Bansal M, Blankenship G, Chai J, Daumé H, Dey D, Harper M, Howard T, Kennington C, Kruijff-Korbayová I, Manocha D, Matuszek C, Mead R, Mooney R, Moore RK, Ostendorf M, Pon-Barry H, Rudnicky AI, Scheutz M, Amant RS, Sun T, Tellex S, Traum D, Yu Z. Spoken language interaction with robots: Recommendations for future research. Comput Speech Lang 2022. [DOI: 10.1016/j.csl.2021.101255]
7. Shimoda S, Jamone L, Ognibene D, Nagai T, Sciutti A, Costa-Garcia A, Oseki Y, Taniguchi T. What is the role of the next generation of cognitive robotics? Adv Robot 2021. [DOI: 10.1080/01691864.2021.2011780]
Affiliation(s)
- Shingo Shimoda
- RIKEN Center for Brain Science TOYOTA Collaboration Center, Nagoya, Japan
- Lorenzo Jamone
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Dimitri Ognibene
- Computer Science and Artificial Intelligence, University of Milano-Bicocca, Milano, Italy
- Takayuki Nagai
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Italian Institute of Technology, Genova, Italy
- Yohei Oseki
- Department of Language and Information Sciences, University of Tokyo, Tokyo, Japan
- Tadahiro Taniguchi
- Department of Human and Computer Intelligence, Ritsumeikan University, Shiga, Japan
8. Sagara R, Taguchi R, Taniguchi A, Taniguchi T, Hattori K, Hoguro M, Umezaki T. Unsupervised lexical acquisition of relative spatial concepts using spoken user utterances. Adv Robot 2021. [DOI: 10.1080/01691864.2021.2007168]
Affiliation(s)
- Rikunari Sagara
- Taguchi Laboratory, Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan
- Ryo Taguchi
- Taguchi Laboratory, Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan
- Akira Taniguchi
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Tadahiro Taniguchi
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Taizo Umezaki
- College of Engineering, Chubu University, Aichi, Japan
9. Angleraud A, Mehman Sefat A, Netzev M, Pieters R. Coordinating Shared Tasks in Human-Robot Collaboration by Commands. Front Robot AI 2021; 8:734548. [PMID: 34738018] [PMCID: PMC8560701] [DOI: 10.3389/frobt.2021.734548]
Abstract
Human-robot collaboration is gaining interest in industrial settings, as collaborative robots are considered safe and robot actions can be programmed easily by, for example, physical interaction. Despite this, robot programming mostly focuses on automated robot motions; interactive tasks and coordination between human and robot still require further development. For example, the selection of which tasks or actions a robot should do next might not be known beforehand or might change at the last moment. Within a human-robot collaborative setting, the coordination of complex shared tasks is therefore better suited to a human, with the robot acting upon requested commands. In this work we explore the use of commands to coordinate a shared task between a human and a robot in a shared workspace. Based on a known set of higher-level actions (e.g., pick-and-place, hand-over, kitting) and the commands that trigger them, both a speech-based and a graphical command-based interface are developed to investigate their use. While speech-based interaction might be more intuitive for coordination, background sounds and noise in industrial settings might hinder its capabilities. The graphical command-based interface circumvents this while still demonstrating the capabilities of coordination. The developed architecture follows a knowledge-based approach, where the actions available to the robot are checked at runtime to determine whether they suit the task and the current state of the world. Experimental results on industrially relevant assembly, kitting, and hand-over tasks in a laboratory setting demonstrate that graphical command-based and speech-based coordination with high-level commands is effective for collaboration between a human and a robot.
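The coordination scheme described in this abstract, a known set of high-level actions whose applicability is checked against the world state at runtime, can be sketched roughly as follows. The action names, predicates, and return strings are our illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Action:
    name: str
    precondition: Callable[[dict], bool]  # checked against the current world state

@dataclass
class Coordinator:
    actions: Dict[str, Action] = field(default_factory=dict)

    def register(self, action: Action) -> None:
        self.actions[action.name] = action

    def command(self, name: str, world: dict) -> str:
        """Dispatch a human command: run the action only if it is known
        and its precondition holds in the current world state."""
        action = self.actions.get(name)
        if action is None:
            return f"unknown command: {name}"
        if not action.precondition(world):
            return f"{name}: precondition not met"
        return f"executing {name}"

coord = Coordinator()
# A hand-over only makes sense once the robot has grasped the object
coord.register(Action("hand-over", lambda w: w.get("object_grasped", False)))
print(coord.command("hand-over", {"object_grasped": False}))  # hand-over: precondition not met
print(coord.command("hand-over", {"object_grasped": True}))   # executing hand-over
```

Either a speech recognizer or a GUI button can feed the same `command` entry point, which mirrors the paper's point that the two interfaces share one coordination backend.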
Affiliation(s)
- Alexandre Angleraud
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Amir Mehman Sefat
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Metodi Netzev
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
- Roel Pieters
- Cognitive Robotics Group, Faculty of Engineering and Natural Sciences, Tampere University, Tampere, Finland
10. Friston K, Moran RJ, Nagai Y, Taniguchi T, Gomi H, Tenenbaum J. World model learning and inference. Neural Netw 2021; 144:573-590. [PMID: 34634605] [DOI: 10.1016/j.neunet.2021.09.011]
Abstract
Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
Affiliation(s)
- Karl Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London (UCL), WC1N 3BG, UK
- Rosalyn J Moran
- Department of Neuroimaging, Institute of Psychiatry, Psychology & Neuroscience, King's College London, SE5 8AF, UK
- Yukie Nagai
- International Research Center for Neurointelligence (IRCN), The University of Tokyo, Tokyo, Japan
- Tadahiro Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Hiroaki Gomi
- NTT Communication Science Labs., Nippon Telegraph and Telephone, Kanagawa, Japan
- Josh Tenenbaum
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA; The Center for Brains, Minds and Machines, MIT, Cambridge, MA, USA
11. Kuniyasu R, Nakamura T, Taniguchi T, Nagai T. Robot Concept Acquisition Based on Interaction Between Probabilistic and Deep Generative Models. Front Comput Sci 2021. [DOI: 10.3389/fcomp.2021.618069]
Abstract
We propose a method for multimodal concept formation. In this method, unsupervised multimodal clustering and cross-modal inference, as well as unsupervised representation learning, are performed by integrating multimodal latent Dirichlet allocation (MLDA)-based concept formation with variational autoencoder (VAE)-based feature extraction. Multimodal clustering, representation learning, and cross-modal inference are critical for robots to form multimodal concepts from sensory data. Various models have been proposed for concept formation. However, in previous studies, features were extracted using manually designed or pre-trained feature extractors, and representation learning was not performed simultaneously. Moreover, the generative probabilities of the features extracted from the sensory data could be predicted, but the sensory data themselves could not be predicted in cross-modal inference. Therefore, a method that can perform clustering, feature learning, and cross-modal inference among multimodal sensory data is required for concept formation. To realize such a method, we extend the VAE to the multinomial VAE (MNVAE), whose latent variables follow a multinomial distribution, and construct a model that integrates the MNVAE and MLDA. In the experiments, the multimodal information of images and words acquired by a robot was classified using the integrated model. The results demonstrate that the integrated model classifies the multimodal information as accurately as the previous model even though the feature extractor is learned in an unsupervised manner, that suitable image features for clustering can be learned, and that cross-modal inference from words to images is possible.
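A common way to draw the kind of discrete (multinomial) latent variable the MNVAE relies on, while keeping the sample differentiable for training, is the Gumbel-softmax relaxation. The sketch below is our own illustration of that general technique, not the paper's code; the temperature value and class probabilities are arbitrary assumptions.

```python
import numpy as np

def gumbel_softmax(logits, temperature, rng):
    """Differentiable relaxation of a one-hot categorical sample.
    As temperature -> 0 the output approaches a one-hot vector."""
    # Gumbel(0, 1) noise makes argmax(logits + g) an exact categorical sample
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / temperature
    e = np.exp(y - y.max())  # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.7, 0.2, 0.1]))  # class probabilities of the latent
sample = gumbel_softmax(logits, temperature=0.5, rng=rng)
print(sample)  # nonnegative components summing to 1
```

A low temperature yields near-one-hot samples suitable as a categorical latent code; a higher temperature gives smoother, lower-variance gradients early in training.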
12. Sumioka H, Shiomi M, Honda M, Nakazawa A. Technical Challenges for Smooth Interaction With Seniors With Dementia: Lessons From Humanitude™. Front Robot AI 2021; 8:650906. [PMID: 34150858] [PMCID: PMC8207295] [DOI: 10.3389/frobt.2021.650906]
Abstract
Due to cognitive and socio-emotional decline and mental diseases, senior citizens, especially people with dementia (PwD), struggle to interact smoothly with their caregivers. Various care techniques have therefore been proposed to develop good relationships with seniors. Among them, Humanitude is one promising technique that provides caregivers with useful interaction skills to improve their relationships with PwD, from four perspectives: face-to-face interaction, verbal communication, touch interaction, and helping care receivers stand up (physical interaction). Regardless of advances in elderly care techniques, current social robots interact with seniors in the same manner as they do with younger adults and thus lack several important functions. For example, Humanitude emphasizes the importance of interaction at a relatively intimate distance to facilitate communication with seniors. Unfortunately, few studies have developed an interaction model for clinical care communication. In this paper, we discuss the current challenges in developing a social robot that can smoothly interact with PwD, and we review the interaction skills used in Humanitude as well as the existing technologies.
Affiliation(s)
- Hidenobu Sumioka
- Advanced Telecommunications Research Institute International, Kyoto, Japan
- Masahiro Shiomi
- Advanced Telecommunications Research Institute International, Kyoto, Japan
- Miwako Honda
- National Hospital Organization Tokyo Medical Center, Tokyo, Japan
13. Taniguchi T, El Hafi L, Hagiwara Y, Taniguchi A, Shimada N, Nishiura T. Semiotically adaptive cognition: toward the realization of remotely-operated service robots for the new normal symbiotic society. Adv Robot 2021. [DOI: 10.1080/01691864.2021.1928552]
Affiliation(s)
- Tadahiro Taniguchi
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Lotfi El Hafi
- Ritsumeikan Global Innovation Research Organization, Ritsumeikan University, Kusatsu, Shiga, Japan
- Yoshinobu Hagiwara
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Akira Taniguchi
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Nobutaka Shimada
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
- Takanobu Nishiura
- Department of Information Science and Engineering, Ritsumeikan University, Kusatsu, Shiga, Japan
14. Taniguchi T, Horii T, Hinaut X, Spranger M, Mochihashi D, Nagai T. Editorial: Language and Robotics. Front Robot AI 2021; 8:674832. [PMID: 33912598] [PMCID: PMC8072269] [DOI: 10.3389/frobt.2021.674832]
Affiliation(s)
- Tadahiro Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
- Takato Horii
- Graduate School of Engineering Science, Osaka University, Suita, Japan
- Xavier Hinaut
- Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt, Île-de-France, France
- Daichi Mochihashi
- Department of Statistical Inference and Mathematics, The Institute of Statistical Mathematics, Tokyo, Japan
- Takayuki Nagai
- Graduate School of Engineering Science, Osaka University, Suita, Japan
15. Taniguchi A, Hagiwara Y, Taniguchi T, Inamura T. Spatial concept-based navigation with human speech instructions via probabilistic inference on Bayesian generative model. Adv Robot 2020. [DOI: 10.1080/01691864.2020.1817777]
Affiliation(s)
- Akira Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Yoshinobu Hagiwara
- College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Tadahiro Taniguchi
- College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Tetsunari Inamura
- National Institute of Informatics, The Graduate University for Advanced Studies, SOKENDAI, Tokyo, Japan
16. Hagiwara Y, Kobayashi H, Taniguchi A, Taniguchi T. Symbol Emergence as an Interpersonal Multimodal Categorization. Front Robot AI 2019; 6:134. [PMID: 33501149] [PMCID: PMC7805687] [DOI: 10.3389/frobt.2019.00134]
Abstract
This study focuses on category formation for individual agents and the dynamics of symbol emergence in a multi-agent system through semiotic communication. Here, semiotic communication refers to exchanging signs composed of a signifier (i.e., words) and a signified (i.e., categories). We define the generation and interpretation of signs associated with categories formed through the agent's own sensory experience, or by exchanging signs with other agents, as basic functions of semiotic communication. From the viewpoint of language evolution and symbol emergence, the organization of a symbol system in a multi-agent system (i.e., an agent society) is considered a bottom-up and dynamic process in which individual agents share the meaning of signs and categorize sensory experience. A constructive computational model can explain the mutual dependency of the two processes and has mathematical support that guarantees a symbol system's emergence and sharing within the multi-agent system. In this paper, we describe a new computational model that represents symbol emergence in a two-agent system based on a probabilistic generative model for multimodal categorization. It models semiotic communication via a probabilistic rejection based on the receiver's own belief. We found that the dynamics by which cognitively independent agents create a symbol system through semiotic communication can be regarded as the inference process of a hidden variable in an interpersonal multimodal categorizer; that is, the complete system can be regarded as a single agent performing multimodal categorization using the sensors of all agents, if we define the rejection probability based on the Metropolis-Hastings algorithm. The validity of the proposed model and algorithm for symbol emergence, i.e., forming and sharing signs and categories, is also verified in an experiment with two agents observing daily objects in a real-world environment. In the experiment, we compared three communication algorithms: no communication, no rejection, and the proposed algorithm. The results demonstrate that our model reproduces the phenomena of symbol emergence without requiring a teacher who knows a pre-existing symbol system; instead, the multi-agent system can form and use a symbol system without pre-existing categories.
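The Metropolis-Hastings-based rejection this abstract describes can be sketched as follows: the listener accepts the speaker's proposed sign with a probability given by the ratio of its own posterior beliefs over signs. This is a simplified illustration under assumed discrete sign categories, not the authors' code.

```python
import numpy as np

def mh_accept(listener_post, proposed_sign, current_sign, rng):
    """Metropolis-Hastings acceptance for a naming game: accept the
    speaker's sign with probability min(1, P_L(proposed) / P_L(current)),
    where P_L is the listener's own posterior over signs for the object."""
    ratio = listener_post[proposed_sign] / max(listener_post[current_sign], 1e-12)
    return rng.uniform() < min(1.0, ratio)

rng = np.random.default_rng(0)
# Listener's posterior over three candidate signs for the observed object
listener_post = np.array([0.1, 0.6, 0.3])
# A proposal the listener itself favors is always accepted (ratio >= 1)
print(mh_accept(listener_post, proposed_sign=1, current_sign=0, rng=rng))  # True
```

Because the acceptance depends only on the receiver's belief ratio, repeated exchanges amount to Metropolis-Hastings sampling of the shared sign, which is what licenses the "single interpersonal categorizer" interpretation in the abstract.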
Affiliation(s)
- Yoshinobu Hagiwara
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Hiroyoshi Kobayashi
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Akira Taniguchi
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
- Tadahiro Taniguchi
- Emergent Systems Laboratory, College of Information Science and Engineering, Ritsumeikan University, Shiga, Japan
17. Taniguchi T, Ugur E, Ogata T, Nagai T, Demiris Y. Editorial: Machine Learning Methods for High-Level Cognitive Capabilities in Robotics. Front Neurorobot 2019; 13:83. [PMID: 31695604] [PMCID: PMC6817914] [DOI: 10.3389/fnbot.2019.00083]
Affiliation(s)
- Tadahiro Taniguchi
- Department of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
- Emre Ugur
- Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey
- Tetsuya Ogata
- Department of Intermedia Art and Science, School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan
- Takayuki Nagai
- Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Yiannis Demiris
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom