1. Krichmar JL, Hwu TJ. Design Principles for Neurorobotics. Front Neurorobot 2022; 16:882518. PMID: 35692490; PMCID: PMC9174684; DOI: 10.3389/fnbot.2022.882518.
Abstract
In their book “How the Body Shapes the Way We Think: A New View of Intelligence,” Pfeifer and Bongard put forth an embodied approach to cognition. Because of this position, many of their robot examples demonstrated “intelligent” behavior despite limited neural processing. It is our belief that neurorobots should attempt to follow many of these principles. In this article, we discuss a number of principles to consider when designing neurorobots and experiments using robots to test brain theories. These principles are strongly inspired by Pfeifer and Bongard, but build on their design principles by grounding them in neuroscience and by adding principles based on neuroscience research. Our design principles fall into three categories. First, organisms must react quickly and appropriately to events. Second, organisms must have the ability to learn and remember over their lifetimes. Third, organisms must weigh options that are crucial for survival. We believe that by following these design principles a robot's behavior will be more naturalistic and more successful.
Affiliation(s)
- Jeffrey L. Krichmar
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- *Correspondence: Jeffrey L. Krichmar
2. Gumbsch C, Butz MV, Martius G. Autonomous Identification and Goal-Directed Invocation of Event-Predictive Behavioral Primitives. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2019.2925890.
3. Heinrich S, Yao Y, Hinz T, Liu Z, Hummel T, Kerzel M, Weber C, Wermter S. Crossmodal Language Grounding in an Embodied Neurocognitive Model. Front Neurorobot 2020; 14:52. PMID: 33154720; PMCID: PMC7591775; DOI: 10.3389/fnbot.2020.00052.
Abstract
Human infants are able to acquire natural language seemingly easily at an early age. Their language learning seems to occur simultaneously with learning other cognitive functions as well as with playful interactions with the environment and caregivers. From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration. However, characterizing the underlying mechanisms in the brain is difficult and explaining the grounding of language in crossmodal perception and action remains challenging. In this paper, we present a neurocognitive model for language grounding which reflects bio-inspired mechanisms such as an implicit adaptation of timescales as well as end-to-end multimodal abstraction. It addresses developmental robotic interaction and extends its learning capabilities using larger-scale knowledge-based data. In our scenario, we utilize the humanoid robot NICO in obtaining the EMIL data collection, in which the cognitive robot interacts with objects in a children's playground environment while receiving linguistic labels from a caregiver. The model analysis shows that crossmodally integrated representations are sufficient for acquiring language merely from sensory input through interaction with objects in an environment. The representations self-organize hierarchically and embed temporal and spatial information through composition and decomposition. This model can also provide the basis for further crossmodal integration of perceptually grounded cognitive representations.
Affiliation(s)
- Stefan Heinrich
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany; International Research Center for Neurointelligence, The University of Tokyo, Tokyo, Japan
- Yuan Yao
- Natural Language Processing Lab, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Tobias Hinz
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
- Zhiyuan Liu
- Natural Language Processing Lab, Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Thomas Hummel
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
- Matthias Kerzel
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
- Cornelius Weber
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
- Stefan Wermter
- Knowledge Technology Group, Department of Informatics, Universität Hamburg, Hamburg, Germany
4. White J. The role of robotics and AI in technologically mediated human evolution: a constructive proposal. AI & Society 2020. DOI: 10.1007/s00146-019-00877-z.
5. Krichmar JL, Hwu T, Zou X, Hylton T. Advantage of prediction and mental imagery for goal-directed behaviour in agents and robots. Cognitive Computation and Systems 2019. DOI: 10.1049/ccs.2018.0002.
Affiliation(s)
- Jeffrey L. Krichmar
- Department of Cognitive Sciences, University of California, Irvine, USA
- Department of Computer Science, University of California, Irvine, USA
- Tiffany Hwu
- Department of Cognitive Sciences, University of California, Irvine, USA
- Xinyun Zou
- Department of Computer Science, University of California, Irvine, USA
- Todd Hylton
- Department of Electrical and Computer Engineering, University of California, San Diego, USA
6. A limit-cycle self-organizing map architecture for stable arm control. Neural Netw 2017; 85:165-181. DOI: 10.1016/j.neunet.2016.10.005.
7.
Abstract
Modelling and simulation have long been dominated by equation-based approaches, until the recent advent of agent-based approaches. To curb the resulting complexity of models, Axelrod promoted the KISS principle: 'Keep It Simple, Stupid'. But the community is divided and a new principle appeared: KIDS, 'Keep It Descriptive, Stupid'. Richer models were thus developed for a variety of phenomena, while agent cognition still tends to be modelled with simple reactive particle-like agents. This is not always appropriate, in particular in the social sciences trying to account for the complexity of human behaviour. One solution is to model humans as belief, desire and intention (BDI) agents, an expressive paradigm using concepts from folk psychology, making it easier for modellers and users to understand the simulation. This paper provides a methodological guide to the use of BDI agents in social simulations, and an overview of existing methodologies and tools for using them.
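The belief-desire-intention (BDI) loop summarized in this abstract can be sketched in a few lines. This is a minimal illustration only: the `BDIAgent` class, its dictionary-based belief store, and the toy door-opening task are hypothetical and not taken from the paper or any BDI toolkit.

```python
# Minimal BDI agent sketch: perceive -> revise beliefs -> deliberate -> act.
# All names and the toy task here are illustrative assumptions.

class BDIAgent:
    def __init__(self, beliefs, desires):
        self.beliefs = dict(beliefs)    # what the agent currently holds true
        self.desires = list(desires)    # goals it would like to achieve
        self.intentions = []            # desires it has committed to pursuing

    def perceive(self, percept):
        # Belief revision: fold new observations into the belief base.
        self.beliefs.update(percept)

    def deliberate(self):
        # Commit only to desires whose preconditions hold under current beliefs.
        self.intentions = [d for d in self.desires
                           if self.beliefs.get(d["precondition"], False)]

    def act(self):
        # Execute the plan attached to the first committed intention.
        return self.intentions[0]["plan"] if self.intentions else "idle"

agent = BDIAgent(
    beliefs={"door_open": True},
    desires=[{"precondition": "door_open", "plan": "walk_through_door"}],
)
agent.perceive({"door_open": True})
agent.deliberate()
print(agent.act())  # -> walk_through_door
```

The folk-psychology vocabulary (beliefs, desires, intentions) maps directly onto data structures, which is what makes BDI simulations legible to modellers compared with reactive particle-like agents.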