51. Silva F, Correia L, Christensen AL. Evolutionary online behaviour learning and adaptation in real robots. R Soc Open Sci 2017; 4:160938. [PMID: 28791130; PMCID: PMC5541525; DOI: 10.1098/rsos.160938]
Abstract
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
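The online-evolution idea summarized above can be sketched as a minimal (1+1) evolutionary loop: mutate the current controller's weights, evaluate the mutant, and keep it only if it performs at least as well. This is an illustrative toy, not the authors' algorithm; the `evaluate` function is a hypothetical stand-in for a real-robot trial.

```python
import random

def evaluate(weights):
    # Hypothetical stand-in for a real-robot trial: a toy fitness that
    # rewards weights close to a fixed target vector (higher is better).
    target = [0.5, -0.2, 0.8, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def online_evolution(n_weights=4, generations=200, sigma=0.1, seed=0):
    """Minimal (1+1) evolutionary loop: mutate the current controller,
    keep the mutant only if its evaluated fitness is at least as good."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best_fit = evaluate(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        fit = evaluate(child)
        if fit >= best_fit:        # greedy replacement, as in a (1+1)-ES
            parent, best_fit = child, fit
    return parent, best_fit

weights, fitness = online_evolution()
```

Because acceptance is greedy, fitness is monotonically non-decreasing over generations; seeding from a solution pre-evolved in simulation, as the paper does, simply means starting `parent` from that solution instead of random weights.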
Affiliation(s)
- Fernando Silva: Bio-inspired Computation and Intelligent Machines Lab, 1649-026 Lisboa, Portugal; BioISI, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal; Instituto de Telecomunicações, 1049-001 Lisboa, Portugal
- Luís Correia: BioISI, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Anders Lyhne Christensen: Bio-inspired Computation and Intelligent Machines Lab, 1649-026 Lisboa, Portugal; Instituto de Telecomunicações, 1049-001 Lisboa, Portugal; Instituto Universitário de Lisboa (ISCTE-IUL), 1649-026 Lisboa, Portugal
52. Chandra R, Ong YS, Goh CK. Co-evolutionary multi-task learning with predictive recurrence for multi-step chaotic time series prediction. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.02.065]
53. Jiménez A, Cotterell J, Munteanu A, Sharpe J. A spectrum of modularity in multi-functional gene circuits. Mol Syst Biol 2017; 13:925. [PMID: 28455348; PMCID: PMC5408781; DOI: 10.15252/msb.20167347]
Abstract
A major challenge in systems biology is to understand the relationship between a circuit's structure and its function, but how is this relationship affected if the circuit must perform multiple distinct functions within the same organism? In particular, to what extent do multi‐functional circuits contain modules which reflect the different functions? Here, we computationally survey a range of bi‐functional circuits which show no simple structural modularity: They can switch between two qualitatively distinct functions, while both functions depend on all genes of the circuit. Our analysis reveals two distinct classes: hybrid circuits which overlay two simpler mono‐functional sub‐circuits within their circuitry, and emergent circuits, which do not. In this second class, the bi‐functionality emerges from more complex designs which are not fully decomposable into distinct modules and are consequently less intuitive to predict or understand. These non‐intuitive emergent circuits are just as robust as their hybrid counterparts, and we therefore suggest that the common bias toward studying modular systems may hinder our understanding of real biological circuits.
Affiliation(s)
- Alba Jiménez: EMBL-CRG Systems Biology Research Unit, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain
- James Cotterell: EMBL-CRG Systems Biology Research Unit, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain
- Andreea Munteanu: EMBL-CRG Systems Biology Research Unit, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain
- James Sharpe: EMBL-CRG Systems Biology Research Unit, Centre for Genomic Regulation, The Barcelona Institute of Science and Technology, Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
54. Damicelli F, Hilgetag CC, Hütt MT, Messé A. Modular topology emerges from plasticity in a minimalistic excitable network model. Chaos 2017; 27:047406. [PMID: 28456166; DOI: 10.1063/1.4979561]
Abstract
Topological features play a major role in the emergence of complex brain network dynamics underlying brain function. Specific topological properties of brain networks, such as their modular organization, have been widely studied in recent years and shown to be ubiquitous across spatial scales and species. However, the mechanisms underlying the generation and maintenance of such features are still unclear. Using a minimalistic network model with excitable nodes and discrete deterministic dynamics, we studied the effects of a local Hebbian plasticity rule on global network topology. We found that, despite the simple model set-up, the plasticity rule was able to reorganize the global network topology into a modular structure. The structural reorganization was accompanied by enhanced correlations between structural and functional connectivity, and the final network organization reflected features of the dynamical model. These findings demonstrate the potential of simple plasticity rules for structuring the topology of brain connectivity.
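The ingredients described above, excitable nodes with discrete deterministic dynamics plus a local Hebbian rule that reshapes connectivity, can be illustrated in a few lines. This is a sketch under stated assumptions: the SER (susceptible-excited-refractory) update is a common minimal excitable model, but the exact rewiring rule here (drop the least co-active in-link, add a frequently co-active one) is an illustrative simplification, not the paper's precise rule.

```python
import random

def ser_step(states, adj):
    """One step of discrete SER dynamics: S -> E if any excited
    in-neighbour, E -> R, R -> S (deterministic recovery)."""
    new = []
    for i, s in enumerate(states):
        if s == 'E':
            new.append('R')
        elif s == 'R':
            new.append('S')
        else:
            new.append('E' if any(states[j] == 'E' for j in adj[i]) else 'S')
    return new

def hebbian_rewire(adj, coactivity, rng):
    """Toy Hebbian step (an illustrative assumption): for one random node,
    drop the in-link with the lowest co-activation count and add a link
    from a frequently co-active node."""
    i = rng.randrange(len(adj))
    if not adj[i]:
        return
    weakest = min(adj[i], key=lambda j: coactivity[i][j])
    adj[i].discard(weakest)
    candidates = [j for j in range(len(adj)) if j != i and j not in adj[i]]
    if candidates:
        adj[i].add(max(candidates, key=lambda j: coactivity[i][j]))

n = 20
rng = random.Random(1)
adj = [set(rng.sample([j for j in range(n) if j != i], 4)) for i in range(n)]
states = ['E' if rng.random() < 0.2 else 'S' for _ in range(n)]
coactivity = [[0] * n for _ in range(n)]
for t in range(200):
    states = ser_step(states, adj)
    excited = [i for i, s in enumerate(states) if s == 'E']
    for i in excited:              # Hebbian bookkeeping: co-excited pairs
        for j in excited:
            coactivity[i][j] += 1
    if t % 10 == 0:
        hebbian_rewire(adj, coactivity, rng)
```

The rewiring conserves each node's in-degree while concentrating links among nodes that fire together, which is the qualitative mechanism by which such plasticity can push a network toward modular structure.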
Affiliation(s)
- Fabrizio Damicelli: Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg, Germany
- Claus C Hilgetag: Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg, Germany
- Marc-Thorsten Hütt: School of Engineering and Science, Jacobs University Bremen, Bremen, Germany
- Arnaud Messé: Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg, Germany
55. Bassett DS, Mattar MG. A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior. Trends Cogn Sci 2017; 21:250-264. [PMID: 28259554; PMCID: PMC5366087; DOI: 10.1016/j.tics.2017.01.010]
Abstract
Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior.
Affiliation(s)
- Danielle S Bassett: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marcelo G Mattar: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
56. Improving HybrID: How to best combine indirect and direct encoding in evolutionary algorithms. PLoS One 2017; 12:e0174635. [PMID: 28334002; PMCID: PMC5363933; DOI: 10.1371/journal.pone.0174635]
Abstract
Many challenging engineering problems are regular, meaning solutions to one part of a problem can be reused to solve other parts. Evolutionary algorithms with indirect encoding perform better on regular problems because they reuse genomic information to create regular phenotypes. However, on problems that are mostly regular, but contain some irregularities, which describes most real-world problems, indirect encodings struggle to handle the irregularities, hurting performance. Direct encodings are better at producing irregular phenotypes, but cannot exploit regularity. An algorithm called HybrID combines the best of both: it first evolves with indirect encoding to exploit problem regularity, then switches to direct encoding to handle problem irregularity. While HybrID has been shown to outperform both indirect and direct encoding, its initial implementation required the manual specification of when to switch from indirect to direct encoding. In this paper, we test two new methods to improve HybrID by eliminating the need to manually specify this parameter. Auto-Switch-HybrID automatically switches from indirect to direct encoding when fitness stagnates. Offset-HybrID simultaneously evolves an indirect encoding with directly encoded offsets, eliminating the need to switch. We compare the original HybrID to these alternatives on three different problems with adjustable regularity. The results show that both Auto-Switch-HybrID and Offset-HybrID outperform the original HybrID on different types of problems, and thus offer more tools for researchers to solve challenging problems. The Offset-HybrID algorithm is particularly interesting because it suggests a path forward for automatically and simultaneously combining the best traits of indirect and direct encoding.
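The Auto-Switch idea described above, evolve with an indirect encoding until fitness stagnates, then continue with a direct encoding, can be sketched as follows. The encodings here are deliberately minimal stand-ins (one shared parameter vs. per-gene values), and the function and parameter names are illustrative assumptions, not the paper's implementation.

```python
import random

def evolve_with_auto_switch(eval_fn, n_genes=8, generations=300,
                            patience=25, seed=0):
    """Sketch of the Auto-Switch idea: evolve an indirect (compressed)
    genome first; once best fitness stagnates for `patience` generations,
    expand to a direct per-gene encoding and continue hill-climbing."""
    rng = random.Random(seed)
    # Indirect phase: a single parameter generates a fully regular phenotype.
    param = rng.uniform(-1, 1)
    phenotype = [param] * n_genes
    best = eval_fn(phenotype)
    stagnant, direct = 0, False
    for _ in range(generations):
        if not direct:
            cand_param = param + rng.gauss(0, 0.1)
            cand = [cand_param] * n_genes          # regularity enforced
        else:
            cand = list(phenotype)
            cand[rng.randrange(n_genes)] += rng.gauss(0, 0.1)  # irregularity
        fit = eval_fn(cand)
        if fit > best:
            best, phenotype, stagnant = fit, cand, 0
            if not direct:
                param = cand[0]
        else:
            stagnant += 1
        if not direct and stagnant >= patience:    # fitness stagnated:
            direct, stagnant = True, 0             # switch to direct encoding
    return phenotype, best

# A mostly regular target with a single irregular gene, mirroring the
# "regular problem with some irregularities" setting described above.
target = [0.6] * 7 + [-0.3]
def fitness(g):
    return -sum((a - b) ** 2 for a, b in zip(g, target))

pheno, best = evolve_with_auto_switch(fitness)
```

The indirect phase can only fit the regular part (all genes equal); the post-switch direct phase is what allows the one irregular gene to be matched, which is exactly the division of labor the HybrID family exploits.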
57. Modular Brain Network Organization Predicts Response to Cognitive Training in Older Adults. PLoS One 2016; 11:e0169015. [PMID: 28006029; PMCID: PMC5179237; DOI: 10.1371/journal.pone.0169015]
Abstract
Cognitive training interventions are a promising approach to mitigate cognitive deficits common in aging and, ultimately, to improve functioning in older adults. Baseline neural factors, such as properties of brain networks, may predict training outcomes and can be used to improve the effectiveness of interventions. Here, we investigated the relationship between baseline brain network modularity, a measure of the segregation of brain sub-networks, and training-related gains in cognition in older adults. We found that older adults with more segregated brain sub-networks (i.e., more modular networks) at baseline exhibited greater training improvements in the ability to synthesize complex information. Further, the relationship between modularity and training-related gains was more pronounced in sub-networks mediating “associative” functions compared with those involved in sensory-motor processing. These results suggest that assessments of brain networks can be used as a biomarker to guide the implementation of cognitive interventions and improve outcomes across individuals. More broadly, these findings also suggest that properties of brain networks may capture individual differences in learning and neuroplasticity. Trial Registration: ClinicalTrials.gov, NCT#00977418
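The modularity measure invoked here (and in several other entries in this list) is typically Newman's Q: the fraction of edges falling within communities minus the fraction expected by chance. A minimal stdlib implementation for an undirected, unweighted graph, shown for illustration rather than as any study's analysis pipeline:

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected, unweighted graph.
    adj: dict node -> set of neighbours; communities: dict node -> label.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    m2 = sum(len(nb) for nb in adj.values())   # 2m: each edge counted twice
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] == communities[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / m2
    return q / m2

# Two triangles joined by a single bridge edge: a clearly modular graph.
adj = {
    0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
    3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4},
}
labels = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
q = modularity(adj, labels)   # positive: partition beats chance
```

Putting all six nodes in one community yields Q = 0 for this graph, while the two-triangle partition yields Q = 5/14, which is the sense in which "more segregated sub-networks" means "higher modularity" in the abstract above.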
58. Livingston N, Bernatskiy A, Livingston K, Smith ML, Schwarz J, Bongard JC, Wallach D, Long JH. Modularity and Sparsity: Evolution of Neural Net Controllers in Physically Embodied Robots. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00075]
59. Cappelle CK, Bernatskiy A, Livingston K, Livingston N, Bongard J. Morphological Modularity Can Enable the Evolution of Robot Behavior to Scale Linearly with the Number of Environmental Features. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00059]
60. Can computational efficiency alone drive the evolution of modularity in neural networks? Sci Rep 2016; 6:31982. [PMID: 27573614; PMCID: PMC5004152; DOI: 10.1038/srep31982]
Abstract
Some biologists have abandoned the idea that computational efficiency in processing multipart tasks or input sets alone drives the evolution of modularity in biological networks. A recent study confirmed that small modular (neural) networks are relatively computationally inefficient, but that large modular networks are slightly more efficient than non-modular ones. The present study determines whether these efficiency advantages with network size can drive the evolution of modularity in networks whose connective architecture can evolve. The answer is no, but the reason why is interesting. All simulations (run in a wide variety of parameter states) involving gradualistic connective evolution end in non-modular local attractors. Thus, while a high-performance modular attractor exists, such regions cannot be reached by gradualistic evolution. Non-gradualistic evolutionary simulations, in which multi-modularity is obtained through duplication of existing architecture, appear viable. Fundamentally, this study indicates that computational efficiency alone does not drive the evolution of modularity, even in large biological networks, but it may still be a viable mechanism when networks evolve by non-gradualistic means.
61. Behavioral plasticity through the modulation of switch neurons. Neural Netw 2015; 74:35-51. [PMID: 26655337; DOI: 10.1016/j.neunet.2015.11.001]
Abstract
A central question in artificial intelligence is how to design agents capable of switching between different behaviors in response to environmental changes. Taking inspiration from neuroscience, we address this problem by utilizing artificial neural networks (NNs) as agent controllers, and mechanisms such as neuromodulation and synaptic gating. The novel aspect of this work is the introduction of a type of artificial neuron we call "switch neuron". A switch neuron regulates the flow of information in NNs by selectively gating all but one of its incoming synaptic connections, effectively allowing only one signal to propagate forward. The allowed connection is determined by the switch neuron's level of modulatory activation which is affected by modulatory signals, such as signals that encode some information about the reward received by the agent. An important aspect of the switch neuron is that it can be used in appropriate "switch modules" in order to modulate other switch neurons. As we show, the introduction of the switch modules enables the creation of sequences of gating events. This is achieved through the design of a modulatory pathway capable of exploring in a principled manner all permutations of the connections arriving on the switch neurons. We test the model by presenting appropriate architectures in nonstationary binary association problems and T-maze tasks. The results show that for all tasks, the switch neuron architectures generate optimal adaptive behaviors, providing evidence that the switch neuron model could be a valuable tool in simulations where behavioral plasticity is required.
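The core gating mechanism described above lends itself to a very small sketch: a neuron that forwards exactly one of its incoming signals, with the open gate selected by an accumulated modulatory activation. The class below is a toy illustration of that idea only; the class and method names, and the wrap-around modulation scheme, are assumptions for this example rather than the paper's specification.

```python
class SwitchNeuron:
    """Toy sketch of the switch-neuron idea: gate all but one incoming
    connection, letting a single signal propagate forward."""

    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.modulation = 0.0     # in [0, 1); determines the open gate

    def modulate(self, signal):
        # Accumulate modulatory input (e.g. reward-related information);
        # wrapping around cycles the open gate through all connections.
        self.modulation = (self.modulation + signal) % 1.0

    def forward(self, inputs):
        assert len(inputs) == self.n_inputs
        open_gate = int(self.modulation * self.n_inputs)
        return inputs[open_gate]  # all other inputs are gated off

sn = SwitchNeuron(4)
first = sn.forward([10, 20, 30, 40])   # gate 0 open initially -> 10
sn.modulate(1.0 / 4)                   # modulatory signal advances the gate
second = sn.forward([10, 20, 30, 40])  # gate 1 now open -> 20
```

Repeatedly sending a modulatory signal of 1/n steps the open gate through all n incoming connections in order, which is the "principled exploration of permutations" role that switch modules play in the architectures described above.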
62.
Abstract
The development of new technologies for mapping structural and functional brain connectivity has led to the creation of comprehensive network maps of neuronal circuits and systems. The architecture of these brain networks can be examined and analyzed with a large variety of graph theory tools. Methods for detecting modules, or network communities, are of particular interest because they uncover major building blocks or subnetworks that are particularly densely connected, often corresponding to specialized functional components. A large number of methods for community detection have become available and are now widely applied in network neuroscience. This article first surveys a number of these methods, with an emphasis on their advantages and shortcomings; then it summarizes major findings on the existence of modules in both structural and functional brain networks and briefly considers their potential functional roles in brain evolution, wiring minimization, and the emergence of functional specialization and complex dynamics.
Affiliation(s)
- Olaf Sporns: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana 47405; Indiana University Network Science Institute, Indiana University, Bloomington, Indiana 47405
- Richard F Betzel: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana 47405