1
Buckley CL, Lewens T, Levin M, Millidge B, Tschantz A, Watson RA. Natural Induction: Spontaneous Adaptive Organisation without Natural Selection. Entropy (Basel) 2024; 26:765. PMID: 39330098; PMCID: PMC11431681; DOI: 10.3390/e26090765.
Abstract
Evolution by natural selection is believed to be the only possible source of spontaneous adaptive organisation in the natural world. This places strict limits on the kinds of systems that can exhibit adaptation spontaneously, i.e., without design. Physical systems can show some properties relevant to adaptation without natural selection or design. (1) The relaxation, or local energy minimisation, of a physical system constitutes a natural form of optimisation insomuch as it finds locally optimal solutions to the frustrated forces acting on it or between its components. (2) When internal structure 'gives way' or accommodates a pattern of forcing on a system, this constitutes learning insomuch as it can store, recall, and generalise past configurations. Both these effects are quite natural and general, but in themselves insufficient to constitute non-trivial adaptation. However, here we show that the recurrent interaction of physical optimisation and physical learning together results in significant spontaneous adaptive organisation. We call this adaptation by natural induction. The effect occurs in dynamical systems described by a network of viscoelastic connections subject to occasional disturbances. When the internal structure of such a system accommodates slowly across many disturbances and relaxations, it spontaneously learns to preferentially visit solutions of increasingly greater quality (exceptionally low energy). We show that adaptation by natural induction thus produces network organisations that improve problem-solving competency with experience (without supervised training or system-level reward). We note that the conditions for adaptation by natural induction, and its adaptive competency, are different from those of natural selection. We therefore suggest that natural selection is not the only possible source of spontaneous adaptive organisation in the natural world.
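The two ingredients named in this abstract (physical optimisation via relaxation, physical learning via accommodation) can be caricatured in a toy spring-network model. The sketch below is illustrative only, not the authors' implementation; the network size, learning rates, and update rules are all invented. Relaxation is gradient descent on the elastic energy; accommodation makes each rest length creep toward the currently held distance, which provably lowers the energy of the configuration being held:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = rng.normal(size=(n, 2))                   # node positions in 2D
K = np.ones((n, n)); np.fill_diagonal(K, 0)   # spring stiffnesses (no self-springs)
L = rng.uniform(0.5, 2.0, size=(n, n))
L = (L + L.T) / 2                             # symmetric rest lengths

def energy(x, L):
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return 0.25 * np.sum(K * (d - L) ** 2)    # 0.25: each pair is counted twice

def relax(x, L, steps=300, lr=0.01):
    """Physical optimisation: gradient descent on the elastic energy."""
    for _ in range(steps):
        diff = x[:, None] - x[None, :]
        d = np.maximum(np.linalg.norm(diff, axis=-1), 1e-9)
        np.fill_diagonal(d, 1.0)              # avoid divide-by-zero (K diagonal is 0)
        x = x - lr * ((K * (d - L) / d)[:, :, None] * diff).sum(axis=1)
    return x

def accommodate(x, L, eps=0.1):
    """Physical learning: rest lengths creep toward the current geometry."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return L + eps * (d - L)

for epoch in range(20):
    x = rng.normal(size=(n, 2))               # disturbance: shake the system
    x = relax(x, L)                           # fast relaxation (optimisation)
    e_before = energy(x, L)
    L = accommodate(x, L)                     # slow accommodation (learning)
    e_after = energy(x, L)
```

Because accommodation scales every residual (d - L) by (1 - eps), the energy of the held configuration can only decrease; iterated over many disturbance-relaxation cycles, this is the recurrent optimisation-plus-learning loop the paper analyses.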
Affiliation(s)
- Christopher L. Buckley
- Department of Informatics, University of Sussex, Brighton BN1 9RH, UK
- Tim Lewens
- History and Philosophy of Science, Cambridge University, Cambridge CB2 1TN, UK
- Michael Levin
- Department of Biology, Tufts University, Medford, MA 02155, USA
- Beren Millidge
- Department of Informatics, University of Sussex, Brighton BN1 9RH, UK
- Alexander Tschantz
- Department of Informatics, University of Sussex, Brighton BN1 9RH, UK
- Richard A. Watson
- Electronics and Computer Science/Institute for Life Sciences, University of Southampton, Southampton SO17 1BJ, UK
2
Hartl B, Risi S, Levin M. Evolutionary Implications of Self-Assembling Cybernetic Materials with Collective Problem-Solving Intelligence at Multiple Scales. Entropy (Basel) 2024; 26:532. PMID: 39056895; PMCID: PMC11275831; DOI: 10.3390/e26070532.
Abstract
In recent years, the scientific community has increasingly recognized the complex multi-scale competency architecture (MCA) of biology, comprising nested layers of active homeostatic agents, each forming the self-orchestrated substrate for the layer above, and, in turn, relying on the structural and functional plasticity of the layer(s) below. The question of how natural selection could give rise to this MCA has been the focus of intense research. Here, we instead investigate the effects of such decision-making competencies of MCA agential components on the process of evolution itself, using in silico neuroevolution experiments of simulated, minimal developmental biology. We specifically model the process of morphogenesis with neural cellular automata (NCAs) and utilize an evolutionary algorithm to optimize the corresponding model parameters with the objective of collectively self-assembling a two-dimensional spatial target pattern (reliable morphogenesis). Furthermore, we systematically vary the accuracy with which the uni-cellular agents of an NCA can regulate their cell states (simulating stochastic processes and noise during development). This allows us to continuously scale the agents' competency levels from a direct encoding scheme (no competency) to an MCA (with perfect reliability in cell decision executions). We demonstrate that an evolutionary process proceeds much more rapidly when evolving the functional parameters of an MCA compared to evolving the target pattern directly. Moreover, the evolved MCAs generalize well toward system parameter changes and even modified objective functions of the evolutionary process. Thus, the adaptive problem-solving competencies of the agential parts in our NCA-based in silico morphogenesis model strongly affect the evolutionary process, suggesting significant functional implications of the near-ubiquitous competency seen in living matter.
Affiliation(s)
- Benedikt Hartl
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
- Institute for Theoretical Physics, Center for Computational Materials Science (CMS), TU Wien, 1040 Wien, Austria
- Sebastian Risi
- Digital Design, IT University of Copenhagen, 2300 Copenhagen, Denmark
- Michael Levin
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
3
Watson R. Agency, Goal-Directed Behavior, and Part-Whole Relationships in Biological Systems. Biological Theory 2023; 19:22-36. PMID: 38463532; PMCID: PMC10920425; DOI: 10.1007/s13752-023-00447-z.
Abstract
In this essay we aim to present some considerations regarding a minimal but concrete notion of agency and goal-directed behavior that are useful for characterizing biological systems at different scales. These considerations are a particular perspective, bringing together concepts from dynamical systems, combinatorial problem-solving, and connectionist learning with an emphasis on the relationship between parts and wholes. This perspective affords some ways to think about agents that are concrete and quantifiable, and relevant to some important biological issues. Instead of advocating for a strict definition of minimally agential characteristics, we focus on how (even for a modest notion of agency) the agency of a system can be more than the sum of the agency of its parts. We quantify this in terms of the problem-solving competency of a system with respect to resolution of the frustrations between its parts. This requires goal-directed behavior in the sense of delayed gratification, i.e., taking dynamical trajectories that forego short-term gains (or sustain short-term stress or frustration) in favor of long-term gains. In order for this competency to belong to the system (rather than to its parts or given by its construction or design), it can involve distributed systemic knowledge that is acquired through experience, i.e., changes in the organization of the relationships among its parts (without presupposing a system-level reward function for such changes). This conception of agency helps us think about the ways in which cells, organisms, and perhaps other biological scales, can be agential (i.e., more agential than their parts) in a quantifiable sense, without denying that the behavior of the whole depends on the behaviors of the parts in their current organization.
Affiliation(s)
- Richard Watson
- Institute for Life Sciences/Electronics and Computer Science, University of Southampton, Southampton, UK
4
Froese T, Weber N, Shpurov I, Ikegami T. From autopoiesis to self-optimization: Toward an enactive model of biological regulation. Biosystems 2023:104959. PMID: 37380066; DOI: 10.1016/j.biosystems.2023.104959.
Abstract
The theory of autopoiesis has been influential in many areas of theoretical biology, especially in the fields of artificial life and origins of life. However, it has not managed to productively connect with mainstream biology, partly for theoretical reasons, but arguably mainly because deriving specific working hypotheses has been challenging. The theory has recently undergone significant conceptual development in the enactive approach to life and mind. Hidden complexity in the original conception of autopoiesis has been explicated in the service of other operationalizable concepts related to self-individuation: precariousness, adaptivity, and agency. Here we advance these developments by highlighting the interplay of these concepts with considerations from thermodynamics: reversibility, irreversibility, and path-dependence. We interpret this interplay in terms of the self-optimization model, and present modeling results that illustrate how these minimal conditions enable a system to re-organize itself such that it tends toward coordinated constraint satisfaction at the system level. Although the model is still very abstract, these results point in a direction where the enactive approach could productively connect with cell biology.
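The self-optimization model referenced in this abstract is commonly formalised as a Hopfield-style constraint network whose visited attractors are slowly reinforced by Hebbian learning. The following is a hedged sketch under that assumption (sizes, rates, and the reset schedule are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
W0 = rng.normal(size=(n, n))
W0 = (W0 + W0.T) / 2                  # symmetric constraints between components
np.fill_diagonal(W0, 0)
W = W0.copy()

def energy(s, W):
    return -0.5 * s @ W @ s           # Hopfield energy: total constraint frustration

def relax(s, W):
    """Greedy asynchronous relaxation: each flip strictly lowers the energy,
    so this always terminates at a single-flip local optimum (an attractor)."""
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            if s[i] * (W[i] @ s) < 0:     # unit disagrees with its local field
                s[i] = -s[i]
                changed = True
    return s

alpha = 0.01                           # slow learning rate
visits = []                            # attractor quality, scored on the ORIGINAL constraints
for reset in range(200):
    s = rng.choice([-1, 1], size=n)    # perturbation: reset to a random configuration
    s = relax(s, W)
    visits.append(energy(s, W0))
    W = W + alpha * np.outer(s, s)     # Hebbian reinforcement of the visited attractor
    np.fill_diagonal(W, 0)
```

Each reset-relax cycle resolves the constraints locally; the slow Hebbian update enlarges the basins of the attractors visited, and over many cycles the system tends to re-organize toward coordinated constraint satisfaction at the system level (tracked here via `visits`, the energies on the original weights).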
Affiliation(s)
- Tom Froese
- Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology Graduate University, Tancha, Okinawa, Japan
- Natalya Weber
- Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology Graduate University, Tancha, Okinawa, Japan
- Ivan Shpurov
- Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology Graduate University, Tancha, Okinawa, Japan
- Takashi Ikegami
- Theoretical Sciences Visiting Program, Okinawa Institute of Science and Technology Graduate University, Tancha, Okinawa, Japan
- Ikegami Lab, Department of General Systems Studies, University of Tokyo, Komaba, Tokyo, Japan
5
Mathews J, Chang A(J), Devlin L, Levin M. Cellular signaling pathways as plastic, proto-cognitive systems: Implications for biomedicine. Patterns (N Y) 2023; 4:100737. PMID: 37223267; PMCID: PMC10201306; DOI: 10.1016/j.patter.2023.100737.
Abstract
Many aspects of health and disease are modeled using the abstraction of a "pathway"-a set of protein or other subcellular activities with specified functional linkages between them. This metaphor is a paradigmatic case of a deterministic, mechanistic framework that focuses biomedical intervention strategies on altering the members of this network or the up-/down-regulation links between them-rewiring the molecular hardware. However, protein pathways and transcriptional networks exhibit interesting and unexpected capabilities such as trainability (memory) and information processing in a context-sensitive manner. Specifically, they may be amenable to manipulation via their history of stimuli (equivalent to experiences in behavioral science). If true, this would enable a new class of biomedical interventions that target aspects of the dynamic physiological "software" implemented by pathways and gene-regulatory networks. Here, we briefly review clinical and laboratory data that show how high-level cognitive inputs and mechanistic pathway modulation interact to determine outcomes in vivo. Further, we propose an expanded view of pathways from the perspective of basal cognition and argue that a broader understanding of pathways and how they process contextual information across scales will catalyze progress in many areas of physiology and neurobiology. We argue that this fuller understanding of the functionality and tractability of pathways must go beyond a focus on the mechanistic details of protein and drug structure to encompass their physiological history as well as their embedding within higher levels of organization in the organism, with numerous implications for data science addressing health and disease. 
Exploiting tools and concepts from behavioral and cognitive sciences to explore a proto-cognitive metaphor for the pathways underlying health and disease is more than a philosophical stance on biochemical processes; at stake is a new roadmap for overcoming the limitations of today's pharmacological strategies and for inferring future therapeutic interventions for a wide range of disease states.
Affiliation(s)
- Juanita Mathews
- Allen Discovery Center at Tufts University, Medford, MA, USA
- Liam Devlin
- Allen Discovery Center at Tufts University, Medford, MA, USA
- Michael Levin
- Allen Discovery Center at Tufts University, Medford, MA, USA
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Boston, MA, USA
6
Levin M. Darwin's agential materials: evolutionary implications of multiscale competency in developmental biology. Cell Mol Life Sci 2023; 80:142. PMID: 37156924; PMCID: PMC10167196; DOI: 10.1007/s00018-023-04790-z.
Abstract
A critical aspect of evolution is the layer of developmental physiology that operates between the genotype and the anatomical phenotype. While much work has addressed the evolution of developmental mechanisms and the evolvability of specific genetic architectures with emergent complexity, one aspect has not been sufficiently explored: the implications of morphogenetic problem-solving competencies for the evolutionary process itself. The cells that evolution works with are not passive components: rather, they have numerous capabilities for behavior because they derive from ancestral unicellular organisms with rich repertoires. In multicellular organisms, these capabilities must be tamed, and can be exploited, by the evolutionary process. Specifically, biological structures have a multiscale competency architecture where cells, tissues, and organs exhibit regulative plasticity-the ability to adjust to perturbations such as external injury or internal modifications and still accomplish specific adaptive tasks across metabolic, transcriptional, physiological, and anatomical problem spaces. Here, I review examples illustrating how physiological circuits guiding cellular collective behavior impart computational properties to the agential material that serves as substrate for the evolutionary process. I then explore the ways in which the collective intelligence of cells during morphogenesis affects evolution, providing a new perspective on the evolutionary search process. This key feature of the physiological software of life helps explain the remarkable speed and robustness of biological evolution, and sheds new light on the relationship between genomes and functional anatomical phenotypes.
Affiliation(s)
- Michael Levin
- Allen Discovery Center at Tufts University, 200 Boston Ave. 334 Research East, Medford, MA 02155, USA
- Wyss Institute for Biologically Inspired Engineering at Harvard University, 3 Blackfan St., Boston, MA 02115, USA
7
Collective computational intelligence in biology - Emergence of memory in somatic tissues. Biosystems 2023; 223:104816. PMID: 36436698; DOI: 10.1016/j.biosystems.2022.104816.
Abstract
The role of memory in the function of biological tissues, organs, and organisms remains largely unexplored, with many unanswered questions. In this study, the emergence of associative memory in somatic (non-neural) tissues and its potential relation to tissue function was explored using a number of biologically plausible network topologies in in silico tissues with computing cells. These topologies were local cooperation; complete system-wide cooperation or inhibition; and local cooperation with short- or long-range inhibition. These were tested with and without self-feedback on two-dimensional (2D) and three-dimensional (3D) cell networks, resulting in various forms of fully and partially connected networks. Further, both binary inputs with threshold processing and real-valued inputs with nonlinear processing were considered. Results revealed the emergence of diverse forms of tissue memory. In full cooperation, networks produced one fixed attractor, indicating a propensity towards a stable memory pattern, which in a real tissue could correspond to an invariable physiological state, such as bioelectric homeostasis. Local neighbourhood cooperation produced both a fixed and a limit cycle attractor, which could allow a tissue to hold a few associative memories, including circadian rhythms. The most interesting results were found for the local cooperation with short- or long-range inhibition topologies, which produced clusters of fixed and limit cycle attractors offering diverse memories. Fixed attractors could correspond to inactive tissue states and active nonrhythmic functional states, and limit cycles could correspond to circadian rhythms such as pumping in the heart, kidney, or liver in various oscillatory regimes. In all topologies, self-feedback abolished or drastically reduced the limit cycles in favour of fixed stable states. These attractor patterns were found to be largely invariant to scale (2D or 3D) and to the type of inputs and processing.
We also explored the self-optimising ability of the 'local cooperation with global (short- or long-range) inhibition' 2D topologies under Hebbian learning, with both fixed and flexible connectivity. The fixed topology learned to self-model, consolidating memory towards fewer, more stable attractors. The flexible topology even formed new connections to bring the system to a single fixed state. Thus, local cooperation with global inhibition offers greater freedom to create diverse memory patterns that can be tempered by learning, self-feedback, and, to some extent, continuous processing, to simplify and consolidate memory towards manageable forms.
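The 'local cooperation with global inhibition' construction and the fixed-point/limit-cycle distinction in this abstract can be made concrete with a small illustrative sketch (all weight values and grid sizes are invented, not taken from the study): a toroidal 2D grid with excitatory links to the four nearest neighbours, weak inhibition to all other cells, and a routine that classifies the attractor reached under synchronous threshold updates:

```python
import numpy as np
from itertools import product

def grid_weights(side, excite=1.0, inhibition=-0.4):
    """2D toroidal grid: cooperation with the 4 nearest neighbours,
    weak global inhibition to every other cell."""
    n = side * side
    W = np.full((n, n), inhibition)
    np.fill_diagonal(W, 0.0)
    for r, c in product(range(side), repeat=2):
        i = r * side + c
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            W[i, ((r + dr) % side) * side + (c + dc) % side] = excite
    return W

def attractor(s, W, max_steps=2000):
    """Iterate synchronous threshold dynamics until a state repeats,
    then report whether the orbit is a fixed point or a limit cycle."""
    seen = {}
    for t in range(max_steps):
        key = s.astype(np.int8).tobytes()
        if key in seen:
            period = t - seen[key]
            return ("fixed point" if period == 1 else "limit cycle", period)
        seen[key] = t
        s = np.where(W @ s > 0, 1, -1)   # binary threshold update
    return ("undecided", None)

# uniform activity: weak inhibition holds it steady, while stronger
# inhibition makes the whole sheet oscillate (a period-2 limit cycle)
ones = np.ones(16, dtype=int)
steady = attractor(ones, grid_weights(4, inhibition=-0.2))
oscillating = attractor(ones, grid_weights(4, inhibition=-0.4))
```

In this caricature, fixed points play the role of stable physiological set-points, while the oscillating orbit is a crude analogue of the rhythmic regimes (e.g., circadian-like cycles) discussed in the abstract.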
8
Biswas S, Clawson W, Levin M. Learning in Transcriptional Network Models: Computational Discovery of Pathway-Level Memory and Effective Interventions. Int J Mol Sci 2022; 24:285. PMID: 36613729; PMCID: PMC9820177; DOI: 10.3390/ijms24010285.
Abstract
Trainability, in any substrate, refers to the ability to change future behavior based on past experiences. An understanding of such capacity within biological cells and tissues would enable a particularly powerful set of methods for prediction and control of their behavior through specific patterns of stimuli. This top-down mode of control (as an alternative to bottom-up modification of hardware) has been extensively exploited by computer science and the behavioral sciences; in biology, however, it is usually reserved for organism-level behavior in animals with brains, such as training animals towards a desired response. Exciting work in the field of basal cognition has begun to reveal degrees and forms of unconventional memory in non-neural tissues and even in subcellular biochemical dynamics. Here, we characterize biological gene regulatory circuit models and protein pathways and find them capable of several different kinds of memory. We extend prior results on learning in binary transcriptional networks to continuous models and identify specific interventions (regimes of stimulation, as opposed to network rewiring) that abolish undesirable network behavior such as drug pharmacoresistance and drug sensitization. We also explore the stability of created memories by assessing their long-term behavior and find that most memories do not decay over long time periods. Additionally, we find that the memory properties are quite robust to noise; surprisingly, in many cases noise actually increases memory potential. We examine various network properties associated with these behaviors and find that no one network property is indicative of memory. Random networks do not show similar memory behavior as models of biological processes, indicating that generic network dynamics are not solely responsible for trainability.
Rational control of dynamic pathway function using stimuli derived from computational models opens the door to empirical studies of proto-cognitive capacities in unconventional embodiments and suggests numerous possible applications in biomedicine, where behavior shaping of pathway responses stands as a potential alternative to gene therapy.
Affiliation(s)
- Surama Biswas
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
- Department of Computer Science & Engineering and Information Technology, Meghnad Saha Institute of Technology, Kolkata 700150, India
- Wesley Clawson
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
- Michael Levin
- Allen Discovery Center, Tufts University, Medford, MA 02155, USA
- Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, MA 02115, USA
9
Watson RA, Levin M, Buckley CL. Design for an Individual: Connectionist Approaches to the Evolutionary Transitions in Individuality. Front Ecol Evol 2022. DOI: 10.3389/fevo.2022.823588.
Abstract
The truly surprising thing about evolution is not how it makes individuals better adapted to their environment, but how it makes individuals. All individuals are made of parts that used to be individuals themselves, e.g., multicellular organisms from unicellular organisms. In such evolutionary transitions in individuality, the organised structure of relationships between component parts causes them to work together, creating a new organismic entity and a new evolutionary unit on which selection can act. However, the principles of these transitions remain poorly understood. In particular, the process of transition must be explained by "bottom-up" selection, i.e., on the existing lower-level evolutionary units, without presupposing the higher-level evolutionary unit we are trying to explain. In this hypothesis and theory manuscript, we address the conditions for evolutionary transitions in individuality by exploiting adaptive principles already known in learning systems. Connectionist learning models, well-studied in neural networks, demonstrate how networks of organised functional relationships between components, sufficient to exhibit information integration and collective action, can be produced via fully distributed and unsupervised learning principles, i.e., without centralised control or an external teacher. Evolutionary connectionism translates these distributed learning principles into the domain of natural selection, and suggests how relationships among evolutionary units could become adaptively organised by selection from below without presupposing genetic relatedness or selection on collectives. In this manuscript, we address how connectionist models with a particular interaction structure might explain transitions in individuality.
We explore the relationship between the interaction structures necessary for (a) evolutionary individuality (where the evolution of the whole is a non-decomposable function of the evolution of the parts), (b) organismic individuality (where the development and behaviour of the whole is a non-decomposable function of the behaviour of component parts) and (c) non-linearly separable functions, familiar in connectionist models (where the output of the network is a non-decomposable function of the inputs). Specifically, we hypothesise that the conditions necessary to evolve a new level of individuality are described by the conditions necessary to learn non-decomposable functions of this type (or deep model induction) familiar in connectionist models of cognition and learning.
10
Gene regulatory networks exhibit several kinds of memory: quantification of memory in biological and random transcriptional networks. iScience 2021; 24:102131. PMID: 33748699; PMCID: PMC7970124; DOI: 10.1016/j.isci.2021.102131.
Abstract
Gene regulatory networks (GRNs) process important information in developmental biology and biomedicine. A key knowledge gap concerns how their responses change over time. Hypothesizing long-term changes of dynamics induced by transient prior events, we created a computational framework for defining and identifying diverse types of memory in candidate GRNs. We show that GRNs from a wide range of model systems are predicted to possess several types of memory, including Pavlovian conditioning. Associative memory offers an alternative strategy for the biomedical use of powerful drugs with undesirable side effects, and a novel approach to understanding the variability and time-dependent changes of drug action. We find evidence of natural selection favoring GRN memory. Vertebrate GRNs overall exhibit more memory than invertebrate GRNs, and memory is most prevalent in differentiated metazoan cell networks compared with undifferentiated cells. Timed stimuli are a powerful alternative for biomedical control of complex in vivo dynamics without genomic editing or transgenes.
Highlights:
- Gene regulatory networks' dynamics are modified by transient stimuli
- GRNs have several different types of memory, including associative conditioning
- Evolution favored GRN memory, and differentiated cells have the most memory capacity
- Training GRNs offers a novel biomedical strategy not dependent on genetic rewiring
11
Ginsburg S, Jablonka E. Evolutionary transitions in learning and cognition. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190766. PMID: 33550955; DOI: 10.1098/rstb.2019.0766.
Abstract
We define a cognitive system as a system that can learn, and adopt an evolutionary-transition-oriented framework for analysing different types of neural cognition. This enables us to classify types of cognition and point to the continuities and discontinuities among them. The framework we use for studying evolutionary transitions in learning capacities focuses on qualitative changes in the integration, storage and use of neurally processed information. Although there are always grey areas around evolutionary transitions, we recognize five major neural transitions, the first two of which involve animals at the base of the phylogenetic tree: (i) the evolutionary transition from learning in non-neural animals to learning in the first neural animals; (ii) the transition to animals showing limited, elemental associative learning, entailing neural centralization and primary brain differentiation; (iii) the transition to animals capable of unlimited associative learning, which, on our account, constitutes sentience and entails hierarchical brain organization and dedicated memory and value networks; (iv) the transition to imaginative animals that can plan and learn through selection among virtual events; and (v) the transition to human symbol-based cognition and cultural learning. The focus on learning provides a unifying framework for experimental and theoretical studies of cognition in the living world. This article is part of the theme issue 'Basal cognition: multicellularity, neurons and the cognitive lens'.
Affiliation(s)
- Simona Ginsburg
- Natural Science Department, The Open University of Israel, 1 University Road, POB 808, Raanana 4353701, Israel
- Eva Jablonka
- The Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University, 6934525 Ramat Aviv, Israel
- CPNSS, London School of Economics, Houghton Street, London WC2A 2AE, UK
12
Using Patent Technology Networks to Observe Neurocomputing Technology Hotspots and Development Trends. Sustainability 2020. DOI: 10.3390/su12187696.
Abstract
In recent years, development in the fields of big data and artificial intelligence has given rise to interest among scholars in neurocomputing-related applications. Neurocomputing has relatively widespread applications because it is a critical technology in numerous fields. However, most studies on neurocomputing have focused on improving related algorithms or application fields; they have failed to highlight the main technology hotspots and development trends from a comprehensive viewpoint. To fill this research gap, this study adopts a new viewpoint and employs technological fields as its main subject. Neurocomputing patents are subjected to network analysis to construct a neurocomputing technology hotspot. The results reveal that the neurocomputing technology hotspots are algorithms, methods or devices for reading or recognizing printed or written characters or patterns, and digital storage characterized by the use of particular electric or magnetic storage elements. Furthermore, the technology hotspots are not clustered around particular fields but are, rather, multidisciplinary. The applications that combine neurocomputing with digital storage are currently undergoing the most extensive development. Finally, patentee analysis reveals that neurocomputing technology is mainly being developed by information technology corporations, thereby indicating the market development potential of neurocomputing technology. This study constructs a technology hotspot network model to elucidate the trend in development of neurocomputing technology, and the findings may serve as a reference for industries planning to promote emerging technologies.
|
13
|
Gershenson C, Trianni V, Werfel J, Sayama H. Self-Organization and Artificial Life. ARTIFICIAL LIFE 2020; 26:391-408. [PMID: 32697161 DOI: 10.1162/artl_a_00324] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Self-organization can be broadly defined as the ability of a system to display ordered spatiotemporal patterns solely as the result of the interactions among the system components. Processes of this kind characterize both living and artificial systems, making self-organization a concept that is at the basis of several disciplines, from physics to biology and engineering. Placed at the frontiers between disciplines, artificial life (ALife) has heavily borrowed concepts and tools from the study of self-organization, providing mechanistic interpretations of lifelike phenomena as well as useful constructivist approaches to artificial system design. Despite its broad usage within ALife, the concept of self-organization has been often excessively stretched or misinterpreted, calling for a clarification that could help with tracing the borders between what can and cannot be considered self-organization. In this review, we discuss the fundamental aspects of self-organization and list the main usages within three primary ALife domains, namely "soft" (mathematical/computational modeling), "hard" (physical robots), and "wet" (chemical/biological systems) ALife. We also provide a classification to locate this research. Finally, we discuss the usefulness of self-organization and related concepts within ALife studies, point to perspectives and challenges for future research, and list open questions. We hope that this work will motivate discussions related to self-organization in ALife and related fields.
Affiliation(s)
- Carlos Gershenson
- Universidad Nacional Autónoma de México, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Centro de Ciencias de la Complejidad.
- ITMO University
- Vito Trianni
- Italian National Research Council, Institute of Cognitive Sciences and Technologies.
- Justin Werfel
- Harvard University, Wyss Institute for Biologically Inspired Engineering.
- Hiroki Sayama
- Binghamton University, Center for Collective Dynamics of Complex Systems.
- Waseda University, Waseda Innovation Laboratory
|
14
|
Morales A, Froese T. Unsupervised Learning Facilitates Neural Coordination Across the Functional Clusters of the C. elegans Connectome. Front Robot AI 2020; 7:40. [PMID: 33501208 PMCID: PMC7805867 DOI: 10.3389/frobt.2020.00040] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Accepted: 03/09/2020] [Indexed: 11/23/2022] Open
Abstract
Modeling of complex adaptive systems has revealed a still poorly understood benefit of unsupervised learning: when neural networks are enabled to form an associative memory of a large set of their own attractor configurations, they begin to reorganize their connectivity in a direction that minimizes the coordination constraints posed by the initial network architecture. This self-optimization process has been replicated in various neural network formalisms, but it is still unclear whether it can be applied to biologically more realistic network topologies and scaled up to larger networks. Here we continue our efforts to respond to these challenges by demonstrating the process on the connectome of the widely studied nematode worm C. elegans. We extend our previous work by considering the contributions made by hierarchical partitions of the connectome that form functional clusters, and we explore possible beneficial effects of inter-cluster inhibitory connections. We conclude that the self-optimization process can be applied to neural network topologies characterized by greater biological realism, and that long-range inhibitory connections can facilitate the generalization capacity of the process.
Affiliation(s)
- Alejandro Morales
- Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Computer Science and Engineering Postgraduate Program, National Autonomous University of Mexico, Mexico City, Mexico
- Tom Froese
- Embodied Cognitive Science Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
|
15
|
Modeling collective rule at ancient Teotihuacan as a complex adaptive system: Communal ritual makes social hierarchy more effective. COGN SYST RES 2018. [DOI: 10.1016/j.cogsys.2018.09.018] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
16
|
Zarco M, Froese T. Self-Optimization in Continuous-Time Recurrent Neural Networks. Front Robot AI 2018; 5:96. [PMID: 33500975 PMCID: PMC7805835 DOI: 10.3389/frobt.2018.00096] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Accepted: 07/27/2018] [Indexed: 11/13/2022] Open
Abstract
A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
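The self-optimization procedure summarized in this abstract (relax to an attractor, apply a slow Hebbian update on that attractor, perturb, and repeat) can be sketched in a minimal discrete Hopfield setting. This is an illustrative toy, not the paper's continuous-time recurrent network with asymmetric weights; the network size, learning rate, and number of resets are arbitrary choices, and attractor quality is always scored against the original weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 60

# A random symmetric constraint network: the "problem" whose low-energy
# states are the good solutions. Learning modifies a working copy, but
# solution quality is always evaluated against these original weights.
W0 = rng.normal(size=(N, N))
W0 = (W0 + W0.T) / 2.0
np.fill_diagonal(W0, 0.0)

def energy(W, s):
    """Hopfield energy: lower means more constraints satisfied."""
    return -0.5 * s @ W @ s

def settle(W, s, sweeps=20):
    """Asynchronous threshold updates; never increases energy under W."""
    for _ in range(sweeps * len(s)):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Self-optimization loop: reset to a random state (the perturbation),
# relax to an attractor, then slowly reinforce that attractor (Hebbian).
W = W0.copy()
alpha = 5e-4                  # slow learning rate (illustrative choice)
energies = []
for _ in range(1000):
    s = rng.choice([-1, 1], size=N)
    s = settle(W, s)
    energies.append(energy(W0, s))      # score against the original problem
    W += alpha * np.outer(s, s)         # Hebbian update on the attractor
    np.fill_diagonal(W, 0.0)

early, late = np.mean(energies[:100]), np.mean(energies[-100:])
print(f"mean attractor energy, early: {early:.1f}  late: {late:.1f}")
```

In runs of this kind the later attractors typically have lower energy under the original weights than the early ones, which is the qualitative self-optimization effect; if alpha is made too large, the network instead locks prematurely onto an arbitrary early attractor.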
Affiliation(s)
- Mario Zarco
- Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Tom Froese
- Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico City, Mexico
|
17
|
Watson RA, Szathmáry E. How Can Evolution Learn? Trends Ecol Evol 2016; 31:147-157. [DOI: 10.1016/j.tree.2015.11.009] [Citation(s) in RCA: 92] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2015] [Revised: 10/02/2015] [Accepted: 11/12/2015] [Indexed: 12/14/2022]
|
18
|
Watson RA, Mills R, Buckley CL, Kouvaris K, Jackson A, Powers ST, Cox C, Tudge S, Davies A, Kounios L, Power D. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions. Evol Biol 2015; 43:553-581. [PMID: 27932852 PMCID: PMC5119841 DOI: 10.1007/s11692-015-9358-z] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2015] [Accepted: 10/31/2015] [Indexed: 12/16/2022]
Abstract
The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. 
We use the term "evolutionary connectionism" to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.
Affiliation(s)
- Richard A. Watson
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Institute for Life Sciences, University of Southampton, Southampton, UK
- Rob Mills
- Biosystems & Integrative Sciences Institute (BioISI), Faculty of Sciences, University of Lisbon, Lisbon, Portugal
- C. L. Buckley
- School of Engineering and Informatics, University of Sussex, Falmer, UK
- Kostas Kouvaris
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Adam Jackson
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Chris Cox
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Simon Tudge
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Adam Davies
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Loizos Kounios
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
- Daniel Power
- Agents, Interactions and Complexity, ECS, University of Southampton, Southampton, UK
|
19
|
Power DA, Watson RA, Szathmáry E, Mills R, Powers ST, Doncaster CP, Czapp B. What can ecosystems learn? Expanding evolutionary ecology with learning theory. Biol Direct 2015; 10:69. [PMID: 26643685 PMCID: PMC4672551 DOI: 10.1186/s13062-015-0094-1] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 10/26/2015] [Indexed: 11/30/2022] Open
Abstract
Background: The structure and organisation of ecological interactions within an ecosystem is modified by the evolution and coevolution of the individual species it contains. Understanding how historical conditions have shaped this architecture is vital for understanding system responses to change at scales from the microbial upwards. However, in the absence of a group selection process, the collective behaviours and ecosystem functions exhibited by the whole community cannot be organised or adapted in a Darwinian sense. A long-standing open question thus persists: Are there alternative organising principles that enable us to understand and predict how the coevolution of the component species creates and maintains complex collective behaviours exhibited by the ecosystem as a whole?
Results: Here we answer this question by incorporating principles from connectionist learning, a previously unrelated discipline already using well-developed theories on how emergent behaviours arise in simple networks. Specifically, we show conditions where natural selection on ecological interactions is functionally equivalent to a simple type of connectionist learning, ‘unsupervised learning’, well-known in neural-network models of cognitive systems to produce many non-trivial collective behaviours. Accordingly, we find that a community can self-organise in a well-defined and non-trivial sense without selection at the community level; its organisation can be conditioned by past experience in the same sense as connectionist learning models habituate to stimuli. This conditioning drives the community to form a distributed ecological memory of multiple past states, causing the community to: a) converge to these states from any random initial composition; b) accurately restore historical compositions from small fragments; c) recover a state composition following disturbance; and d) correctly classify ambiguous initial compositions according to their similarity to learned compositions. We examine how the formation of alternative stable states alters the community’s response to changing environmental forcing, and we identify conditions under which the ecosystem exhibits hysteresis with potential for catastrophic regime shifts.
Conclusions: This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recalling those conditions.
Reviewers: This article was reviewed by Prof. Ricard V Solé, Universitat Pompeu Fabra, Barcelona and Prof. Rob Knight, University of Colorado, Boulder.
Affiliation(s)
- Daniel A Power
- Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK.
- Richard A Watson
- Institute for Life Sciences/Electronics and Computer Science, University of Southampton, Southampton, UK.
- Eörs Szathmáry
- The Parmenides Foundation, Center for the Conceptual Foundations of Science, Pullach, Germany.
- Rob Mills
- Department of Informatics, Faculty of Sciences, University of Lisbon, Lisbon, Portugal.
- Simon T Powers
- Department of Ecology & Evolution, University of Lausanne, Lausanne, Switzerland.
- Błażej Czapp
- School of Biological Sciences, University of Southampton, Southampton, UK.
|
20
|
Froese T, Gershenson C, Manzanilla LR. Can government be self-organized? A mathematical model of the collective social organization of ancient Teotihuacan, central Mexico. PLoS One 2014; 9:e109966. [PMID: 25303308 PMCID: PMC4193847 DOI: 10.1371/journal.pone.0109966] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2014] [Accepted: 09/12/2014] [Indexed: 11/19/2022] Open
Abstract
Teotihuacan was the first urban civilization of Mesoamerica and one of the largest of the ancient world. Following a tradition in archaeology to equate social complexity with centralized hierarchy, it is widely believed that the city’s origin and growth was controlled by a lineage of powerful individuals. However, much data is indicative of a government of co-rulers, and artistic traditions expressed an egalitarian ideology. Yet this alternative keeps being marginalized because the problems of collective action make it difficult to conceive how such a coalition could have functioned in principle. We therefore devised a mathematical model of the city’s hypothetical network of representatives as a formal proof of concept that widespread cooperation was realizable in a fully distributed manner. In the model, decisions become self-organized into globally optimal configurations even though local representatives behave and modify their relations in a rational and selfish manner. This self-optimization crucially depends on occasional communal interruptions of normal activity, and it is impeded when sections of the network are too independent. We relate these insights to theories about community-wide rituals at Teotihuacan and the city’s eventual disintegration.
Affiliation(s)
- Tom Froese
- Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City, Distrito Federal, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City, Distrito Federal, Mexico
- Carlos Gershenson
- Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City, Distrito Federal, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City, Distrito Federal, Mexico
- Linda R. Manzanilla
- Instituto de Investigaciones Antropológicas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Mexico City, Distrito Federal, Mexico
|
21
|
Woodward A, Froese T, Ikegami T. Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model. Neural Netw 2014; 62:39-46. [PMID: 25257715 DOI: 10.1016/j.neunet.2014.08.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2014] [Revised: 08/26/2014] [Accepted: 08/26/2014] [Indexed: 11/19/2022]
Abstract
The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains.
Affiliation(s)
- Alexander Woodward
- Graduate School of Arts and Sciences, The University of Tokyo, Komaba, Tokyo 153-8902, Japan.
- Tom Froese
- Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Mexico; Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Mexico
- Takashi Ikegami
- Graduate School of Arts and Sciences, The University of Tokyo, Komaba, Tokyo 153-8902, Japan
|
22
|
Watson RA, Wagner GP, Pavlicev M, Weinreich DM, Mills R. The evolution of phenotypic correlations and "developmental memory". Evolution 2014; 68:1124-38. [PMID: 24351058 DOI: 10.1111/evo.12337] [Citation(s) in RCA: 64] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2013] [Accepted: 11/22/2013] [Indexed: 12/14/2022]
Abstract
Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent nonlinear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include the formation of a distributed associative memory that can "store" and "recall" multiple phenotypes that have been selected in the past, the accurate recreation of complete adult phenotypic patterns from partial or corrupted embryonic phenotypes, and "generalization" (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviors follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, well understood in the context of neural networks. This helps to explain how development facilitates the evolution of high-fitness phenotypes and how this ability changes over evolutionary time.
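The "store", "recall", and pattern-completion behaviours described in this abstract are the classic properties of a Hebbian associative memory. The following is a minimal Hopfield-style sketch of that property, not the paper's gene-regulation model; the network size, number of stored patterns, and corruption level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                       # network size, number of stored patterns

# Three target "phenotypes" to store (random +/-1 patterns)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: superpose the outer products of the stored patterns
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(W, cue, sweeps=20):
    """Relax a (possibly corrupted) cue toward the nearest stored pattern."""
    s = cue.copy()
    for _ in range(sweeps * len(s)):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 15% of one stored pattern, then recall the complete pattern
cue = patterns[0].copy()
flipped = rng.choice(N, size=15, replace=False)
cue[flipped] *= -1
out = recall(W, cue)
print("overlap with stored pattern:", (out == patterns[0]).mean())
```

With only three patterns in a 100-unit network, well below the roughly 0.14N capacity of this storage rule, recall from a partial cue is reliably near-exact; overloading the memory with many more patterns degrades recall.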
Affiliation(s)
- Richard A Watson
- Natural Systems Group, ECS/Institute for Life Sciences/Institute for Complex Systems Simulation, University of Southampton, Southampton, SO17 1BJ, United Kingdom.
|
23
|
Davies AP, Watson RA, Mills R, Buckley CL, Noble J. "If you can't be with the one you love, love the one you're with": how individual habituation of agent interactions improves global utility. ARTIFICIAL LIFE 2011; 17:167-181. [PMID: 21554113 DOI: 10.1162/artl_a_00030] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Simple distributed strategies that modify the behavior of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimize their individual utilities by coordinating (or anticoordinating) with their neighbors, to maximize the payoffs from randomly weighted pairwise games. In general, agents will opt for the behavior that is the best compromise (for them) of the many conflicting constraints created by their neighbors, but the attractors of the system as a whole will not maximize total utility. We then consider agents that act as creatures of habit by increasing their preference to coordinate (anticoordinate) with whichever neighbors they are coordinated (anticoordinated) with at present. These preferences change slowly while the system is repeatedly perturbed, so that it settles to many different local attractors. We find that under these conditions, with each perturbation there is a progressively higher chance of the system settling to a configuration with high total utility. Eventually, only one attractor remains, and that attractor is very likely to maximize (or almost maximize) global utility. This counterintuitive result can be understood using theory from computational neuroscience; we show that this simple form of habituation is equivalent to Hebbian learning, and the improved optimization of global utility that is observed results from well-known generalization capabilities of associative memory acting at the network scale. This causes the system of selfish agents, each acting individually but habitually, to collectively identify configurations that maximize total utility.
Affiliation(s)
- Adam P Davies
- Natural Systems Group, Electronics and Computer Science, University of Southampton, UK.
|