1
Clawson WP, Levin M. Endless forms most beautiful 2.0: teleonomy and the bioengineering of chimaeric and synthetic organisms. Biol J Linn Soc Lond 2022. [DOI: 10.1093/biolinnean/blac073]
Abstract
The rich variety of biological forms and behaviours results from one evolutionary history on Earth, via frozen accidents and selection in specific environments. This ubiquitous baggage in natural, familiar model species obscures the plasticity and swarm intelligence of cellular collectives. Significant gaps exist in our understanding of the origin of anatomical novelty, of the relationship between genome and form, and of strategies for control of large-scale structure and function in regenerative medicine and bioengineering. Analysis of living forms that have never existed before is necessary to reveal deep design principles of life as it can be. We briefly review existing examples of chimaeras, cyborgs, hybrots and other beings along the spectrum containing evolved and designed systems. To drive experimental progress in multicellular synthetic morphology, we propose teleonomic (goal-seeking, problem-solving) behaviour in diverse problem spaces as a powerful invariant across possible beings regardless of composition or origin. Cybernetic perspectives on chimaeric morphogenesis erase artificial distinctions established by past limitations of technology and imagination. We suggest that a multi-scale competency architecture facilitates evolution of robust problem-solving, living machines. Creation and analysis of novel living forms will be an essential testbed for the emerging field of diverse intelligence, with numerous implications across regenerative medicine, robotics and ethics.
Affiliation(s)
- Michael Levin
- Allen Discovery Center at Tufts University, Medford, MA, USA
- Wyss Institute for Biologically Inspired Engineering at Harvard University, Boston, MA, USA
2
Watson RA, Levin M, Buckley CL. Design for an Individual: Connectionist Approaches to the Evolutionary Transitions in Individuality. Front Ecol Evol 2022. [DOI: 10.3389/fevo.2022.823588]
Abstract
The truly surprising thing about evolution is not how it makes individuals better adapted to their environment, but how it makes individuals. All individuals are made of parts that used to be individuals themselves, e.g., multicellular organisms from unicellular organisms. In such evolutionary transitions in individuality, the organised structure of relationships between component parts causes them to work together, creating a new organismic entity and a new evolutionary unit on which selection can act. However, the principles of these transitions remain poorly understood. In particular, the process of transition must be explained by “bottom-up” selection, i.e., on the existing lower-level evolutionary units, without presupposing the higher-level evolutionary unit we are trying to explain. In this hypothesis and theory manuscript we address the conditions for evolutionary transitions in individuality by exploiting adaptive principles already known in learning systems. Connectionist learning models, well-studied in neural networks, demonstrate how networks of organised functional relationships between components, sufficient to exhibit information integration and collective action, can be produced via fully-distributed and unsupervised learning principles, i.e., without centralised control or an external teacher. Evolutionary connectionism translates these distributed learning principles into the domain of natural selection, and suggests how relationships among evolutionary units could become adaptively organised by selection from below without presupposing genetic relatedness or selection on collectives. In this manuscript, we address how connectionist models with a particular interaction structure might explain transitions in individuality. 
We explore the relationship between the interaction structures necessary for (a) evolutionary individuality (where the evolution of the whole is a non-decomposable function of the evolution of the parts), (b) organismic individuality (where the development and behaviour of the whole is a non-decomposable function of the behaviour of component parts) and (c) non-linearly separable functions, familiar in connectionist models (where the output of the network is a non-decomposable function of the inputs). Specifically, we hypothesise that the conditions necessary to evolve a new level of individuality are described by the conditions necessary to learn non-decomposable functions of this type (or deep model induction) familiar in connectionist models of cognition and learning.
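The role of non-linearly separable functions in (c) can be made concrete with the classic XOR example: no single linear threshold unit computes XOR, while a two-layer network, whose output is a non-decomposable function of its inputs, does. A minimal sketch of this standard result (not code from the paper):

```python
import itertools

# XOR truth table: the output is a non-decomposable function of the inputs.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def separates(w1, w2, b):
    """True if the single linear unit sign(w1*x1 + w2*x2 + b) matches XOR."""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in XOR.items())

# Exhaustive search over a coarse weight grid: no single unit fits XOR
# (XOR is not linearly separable, so no weights at all can succeed).
grid = [i / 2 for i in range(-8, 9)]
assert not any(separates(w1, w2, b)
               for w1, w2, b in itertools.product(grid, repeat=3))

# Two hidden units compose two separable functions (OR, NAND) into XOR.
def two_layer(x1, x2):
    h1 = int(x1 + x2 - 0.5 > 0)      # OR
    h2 = int(-x1 - x2 + 1.5 > 0)     # NAND
    return int(h1 + h2 - 1.5 > 0)    # AND of the hidden units -> XOR

assert all(two_layer(*x) == y for x, y in XOR.items())
```

The hypothesis in the abstract maps onto this picture: learning XOR requires a hidden layer, just as evolving a new level of individuality may require interaction structure beyond what any single lower-level unit provides.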
3
Czégel D, Giaffar H, Tenenbaum JB, Szathmáry E. Bayes and Darwin: How replicator populations implement Bayesian computations. Bioessays 2022; 44:e2100255. [PMID: 35212408] [DOI: 10.1002/bies.202100255]
Abstract
Bayesian learning theory and evolutionary theory both formalize adaptive competition dynamics in possibly high-dimensional, varying, and noisy environments. What do they have in common and how do they differ? In this paper, we discuss structural and dynamical analogies and their limits, both at a computational and an algorithmic-mechanical level. We point out mathematical equivalences between their basic dynamical equations, generalizing the isomorphism between Bayesian update and replicator dynamics. We discuss how these mechanisms provide analogous answers to the challenge of adapting to stochastically changing environments at multiple timescales. We elucidate an algorithmic equivalence between a sampling approximation, particle filters, and the Wright-Fisher model of population genetics. These equivalences suggest that the frequency distribution of types in replicator populations optimally encodes regularities of a stochastic environment to predict future environments, without invoking the known mechanisms of multilevel selection and evolvability. A unified view of the theories of learning and evolution comes in sight.
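The isomorphism between Bayesian update and replicator dynamics mentioned in the abstract can be verified in a few lines: one generation of discrete replicator dynamics, with fitnesses playing the role of likelihoods, is algebraically identical to a Bayesian posterior update. A numerical sketch (the prior, fitness values, and three-type setup are illustrative, not taken from the paper):

```python
import numpy as np

# Frequencies of three replicator types = prior over three hypotheses.
p = np.array([0.5, 0.3, 0.2])

# Fitness of each type = likelihood of the observed data under each hypothesis.
f = np.array([0.9, 0.5, 0.1])

# One generation of discrete replicator dynamics:
#   p_i' = p_i * f_i / sum_j p_j * f_j
replicator_step = p * f / np.sum(p * f)

# Bayes' rule: posterior proportional to prior * likelihood.
posterior = p * f / np.dot(p, f)

# The two updates are the same map; both are normalized distributions.
assert np.allclose(replicator_step, posterior)
assert abs(replicator_step.sum() - 1.0) < 1e-12
```

The same identification extends to the algorithmic level discussed in the abstract: resampling in a particle filter plays the role of reproduction in the Wright-Fisher model.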
Affiliation(s)
- Dániel Czégel
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary
- Parmenides Foundation, Center for the Conceptual Foundations of Science, Pullach, Germany
- Doctoral School of Biology, Institute of Biology, Eötvös Loránd University, Budapest, Hungary
- Beyond Center for Fundamental Concepts in Science, Arizona State University, Tempe, Arizona, USA
- Hamza Giaffar
- Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, USA
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Eörs Szathmáry
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary
- Parmenides Foundation, Center for the Conceptual Foundations of Science, Pullach, Germany
- Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös Loránd University, Budapest, Hungary
5
Abstract
Modern evolutionary theory gives a detailed quantitative description of microevolutionary processes that occur within evolving populations of organisms, but evolutionary transitions and emergence of multiple levels of complexity remain poorly understood. Here, we establish the correspondence among the key features of evolution, learning dynamics, and renormalizability of physical theories to outline a theory of evolution that strives to incorporate all evolutionary processes within a unified mathematical framework of the theory of learning. According to this theory, for example, replication of genetic material and natural selection readily emerge from the learning dynamics, and in sufficiently complex systems, the same learning phenomena occur on multiple levels or on different scales, similar to the case of renormalizable physical theories. We apply the theory of learning to physically renormalizable systems in an attempt to outline a theory of biological evolution, including the origin of life, as multilevel learning. We formulate seven fundamental principles of evolution that appear to be necessary and sufficient to render a universe observable and show that they entail the major features of biological evolution, including replication and natural selection. It is shown that these cornerstone phenomena of biology emerge from the fundamental features of learning dynamics such as the existence of a loss function, which is minimized during learning. We then sketch the theory of evolution using the mathematical framework of neural networks, which provides for detailed analysis of evolutionary phenomena. To demonstrate the potential of the proposed theoretical framework, we derive a generalized version of the Central Dogma of molecular biology by analyzing the flow of information during learning (back propagation) and predicting (forward propagation) the environment by evolving organisms. 
The more complex evolutionary phenomena, such as major transitions in evolution (in particular, the origin of life), have to be analyzed in the thermodynamic limit, which is described in detail in the paper by Vanchurin et al. [V. Vanchurin, Y. I. Wolf, E. V. Koonin, M. I. Katsnelson, Proc. Natl. Acad. Sci. U.S.A. 119, 10.1073/pnas.2120042119 (2022)].
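The claim that natural selection emerges from the minimization of a loss function can be sketched with replicator dynamics: with constant fitnesses, each generation of selection raises mean fitness, so selection behaves like descent on the loss L(p) = -log(mean fitness). A toy illustration (the three-type setup and fitness values are arbitrary, not from the paper):

```python
import math

# Three competing types with fixed fitnesses; frequencies sum to 1.
fitness = [2.0, 1.0, 0.5]
p = [1 / 3, 1 / 3, 1 / 3]

def mean_fitness(p):
    return sum(pi * fi for pi, fi in zip(p, fitness))

def loss(p):
    # A natural "loss" for selection: minimized as the population adapts.
    return -math.log(mean_fitness(p))

losses = [loss(p)]
for _ in range(20):
    w = mean_fitness(p)
    p = [pi * fi / w for pi, fi in zip(p, fitness)]  # one round of selection
    losses.append(loss(p))

# Monotone decrease: selection "learns" the environment by reducing the loss.
assert all(a >= b for a, b in zip(losses, losses[1:]))
assert max(p) == p[0]  # the fittest type comes to dominate
```

This simple monotonicity (Fisher's fundamental theorem in its elementary constant-fitness form) is the entry point; the paper's framework extends the analogy to multilevel, neural-network-style learning.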
6
Czégel D, Giaffar H, Csillag M, Futó B, Szathmáry E. Novelty and imitation within the brain: a Darwinian neurodynamic approach to combinatorial problems. Sci Rep 2021; 11:12513. [PMID: 34131159] [PMCID: PMC8206098] [DOI: 10.1038/s41598-021-91489-5]
Abstract
Efficient search in vast combinatorial spaces, such as those of possible action sequences, linguistic structures, or causal explanations, is an essential component of intelligence. Is there any computational domain that is flexible enough to provide solutions to such diverse problems and can be robustly implemented over neural substrates? Based on previous accounts, we propose that a Darwinian process, operating over sequential cycles of imperfect copying and selection of neural informational patterns, is a promising candidate. Here we implement imperfect information copying through one reservoir computing unit teaching another. Teacher and learner roles are assigned dynamically based on evaluation of the readout signal. We demonstrate that the emerging Darwinian population of readout activity patterns is capable of maintaining and continually improving upon existing solutions over rugged combinatorial reward landscapes. We also demonstrate the existence of a sharp error threshold, a neural noise level beyond which information accumulated by an evolutionary process cannot be maintained. We introduce a novel analysis method, neural phylogenies, that displays the unfolding of the neural-evolutionary process.
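The sharp error threshold the authors report has a classic analogue in Eigen's quasispecies model: a master type with replication advantage sigma persists only while sigma * Q > 1, where Q is the probability of an error-free copy. A minimal deterministic sketch of that textbook model, not of the paper's reservoir-computing setup (parameters illustrative):

```python
def master_equilibrium(sigma, mu, L, generations=2000):
    """Equilibrium frequency of an error-free 'master' type of length L.

    sigma: fitness advantage of the master over mutants (mutant fitness 1).
    mu:    per-symbol copying error rate; Q = (1 - mu)**L is the chance a
           whole copy is error-free (back mutation neglected).
    """
    Q = (1 - mu) ** L
    x = 0.5                                  # initial master frequency
    for _ in range(generations):
        w = sigma * x + (1 - x)              # mean fitness
        x = sigma * Q * x / w                # only error-free offspring count
    return x

sigma, L = 4.0, 50
# Eigen threshold: information is maintained only while mu < ~ln(sigma)/L.
below = master_equilibrium(sigma, mu=0.01, L=L)  # mu*L = 0.5 < ln 4 ~ 1.39
above = master_equilibrium(sigma, mu=0.05, L=L)  # mu*L = 2.5 > ln 4

assert below > 0.1    # master maintained below the threshold
assert above < 1e-6   # information lost beyond the error threshold
```

In the two-class model the non-trivial fixed point is x* = (sigma*Q - 1)/(sigma - 1), which vanishes exactly at sigma*Q = 1; the paper's neural noise level plays the role of mu here.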
Affiliation(s)
- Dániel Czégel
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary.
- Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös University, Budapest, Hungary.
- Parmenides Foundation, Center for the Conceptual Foundations of Science, Pullach, Germany.
- Beyond Center for Fundamental Concepts in Science, Arizona State University, Tempe, AZ, USA.
- Hamza Giaffar
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Márton Csillag
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary
- Bálint Futó
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary
- Eörs Szathmáry
- Institute of Evolution, Centre for Ecological Research, Budapest, Hungary.
- Department of Plant Systematics, Ecology and Theoretical Biology, Eötvös University, Budapest, Hungary.
- Parmenides Foundation, Center for the Conceptual Foundations of Science, Pullach, Germany.
7
Groups as organisms: Implications for therapy and training. Clin Psychol Rev 2021; 85:101987. [PMID: 33725511] [DOI: 10.1016/j.cpr.2021.101987]
Abstract
The intellectual tradition of individualism treats the individual person as the fundamental unit of analysis and reduces all things social to the motives and actions of individuals. Most methods in clinical psychology are influenced by individualism and therefore treat the individual as the primary object of therapy/training, even when recognizing the importance of nurturing social relationships for individual wellbeing. Multilevel selection theory offers an alternative to individualism in which individuals become part of something larger than themselves that qualifies as an organism in its own right. Seeing individuals as parts of social organisms provides a new perspective with numerous implications for improving wellbeing at all scales, from individuals to the planet.